tf.compat.v1.tpu.batch_parallel
================================
Shards `computation` along the batch dimension for parallel execution.
```
tf.compat.v1.tpu.batch_parallel(
computation: Callable[..., Any],
inputs: Optional[List[List[Optional[core_types.Tensor]]]] = None,
num_shards: int = 1,
infeed_queue: Optional[tpu_feed.InfeedQueue] = None,
device_assignment: Optional[tf.tpu.experimental.DeviceAssignment] = None,
name: Optional[Text] = None,
xla_options: Optional[tf.tpu.XLAOptions] = None
)
```
Convenience wrapper around shard().
`inputs` must be a list of Tensors or None (equivalent to an empty list). Each input is split into `num_shards` pieces along the 0-th dimension, and computation is applied to each shard in parallel.
Tensors are broadcast to all shards if they are lexically captured by `computation`. e.g.,
```
x = tf.constant(7)
def computation():
  return x + 3
... = shard(computation, ...)
```
The outputs from all shards are concatenated back together along their 0-th dimension.
Inputs and outputs of the computation must be at least rank-1 Tensors.
| Args |
| `computation` | A Python function that builds a computation to apply to each shard of the input. |
| `inputs` | A list of input tensors or None (equivalent to an empty list). The 0-th dimension of each Tensor must have size divisible by `num_shards`. |
| `num_shards` | The number of shards. |
| `infeed_queue` | If not `None`, the `InfeedQueue` from which to append a tuple of arguments as inputs to `computation`. |
| `device_assignment` | If not `None`, a `DeviceAssignment` describing the mapping between logical cores in the computation with physical cores in the TPU topology. Uses a default device assignment if `None`. The `DeviceAssignment` may be omitted if each shard of the computation uses only one core, and there is either only one shard, or the number of shards is equal to the number of cores in the TPU system. |
| `name` | (Deprecated) Does nothing. |
| `xla_options` | An instance of [`tpu.XLAOptions`](https://www.tensorflow.org/api_docs/python/tf/tpu/XLAOptions) which indicates the options passed to XLA compiler. Use `None` for default options. |
| Returns |
| A list of output tensors. |
| Raises |
| `ValueError` | If `num_shards <= 0` |
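For illustration, here is a minimal sketch of sharding a computation along the batch dimension (assuming an already-initialized TPU system; the shapes are illustrative):
```
# A minimal sketch, assuming an initialized TPU system. `x` has batch
# size 8 and is split into 2 shards of 4 rows each.
def computation(x):
  return tf.reduce_sum(x, axis=1, keepdims=True)  # per-shard shape [4, 1]

x = tf.ones([8, 8])
outputs = tf.compat.v1.tpu.batch_parallel(
    computation, inputs=[x], num_shards=2)
# Shard outputs are concatenated along dimension 0: outputs[0] has shape [8, 1].
```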
tf.compat.v1.tpu.rewrite
========================
Rewrites `computation` for execution on a TPU system.
```
tf.compat.v1.tpu.rewrite(
computation: Callable[..., Any],
inputs: Optional[List[List[Optional[core_types.Tensor]]]] = None,
infeed_queue: Optional[tpu_feed.InfeedQueue] = None,
device_assignment: Optional[tf.tpu.experimental.DeviceAssignment] = None,
name: Optional[Text] = None,
xla_options: Optional[tf.tpu.XLAOptions] = None
) -> Any
```
| Args |
| `computation` | A Python function that builds a computation to apply to the input. If the function takes n inputs, 'inputs' should be a list of n tensors. `computation` may return a list of operations and tensors. Tensors must come before operations in the returned list. The return value of `rewrite` is a list of tensors corresponding to the tensors from the output of `computation`. All `Operation`s constructed during `computation` will be executed when evaluating any of the returned output tensors, not just the ones returned. |
| `inputs` | A list of input tensors or `None` (equivalent to an empty list). Each input can be a nested structure containing values that are convertible to tensors. Note that passing an N-dimensional list of compatible values will result in an N-dimensional list of scalar tensors rather than a single rank-N tensor. If you need different behavior, convert part of inputs to tensors with [`tf.convert_to_tensor`](../../../convert_to_tensor). |
| `infeed_queue` | If not `None`, the `InfeedQueue` from which to append a tuple of arguments as inputs to `computation`. |
| `device_assignment` | if not `None`, a `DeviceAssignment` describing the mapping between logical cores in the computation with physical cores in the TPU topology. May be omitted for a single-core computation, in which case the core attached to task 0, TPU device 0 is used. |
| `name` | (Deprecated) Does nothing. |
| `xla_options` | An instance of [`tpu.XLAOptions`](https://www.tensorflow.org/api_docs/python/tf/tpu/XLAOptions) which indicates the options passed to XLA compiler. Use `None` for default options. |
| Returns |
| Same data structure as if `computation(*inputs)` were called directly, with some exceptions for correctness. Exceptions include: 1) None output: a NoOp would be returned which control-depends on computation. 2) Single value output: A tuple containing the value would be returned. 3) Operation-only outputs: a NoOp would be returned which control-depends on computation. |
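As a sketch of typical usage (assuming an initialized TPU system; names are illustrative):
```
# A minimal sketch, assuming an initialized TPU system.
def computation(a, b):
  return a + b, a * b

x = tf.constant([1.0, 2.0])
y = tf.constant([3.0, 4.0])
# Returns the same structure as computation(x, y): a tuple of two tensors.
added, multiplied = tf.compat.v1.tpu.rewrite(computation, inputs=[x, y])
```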
tf.compat.v1.tpu.CrossShardOptimizer
====================================
An optimizer that averages gradients across TPU shards.
Inherits From: [`Optimizer`](../train/optimizer)
```
tf.compat.v1.tpu.CrossShardOptimizer(
opt,
reduction=losses.Reduction.MEAN,
name='CrossShardOptimizer',
group_assignment=None
)
```
| Args |
| `opt` | An existing `Optimizer` to encapsulate. |
| `reduction` | The reduction to apply to the shard losses. |
| `name` | Optional name prefix for the operations created when applying gradients. Defaults to "CrossShardOptimizer". |
| `group_assignment` | Optional 2-D `int32` list with shape `[num_groups, num_replicas_per_group]` which describes how to apply the optimizer to subgroups. |
| Raises |
| `ValueError` | If reduction is not a valid cross-shard reduction. |
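A minimal sketch of wrapping a TF1 optimizer (the loss function here is hypothetical, and the train step is assumed to run inside a replicated TPU computation such as `tpu.shard` or `tpu.replicate`):
```
# A minimal sketch; `compute_loss` is a hypothetical model/loss function,
# and this train step must run inside a TPU-replicated computation.
base_opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)
opt = tf.compat.v1.tpu.CrossShardOptimizer(base_opt)

def train_step(features, labels):
  loss = compute_loss(features, labels)  # hypothetical
  return opt.minimize(loss, global_step=tf.compat.v1.train.get_global_step())
```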
Methods
-------
### `apply_gradients`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_optimizer.py#L164-L193)
```
apply_gradients(
grads_and_vars, global_step=None, name=None
)
```
Apply gradients to variables.
Calls tpu\_ops.cross\_replica\_sum() to sum gradient contributions across replicas, and then applies the real optimizer.
| Args |
| `grads_and_vars` | List of (gradient, variable) pairs as returned by compute\_gradients(). |
| `global_step` | Optional Variable to increment by one after the variables have been updated. |
| `name` | Optional name for the returned operation. Default to the name passed to the Optimizer constructor. |
| Returns |
| An `Operation` that applies the gradients. If `global_step` was not None, that operation also increments `global_step`. |
| Raises |
| `ValueError` | If the grads\_and\_vars is malformed. |
### `compute_gradients`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_optimizer.py#L112-L162)
```
compute_gradients(
loss, var_list=None, **kwargs
)
```
Compute gradients of "loss" for the variables in "var\_list".
This simply wraps `compute_gradients()` from the real optimizer. The gradients will be aggregated in `apply_gradients()` so that the user can modify the gradients beforehand, e.g., clipping with a per-replica global norm if needed. Computing the global norm over aggregated gradients can be problematic, since one replica's outsized gradients can overwhelm the gradients from other replicas.
When the CrossShardOptimizer is constructed with `reduction == losses.Reduction.MEAN` (default), this function scales the loss by `1.0 / num_shards` before computing the gradients. Assuming the optimizer uses the default implementation of `compute_gradients()`, the gradients of the scaled loss are scaled by `1.0 / num_shards` compared to the gradients of the original loss. This scaling factor is important because `apply_gradients()` sums gradients across shards, rather than averaging them. However, the scaling factor must be taken into account when clipping the norm of the gradients or performing other postprocessing.
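For example, to bound the effective (cross-shard) gradient norm by some target `clip_norm`, one way to account for the `1.0 / num_shards` factor is to scale the threshold the same way before clipping (a hedged sketch; `opt`, `loss`, `num_shards`, and `global_step` are assumed to be defined):
```
# A sketch: with reduction=MEAN, each replica's gradients carry a
# 1/num_shards factor, and apply_gradients() sums them across shards.
# Scaling the clip threshold by 1/num_shards keeps the summed
# gradient's norm bounded by clip_norm.
clip_norm = 1.0
grads_and_vars = opt.compute_gradients(loss)
grads, tvars = zip(*grads_and_vars)
clipped, _ = tf.clip_by_global_norm(grads, clip_norm / num_shards)
train_op = opt.apply_gradients(list(zip(clipped, tvars)), global_step=global_step)
```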
| Args |
| `loss` | A Tensor containing the value to minimize. |
| `var_list` | Optional list or tuple of [`tf.Variable`](../../../variable) to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKey.TRAINABLE_VARIABLES`. |
| `**kwargs` | Keyword arguments for compute\_gradients(). |
| Returns |
| A list of (gradient, variable) pairs. |
| Raises |
| `ValueError` | If not within a tpu\_shard\_context or group\_assignment is invalid. |
### `get_name`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/optimizer.py#L430-L431)
```
get_name()
```
### `get_slot`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_optimizer.py#L195-L207)
```
get_slot(
*args, **kwargs
)
```
Return a slot named "name" created for "var" by the Optimizer.
This simply wraps the get\_slot() from the actual optimizer.
| Args |
| `*args` | Arguments for get\_slot(). |
| `**kwargs` | Keyword arguments for get\_slot(). |
| Returns |
| The `Variable` for the slot if it was created, `None` otherwise. |
### `get_slot_names`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_optimizer.py#L209-L221)
```
get_slot_names(
*args, **kwargs
)
```
Return a list of the names of slots created by the `Optimizer`.
This simply wraps the get\_slot\_names() from the actual optimizer.
| Args |
| `*args` | Arguments for get\_slot\_names(). |
| `**kwargs` | Keyword arguments for get\_slot\_names(). |
| Returns |
| A list of strings. |
### `minimize`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/optimizer.py#L433-L491)
```
minimize(
loss,
global_step=None,
var_list=None,
gate_gradients=GATE_OP,
aggregation_method=None,
colocate_gradients_with_ops=False,
name=None,
grad_loss=None
)
```
Add operations to minimize `loss` by updating `var_list`.
This method simply combines calls to `compute_gradients()` and `apply_gradients()`. If you want to process the gradients before applying them, call `compute_gradients()` and `apply_gradients()` explicitly instead of using this function.
| Args |
| `loss` | A `Tensor` containing the value to minimize. |
| `global_step` | Optional `Variable` to increment by one after the variables have been updated. |
| `var_list` | Optional list or tuple of `Variable` objects to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. |
| `gate_gradients` | How to gate the computation of gradients. Can be `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. |
| `aggregation_method` | Specifies the method used to combine gradient terms. Valid values are defined in the class `AggregationMethod`. |
| `colocate_gradients_with_ops` | If True, try colocating gradients with the corresponding op. |
| `name` | Optional name for the returned operation. |
| `grad_loss` | Optional. A `Tensor` holding the gradient computed for `loss`. |
| Returns |
| An Operation that updates the variables in `var_list`. If `global_step` was not `None`, that operation also increments `global_step`. |
| Raises |
| `ValueError` | If some of the variables are not `Variable` objects. |
#### eager compatibility
When eager execution is enabled, `loss` should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of `var_list` if not None, else with respect to any trainable variables created during the execution of the `loss` function. `gate_gradients`, `aggregation_method`, `colocate_gradients_with_ops` and `grad_loss` are ignored when eager execution is enabled.
### `variables`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_optimizer.py#L223-L225)
```
variables()
```
Forwards the variables from the underlying optimizer.
| Class Variables |
| GATE\_GRAPH | `2` |
| GATE\_NONE | `0` |
| GATE\_OP | `1` |
tf.compat.v1.tpu.shutdown_system
=================================
Shuts down a running distributed TPU system.
```
tf.compat.v1.tpu.shutdown_system(
job: Optional[Text] = None
) -> tf.Operation
```
| Args |
| `job` | The job (the XXX in TensorFlow device specification /job:XXX) that contains the TPU devices that will be shut down. If job=None it is assumed there is only one job in the TensorFlow flock, and an error will be returned if this assumption does not hold. |
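A minimal graph-mode sketch pairing it with `tf.compat.v1.tpu.initialize_system` (the session target `tpu_grpc_url` is an assumed name):
```
# A minimal sketch, assuming a single-job TPU setup; `tpu_grpc_url` is
# an assumed session target.
init = tf.compat.v1.tpu.initialize_system()
shutdown = tf.compat.v1.tpu.shutdown_system()
with tf.compat.v1.Session(tpu_grpc_url) as sess:
  sess.run(init)
  # ... run TPU computations ...
  sess.run(shutdown)
```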
tf.compat.v1.tpu.replicate
==========================
Builds a graph operator that runs a replicated TPU computation.
```
tf.compat.v1.tpu.replicate(
computation: Callable[..., Any],
inputs: Optional[List[List[core_types.Tensor]]] = None,
infeed_queue: Optional[tpu_feed.InfeedQueue] = None,
device_assignment: Optional[tf.tpu.experimental.DeviceAssignment] = None,
name: Optional[Text] = None,
maximum_shapes: Optional[Any] = None,
padding_spec: Optional[tf.compat.v1.tpu.PaddingSpec] = None,
xla_options: Optional[tf.tpu.XLAOptions] = None
) -> List[Any]
```
Example for the basic usage that `inputs` has static shape:
```
def computation(x):
x = x + 1
return tf.math.reduce_mean(x)
x = tf.convert_to_tensor([1., 2., 3.])
y = tf.convert_to_tensor([4., 5., 6.])
tf.compat.v1.tpu.replicate(computation, inputs=[[x], [y]])
```
If `inputs` has dynamic shapes and you would like to automatically bucketize the inputs to avoid XLA recompilation, see the advanced example below:
```
def computation(x):
x = x + 1
return tf.math.reduce_mean(x)
# Assume input tensors in two replicas `x` and `y` both have dynamic shape
# ([None, 2]).
tf.compat.v1.tpu.replicate(
computation,
inputs=[x, y],
maximum_shapes=[tf.TensorShape([None, None])],
padding_spec=tf.compat.v1.tpu.PaddingSpec.POWER_OF_TWO)
```
| Args |
| `computation` | A Python function that builds the computation to replicate. |
| `inputs` | A list of lists of input tensors or `None` (equivalent to `[[]]`), indexed by `[replica_num][input_num]`. All replicas must have the same number of inputs. Each input can be a nested structure containing values that are convertible to tensors. Note that passing an N-dimensional list of compatible values will result in an N-dimensional list of scalar tensors rather than a single rank-N tensor. If you need different behavior, convert part of inputs to tensors with [`tf.convert_to_tensor`](../../../convert_to_tensor). |
| `infeed_queue` | If not `None`, the `InfeedQueue` from which to append a tuple of arguments as inputs to computation. |
| `device_assignment` | If not `None`, a `DeviceAssignment` describing the mapping between logical cores in the computation with physical cores in the TPU topology. Uses a default device assignment if `None`. The `DeviceAssignment` may be omitted if each replica of the computation uses only one core, and there is either only one replica, or the number of replicas is equal to the number of cores in the TPU system. |
| `name` | (Deprecated) Does nothing. |
| `maximum_shapes` | A nested structure of tf.TensorShape representing the shape to which the respective component of each input element in each replica should be padded. Any unknown dimensions (e.g. tf.compat.v1.Dimension(None) in a tf.TensorShape or -1 in a tensor-like object) will be padded to the maximum size of that dimension over all replicas. The structure of `maximum_shapes` needs to be the same as `inputs[0]`. |
| `padding_spec` | An enum specified by `tpu.PaddingSpec`. This describes the padding policy when the `inputs` to `tpu.replicate` is dynamic. One usage is to enable automatic bucketizing on the inputs by setting the value to `tpu.PaddingSpec.POWER_OF_TWO`, which can help to reduce the recompilation in the XLA side. |
| `xla_options` | An instance of [`tpu.XLAOptions`](https://www.tensorflow.org/api_docs/python/tf/tpu/XLAOptions) which indicates the options passed to XLA compiler. Use `None` for default options. |
| Returns |
| A list of outputs, indexed by `[replica_num]`. Each output can be a nested structure, the same as what `computation()` returns, with a few exceptions. Exceptions include: 1) None output: a NoOp would be returned which control-depends on computation. 2) Single value output: A tuple containing the value would be returned. 3) Operation-only outputs: a NoOp would be returned which control-depends on computation. |
| Raises |
| `ValueError` | If all replicas do not have equal numbers of input tensors. |
| `ValueError` | If the number of inputs per replica does not match the number of formal parameters to `computation`. |
| `ValueError` | If the static `inputs` dimensions don't match with the values given in `maximum_shapes`. |
| `ValueError` | If the structure of inputs per replica does not match the structure of `maximum_shapes`. |
tf.compat.v1.tpu.experimental.shared_embedding_columns
========================================================
TPU version of [`tf.compat.v1.feature_column.shared_embedding_columns`](../../feature_column/shared_embedding_columns).
```
tf.compat.v1.tpu.experimental.shared_embedding_columns(
categorical_columns,
dimension,
combiner='mean',
initializer=None,
shared_embedding_collection_name=None,
max_sequence_lengths=None,
learning_rate_fn=None,
embedding_lookup_device=None,
tensor_core_shape=None,
use_safe_embedding_lookup=True
)
```
Note that the interface for `tf.tpu.experimental.shared_embedding_columns` is different from that of [`tf.compat.v1.feature_column.shared_embedding_columns`](../../feature_column/shared_embedding_columns): The following arguments are NOT supported: `ckpt_to_load_from`, `tensor_name_in_ckpt`, `max_norm` and `trainable`.
Use this function in place of [`tf.compat.v1.feature_column.shared_embedding_columns`](../../feature_column/shared_embedding_columns) when you want to use the TPU to accelerate your embedding lookups via TPU embeddings.
```
column_a = tf.feature_column.categorical_column_with_identity(...)
column_b = tf.feature_column.categorical_column_with_identity(...)
tpu_columns = tf.tpu.experimental.shared_embedding_columns(
[column_a, column_b], 10)
...
def model_fn(features):
dense_feature = tf.keras.layers.DenseFeatures(tpu_columns)
embedded_feature = dense_feature(features)
...
estimator = tf.estimator.tpu.TPUEstimator(
model_fn=model_fn,
...
embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
feature_columns=tpu_columns,
...))
```
| Args |
| `categorical_columns` | A list of categorical columns returned from `categorical_column_with_identity`, `weighted_categorical_column`, `categorical_column_with_vocabulary_file`, `categorical_column_with_vocabulary_list`, `sequence_categorical_column_with_identity`, `sequence_categorical_column_with_vocabulary_file`, `sequence_categorical_column_with_vocabulary_list` |
| `dimension` | An integer specifying dimension of the embedding, must be > 0. |
| `combiner` | A string specifying how to reduce if there are multiple entries in a single row for a non-sequence column. For more information, see [`tf.feature_column.embedding_column`](../../../../feature_column/embedding_column). |
| `initializer` | A variable initializer function to be used in embedding variable initialization. If not specified, defaults to `tf.truncated_normal_initializer` with mean `0.0` and standard deviation `1/sqrt(dimension)`. |
| `shared_embedding_collection_name` | Optional name of the collection where shared embedding weights are added. If not given, a reasonable name will be chosen based on the names of `categorical_columns`. This is also used in `variable_scope` when creating shared embedding weights. |
| `max_sequence_lengths` | A list of non-negative integers, either None or empty or the same length as the argument categorical\_columns. Entries corresponding to non-sequence columns must be 0 and entries corresponding to sequence columns specify the max sequence length for the column. Any sequence shorter than this will be padded with 0 embeddings and any sequence longer will be truncated. |
| `learning_rate_fn` | A function that takes global step and returns learning rate for the embedding table. If you intend to use the same learning rate for multiple embedding tables, please ensure that you pass the exact same python function to all calls of shared\_embedding\_columns, otherwise performance may suffer. |
| `embedding_lookup_device` | The device on which to run the embedding lookup. Valid options are "cpu", "tpu\_tensor\_core", and "tpu\_embedding\_core". If specifying "tpu\_tensor\_core", a tensor\_core\_shape must be supplied. If not specified, the default behavior is embedding lookup on "tpu\_embedding\_core" for training and "cpu" for inference. Valid options for training : ["tpu\_embedding\_core", "tpu\_tensor\_core"] Valid options for serving : ["cpu", "tpu\_tensor\_core"] For training, tpu\_embedding\_core is good for large embedding vocab (>1M), otherwise, tpu\_tensor\_core is often sufficient. For serving, doing embedding lookup on tpu\_tensor\_core during serving is a way to reduce host cpu usage in cases where that is a bottleneck. |
| `tensor_core_shape` | If supplied, a list of integers which specifies the intended dense shape to run embedding lookup for this feature on TensorCore. The batch dimension can be left None or -1 to indicate a dynamic shape. Only rank 2 shapes currently supported. |
| `use_safe_embedding_lookup` | If true, uses safe\_embedding\_lookup\_sparse instead of embedding\_lookup\_sparse. safe\_embedding\_lookup\_sparse ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. This only applies to rank 2 (NxM) shaped input tensors. Defaults to true, consider turning off if the above checks are not needed. Note that having empty rows will not trigger any error though the output result might be 0 or omitted. |
| Returns |
| A list of `_TPUSharedEmbeddingColumnV2`. |
| Raises |
| `ValueError` | if `dimension` not > 0. |
| `ValueError` | if `initializer` is specified but not callable. |
| `ValueError` | if `max_sequence_lengths` is specified and not the same length as `categorical_columns`. |
| `ValueError` | if `max_sequence_lengths` is positive for a non-sequence column or 0 for a sequence column. |
tf.compat.v1.tpu.experimental.AdagradParameters
===============================================
Optimization parameters for Adagrad with TPU embeddings.
```
tf.compat.v1.tpu.experimental.AdagradParameters(
learning_rate: float,
initial_accumulator: float = 0.1,
use_gradient_accumulation: bool = True,
clip_weight_min: Optional[float] = None,
clip_weight_max: Optional[float] = None,
weight_decay_factor: Optional[float] = None,
multiply_weight_decay_factor_by_learning_rate: Optional[bool] = None,
clip_gradient_min: Optional[float] = None,
clip_gradient_max: Optional[float] = None
)
```
Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the `optimization_parameters` argument to set the optimizer and its parameters. See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` for more details.
```
estimator = tf.estimator.tpu.TPUEstimator(
...
embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
...
optimization_parameters=tf.tpu.experimental.AdagradParameters(0.1),
...))
```
| Args |
| `learning_rate` | used for updating embedding table. |
| `initial_accumulator` | initial accumulator for Adagrad. |
| `use_gradient_accumulation` | setting this to `False` makes embedding gradients calculation less accurate but faster. Please see `optimization_parameters.proto` for details. |
| `clip_weight_min` | the minimum value to clip by; None means -infinity. |
| `clip_weight_max` | the maximum value to clip by; None means +infinity. |
| `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. |
| `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. |
| `clip_gradient_min` | the minimum value to clip by; None means -infinity. Gradient accumulation must be set to true if this is set. |
| `clip_gradient_max` | the maximum value to clip by; None means +infinity. Gradient accumulation must be set to true if this is set. |
tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters
=================================================================
Optimization parameters for stochastic gradient descent for TPU embeddings.
```
tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters(
learning_rate: float,
use_gradient_accumulation: bool = True,
clip_weight_min: Optional[float] = None,
clip_weight_max: Optional[float] = None,
weight_decay_factor: Optional[float] = None,
multiply_weight_decay_factor_by_learning_rate: Optional[bool] = None,
clip_gradient_min: Optional[float] = None,
clip_gradient_max: Optional[float] = None
)
```
Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the `optimization_parameters` argument to set the optimizer and its parameters. See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` for more details.
```
estimator = tf.estimator.tpu.TPUEstimator(
...
embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
...
optimization_parameters=(
tf.tpu.experimental.StochasticGradientDescentParameters(0.1))))
```
| Args |
| `learning_rate` | a floating point value. The learning rate. |
| `use_gradient_accumulation` | setting this to `False` makes embedding gradients calculation less accurate but faster. Please see `optimization_parameters.proto` for details. |
| `clip_weight_min` | the minimum value to clip by; None means -infinity. |
| `clip_weight_max` | the maximum value to clip by; None means +infinity. |
| `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. |
| `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. |
| `clip_gradient_min` | the minimum value to clip by; None means -infinity. |
| `clip_gradient_max` | the maximum value to clip by; None means +infinity. |
tf.compat.v1.tpu.experimental.embedding_column
===============================================
TPU version of [`tf.compat.v1.feature_column.embedding_column`](../../../../feature_column/embedding_column).
```
tf.compat.v1.tpu.experimental.embedding_column(
categorical_column,
dimension,
combiner='mean',
initializer=None,
max_sequence_length=0,
learning_rate_fn=None,
embedding_lookup_device=None,
tensor_core_shape=None,
use_safe_embedding_lookup=True
)
```
Note that the interface for `tf.tpu.experimental.embedding_column` is different from that of [`tf.compat.v1.feature_column.embedding_column`](../../../../feature_column/embedding_column): The following arguments are NOT supported: `ckpt_to_load_from`, `tensor_name_in_ckpt`, `max_norm` and `trainable`.
Use this function in place of [`tf.compat.v1.feature_column.embedding_column`](../../../../feature_column/embedding_column) when you want to use the TPU to accelerate your embedding lookups via TPU embeddings.
```
column = tf.feature_column.categorical_column_with_identity(...)
tpu_column = tf.tpu.experimental.embedding_column(column, 10)
...
def model_fn(features):
dense_feature = tf.keras.layers.DenseFeatures(tpu_column)
embedded_feature = dense_feature(features)
...
estimator = tf.estimator.tpu.TPUEstimator(
model_fn=model_fn,
...
embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
feature_columns=[tpu_column],
...))
```
| Args |
| `categorical_column` | A categorical column returned from `categorical_column_with_identity`, `weighted_categorical_column`, `categorical_column_with_vocabulary_file`, `categorical_column_with_vocabulary_list`, `sequence_categorical_column_with_identity`, `sequence_categorical_column_with_vocabulary_file`, `sequence_categorical_column_with_vocabulary_list` |
| `dimension` | An integer specifying dimension of the embedding, must be > 0. |
| `combiner` | A string specifying how to reduce if there are multiple entries in a single row for a non-sequence column. For more information, see [`tf.feature_column.embedding_column`](../../../../feature_column/embedding_column). |
| `initializer` | A variable initializer function to be used in embedding variable initialization. If not specified, defaults to [`tf.compat.v1.truncated_normal_initializer`](../../truncated_normal_initializer) with mean `0.0` and standard deviation `1/sqrt(dimension)`. |
| `max_sequence_length` | A non-negative integer specifying the max sequence length. Any sequence shorter than this will be padded with 0 embeddings and any sequence longer will be truncated. This must be positive for sequence features and 0 for non-sequence features. |
| `learning_rate_fn` | A function that takes global step and returns learning rate for the embedding table. If you intend to use the same learning rate for multiple embedding tables, please ensure that you pass the exact same python function to all calls of embedding\_column, otherwise performance may suffer. |
| `embedding_lookup_device` | The device on which to run the embedding lookup. Valid options are "cpu", "tpu\_tensor\_core", and "tpu\_embedding\_core". If specifying "tpu\_tensor\_core", a tensor\_core\_shape must be supplied. If not specified, the default behavior is embedding lookup on "tpu\_embedding\_core" for training and "cpu" for inference. Valid options for training : ["tpu\_embedding\_core", "tpu\_tensor\_core"] Valid options for serving : ["cpu", "tpu\_tensor\_core"] For training, tpu\_embedding\_core is good for large embedding vocab (>1M), otherwise, tpu\_tensor\_core is often sufficient. For serving, doing embedding lookup on tpu\_tensor\_core during serving is a way to reduce host cpu usage in cases where that is a bottleneck. |
| `tensor_core_shape` | If supplied, a list of integers which specifies the intended dense shape to run embedding lookup for this feature on TensorCore. The batch dimension can be left None or -1 to indicate a dynamic shape. Only rank 2 shapes currently supported. |
| `use_safe_embedding_lookup` | If true, uses safe\_embedding\_lookup\_sparse instead of embedding\_lookup\_sparse. safe\_embedding\_lookup\_sparse ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. This only applies to rank 2 (NxM) shaped input tensors. Defaults to true, consider turning off if the above checks are not needed. Note that having empty rows will not trigger any error though the output result might be 0 or omitted. |
| Returns |
| A `_TPUEmbeddingColumnV2`. |
| Raises |
| `ValueError` | if `dimension` not > 0. |
| `ValueError` | if `initializer` is specified but not callable. |
Module: tf.compat.v1.tpu.experimental.embedding
===============================================
Public API for tf.tpu.experimental.embedding namespace.
Classes
-------
[`class Adagrad`](../../../../tpu/experimental/embedding/adagrad): Optimization parameters for Adagrad with TPU embeddings.
[`class AdagradMomentum`](../../../../tpu/experimental/embedding/adagradmomentum): Optimization parameters for Adagrad + Momentum with TPU embeddings.
[`class Adam`](../../../../tpu/experimental/embedding/adam): Optimization parameters for Adam with TPU embeddings.
[`class FTRL`](../../../../tpu/experimental/embedding/ftrl): Optimization parameters for FTRL with TPU embeddings.
[`class FeatureConfig`](../../../../tpu/experimental/embedding/featureconfig): Configuration data for one embedding feature.
[`class SGD`](../../../../tpu/experimental/embedding/sgd): Optimization parameters for stochastic gradient descent for TPU embeddings.
[`class TPUEmbedding`](../../../../tpu/experimental/embedding/tpuembedding): The TPUEmbedding mid level API.
[`class TPUEmbeddingForServing`](../../../../tpu/experimental/embedding/tpuembeddingforserving): The TPUEmbedding mid level API running on CPU for serving.
[`class TPUEmbeddingV0`](../../../../tpu/experimental/embedding/tpuembeddingv0): The TPUEmbedding mid level API running on TPU without Embedding accelerator.
[`class TableConfig`](../../../../tpu/experimental/embedding/tableconfig): Configuration data for one embedding table.
Functions
---------
[`serving_embedding_lookup(...)`](../../../../tpu/experimental/embedding/serving_embedding_lookup): Apply standard lookup ops with [`tf.tpu.experimental.embedding`](../../../../tpu/experimental/embedding) configs.
tf.compat.v1.tpu.experimental.AdamParameters
============================================
Optimization parameters for Adam with TPU embeddings.
```
tf.compat.v1.tpu.experimental.AdamParameters(
learning_rate: float,
beta1: float = 0.9,
beta2: float = 0.999,
epsilon: float = 1e-08,
lazy_adam: bool = True,
sum_inside_sqrt: bool = True,
use_gradient_accumulation: bool = True,
clip_weight_min: Optional[float] = None,
clip_weight_max: Optional[float] = None,
weight_decay_factor: Optional[float] = None,
multiply_weight_decay_factor_by_learning_rate: Optional[bool] = None,
clip_gradient_min: Optional[float] = None,
clip_gradient_max: Optional[float] = None
)
```
Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the `optimization_parameters` argument to set the optimizer and its parameters. See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` for more details.
```
estimator = tf.estimator.tpu.TPUEstimator(
...
embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
...
optimization_parameters=tf.tpu.experimental.AdamParameters(0.1),
...))
```
| Args |
| `learning_rate` | a floating point value. The learning rate. |
| `beta1` | A float value. The exponential decay rate for the 1st moment estimates. |
| `beta2` | A float value. The exponential decay rate for the 2nd moment estimates. |
| `epsilon` | A small constant for numerical stability. |
| `lazy_adam` | Use lazy Adam instead of Adam. Lazy Adam trains faster. See `optimization_parameters.proto` for details. |
| `sum_inside_sqrt` | This improves training speed. Please see `optimization_parameters.proto` for details. |
| `use_gradient_accumulation` | setting this to `False` makes embedding gradients calculation less accurate but faster. Please see `optimization_parameters.proto` for details. |
| `clip_weight_min` | the minimum value to clip by; None means -infinity. |
| `clip_weight_max` | the maximum value to clip by; None means +infinity. |
| `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. |
| `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. |
| `clip_gradient_min` | the minimum value to clip by; None means -infinity. Gradient accumulation must be set to true if this is set. |
| `clip_gradient_max` | the maximum value to clip by; None means +infinity. Gradient accumulation must be set to true if this is set. |
tf.compat.v1.tpu.experimental.FtrlParameters
============================================
Optimization parameters for Ftrl with TPU embeddings.
```
tf.compat.v1.tpu.experimental.FtrlParameters(
learning_rate: float,
learning_rate_power: float = -0.5,
initial_accumulator_value: float = 0.1,
l1_regularization_strength: float = 0.0,
l2_regularization_strength: float = 0.0,
use_gradient_accumulation: bool = True,
clip_weight_min: Optional[float] = None,
clip_weight_max: Optional[float] = None,
weight_decay_factor: Optional[float] = None,
multiply_weight_decay_factor_by_learning_rate: Optional[bool] = None,
multiply_linear_by_learning_rate: bool = False,
beta: float = 0,
allow_zero_accumulator: bool = False,
clip_gradient_min: Optional[float] = None,
clip_gradient_max: Optional[float] = None
)
```
Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the `optimization_parameters` argument to set the optimizer and its parameters. See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` for more details.
```
estimator = tf.estimator.tpu.TPUEstimator(
...
embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
...
optimization_parameters=tf.tpu.experimental.FtrlParameters(0.1),
...))
```
| Args |
| `learning_rate` | a floating point value. The learning rate. |
| `learning_rate_power` | A float value, must be less or equal to zero. Controls how the learning rate decreases during training. Use zero for a fixed learning rate. See section 3.1 in the [paper](https://www.eecs.tufts.edu/%7Edsculley/papers/ad-click-prediction.pdf). |
| `initial_accumulator_value` | The starting value for accumulators. Only zero or positive values are allowed. |
| `l1_regularization_strength` | A float value, must be greater than or equal to zero. |
| `l2_regularization_strength` | A float value, must be greater than or equal to zero. |
| `use_gradient_accumulation` | setting this to `False` makes embedding gradients calculation less accurate but faster. Please see `optimization_parameters.proto` for details. |
| `clip_weight_min` | the minimum value to clip by; None means -infinity. |
| `clip_weight_max` | the maximum value to clip by; None means +infinity. |
| `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. |
| `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. |
| `multiply_linear_by_learning_rate` | When true, multiplies the usages of the linear slot in the weight update by the learning rate. This is useful when ramping up learning rate from 0 (which would normally produce NaNs). |
| `beta` | The beta parameter for FTRL. |
| `allow_zero_accumulator` | Changes the implementation of the square root to allow for the case of initial\_accumulator\_value being zero. This will cause a slight performance drop. |
| `clip_gradient_min` | the minimum value to clip by; None means -infinity. Gradient accumulation must be set to true if this is set. |
| `clip_gradient_max` | the maximum value to clip by; None means +infinity. Gradient accumulation must be set to true if this is set. |
tf.compat.v1.errors.raise_exception_on_not_ok_status
=========================================================
Context manager to check for C API status.
Methods
-------
### `__enter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L554-L556)
```
__enter__()
```
### `__exit__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L558-L569)
```
__exit__(
type_arg, value_arg, traceback_arg
)
```
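This context manager is mostly used internally; a minimal sketch of the pattern (the body is a placeholder):
```
# A minimal sketch: the yielded status object is handed to low-level
# C API calls; on exit, a non-OK status is raised as the matching
# tf.errors exception.
with tf.compat.v1.errors.raise_exception_on_not_ok_status() as status:
  pass  # pass `status` to a C API call here
```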
tf.compat.v1.errors.error_code_from_exception_type
======================================================
```
tf.compat.v1.errors.error_code_from_exception_type(
cls
)
```
tf.compat.v1.errors.exception_type_from_error_code
======================================================
```
tf.compat.v1.errors.exception_type_from_error_code(
error_code
)
```
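These two functions are inverses of each other; a sketch of the round trip (assuming the usual mapping of `NotFoundError` to the `NOT_FOUND` code):
```
# A sketch of the round trip between exception classes and error codes.
code = tf.compat.v1.errors.error_code_from_exception_type(tf.errors.NotFoundError)
cls = tf.compat.v1.errors.exception_type_from_error_code(code)
assert cls is tf.errors.NotFoundError
```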
tf.compat.v1.random.stateless_multinomial
==========================================
Draws deterministic pseudorandom samples from a multinomial distribution. (deprecated)
```
tf.compat.v1.random.stateless_multinomial(
logits,
num_samples,
seed,
output_dtype=tf.dtypes.int64,
name=None
)
```
This is a stateless version of [`tf.random.categorical`](../../../random/categorical): if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware.
#### Example:
```
# samples has shape [1, 5], where each value is either 0 or 1 with equal
# probability.
samples = tf.random.stateless_categorical(
tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17])
```
| Args |
| `logits` | 2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` represents the unnormalized log-probabilities for all classes. |
| `num_samples` | 0-D. Number of independent samples to draw for each row slice. |
| `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) |
| `output_dtype` | The integer type of the output: `int32` or `int64`. Defaults to `int64`. |
| `name` | Optional name for the operation. |
| Returns |
| The drawn samples of shape `[batch_size, num_samples]`. |
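The deprecated endpoint follows the same calling convention; a sketch mirroring the example above:
```
# Equivalent draw through the deprecated endpoint; output_dtype selects
# the integer type of the sampled class indices.
samples = tf.compat.v1.random.stateless_multinomial(
    tf.math.log([[0.5, 0.5]]), num_samples=5, seed=[7, 17],
    output_dtype=tf.int32)
```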
Module: tf.compat.v1.random.experimental
========================================
Public API for tf.random.experimental namespace.
Classes
-------
[`class Algorithm`](../../../random/algorithm): An enumeration.
[`class Generator`](../../../random/generator): Random-number generator.
Functions
---------
[`create_rng_state(...)`](../../../random/create_rng_state): Creates a RNG state from an integer or a vector.
[`get_global_generator(...)`](../../../random/get_global_generator): Retrieves the global generator.
[`index_shuffle(...)`](../../../random/experimental/index_shuffle): Outputs the position of `index` in a permutation of [0, ..., max\_index].
[`set_global_generator(...)`](../../../random/set_global_generator): Replaces the global generator with another `Generator` object.
[`stateless_fold_in(...)`](../../../random/experimental/stateless_fold_in): Folds in data to an RNG seed to form a new RNG seed.
[`stateless_split(...)`](../../../random/experimental/stateless_split): Splits an RNG seed into `num` new seeds by adding a leading axis.
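As a sketch of how the stateless seed helpers compose (seed values are illustrative):
```
# A sketch composing the stateless-seed helpers; values are illustrative.
seed = tf.constant([1, 2], dtype=tf.int64)
new_seeds = tf.random.experimental.stateless_split(seed, num=2)  # shape [2, 2]
derived = tf.random.experimental.stateless_fold_in(seed, data=42)
```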
tf.compat.v1.MetaGraphDef.MetaInfoDef
=====================================
A ProtocolMessage
| Attributes |
| `any_info` | `Any any_info` |
| `function_aliases` | `repeated FunctionAliasesEntry function_aliases` |
| `meta_graph_version` | `string meta_graph_version` |
| `stripped_default_attrs` | `bool stripped_default_attrs` |
| `stripped_op_list` | `OpList stripped_op_list` |
| `tags` | `repeated string tags` |
| `tensorflow_git_version` | `string tensorflow_git_version` |
| `tensorflow_version` | `string tensorflow_version` |
Child Classes
-------------
[`class FunctionAliasesEntry`](metainfodef/functionaliasesentry)
tf.compat.v1.MetaGraphDef.CollectionDefEntry
============================================
A ProtocolMessage
| Attributes |
| `key` | `string key` |
| `value` | `CollectionDef value` |
tf.compat.v1.MetaGraphDef.SignatureDefEntry
===========================================
A ProtocolMessage
| Attributes |
| `key` | `string key` |
| `value` | `SignatureDef value` |
tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry
==========================================================
A ProtocolMessage
| Attributes |
| `key` | `string key` |
| `value` | `string value` |
tf.compat.v1.TensorInfo.CompositeTensor
=======================================
A ProtocolMessage
| Attributes |
| `components` | `repeated TensorInfo components` |
| `type_spec` | `TypeSpecProto type_spec` |
tf.compat.v1.TensorInfo.CooSparse
=================================
A ProtocolMessage
| Attributes |
| `dense_shape_tensor_name` | `string dense_shape_tensor_name` |
| `indices_tensor_name` | `string indices_tensor_name` |
| `values_tensor_name` | `string values_tensor_name` |
tf.compat.v1.data.Dataset
=========================
Represents a potentially large set of elements.
Inherits From: [`Dataset`](../../../data/dataset)
```
tf.compat.v1.data.Dataset()
```
A `Dataset` can be used to represent an input pipeline as a collection of elements and a "logical plan" of transformations that act on those elements.
| Args |
| `variant_tensor` | A DT\_VARIANT tensor that represents the dataset. |
| Attributes |
| `element_spec` | The type specification of an element of this dataset.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```
For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |
| `output_classes` | Returns the class of each component of an element of this dataset. (deprecated) |
| `output_shapes` | Returns the shape of each component of an element of this dataset. (deprecated) |
| `output_types` | Returns the type of each component of an element of this dataset. (deprecated) |
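In TF1 graph mode, elements are typically consumed through an iterator rather than by Python iteration; a minimal sketch:
```
# A minimal TF1 graph-mode sketch: consume elements via a one-shot iterator.
dataset = tf.compat.v1.data.Dataset.from_tensor_slices([1, 2, 3])
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_element = iterator.get_next()
with tf.compat.v1.Session() as sess:
  print(sess.run(next_element))  # 1
  print(sess.run(next_element))  # 2
```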
Methods
-------
### `apply`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276)
```
apply(
transformation_func
)
```
Applies a transformation function to this dataset.
`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`.
```
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. |
| Returns |
| `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. |
### `as_numpy_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620)
```
as_numpy_iterator()
```
Returns an iterator which converts all elements of the dataset to numpy.
Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
```
This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
```
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
```
`as_numpy_iterator()` will preserve the nested structure of dataset elements.
```
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
```
| Returns |
| An iterable over the elements of the dataset, with their tensors converted to numpy arrays. |
| Raises |
| `TypeError` | if an element contains a non-`Tensor` value. |
| `RuntimeError` | if eager execution is not enabled. |
### `batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754)
```
batch(
batch_size,
drop_remainder=False,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Combines consecutive elements of this dataset into batches.
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
```
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
```
The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
>
> **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch.
>
| Args |
| `batch_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `bucket_by_sequence_length`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971)
```
bucket_by_sequence_length(
element_length_func,
bucket_boundaries,
bucket_batch_sizes,
padded_shapes=None,
padding_values=None,
pad_to_bucket_boundary=False,
no_padding=False,
drop_remainder=False,
name=None
)
```
A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded and batched.
This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency.
Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2.
```
elements = [
[0], [1, 2, 3, 4], [5, 6, 7],
[7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
dataset = tf.data.Dataset.from_generator(
lambda: elements, tf.int64, output_shapes=[None])
dataset = dataset.bucket_by_sequence_length(
element_length_func=lambda elem: tf.shape(elem)[0],
bucket_boundaries=[3, 5],
bucket_batch_sizes=[2, 2, 2])
for elem in dataset.as_numpy_iterator():
print(elem)
[[1 2 3 4]
[5 6 7 0]]
[[ 7 8 9 10 11 0]
[13 14 15 16 19 20]]
[[ 0 0]
[21 22]]
```
| Args |
| `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. |
| `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. |
| `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. |
| `padded_shapes` | Nested structure of [`tf.TensorShape`](../../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../../../data/dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. |
| `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../../../data/dataset#padded_batch). Defaults to padding with 0. |
| `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. |
| `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor) or of same shape). |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
### `cache`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576)
```
cache(
filename='', name=None
)
```
Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
>
> **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
>
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
```
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file")
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file!
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```
>
> **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`.
>
| Args |
| `filename` | A [`tf.string`](../../../../tf#string) scalar [`tf.Tensor`](../../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `cardinality`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754)
```
cardinality()
```
Returns the cardinality of the dataset, if known.
`cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
```
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
```
| Returns |
| A scalar [`tf.int64`](../../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../../data#UNKNOWN_CARDINALITY) respectively. |
### `choose_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471)
```
@staticmethod
choose_from_datasets(
datasets, choice_dataset, stop_on_empty_dataset=True
)
```
Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../data/dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../../../data/dataset) of scalar [`tf.int64`](../../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. See the sketch after the tables below. |
| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |
| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |
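A minimal sketch of the `stop_on_empty_dataset=False` behavior described in the table above (the dataset contents here are illustrative): once the second dataset is exhausted, choices pointing at it are skipped instead of ending the selection:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(2),
            tf.data.Dataset.from_tensors("bar")]
choice_dataset = tf.data.Dataset.from_tensor_slices(
    tf.constant([0, 1, 0, 1], dtype=tf.int64))
result = tf.data.Dataset.choose_from_datasets(
    datasets, choice_dataset, stop_on_empty_dataset=False)
# Expected: [b'foo', b'bar', b'foo'] -- the final choice of the
# exhausted "bar" dataset is skipped rather than stopping selection.
list(result.as_numpy_iterator())
```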
### `concatenate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289)
```
concatenate(
dataset, name=None
)
```
Creates a `Dataset` by concatenating the given dataset with this dataset.
```
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have
# compatible element specs.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
```
| Args |
| `dataset` | `Dataset` to be concatenated. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `enumerate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451)
```
enumerate(
start=0, name=None
)
```
Enumerates the elements of this dataset.
It is similar to Python's `enumerate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
  print(element)
(5, 1)
(6, 2)
(7, 3)
```
```
# The (nested) structure of the input dataset determines the
# structure of elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
  print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
```
| Args |
| `start` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the start value for enumeration. |
| `name` | Optional. A name for the tf.data operations used by `enumerate`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `filter`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246)
```
filter(
predicate, name=None
)
```
Filters this dataset according to `predicate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
  return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
```
| Args |
| `predicate` | A function mapping a dataset element to a boolean. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `filter_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3948-L3965)
```
filter_with_legacy_function(
predicate
)
```
Filters this dataset according to `predicate`. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `filter` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `filter` as this method will be removed in V2.
>
| Args |
| `predicate` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../tf#bool) tensor. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
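Since this method exists only as a migration stopgap, the following is a minimal sketch of the recommended migration to `filter`; graph mode is assumed, as legacy functions predate eager execution, and the predicate itself is unchanged:
```
dataset = tf.compat.v1.data.Dataset.from_tensor_slices([1, 2, 3])
# Legacy escape hatch (deprecated):
legacy = dataset.filter_with_legacy_function(lambda x: x < 3)
# Preferred, V2-compatible equivalent with the same predicate:
preferred = dataset.filter(lambda x: x < 3)
```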
### `flat_map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092)
```
flat_map(
map_func, name=None
)
```
Maps `map_func` across this dataset and flattens the result.
#### The type signature is:
```
def flat_map(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
```
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
[`tf.data.Dataset.interleave()`](../../../data/dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../../../data/dataset#interleave).
| Args |
| `map_func` | A function mapping a dataset element to a dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_generator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173)
```
@staticmethod
from_generator(
generator,
output_types=None,
output_shapes=None,
args=None,
output_signature=None,
name=None
)
```
Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments)
>
> **Note:** The current implementation of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) uses [`tf.numpy_function`](../../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator). In particular, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment.
>
The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function).
The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified.
The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../../typespec) objects from `output_signature` argument:
```
def gen():
  ragged_tensor = tf.ragged.constant([[1, 2], [3]])
  yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
```
There is also a deprecated way to call `from_generator`, using either the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../../tensor) objects with the types defined by `output_types` and with shapes which are either unknown or defined by `output_shapes`.
>
> **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator).
>
>
> **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth.
>
| Args |
| `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. |
| `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) objects corresponding to each component of an element yielded by `generator`. |
| `args` | (Optional.) A tuple of [`tf.Tensor`](../../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. See the sketch after these tables. |
| `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../../typespec) objects corresponding to each component of an element yielded by `generator`. |
| `name` | (Optional.) A name for the tf.data operations used by `from_generator`. |
| Returns |
| `Dataset` | A `Dataset`. |
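As a hedged sketch of the `args` parameter described in the table above: the values are evaluated and handed to `generator` as NumPy arrays, so the generator below receives a NumPy scalar rather than a `tf.Tensor`:
```
def gen(limit):
  # `limit` arrives as a NumPy scalar, not a tf.Tensor.
  for i in range(limit):
    yield i

dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.int64),
    args=(5,))
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```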
### `from_sparse_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3732-L3743)
```
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
```
Splits each rank-N [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor) in this dataset row-wise. (deprecated)
| Args |
| `sparse_tensor` | A [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor). |
| Returns |
| `Dataset` | A `Dataset` of rank-(N-1) sparse tensors. |
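A minimal sketch, assuming a rank-2 input: each element of the resulting dataset is one rank-1 sparse row of the input:
```
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[10, 20],
                            dense_shape=[2, 3])
dataset = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)
# Each element is a rank-1 tf.sparse.SparseTensor with dense_shape [3].
```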
### `from_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809)
```
@staticmethod
from_tensor_slices(
tensors, name=None
)
```
Creates a `Dataset` whose elements are slices of the given tensors.
The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
```
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
```
```
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
```
```
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
```
```
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
```
```
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
  print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729)
```
@staticmethod
from_tensors(
tensors, name=None
)
```
Creates a `Dataset` with a single element, comprising the given tensors.
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead.
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
```
```
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `get_single_element`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671)
```
get_single_element(
name=None
)
```
Returns the single element of the `dataset`.
The function enables you to use a [`tf.data.Dataset`](../../../data/dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates the ease of data transformation on tensors using the optimized [`tf.data.Dataset`](../../../data/dataset) abstraction on top of them.
For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label.
```
def preprocessing_fn(raw_feature):
  # ... the raw_feature is preprocessed as per the use-case
  return feature
raw_features = ... # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
```
In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../../../data/dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn`, and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features.
>
> **Note:** The `dataset` should contain only one element.
>
Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `get_single_element()` method is used to skip the iterator creation process and directly output the batch of features.
This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../../../data/dataset) operations, and you want to use those transformations while serving your model.
#### Keras
```
model = ... # A pre-built or custom model
class PreprocessingModel(tf.keras.Model):
  def __init__(self, model):
    super().__init__()
    self.model = model

  @tf.function(input_signature=[...])
  def serving_fn(self, data):
    ds = tf.data.Dataset.from_tensor_slices(data)
    ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
    ds = ds.batch(batch_size=BATCH_SIZE)
    return tf.argmax(self.model(ds.get_single_element()), axis=-1)
preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ... # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
signatures={'serving_default': preprocessing_model.serving_fn}
)
```
#### Estimator
In the case of estimators, you generally need to define a `serving_input_fn` that prepares the raw features for the model at inference time.
```
def serving_input_fn():
  raw_feature_spec = ...  # Spec for the raw_features
  input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec, default_batch_size=None)
  serving_input_receiver = input_fn()
  raw_features = serving_input_receiver.features

  def preprocessing_fn(raw_feature):
    # ... the raw_feature is preprocessed as per the use-case
    return feature

  dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
             .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
             .batch(BATCH_SIZE))
  processed_features = dataset.get_single_element()
  # Please note that the value of `BATCH_SIZE` should be equal to
  # the size of the leading dimension of `raw_features`. This ensures
  # that `dataset` has only one element, which is a pre-requisite for
  # using `dataset.get_single_element()`.
  return tf.estimator.export.ServingInputReceiver(
      processed_features, serving_input_receiver.receiver_tensors)

estimator = ...  # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A nested structure of [`tf.Tensor`](../../../tensor) objects, corresponding to the single element of `dataset`. |
| Raises |
| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
### `group_by_window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824)
```
group_by_window(
key_func, reduce_func, window_size=None, window_size_func=None, name=None
)
```
Groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`.
```
dataset = tf.data.Dataset.range(10)
window_size = 5
key_func = lambda x: x%2
reduce_func = lambda key, dataset: dataset.batch(window_size)
dataset = dataset.group_by_window(
key_func=key_func,
reduce_func=reduce_func,
window_size=window_size)
for elem in dataset.as_numpy_iterator():
  print(elem)
[0 2 4 6 8]
[1 3 5 7 9]
```
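The `window_size_func` variant mentioned above, as a hedged sketch: the window size is derived from the key, so even keys are grouped in windows of 2 and odd keys in windows of 3 (final windows may be smaller):
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda key, ds: ds.batch(5),
    window_size_func=lambda key: key + 2)  # even keys: 2, odd keys: 3
for elem in dataset.as_numpy_iterator():
  print(elem)
# Windows such as [0 2], [1 3 5], [4 6], ...
```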
| Args |
| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../../tf#int64) tensor. |
| `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. |
| `window_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. |
| `window_size_func` | A function mapping a key to a [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. |
### `interleave`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222)
```
interleave(
map_func,
cycle_length=None,
block_length=None,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Maps `map_func` across this dataset, and interleaves the results.
#### The type signature is:
```
def interleave(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
For example, you can use [`Dataset.interleave()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) to process many input files concurrently:
```
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
  return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
```
The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../../../data/dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.
#### For example:
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
```
>
> **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined.
>
Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`.
```
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
| Args |
| `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../../../data/dataset). |
| `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. |
| `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. |
| `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `list_files`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393)
```
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None, name=None
)
```
A dataset of all files matching one or more glob patterns.
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems.
>
> **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order.
>
#### Example:
If we had the following files on our filesystem:
* /path/to/dir/a.txt
* /path/to/dir/b.py
* /path/to/dir/c.py
If we pass "/path/to/dir/\*.py" as the directory, the dataset would produce:
* /path/to/dir/b.py
* /path/to/dir/c.py
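Expressed as code (assuming those files exist on the filesystem; `shuffle=False` keeps the order deterministic):
```
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset.as_numpy_iterator():
  print(f)
# b'/path/to/dir/b.py'
# b'/path/to/dir/c.py'
```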
| Args |
| `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. |
| `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `name` | Optional. A name for the tf.data operations used by `list_files`. |
| Returns |
| `Dataset` | A `Dataset` of strings corresponding to file names. |
### `make_initializable_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3601-L3648)
```
make_initializable_iterator(
shared_name=None
)
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be in an uninitialized state, and you must run the `iterator.initializer` operation before using it:
>
```
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
  while True:
    value = sess.run(next_value)
    ...
except tf.errors.OutOfRangeError:
  pass
```
| Args |
| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of this dataset. |
| Raises |
| `RuntimeError` | If eager execution is enabled. |
### `make_one_shot_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3509-L3548)
```
make_one_shot_iterator()
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see `make_initializable_iterator`.
>
#### Example:
```
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
  while True:
    value = sess.run(next_value)
    ...
except tf.errors.OutOfRangeError:
  pass
```
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of this dataset. |
### `map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056)
```
map(
map_func, num_parallel_calls=None, deterministic=None, name=None
)
```
Maps `map_func` across the elements of this dataset.
This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
For example, `map` can be used for adding 1 to each element, or projecting a subset of element components.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
```
The input signature of `map_func` is determined by the structure of each element in this dataset.
```
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
```
```
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
```
```
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
```
The value or values returned by `map_func` determine the structure of each element in the returned dataset.
```
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
  return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
  return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
  return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
```
`map_func` can accept as arguments and return any type of dataset element.
Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options:
1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code.
2) Use [`tf.py_function`](../../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
  return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
3) Use [`tf.numpy_function`](../../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../../py_function) accepts [`tf.Tensor`](../../../tensor) whereas [`tf.numpy_function`](../../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
  return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
Note that the use of [`tf.numpy_function`](../../../numpy_function) and [`tf.py_function`](../../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL).
Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value.
| Args |
| `map_func` | A function mapping a dataset element to another dataset element. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `map_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3872-L3920)
```
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
```
Maps `map_func` across the elements of this dataset. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `map` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map` as this method will be removed in V2.
>
| Args |
| `map_func` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another (nested) structure of tensors. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../../tf#int32) scalar [`tf.Tensor`](../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, this boolean controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| Returns |
| `Dataset` | A `Dataset`. |
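As with `filter_with_legacy_function`, a minimal migration sketch; graph mode is assumed, and the mapping function itself is unchanged:
```
dataset = tf.compat.v1.data.Dataset.from_tensor_slices([1, 2, 3])
# Legacy escape hatch (deprecated):
legacy = dataset.map_with_legacy_function(lambda x: x + 1)
# Preferred, V2-compatible equivalent with the same map function:
preferred = dataset.map(lambda x: x + 1)
```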
### `options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L446-L464)
```
options()
```
Returns the options for this dataset and its inputs.
| Returns |
| A [`tf.data.Options`](../../../data/options) object representing the dataset options. |
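A minimal sketch: options applied upstream via `with_options` are visible through `options()` on downstream datasets:
```
options = tf.data.Options()
options.deterministic = False
dataset = tf.data.Dataset.range(5).with_options(options)
print(dataset.options().deterministic)
# False
```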
### `padded_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889)
```
padded_batch(
batch_size,
padded_shapes=None,
padding_values=None,
drop_remainder=False,
name=None
)
```
Combines consecutive elements of this dataset into padded batches.
This transformation combines multiple consecutive elements of the input dataset into a single element.
Like [`tf.data.Dataset.batch`](../../../data/dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
Unlike [`tf.data.Dataset.batch`](../../../data/dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element:
* If the dimension is a constant, the component will be padded out to that length in that dimension.
* If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
```
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
  print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
  print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
  print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
  print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
```
See also [`tf.data.experimental.dense_to_sparse_batch`](../../../data/experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor).
| Args |
| `batch_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) or [`tf.int64`](../../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. |
| `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. |
| `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> |
### `prefetch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321)
```
prefetch(
buffer_size, name=None
)
```
Creates a `Dataset` that prefetches elements from this dataset.
Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
>
> **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each).
>
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
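Restating the note above as code: prefetching counts dataset elements, whatever those elements happen to be:
```
examples = tf.data.Dataset.range(100)
# Prefetches 2 elements (2 examples):
a = examples.prefetch(2)
# Prefetches 2 elements (2 batches, of 20 examples each):
b = examples.batch(20).prefetch(2)
```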
| Args |
| `buffer_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. |
| `name` | Optional. A name for the tf.data transformation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `random`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992)
```
@staticmethod
random(
seed=None, name=None
)
```
Creates a `Dataset` of pseudorandom values.
The dataset generates a sequence of uniformly distributed integer values.
```
ds1 = tf.data.Dataset.random(seed=4).take(10)
ds2 = tf.data.Dataset.random(seed=4).take(10)
print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator()))
True
```
| Args |
| `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `range`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211)
```
@staticmethod
range(
*args, **kwargs
)
```
Creates a `Dataset` of a step-separated range of values.
```
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
```
| Args |
| `*args` | follows the same semantics as Python's `range`. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. |
| `**kwargs` | * output\_type: Its expected dtype. (Optional, default: [`tf.int64`](../../../../tf#int64)).
* name: (Optional.) A name for the tf.data operation.
|
| Returns |
| `Dataset` | A `RangeDataset`. |
| Raises |
| `ValueError` | if len(args) == 0. |
### `reduce`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544)
```
reduce(
initial_state, reduce_func, name=None
)
```
Reduces the input dataset to a single element.
The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
```
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
```
| Args |
| `initial_state` | An element representing the initial state of the transformation. |
| `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new element. The structure of `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A dataset element corresponding to the final state of the transformation. |
### `rejection_resample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272)
```
rejection_resample(
class_func, target_dist, initial_dist=None, seed=None, name=None
)
```
A transformation that resamples a dataset to a target distribution.
Let's consider the following example, where a dataset with an initial class distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution.
```
initial_dist = [0.6, 0.4]
num_classes = len(initial_dist)
num_samples = 1000
data_np = np.random.choice(num_classes, num_samples, p=initial_dist)
dataset = tf.data.Dataset.from_tensor_slices(data_np)
```
The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution.
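A hedged sketch for inspecting that initial distribution (the exact counts are random):
```
import collections
print(collections.Counter(data_np))
# e.g. Counter({0: 607, 1: 393})
```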
```
target_dist = [0.5, 0.5]
resampled_dataset = dataset.rejection_resample(
class_func=lambda x: x,
target_dist=target_dist,
initial_dist=initial_dist)
resampled_dataset = resampled_dataset.map(
lambda class_func_result, data: data)
```
The distribution of classes in `resampled_dataset` will now be close to the target distribution.
| Args |
| `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../../tf#int32) tensor. Values should be in `[0, num_classes)`. |
| `target_dist` | A floating point type tensor, shaped `[num_classes]`. |
| `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. |
| `seed` | (Optional.) Python integer seed for the resampler. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset` |
### `repeat`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416)
```
repeat(
count=None, name=None
)
```
Repeats this dataset so each original value is seen `count` times.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```
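With the default `count=None` the dataset repeats indefinitely, so it is typically bounded with `take`:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat()  # repeats indefinitely
list(dataset.take(7).as_numpy_iterator())
# [1, 2, 3, 1, 2, 3, 1]
```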
>
> **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements.
>
| Args |
| `count` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `sample_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412)
```
@staticmethod
sample_from_datasets(
datasets, weights=None, seed=None, stop_on_empty_dataset=False
)
```
Samples elements at random from the datasets in `datasets`.
Creates a dataset by interleaving elements of `datasets` with `weight[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets:
```
dataset1 = tf.data.Dataset.range(0, 3)
dataset2 = tf.data.Dataset.range(100, 103)
```
Suppose that we sample from these 2 datasets with the following weights:
```
sample_dataset = tf.data.Dataset.sample_from_datasets(
[dataset1, dataset2], weights=[0.5, 0.5])
```
One possible outcome of elements in sample\_dataset is:
```
print(list(sample_dataset.as_numpy_iterator()))
# [100, 0, 1, 101, 2, 102]
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../data/dataset) objects with compatible structure. |
| `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../../../data/dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. See the sketch after the tables below. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Default to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. |
| Raises |
| `TypeError` | If the `datasets` or `weights` arguments have the wrong type. |
| `ValueError` | * If `datasets` is empty, or
* If `weights` is specified and does not match the length of `datasets`.
|
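As noted in the table above, `weights` may also be a dataset of weight vectors; a hedged sketch reusing `dataset1` and `dataset2` from the example above:
```
weights_ds = tf.data.Dataset.from_tensors([0.9, 0.1]).repeat()
sample_dataset = tf.data.Dataset.sample_from_datasets(
    [dataset1, dataset2], weights=weights_ds)
# Elements are now drawn from `dataset1` roughly 90% of the time.
```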
### `scan`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130)
```
scan(
initial_state, scan_func, name=None
)
```
A transformation that scans a function across an input dataset.
This transformation is a stateful relative of [`tf.data.Dataset.map`](../../../data/dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`.
```
dataset = tf.data.Dataset.range(10)
initial_state = tf.constant(0, dtype=tf.int64)
scan_func = lambda state, i: (state + i, state + i)
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```
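Because `scan` can carry more than one state tensor, here is a hedged sketch accumulating a Fibonacci pair as its state:
```
dataset = tf.data.Dataset.range(8)
initial_state = (tf.constant(0, dtype=tf.int64),
                 tf.constant(1, dtype=tf.int64))
scan_func = lambda state, _: ((state[1], state[0] + state[1]), state[0])
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
# [0, 1, 1, 2, 3, 5, 8, 13]
```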
| Args |
| `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. |
| `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `shard`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685)
```
shard(
num_shards, index, name=None
)
```
Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i.
```
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
```
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can shard elements as follows:
```
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
#### Important caveats:
* Be sure to shard before you use any randomizing operator (such as shuffle).
* Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
```
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
| Args |
| `num_shards` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of shards operating in parallel. |
| `index` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the worker index. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `InvalidArgumentError` | If `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation (e.g., passing in a placeholder tensor bypasses the early checking and instead results in an error during a `session.run` call). |
### `shuffle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523)
```
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
```
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.
`reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 0, 2]
```
In TF 2.0, [`tf.data.Dataset`](../../../data/dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 2, 0]
```
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 0, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements from this dataset from which the new dataset will sample. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `skip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616)
```
skip(
count, name=None
)
```
Creates a `Dataset` that skips `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
```
| Args |
| `count` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `snapshot`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099)
```
snapshot(
path,
compression='AUTO',
reader_func=None,
shard_func=None,
name=None
)
```
API to persist the output of the input dataset.
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run.
This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time.
<https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature.
Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters.
`shard_func` is a user-specified function that maps input elements to snapshot shards.
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written.
```
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...)
dataset = dataset.map(lambda x, y: y)
```
`reader_func` is a user-specified function that accepts a single argument: a `Dataset` of `Dataset`s, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a `Dataset` of elements of the original dataset.
Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:
```
def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func)
```
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
| Args |
| `path` | Required. A directory to use for storing / loading the snapshot to / from. |
| `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. |
| `reader_func` | Optional. A function to control how to read data from snapshot shards. |
| `shard_func` | Optional. A function to control how to shard data when writing a snapshot. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `take`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596)
```
take(
count, name=None
)
```
Creates a `Dataset` with at most `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `count` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `take_while`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150)
```
take_while(
predicate, name=None
)
```
A transformation that stops dataset iteration based on a `predicate`.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take_while(lambda x: x < 5)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../tf#bool) tensor. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unbatch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698)
```
unbatch(
name=None
)
```
Splits elements of a dataset into multiple elements.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`.
```
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
```
>
> **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unique`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173)
```
unique(
name=None
)
```
A transformation that discards duplicate elements of a `Dataset`.
Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```
>
> **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../../tf#int32), [`tf.int64`](../../../../tf#int64) or [`tf.string`](../../../../tf#string) type.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)
```
window(
size, shift=None, stride=1, drop_remainder=False, name=None
)
```
Returns a dataset of "windows".
Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).
#### For example:
```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```
Since windows are datasets, they can be iterated over:
```
for window in dataset:
print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```
#### Shift
The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```
#### Stride
The `stride` argument determines the stride between input elements within a window.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```
#### Nested elements
When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them.
#### The type signature is:
```
def window(
self: Dataset[Nest[T]], ...
) -> Dataset[Nest[Dataset[T]]]
```
Applying `window` to a `Dataset` of tuples gives a tuple of windows:
```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
[6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```
```
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```
Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:
```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]})
dataset = dataset.window(2)
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```
#### Flatten a dataset of windows
The [`Dataset.flat_map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flat_map) and [`Dataset.interleave`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.
The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.
For example, to turn each window into a dense tensor:
```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1,
drop_remainder=True)
batched = dataset.flat_map(lambda x:x.batch(3))
for batch in batched:
print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```
| Args |
| `size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. |
| `shift` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
| `stride` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |
### `with_options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)
```
with_options(
options, name=None
)
```
Returns a new [`tf.data.Dataset`](../../../data/dataset) with the given options set.
The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```
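Options set through separate `with_options` calls are merged. A minimal sketch of the merge behavior, assuming `experimental_optimization.map_parallelization` as a second non-default option:
```
ds = tf.data.Dataset.range(5)
options1 = tf.data.Options()
options1.deterministic = False
options2 = tf.data.Options()
options2.experimental_optimization.map_parallelization = True
# Both non-default settings survive the merge because they
# touch different options.
ds = ds.with_options(options1).with_options(options2)
```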
| Args |
| `options` | A [`tf.data.Options`](../../../data/options) that identifies the options to use. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` with the given options. |
| Raises |
| `ValueError` | when an option is set more than once to a non-default value |
### `zip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)
```
@staticmethod
zip(
datasets, name=None
)
```
Creates a `Dataset` by zipping together the given datasets.
This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```
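Since `datasets` may be any nested structure, a dictionary also works; a minimal sketch reusing `a` and `b` from above:
```
ds = tf.data.Dataset.zip({'x': a, 'y': b})
list(ds.as_numpy_iterator())
[{'x': 1, 'y': 4}, {'x': 2, 'y': 5}, {'x': 3, 'y': 6}]
```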
| Args |
| `datasets` | A (nested) structure of datasets. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `__bool__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__bool__()
```
### `__iter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L481-L497)
```
__iter__()
```
Creates an iterator for elements of this dataset.
The returned iterator implements the Python Iterator protocol.
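A minimal sketch (requires eager execution):
```
dataset = tf.data.Dataset.range(3)
iterator = iter(dataset)
print(next(iterator).numpy())
0
```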
| Returns |
| An [`tf.data.Iterator`](../../../data/iterator) for the elements of this dataset. |
| Raises |
| `RuntimeError` | If not inside of tf.function and not executing eagerly. |
### `__len__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)
```
__len__()
```
Returns the length of the dataset if it is known and finite.
This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../../../data/dataset#cardinality) instead.
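A minimal sketch:
```
dataset = tf.data.Dataset.range(42)
len(dataset)
42
```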
| Returns |
| An integer representing the length of the dataset. |
| Raises |
| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |
### `__nonzero__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__nonzero__()
```
tensorflow tf.compat.v1.data.get_output_shapes tf.compat.v1.data.get\_output\_shapes
=====================================
Returns the output shapes for elements of the input dataset / iterator.
```
tf.compat.v1.data.get_output_shapes(
dataset_or_iterator
)
```
Migrate to TF2
--------------
This is a legacy API for inspecting the type signature of dataset elements. In TF 2, you should use the [`tf.data.Dataset.element_spec`](../../../data/dataset#element_spec) attribute instead.
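A minimal sketch of the legacy call next to its TF2 replacement:
```
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
tf.compat.v1.data.get_output_shapes(dataset)
TensorShape([2])
dataset.element_spec # the TF2 way
TensorSpec(shape=(2,), dtype=tf.int32, name=None)
```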
Description
-----------
| Args |
| `dataset_or_iterator` | A [`tf.data.Dataset`](../../../data/dataset) or [`tf.data.Iterator`](../../../data/iterator). |
| Returns |
| A (nested) structure of [`tf.TensorShape`](../../../tensorshape) objects matching the structure of the dataset / iterator elements and specifying the shape of the individual components. |
tensorflow tf.compat.v1.data.TFRecordDataset tf.compat.v1.data.TFRecordDataset
=================================
A `Dataset` comprising records from one or more TFRecord files.
Inherits From: [`Dataset`](dataset), [`Dataset`](../../../data/dataset)
```
tf.compat.v1.data.TFRecordDataset(
filenames,
compression_type=None,
buffer_size=None,
num_parallel_reads=None,
name=None
)
```
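A minimal usage sketch (the file paths are placeholders):
```
# Read GZIP-compressed TFRecord files, four at a time.
dataset = tf.compat.v1.data.TFRecordDataset(
    ["/path/to/file1.tfrecord", "/path/to/file2.tfrecord"],
    compression_type="GZIP",
    num_parallel_reads=4)
```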
| Args |
| `filenames` | A [`tf.string`](../../../../tf#string) tensor or [`tf.data.Dataset`](../../../data/dataset) containing one or more filenames. |
| `compression_type` | (Optional.) A [`tf.string`](../../../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. |
| `buffer_size` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar representing the number of bytes in the read buffer. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value between 1 and 100 MB. If `None`, a sensible default for both local and remote file systems is used. |
| `num_parallel_reads` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are output in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If `None`, files will be read sequentially. |
| `name` | (Optional.) A name for the tf.data operation. |
| Raises |
| `TypeError` | If any argument does not have the expected type. |
| `ValueError` | If any argument does not have the expected shape. |
| Attributes |
| `element_spec` | The type specification of an element of this dataset.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```
For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |
| `output_classes` | Returns the class of each component of an element of this dataset. (deprecated) |
| `output_shapes` | Returns the shape of each component of an element of this dataset. (deprecated) |
| `output_types` | Returns the type of each component of an element of this dataset. (deprecated) |
Methods
-------
### `apply`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276)
```
apply(
transformation_func
)
```
Applies a transformation function to this dataset.
`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`.
```
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. |
| Returns |
| `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. |
### `as_numpy_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620)
```
as_numpy_iterator()
```
Returns an iterator which converts all elements of the dataset to numpy.
Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
```
This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
```
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
```
`as_numpy_iterator()` will preserve the nested structure of dataset elements.
```
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
```
| Returns |
| An iterable over the elements of the dataset, with their tensors converted to numpy arrays. |
| Raises |
| `TypeError` | if an element contains a non-`Tensor` value. |
| `RuntimeError` | if eager execution is not enabled. |
### `batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754)
```
batch(
batch_size,
drop_remainder=False,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Combines consecutive elements of this dataset into batches.
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
```
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
```
The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
>
> **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch.
>
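For example, batch computation can be parallelized, optionally trading determinism for speed; a minimal sketch:
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, num_parallel_calls=tf.data.AUTOTUNE,
                        deterministic=False)
```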
| Args |
| `batch_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `bucket_by_sequence_length`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971)
```
bucket_by_sequence_length(
element_length_func,
bucket_boundaries,
bucket_batch_sizes,
padded_shapes=None,
padding_values=None,
pad_to_bucket_boundary=False,
no_padding=False,
drop_remainder=False,
name=None
)
```
A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded and batched.
This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch, which increases training step efficiency.
Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2.
```
elements = [
[0], [1, 2, 3, 4], [5, 6, 7],
[7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
dataset = tf.data.Dataset.from_generator(
lambda: elements, tf.int64, output_shapes=[None])
dataset = dataset.bucket_by_sequence_length(
element_length_func=lambda elem: tf.shape(elem)[0],
bucket_boundaries=[3, 5],
bucket_batch_sizes=[2, 2, 2])
for elem in dataset.as_numpy_iterator():
print(elem)
[[1 2 3 4]
[5 6 7 0]]
[[ 7 8 9 10 11 0]
[13 14 15 16 19 20]]
[[ 0 0]
[21 22]]
```
| Args |
| `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. |
| `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. |
| `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. |
| `padded_shapes` | Nested structure of [`tf.TensorShape`](../../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../../../data/dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. |
| `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../../../data/dataset#padded_batch). Defaults to padding with 0. |
| `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. |
| `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor) or of same shape). |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
### `cache`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576)
```
cache(
filename='', name=None
)
```
Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
>
> **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
>
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
```
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file")
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file!
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```
>
> **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`.
>
| Args |
| `filename` | A [`tf.string`](../../../../tf#string) scalar [`tf.Tensor`](../../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `cardinality`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754)
```
cardinality()
```
Returns the cardinality of the dataset, if known.
`cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
```
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
```
| Returns |
| A scalar [`tf.int64`](../../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../../data#UNKNOWN_CARDINALITY) respectively. |
### `choose_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471)
```
@staticmethod
choose_from_datasets(
datasets, choice_dataset, stop_on_empty_dataset=True
)
```
Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../data/dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../../../data/dataset) of scalar [`tf.int64`](../../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. |
| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |
| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |
### `concatenate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289)
```
concatenate(
dataset, name=None
)
```
Creates a `Dataset` by concatenating the given dataset with this dataset.
```
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have
# compatible element specs.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
```
| Args |
| `dataset` | `Dataset` to be concatenated. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `enumerate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451)
```
enumerate(
start=0, name=None
)
```
Enumerates the elements of this dataset.
It is similar to Python's `enumerate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
```
```
# The (nested) structure of the input dataset determines the
# structure of elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
```
| Args |
| `start` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the start value for enumeration. |
| `name` | Optional. A name for the tf.data operations used by `enumerate`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `filter`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246)
```
filter(
predicate, name=None
)
```
Filters this dataset according to `predicate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
```
| Args |
| `predicate` | A function mapping a dataset element to a boolean. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `filter_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3948-L3965)
```
filter_with_legacy_function(
predicate
)
```
Filters this dataset according to `predicate`. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `filter` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `filter` as this method will be removed in V2.
>
| Args |
| `predicate` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../tf#bool) tensor. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `flat_map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092)
```
flat_map(
map_func, name=None
)
```
Maps `map_func` across this dataset and flattens the result.
#### The type signature is:
```
def flat_map(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
```
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
[`tf.data.Dataset.interleave()`](../../../data/dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../../../data/dataset#interleave)
| Args |
| `map_func` | A function mapping a dataset element to a dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_generator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173)
```
@staticmethod
from_generator(
generator,
output_types=None,
output_shapes=None,
args=None,
output_signature=None,
name=None
)
```
Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments)
>
> **Note:** The current implementation of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) uses [`tf.numpy_function`](../../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator). In particular, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment.
>
The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function).
The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified.
The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../../typespec) objects from `output_signature` argument:
```
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
```
There is also a deprecated way to call `from_generator`, using either the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../../tensor) objects with the types defined by `output_types` and with the shapes which are either unknown or defined by `output_shapes`.
>
> **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator).
>
>
> **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth.
>
| Args |
| `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. |
| `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) objects corresponding to each component of an element yielded by `generator`. |
| `args` | (Optional.) A tuple of [`tf.Tensor`](../../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. |
| `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../../typespec) objects corresponding to each component of an element yielded by `generator`. |
| `name` | (Optional.) A name for the tf.data operations used by `from_generator`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_sparse_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3732-L3743)
```
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
```
Splits each rank-N [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor) in this dataset row-wise. (deprecated)
| Args |
| `sparse_tensor` | A [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor). |
| Returns |
| `Dataset` | A `Dataset` of rank-(N-1) sparse tensors. |
### `from_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809)
```
@staticmethod
from_tensor_slices(
tensors, name=None
)
```
Creates a `Dataset` whose elements are slices of the given tensors.
The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
```
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
```
```
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
```
```
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
```
```
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
```
```
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729)
```
@staticmethod
from_tensors(
tensors, name=None
)
```
Creates a `Dataset` with a single element, comprising the given tensors.
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead.
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
```
```
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `get_single_element`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671)
```
get_single_element(
name=None
)
```
Returns the single element of the `dataset`.
The function enables you to use a [`tf.data.Dataset`](../../../data/dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates data transformations on tensors using the optimized [`tf.data.Dataset`](../../../data/dataset) abstraction on top of them.
For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label.
```
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
raw_features = ... # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
```
In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../../../data/dataset). Next, each `raw_feature` was mapped using `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features.
>
> **Note:** The `dataset` should contain only one element.
>
Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `dataset.get_single_element()` function is used to skip the iterator creation process and directly output the batch of features.
This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../../../data/dataset) operations, and you want to use those transformations while serving your model.
#### Keras
```
model = ... # A pre-built or custom model
class PreprocessingModel(tf.keras.Model):
def __init__(self, model):
super().__init__(self)
self.model = model
@tf.function(input_signature=[...])
def serving_fn(self, data):
ds = tf.data.Dataset.from_tensor_slices(data)
ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
ds = ds.batch(batch_size=BATCH_SIZE)
return tf.argmax(self.model(ds.get_single_element()), axis=-1)
preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ... # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
signatures={'serving_default': preprocessing_model.serving_fn}
)
```
#### Estimator
In the case of estimators, you generally need to define a `serving_input_fn`, which requires the features to be processed by the model during inference.
```
def serving_input_fn():
raw_feature_spec = ... # Spec for the raw_features
input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    raw_feature_spec, default_batch_size=None)
serving_input_receiver = input_fn()
raw_features = serving_input_receiver.features
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
# Please note that the value of `BATCH_SIZE` should be equal to
# the size of the leading dimension of `raw_features`. This ensures
# that `dataset` has only one element, which is a prerequisite for
# using `dataset.get_single_element()`.
return tf.estimator.export.ServingInputReceiver(
processed_features, serving_input_receiver.receiver_tensors)
estimator = ... # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A nested structure of [`tf.Tensor`](../../../tensor) objects, corresponding to the single element of `dataset`. |
| Raises |
| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
### `group_by_window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824)
```
group_by_window(
key_func, reduce_func, window_size=None, window_size_func=None, name=None
)
```
Groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`.
```
dataset = tf.data.Dataset.range(10)
window_size = 5
key_func = lambda x: x%2
reduce_func = lambda key, dataset: dataset.batch(window_size)
dataset = dataset.group_by_window(
key_func=key_func,
reduce_func=reduce_func,
window_size=window_size)
for elem in dataset.as_numpy_iterator():
print(elem)
[0 2 4 6 8]
[1 3 5 7 9]
```
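Alternatively, `window_size_func` lets the window size depend on the key; a minimal sketch where evens are grouped in windows of 2 and odds in windows of 4:
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda key, ds: ds.batch(4),
    window_size_func=lambda key: (key + 1) * 2)
# Evens arrive in windows of 2 and odds in windows of 4; the
# relative order of the emitted windows may interleave.
```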
| Args |
| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../../tf#int64) tensor. |
| `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. |
| `window_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. |
| `window_size_func` | A function mapping a key to a [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. |
### `interleave`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222)
```
interleave(
map_func,
cycle_length=None,
block_length=None,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Maps `map_func` across this dataset, and interleaves the results.
#### The type signature is:
```
def interleave(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
For example, you can use [`Dataset.interleave()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) to process many input files concurrently:
```
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
```
The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../../../data/dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.
#### For example:
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
```
>
> **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined.
>
Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`.
```
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
| Args |
| `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../../../data/dataset). |
| `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. |
| `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. |
| `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `list_files`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393)
```
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None, name=None
)
```
A dataset of all files matching one or more glob patterns.
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems.
>
> **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order.
>
#### Example:
If we had the following files on our filesystem:
* /path/to/dir/a.txt
* /path/to/dir/b.py
* /path/to/dir/c.py
If we pass "/path/to/dir/\*.py" as the `file_pattern`, the dataset would produce:
* /path/to/dir/b.py
* /path/to/dir/c.py
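A minimal sketch of that call (the paths are illustrative); passing `shuffle=False` makes the order deterministic:
```
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f.numpy())
```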
| Args |
| `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. |
| `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `name` | Optional. A name for the tf.data operations used by `list_files`. |
| Returns |
| `Dataset` | A `Dataset` of strings corresponding to file names. |
### `make_initializable_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3601-L3648)
```
make_initializable_iterator(
shared_name=None
)
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be in an uninitialized state, and you must run the `iterator.initializer` operation before using it:
>
```
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
```
| Args |
| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of this dataset. |
| Raises |
| `RuntimeError` | If eager execution is enabled. |
### `make_one_shot_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3509-L3548)
```
make_one_shot_iterator()
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see `make_initializable_iterator`.
>
#### Example:
```
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
```
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of this dataset. |
### `map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056)
```
map(
map_func, num_parallel_calls=None, deterministic=None, name=None
)
```
Maps `map_func` across the elements of this dataset.
This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
For example, `map` can be used for adding 1 to each element, or projecting a subset of element components.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
```
The input signature of `map_func` is determined by the structure of each element in this dataset.
```
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
```
```
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
```
```
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
```
The value or values returned by `map_func` determine the structure of each element in the returned dataset.
```
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
```
`map_func` can accept as arguments and return any type of dataset element.
Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options:
1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code.
2) Use [`tf.py_function`](../../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
3) Use [`tf.numpy_function`](../../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../../py_function) accepts [`tf.Tensor`](../../../tensor) whereas [`tf.numpy_function`](../../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
Note that the use of [`tf.numpy_function`](../../../numpy_function) and [`tf.py_function`](../../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL).
Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value.
| Args |
| `map_func` | A function mapping a dataset element to another dataset element. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `map_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3872-L3920)
```
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
```
Maps `map_func` across the elements of this dataset. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `map` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map` as this method will be removed in V2.
>
| Args |
| `map_func` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another (nested) structure of tensors. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../../tf#int32) scalar [`tf.Tensor`](../../../tensor), representing the number elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, this boolean controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| Returns |
| `Dataset` | A `Dataset`. |
### `options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4009-L4010)
```
options()
```
Returns the options for this dataset and its inputs.
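For example, options set earlier via `with_options` are reflected here (a small sketch):
```
dataset = tf.data.Dataset.range(5)
opts = tf.data.Options()
opts.deterministic = False
dataset = dataset.with_options(opts)
print(dataset.options().deterministic)  # False
```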
| Returns |
| A [`tf.data.Options`](../../../data/options) object representing the dataset options. |
### `padded_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889)
```
padded_batch(
batch_size,
padded_shapes=None,
padding_values=None,
drop_remainder=False,
name=None
)
```
Combines consecutive elements of this dataset into padded batches.
This transformation combines multiple consecutive elements of the input dataset into a single element.
Like [`tf.data.Dataset.batch`](../../../data/dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
Unlike [`tf.data.Dataset.batch`](../../../data/dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element:
* If the dimension is a constant, the component will be padded out to that length in that dimension.
* If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
```
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
```
See also [`tf.data.experimental.dense_to_sparse_batch`](../../../data/experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor).
| Args |
| `batch_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) or [`tf.int64`](../../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. |
| `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. |
| `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> |
### `prefetch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321)
```
prefetch(
buffer_size, name=None
)
```
Creates a `Dataset` that prefetches elements from this dataset.
Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
>
> **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each).
>
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. |
| `name` | Optional. A name for the tf.data transformation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `random`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992)
```
@staticmethod
random(
seed=None, name=None
)
```
Creates a `Dataset` of pseudorandom values.
The dataset generates a sequence of uniformly distributed integer values.
```
ds1 = tf.data.Dataset.random(seed=4).take(10)
ds2 = tf.data.Dataset.random(seed=4).take(10)
print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator()))
True
```
| Args |
| `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `range`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211)
```
@staticmethod
range(
*args, **kwargs
)
```
Creates a `Dataset` of a step-separated range of values.
```
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
```
| Args |
| `*args` | Follows the same semantics as Python's `range`. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. |
| `**kwargs` | * output\_type: The dtype of the elements of the resulting dataset. (Optional, default: [`tf.int64`](../../../../tf#int64)).
* name: (Optional.) A name for the tf.data operation.
|
| Returns |
| `Dataset` | A `RangeDataset`. |
| Raises |
| `ValueError` | if len(args) == 0. |
### `reduce`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544)
```
reduce(
initial_state, reduce_func, name=None
)
```
Reduces the input dataset to a single element.
The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
```
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
```
| Args |
| `initial_state` | An element representing the initial state of the transformation. |
| `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new element. The structure of `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A dataset element corresponding to the final state of the transformation. |
### `rejection_resample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272)
```
rejection_resample(
class_func, target_dist, initial_dist=None, seed=None, name=None
)
```
A transformation that resamples a dataset to a target distribution.
Let's consider the following example, where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution.
```
initial_dist = [0.6, 0.4]
num_classes = len(initial_dist)
num_samples = 1000
data_np = np.random.choice(num_classes, num_samples, p=initial_dist)
dataset = tf.data.Dataset.from_tensor_slices(data_np)
```
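To inspect the class counts, one might do the following (a small sketch; the exact counts will vary with the random draw):
```
import collections
counts = collections.Counter(dataset.as_numpy_iterator())
print(counts)
# e.g. Counter({0: 612, 1: 388})
```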
The class counts in `dataset` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution.
```
target_dist = [0.5, 0.5]
resampled_dataset = dataset.rejection_resample(
class_func=lambda x: x,
target_dist=target_dist,
initial_dist=initial_dist)
resampled_dataset = resampled_dataset.map(
lambda class_func_result, data: data)
```
The class distribution of the elements in `resampled_dataset` will now be close to the target distribution.
| Args |
| `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../../tf#int32) tensor. Values should be in `[0, num_classes)`. |
| `target_dist` | A floating point type tensor, shaped `[num_classes]`. |
| `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. |
| `seed` | (Optional.) Python integer seed for the resampler. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset` |
### `repeat`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416)
```
repeat(
count=None, name=None
)
```
Repeats this dataset so each original value is seen `count` times.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```
>
> **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements.
>
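For instance, a shuffled dataset reshuffles on each repetition by default, so each pass may yield a different order (the output shown is one possibility):
```
dataset = tf.data.Dataset.range(3).shuffle(3).repeat(2)
list(dataset.as_numpy_iterator())
# One possible result: [1, 0, 2, 2, 0, 1]
```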
| Args |
| `count` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset be repeated indefinitely. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `sample_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412)
```
@staticmethod
sample_from_datasets(
datasets, weights=None, seed=None, stop_on_empty_dataset=False
)
```
Samples elements at random from the datasets in `datasets`.
Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets:
```
dataset1 = tf.data.Dataset.range(0, 3)
dataset2 = tf.data.Dataset.range(100, 103)
```
Suppose that we sample from these 2 datasets with the following weights:
```
sample_dataset = tf.data.Dataset.sample_from_datasets(
[dataset1, dataset2], weights=[0.5, 0.5])
```
One possible outcome of elements in sample\_dataset is:
```
print(list(sample_dataset.as_numpy_iterator()))
# [100, 0, 1, 101, 2, 102]
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../data/dataset) objects with compatible structure. |
| `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../../../data/dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. |
| Raises |
| `TypeError` | If the `datasets` or `weights` arguments have the wrong type. |
| `ValueError` | * If `datasets` is empty, or
* If `weights` is specified and does not match the length of `datasets`.
|
### `scan`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130)
```
scan(
initial_state, scan_func, name=None
)
```
A transformation that scans a function across an input dataset.
This transformation is a stateful relative of [`tf.data.Dataset.map`](../../../data/dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`.
```
dataset = tf.data.Dataset.range(10)
initial_state = tf.constant(0, dtype=tf.int64)
scan_func = lambda state, i: (state + i, state + i)
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```
| Args |
| `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. |
| `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `shard`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685)
```
shard(
num_shards, index, name=None
)
```
Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i.
```
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
```
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can shard elements as follows:
```
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
#### Important caveats:
* Be sure to shard before you use any randomizing operator (such as shuffle).
* Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
```
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
| Args |
| `num_shards` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of shards operating in parallel. |
| `index` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the worker index. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `InvalidArgumentError` | if `num_shards` or `index` are illegal values.
**Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
|
### `shuffle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523)
```
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
```
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.
`reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 0, 2]
```
In TF 2.0, [`tf.data.Dataset`](../../../data/dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 2, 0]
```
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 0, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements from this dataset from which the new dataset will sample. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `skip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616)
```
skip(
count, name=None
)
```
Creates a `Dataset` that skips `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
```
| Args |
| `count` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `snapshot`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099)
```
snapshot(
path,
compression='AUTO',
reader_func=None,
shard_func=None,
name=None
)
```
API to persist the output of the input dataset.
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run.
This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time.
<https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature.
Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters.
`shard_func` is a user specified function that maps input elements to snapshot shards.
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written.
```
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...)
dataset = dataset.map(lambda x, y: y)
```
`reader_func` is a user specified function that accepts a single argument: a `Dataset` of `Dataset`s, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a `Dataset` of elements of the original dataset.
Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:
```
def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func)
```
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
| Args |
| `path` | Required. A directory to use for storing / loading the snapshot to / from. |
| `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. |
| `reader_func` | Optional. A function to control how to read data from snapshot shards. |
| `shard_func` | Optional. A function to control how to shard data when writing a snapshot. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `take`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596)
```
take(
count, name=None
)
```
Creates a `Dataset` with at most `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `count` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `take_while`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150)
```
take_while(
predicate, name=None
)
```
A transformation that stops dataset iteration based on a `predicate`.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take_while(lambda x: x < 5)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../tf#bool) tensor. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unbatch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698)
```
unbatch(
name=None
)
```
Splits elements of a dataset into multiple elements.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`.
```
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
```
>
> **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unique`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173)
```
unique(
name=None
)
```
A transformation that discards duplicate elements of a `Dataset`.
Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```
>
> **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../../tf#int32), [`tf.int64`](../../../../tf#int64) or [`tf.string`](../../../../tf#string) type.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)
```
window(
size, shift=None, stride=1, drop_remainder=False, name=None
)
```
Returns a dataset of "windows".
Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).
#### For example:
```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```
Since windows are datasets, they can be iterated over:
```
for window in dataset:
print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```
#### Shift
The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```
#### Stride
The `stride` argument determines the stride between input elements within a window.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```
#### Nested elements
When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them.
#### The type signature is:
```
def window(
self: Dataset[Nest[T]], ...
) -> Dataset[Nest[Dataset[T]]]
```
Applying `window` to a `Dataset` of tuples gives a tuple of windows:
```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
[6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```
```
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```
Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:
```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]})
dataset = dataset.window(2)
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```
#### Flatten a dataset of windows
The [`Dataset.flat_map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flat_map) and [`Dataset.interleave`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.
The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.
For example, to turn each window into a dense tensor:
```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1,
drop_remainder=True)
batched = dataset.flat_map(lambda x: x.batch(size))
for batch in batched:
print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```
| Args |
| `size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. |
| `shift` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
| `stride` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |
### `with_options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)
```
with_options(
options, name=None
)
```
Returns a new [`tf.data.Dataset`](../../../data/dataset) with the given options set.
The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```
| Args |
| `options` | A [`tf.data.Options`](../../../data/options) that identifies the options to use. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` with the given options. |
| Raises |
| `ValueError` | when an option is set more than once to a non-default value |
### `zip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)
```
@staticmethod
zip(
datasets, name=None
)
```
Creates a `Dataset` by zipping together the given datasets.
This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```
| Args |
| `datasets` | A (nested) structure of datasets. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `__bool__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__bool__()
```
### `__iter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4016-L4017)
```
__iter__()
```
Creates an iterator for elements of this dataset.
The returned iterator implements the Python Iterator protocol.
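For example, in eager mode a dataset can be iterated over directly:
```
dataset = tf.data.Dataset.range(3)
for element in dataset:
  print(element.numpy())
0
1
2
```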
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for the elements of this dataset. |
| Raises |
| `RuntimeError` | If not inside of tf.function and not executing eagerly. |
### `__len__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)
```
__len__()
```
Returns the length of the dataset if it is known and finite.
This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../../../data/dataset#cardinality) instead.
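For example, in eager mode:
```
dataset = tf.data.Dataset.range(42)
len(dataset)
42
```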
| Returns |
| An integer representing the length of the dataset. |
| Raises |
| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |
### `__nonzero__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__nonzero__()
```
tensorflow tf.compat.v1.data.TextLineDataset tf.compat.v1.data.TextLineDataset
=================================
A `Dataset` comprising lines from one or more text files.
Inherits From: [`Dataset`](dataset), [`Dataset`](../../../data/dataset)
```
tf.compat.v1.data.TextLineDataset(
filenames,
compression_type=None,
buffer_size=None,
num_parallel_reads=None,
name=None
)
```
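A minimal usage sketch (the file path is illustrative); each element is one line of the file as a [`tf.string`](../../../../tf#string) scalar:
```
# Assumes /path/to/file.txt exists on disk.
dataset = tf.compat.v1.data.TextLineDataset(["/path/to/file.txt"])
for line in dataset.take(2):
  print(line)
```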
| Args |
| `filenames` | A [`tf.data.Dataset`](../../../data/dataset) whose elements are [`tf.string`](../../../../tf#string) scalars, a [`tf.string`](../../../../tf#string) tensor, or a value that can be converted to a [`tf.string`](../../../../tf#string) tensor (such as a list of Python strings). |
| `compression_type` | (Optional.) A [`tf.string`](../../../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. |
| `buffer_size` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar denoting the number of bytes to buffer. A value of 0 results in the default buffering values chosen based on the compression type. |
| `num_parallel_reads` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If `None`, files will be read sequentially. |
| `name` | (Optional.) A name for the tf.data operation. |
| Attributes |
| `element_spec` | The type specification of an element of this dataset.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```
For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |
| `output_classes` | Returns the class of each component of an element of this dataset. (deprecated)
|
| `output_shapes` | Returns the shape of each component of an element of this dataset. (deprecated)
|
| `output_types` | Returns the type of each component of an element of this dataset. (deprecated)
|
Methods
-------
### `apply`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276)
```
apply(
transformation_func
)
```
Applies a transformation function to this dataset.
`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`.
```
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. |
| Returns |
| `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. |
### `as_numpy_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620)
```
as_numpy_iterator()
```
Returns an iterator which converts all elements of the dataset to numpy.
Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
```
This method requires that you are running in eager mode and the dataset's `element_spec` contains only `TensorSpec` components.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
```
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
```
`as_numpy_iterator()` will preserve the nested structure of dataset elements.
```
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
```
| Returns |
| An iterable over the elements of the dataset, with their tensors converted to numpy arrays. |
| Raises |
| `TypeError` | if an element contains a non-`Tensor` value. |
| `RuntimeError` | if eager execution is not enabled. |
### `batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754)
```
batch(
batch_size,
drop_remainder=False,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Combines consecutive elements of this dataset into batches.
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
```
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
```
The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
>
> **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch.
>
| Args |
| `batch_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
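If batch creation itself is a bottleneck, it can be computed asynchronously; a sketch using autotuning (whether this helps depends on the workload):
```
dataset = tf.data.Dataset.range(8)
# Compute batches in parallel and allow out-of-order production.
dataset = dataset.batch(3, num_parallel_calls=tf.data.AUTOTUNE,
                        deterministic=False)
```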
### `bucket_by_sequence_length`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971)
```
bucket_by_sequence_length(
element_length_func,
bucket_boundaries,
bucket_batch_sizes,
padded_shapes=None,
padding_values=None,
pad_to_bucket_boundary=False,
no_padding=False,
drop_remainder=False,
name=None
)
```
A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded and batched.
This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch, which increases training step efficiency.
Below is an example that buckets the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with a batch size of 2.
```
elements = [
[0], [1, 2, 3, 4], [5, 6, 7],
[7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
dataset = tf.data.Dataset.from_generator(
lambda: elements, tf.int64, output_shapes=[None])
dataset = dataset.bucket_by_sequence_length(
element_length_func=lambda elem: tf.shape(elem)[0],
bucket_boundaries=[3, 5],
bucket_batch_sizes=[2, 2, 2])
for elem in dataset.as_numpy_iterator():
print(elem)
[[1 2 3 4]
[5 6 7 0]]
[[ 7 8 9 10 11 0]
[13 14 15 16 19 20]]
[[ 0 0]
[21 22]]
```
| Args |
| `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. |
| `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. |
| `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. |
| `padded_shapes` | Nested structure of [`tf.TensorShape`](../../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../../../data/dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. |
| `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../../../data/dataset#padded_batch). Defaults to padding with 0. |
| `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. |
| `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor) or of same shape). |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
### `cache`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576)
```
cache(
filename='', name=None
)
```
Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
>
> **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
>
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
```
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file")
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file!
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```
>
> **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`.
>
| Args |
| `filename` | A [`tf.string`](../../../../tf#string) scalar [`tf.Tensor`](../../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `cardinality`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754)
```
cardinality()
```
Returns the cardinality of the dataset, if known.
`cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
```
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
```
| Returns |
| A scalar [`tf.int64`](../../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../../data#UNKNOWN_CARDINALITY) respectively. |
### `choose_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471)
```
@staticmethod
choose_from_datasets(
datasets, choice_dataset, stop_on_empty_dataset=True
)
```
Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../data/dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../../../data/dataset) of scalar [`tf.int64`](../../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. |
| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |
| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |
### `concatenate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289)
```
concatenate(
dataset, name=None
)
```
Creates a `Dataset` by concatenating the given dataset with this dataset.
```
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have
# compatible element specs.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
```
| Args |
| `dataset` | `Dataset` to be concatenated. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `enumerate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451)
```
enumerate(
start=0, name=None
)
```
Enumerates the elements of this dataset.
It is similar to Python's `enumerate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
```
```
# The (nested) structure of the input dataset determines the
# structure of elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
```
| Args |
| `start` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the start value for enumeration. |
| `name` | Optional. A name for the tf.data operations used by `enumerate`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `filter`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246)
```
filter(
predicate, name=None
)
```
Filters this dataset according to `predicate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
```
| Args |
| `predicate` | A function mapping a dataset element to a boolean. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `filter_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3948-L3965)
```
filter_with_legacy_function(
predicate
)
```
Filters this dataset according to `predicate`. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `filter` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `filter` as this method will be removed in V2.
>
| Args |
| `predicate` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../tf#bool) tensor. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
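As a migration sketch, the same kind of predicate expressed with the non-deprecated `filter`:
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.filter(lambda x: tf.math.equal(x % 2, 0))
list(dataset.as_numpy_iterator())
[0, 2, 4]
```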
### `flat_map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092)
```
flat_map(
map_func, name=None
)
```
Maps `map_func` across this dataset and flattens the result.
#### The type signature is:
```
def flat_map(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
```
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
[`tf.data.Dataset.interleave()`](../../../data/dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../../../data/dataset#interleave)
| Args |
| `map_func` | A function mapping a dataset element to a dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_generator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173)
```
@staticmethod
from_generator(
generator,
output_types=None,
output_shapes=None,
args=None,
output_signature=None,
name=None
)
```
Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments)
>
> **Note:** The current implementation of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) uses [`tf.numpy_function`](../../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator). In particular, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment.
>
The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function).
The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified.
The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../../typespec) objects from the `output_signature` argument:
```
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
```
There is also a deprecated way to call `from_generator`, using the `output_types` argument either alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../../tensor) objects with the types defined by `output_types` and with shapes which are either unknown or defined by `output_shapes`.
>
> **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator).
>
>
> **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth.
>
| Args |
| `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. |
| `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) objects corresponding to each component of an element yielded by `generator`. |
| `args` | (Optional.) A tuple of [`tf.Tensor`](../../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. |
| `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../../typespec) objects corresponding to each component of an element yielded by `generator`. |
| `name` | (Optional.) A name for the tf.data operations used by `from_generator`. |
| Returns |
| `Dataset` | A `Dataset`. |
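A sketch of the `args` parameter; `count` and `stop` are illustrative names. Values in `args` are evaluated and handed to the generator as NumPy arrays:
```
def count(stop):
  # `stop` arrives as a NumPy scalar, not a `tf.Tensor`.
  for i in range(stop):
    yield i
dataset = tf.data.Dataset.from_generator(
    count, args=[5],
    output_signature=tf.TensorSpec(shape=(), dtype=tf.int64))
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```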
### `from_sparse_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3732-L3743)
```
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
```
Splits each rank-N [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor) in this dataset row-wise. (deprecated)
| Args |
| `sparse_tensor` | A [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor). |
| Returns |
| `Dataset` | A `Dataset` of rank-(N-1) sparse tensors. |
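A minimal sketch of the deprecated behavior described above (each element is one row of the input):
```
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[1, 2],
                            dense_shape=[2, 3])
# Deprecated; yields rank-1 sparse tensors of dense_shape [3].
dataset = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)
```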
### `from_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809)
```
@staticmethod
from_tensor_slices(
tensors, name=None
)
```
Creates a `Dataset` whose elements are slices of the given tensors.
The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
```
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
```
```
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
```
```
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
```
```
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
```
```
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3-element vector
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729)
```
@staticmethod
from_tensors(
tensors, name=None
)
```
Creates a `Dataset` with a single element, comprising the given tensors.
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead.
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
```
```
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `get_single_element`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671)
```
get_single_element(
name=None
)
```
Returns the single element of the `dataset`.
The function enables you to use a [`tf.data.Dataset`](../../../data/dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates data transformations on tensors using the optimized [`tf.data.Dataset`](../../../data/dataset) abstraction on top of them.
For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label.
```
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
raw_features = ... # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
```
In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../../../data/dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features.
>
> **Note:** The `dataset` should contain only one element.
>
Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, `dataset.get_single_element()` is used to skip the iterator creation process and directly output the batch of features.
This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../../../data/dataset) operations, and you want to use those transformations while serving your model.
#### Keras
```
model = ... # A pre-built or custom model
class PreprocessingModel(tf.keras.Model):
def __init__(self, model):
    super().__init__()
self.model = model
@tf.function(input_signature=[...])
def serving_fn(self, data):
ds = tf.data.Dataset.from_tensor_slices(data)
ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
ds = ds.batch(batch_size=BATCH_SIZE)
return tf.argmax(self.model(ds.get_single_element()), axis=-1)
preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ... # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
signatures={'serving_default': preprocessing_model.serving_fn}
)
```
#### Estimator
In the case of estimators, you generally need to define a `serving_input_fn`, which specifies how the raw features are to be processed by the model at inference time.
```
def serving_input_fn():
raw_feature_spec = ... # Spec for the raw_features
input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = input_fn()
raw_features = serving_input_receiver.features
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
# Please note that the value of `BATCH_SIZE` should be equal to
# the size of the leading dimension of `raw_features`. This ensures
# that `dataset` has only one element, which is a pre-requisite for
# using `dataset.get_single_element()`.
return tf.estimator.export.ServingInputReceiver(
processed_features, serving_input_receiver.receiver_tensors)
estimator = ... # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A nested structure of [`tf.Tensor`](../../../tensor) objects, corresponding to the single element of `dataset`. |
| Raises |
| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
### `group_by_window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824)
```
group_by_window(
key_func, reduce_func, window_size=None, window_size_func=None, name=None
)
```
Groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`.
```
dataset = tf.data.Dataset.range(10)
window_size = 5
key_func = lambda x: x%2
reduce_func = lambda key, dataset: dataset.batch(window_size)
dataset = dataset.group_by_window(
key_func=key_func,
reduce_func=reduce_func,
window_size=window_size)
for elem in dataset.as_numpy_iterator():
print(elem)
[0 2 4 6 8]
[1 3 5 7 9]
```
| Args |
| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../../tf#int64) tensor. |
| `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. |
| `window_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. |
| `window_size_func` | A function mapping a key to a [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. |
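A sketch of `window_size_func`, where the window size depends on the key (2 for even keys, 3 for odd keys):
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda key, ds: ds.batch(5),
    window_size_func=lambda key: key + 2)
# Even elements are windowed in pairs, odd elements in triples,
# e.g. [0 2], [1 3 5], [4 6], [7 9], [8].
```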
### `interleave`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222)
```
interleave(
map_func,
cycle_length=None,
block_length=None,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Maps `map_func` across this dataset, and interleaves the results.
#### The type signature is:
```
def interleave(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
For example, you can use [`Dataset.interleave()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) to process many input files concurrently:
```
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
```
The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../../../data/dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.
#### For example:
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
```
>
> **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined.
>
Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`.
```
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
| Args |
| `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../../../data/dataset). |
| `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. |
| `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. |
| `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `list_files`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393)
```
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None, name=None
)
```
A dataset of all files matching one or more glob patterns.
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems.
>
> **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order.
>
#### Example:
If we had the following files on our filesystem:
* /path/to/dir/a.txt
* /path/to/dir/b.py
* /path/to/dir/c.py
If we pass `"/path/to/dir/*.py"` as the `file_pattern`, the dataset would produce:
* /path/to/dir/b.py
* /path/to/dir/c.py
| Args |
| `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. |
| `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `name` | Optional. A name for the tf.data operations used by `list_files`. |
| Returns |
| `Dataset` | A `Dataset` of strings corresponding to file names. |
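A minimal sketch; the glob below is hypothetical and must match existing files:
```
files = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in files:
  print(f.numpy())
```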
### `make_initializable_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3601-L3648)
```
make_initializable_iterator(
shared_name=None
)
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be in an uninitialized state, and you must run the `iterator.initializer` operation before using it:
>
```
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
```
| Args |
| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of this dataset. |
| Raises |
| `RuntimeError` | If eager execution is enabled. |
### `make_one_shot_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3509-L3548)
```
make_one_shot_iterator()
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see `make_initializable_iterator`.
>
#### Example:
```
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
```
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of this dataset. |
### `map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056)
```
map(
map_func, num_parallel_calls=None, deterministic=None, name=None
)
```
Maps `map_func` across the elements of this dataset.
This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
For example, `map` can be used for adding 1 to each element, or projecting a subset of element components.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
```
The input signature of `map_func` is determined by the structure of each element in this dataset.
```
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
```
```
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
```
```
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
```
The value or values returned by `map_func` determine the structure of each element in the returned dataset.
```
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
```
`map_func` can accept as arguments and return any type of dataset element.
Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options:
1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code.
2) Use [`tf.py_function`](../../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
3) Use [`tf.numpy_function`](../../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../../py_function) accepts [`tf.Tensor`](../../../tensor) whereas [`tf.numpy_function`](../../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
Note that the use of [`tf.numpy_function`](../../../numpy_function) and [`tf.py_function`](../../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL).
Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value.
| Args |
| `map_func` | A function mapping a dataset element to another dataset element. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `map_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3872-L3920)
```
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
```
Maps `map_func` across the elements of this dataset. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `map` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map` as this method will be removed in V2.
>
| Args |
| `map_func` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another (nested) structure of tensors. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../../tf#int32) scalar [`tf.Tensor`](../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, this boolean controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| Returns |
| `Dataset` | A `Dataset`. |
### `options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4009-L4010)
```
options()
```
Returns the options for this dataset and its inputs.
| Returns |
| A [`tf.data.Options`](../../../data/options) object representing the dataset options. |
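Options are typically attached with `with_options` and read back with this method; a minimal sketch:
```
options = tf.data.Options()
options.deterministic = False
dataset = tf.data.Dataset.range(3).with_options(options)
print(dataset.options().deterministic)
False
```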
### `padded_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889)
```
padded_batch(
batch_size,
padded_shapes=None,
padding_values=None,
drop_remainder=False,
name=None
)
```
Combines consecutive elements of this dataset into padded batches.
This transformation combines multiple consecutive elements of the input dataset into a single element.
Like [`tf.data.Dataset.batch`](../../../data/dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
Unlike [`tf.data.Dataset.batch`](../../../data/dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element:
* If the dimension is a constant, the component will be padded out to that length in that dimension.
* If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
```
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
```
See also [`tf.data.experimental.dense_to_sparse_batch`](../../../data/experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor).
| Args |
| `batch_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) or [`tf.int64`](../../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. |
| `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. |
| `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> |
### `prefetch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321)
```
prefetch(
buffer_size, name=None
)
```
Creates a `Dataset` that prefetches elements from this dataset.
Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
>
> **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each).
>
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
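As a minimal sketch of this placement (the numbers here are illustrative), batch first and end the pipeline with an autotuned prefetch:
```
dataset = tf.data.Dataset.range(100)
dataset = dataset.batch(20)                   # elements are now batches of 20
dataset = dataset.prefetch(tf.data.AUTOTUNE)  # buffer size tuned at runtime
```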
| Args |
| `buffer_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. |
| `name` | Optional. A name for the tf.data transformation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `random`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992)
```
@staticmethod
random(
seed=None, name=None
)
```
Creates a `Dataset` of pseudorandom values.
The dataset generates a sequence of uniformly distributed integer values.
```
ds1 = tf.data.Dataset.random(seed=4).take(10)
ds2 = tf.data.Dataset.random(seed=4).take(10)
print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator()))
True
```
| Args |
| `seed` | (Optional.) If specified, the dataset produces a deterministic sequence of values. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `range`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211)
```
@staticmethod
range(
*args, **kwargs
)
```
Creates a `Dataset` of a step-separated range of values.
```
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
```
| Args |
| `*args` | Follows the same semantics as Python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. |
| `**kwargs` | * output\_type: The dtype of the elements of the resulting dataset. (Optional, default: [`tf.int64`](../../../../tf#int64)).
* name: (Optional.) A name for the tf.data operation.
|
| Returns |
| `Dataset` | A `RangeDataset`. |
| Raises |
| `ValueError` | if len(args) == 0. |
### `reduce`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544)
```
reduce(
initial_state, reduce_func, name=None
)
```
Reduces the input dataset to a single element.
The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
```
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
```
| Args |
| `initial_state` | An element representing the initial state of the transformation. |
| `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new element. The structure of `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A dataset element corresponding to the final state of the transformation. |
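Because `initial_state` may be a (nested) structure, a single `reduce` pass can track several aggregates at once. A minimal sketch, assuming a tuple state of (count, sum):
```
import numpy as np
dataset = tf.data.Dataset.range(5)
count, total = dataset.reduce(
    (np.int64(0), np.int64(0)),
    lambda state, x: (state[0] + 1, state[1] + x))
print(count.numpy(), total.numpy())
# 5 10
```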
### `rejection_resample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272)
```
rejection_resample(
class_func, target_dist, initial_dist=None, seed=None, name=None
)
```
A transformation that resamples a dataset to a target distribution.
Let's consider the following example, where a dataset with an initial class distribution of `initial_dist` needs to be resampled into a dataset with the `target_dist` distribution.
```
initial_dist = [0.6, 0.4]
num_classes = len(initial_dist)
num_samples = 1000
data_np = np.random.choice(num_classes, num_samples, p=initial_dist)
dataset = tf.data.Dataset.from_tensor_slices(data_np)
```
The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution.
```
target_dist = [0.5, 0.5]
resampled_dataset = dataset.rejection_resample(
class_func=lambda x: x,
target_dist=target_dist,
initial_dist=initial_dist)
resampled_dataset = resampled_dataset.map(
lambda class_func_result, data: data)
```
The distribution of classes in `resampled_dataset` will now be close to the target distribution.
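One way to verify this is to count the classes coming out of the resampled dataset. A small sketch (the counting loop is illustrative; the exact counts vary from run to run, and some elements are dropped by the rejection step):
```
import collections
counts = collections.Counter()
for value in resampled_dataset.as_numpy_iterator():
  counts[int(value)] += 1
# counts is now roughly balanced, e.g. Counter({0: 412, 1: 398})
```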
| Args |
| `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../../tf#int32) tensor. Values should be in `[0, num_classes)`. |
| `target_dist` | A floating point type tensor, shaped `[num_classes]`. |
| `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. |
| `seed` | (Optional.) Python integer seed for the resampler. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset` |
### `repeat`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416)
```
repeat(
count=None, name=None
)
```
Repeats this dataset so each original value is seen `count` times.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```
>
> **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements.
>
| Args |
| `count` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset be repeated indefinitely. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `sample_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412)
```
@staticmethod
sample_from_datasets(
datasets, weights=None, seed=None, stop_on_empty_dataset=False
)
```
Samples elements at random from the datasets in `datasets`.
Creates a dataset by interleaving elements of `datasets` with `weight[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets:
```
dataset1 = tf.data.Dataset.range(0, 3)
dataset2 = tf.data.Dataset.range(100, 103)
```
Suppose that we sample from these 2 datasets with the following weights:
```
sample_dataset = tf.data.Dataset.sample_from_datasets(
[dataset1, dataset2], weights=[0.5, 0.5])
```
One possible outcome of elements in sample\_dataset is:
```
print(list(sample_dataset.as_numpy_iterator()))
# [100, 0, 1, 101, 2, 102]
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../data/dataset) objects with compatible structure. |
| `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../../../data/dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. |
| Raises |
| `TypeError` | If the `datasets` or `weights` arguments have the wrong type. |
| `ValueError` | * If `datasets` is empty, or
* If `weights` is specified and does not match the length of `datasets`.
|
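To see why `stop_on_empty_dataset=True` is recommended, consider sampling from one short and one long dataset; a sketch (the specific datasets are illustrative):
```
short = tf.data.Dataset.range(3)
long = tf.data.Dataset.range(100, 200)
balanced = tf.data.Dataset.sample_from_datasets(
    [short, long], weights=[0.5, 0.5], stop_on_empty_dataset=True)
# Iteration ends once `short` is exhausted, preserving the 50/50 mix.
# With the default `False`, the tail would come only from `long`,
# silently skewing the sampled distribution.
```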
### `scan`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130)
```
scan(
initial_state, scan_func, name=None
)
```
A transformation that scans a function across an input dataset.
This transformation is a stateful relative of [`tf.data.Dataset.map`](../../../data/dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`.
```
dataset = tf.data.Dataset.range(10)
initial_state = tf.constant(0, dtype=tf.int64)
scan_func = lambda state, i: (state + i, state + i)
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```
| Args |
| `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. |
| `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
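Unlike `reduce`, `scan` emits one output per input element, and the state need not equal the output. A sketch that emits a running mean from a `(count, total)` state (the state structure here is illustrative):
```
dataset = tf.data.Dataset.range(1, 5)
initial_state = (tf.constant(0, tf.int64), tf.constant(0, tf.int64))
def scan_func(state, x):
  count, total = state[0] + 1, state[1] + x
  return (count, total), total / count  # (new_state, output_element)
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
[1.0, 1.5, 2.0, 2.5]
```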
### `shard`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685)
```
shard(
num_shards, index, name=None
)
```
Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
`shard` is deterministic. The `Dataset` produced by `A.shard(n, i)` will contain all elements of `A` whose index mod `n` equals `i`.
```
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
```
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can shard elements as follows:
```
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
#### Important caveats:
* Be sure to shard before you use any randomizing operator (such as shuffle).
* Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
```
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
| Args |
| `num_shards` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of shards operating in parallel. |
| `index` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the worker index. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `InvalidArgumentError` | if `num_shards` or `index` is an illegal value.
**Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a `session.run` call.)
|
### `shuffle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523)
```
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
```
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.
`reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 0, 2]
```
In TF 2.0, [`tf.data.Dataset`](../../../data/dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 2, 0]
```
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 0, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements from this dataset from which the new dataset will sample. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `skip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616)
```
skip(
count, name=None
)
```
Creates a `Dataset` that skips `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
```
| Args |
| `count` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `snapshot`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099)
```
snapshot(
path,
compression='AUTO',
reader_func=None,
shard_func=None,
name=None
)
```
API to persist the output of the input dataset.
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run.
This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time.
<https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature.
Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters.
`shard_func` is a user-specified function that maps input elements to snapshot shards.
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written.
```
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...)
dataset = dataset.map(lambda x, y: y)
```
`reader_func` is a user-specified function that accepts a single argument: a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset.
Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:
```
def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func)
```
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
| Args |
| `path` | Required. A directory to use for storing / loading the snapshot to / from. |
| `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. |
| `reader_func` | Optional. A function to control how to read data from snapshot shards. |
| `shard_func` | Optional. A function to control how to shard data when writing a snapshot. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `take`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596)
```
take(
count, name=None
)
```
Creates a `Dataset` with at most `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `count` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `take_while`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150)
```
take_while(
predicate, name=None
)
```
A transformation that stops dataset iteration based on a `predicate`.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take_while(lambda x: x < 5)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../tf#bool) tensor. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unbatch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698)
```
unbatch(
name=None
)
```
Splits elements of a dataset into multiple elements.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`.
```
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
```
>
> **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unique`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173)
```
unique(
name=None
)
```
A transformation that discards duplicate elements of a `Dataset`.
Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```
>
> **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../../tf#int32), [`tf.int64`](../../../../tf#int64) or [`tf.string`](../../../../tf#string) type.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)
```
window(
size, shift=None, stride=1, drop_remainder=False, name=None
)
```
Returns a dataset of "windows".
Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).
#### For example:
```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```
Since windows are datasets, they can be iterated over:
```
for window in dataset:
print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```
#### Shift
The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```
#### Stride
The `stride` argument determines the stride between input elements within a window.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```
#### Nested elements
When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them.
#### The type signature is:
```
def window(
self: Dataset[Nest[T]], ...
) -> Dataset[Nest[Dataset[T]]]
```
Applying `window` to a `Dataset` of tuples gives a tuple of windows:
```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
[6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```
```
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```
Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:
```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]})
dataset = dataset.window(2)
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```
#### Flatten a dataset of windows
The [`Dataset.flat_map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flat_map) and [`Dataset.interleave`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.
The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.
For example, to turn each window into a dense tensor:
```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1,
drop_remainder=True)
batched = dataset.flat_map(lambda x: x.batch(size))
for batch in batched:
print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```
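`Dataset.interleave` can flatten windows in the same way; with `cycle_length=1` it produces the same output as `flat_map`, while a larger `cycle_length` lets several windows be processed in parallel. A sketch using the same `dataset` and `size` as above:
```
batched = dataset.interleave(lambda x: x.batch(size), cycle_length=1)
for batch in batched:
  print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```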
| Args |
| `size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. |
| `shift` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
| `stride` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |
### `with_options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)
```
with_options(
options, name=None
)
```
Returns a new [`tf.data.Dataset`](../../../data/dataset) with the given options set.
The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```
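Options set through separate `with_options` calls are merged; an error is raised only when the same option is given different non-default values. A sketch (the particular options chosen are illustrative):
```
ds = tf.data.Dataset.range(5)
options1 = tf.data.Options()
options1.deterministic = False
options2 = tf.data.Options()
options2.experimental_slack = True
# Different options with non-default values merge cleanly.
ds = ds.with_options(options1).with_options(options2)
```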
| Args |
| `options` | A [`tf.data.Options`](../../../data/options) that identifies the options to use. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` with the given options. |
| Raises |
| `ValueError` | when an option is set more than once to a non-default value |
### `zip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)
```
@staticmethod
zip(
datasets, name=None
)
```
Creates a `Dataset` by zipping together the given datasets.
This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```
| Args |
| `datasets` | A (nested) structure of datasets. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `__bool__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__bool__()
```
### `__iter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4016-L4017)
```
__iter__()
```
Creates an iterator for elements of this dataset.
The returned iterator implements the Python Iterator protocol.
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for the elements of this dataset. |
| Raises |
| `RuntimeError` | If not inside of tf.function and not executing eagerly. |
### `__len__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)
```
__len__()
```
Returns the length of the dataset if it is known and finite.
This method requires that you are running in eager mode, and that the length of the dataset is known and finite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../../../data/dataset#cardinality) instead.
| Returns |
| An integer representing the length of the dataset. |
| Raises |
| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |
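A minimal sketch contrasting `len` with `cardinality`:
```
dataset = tf.data.Dataset.range(5)
len(dataset)
5
dataset = dataset.repeat()
# `len(dataset)` would now raise a RuntimeError; use `cardinality` instead.
print((dataset.cardinality() == tf.data.INFINITE_CARDINALITY).numpy())
True
```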
### `__nonzero__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__nonzero__()
```
tensorflow tf.compat.v1.data.get_output_types tf.compat.v1.data.get\_output\_types
====================================
Returns the output types for elements of the input dataset / iterator.
```
tf.compat.v1.data.get_output_types(
dataset_or_iterator
)
```
Migrate to TF2
--------------
This is a legacy API for inspecting the type signature of dataset elements. In TF 2, you should use the [`tf.data.Dataset.element_spec`](../../../data/dataset#element_spec) attribute instead.
Description
-----------
| Args |
| `dataset_or_iterator` | A [`tf.data.Dataset`](../../../data/dataset) or [`tf.data.Iterator`](../../../data/iterator). |
| Returns |
| A (nested) structure of [`tf.DType`](../../../dtypes/dtype) objects matching the structure of dataset / iterator elements and specifying the type of the individual components. |
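A short sketch of the migration, showing the legacy call and its TF 2 replacement on the same dataset:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
tf.compat.v1.data.get_output_types(dataset)
tf.int32
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```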
tensorflow tf.compat.v1.data.FixedLengthRecordDataset tf.compat.v1.data.FixedLengthRecordDataset
==========================================
A `Dataset` of fixed-length records from one or more binary files.
Inherits From: [`Dataset`](dataset), [`Dataset`](../../../data/dataset)
```
tf.compat.v1.data.FixedLengthRecordDataset(
filenames,
record_bytes,
header_bytes=None,
footer_bytes=None,
buffer_size=None,
compression_type=None,
num_parallel_reads=None,
name=None
)
```
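A minimal usage sketch, assuming a hypothetical binary file whose records are 16 bytes each after a 4-byte header:
```
# `/path/to/data.bin` is a hypothetical file of fixed-length records.
dataset = tf.compat.v1.data.FixedLengthRecordDataset(
    filenames=["/path/to/data.bin"],
    record_bytes=16,
    header_bytes=4)
# Each element is a scalar tf.string of 16 raw bytes; decode as needed,
# e.g. into four float32 values per record:
dataset = dataset.map(lambda record: tf.io.decode_raw(record, tf.float32))
```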
| Args |
| `filenames` | A [`tf.string`](../../../../tf#string) tensor or [`tf.data.Dataset`](../../../data/dataset) containing one or more filenames. |
| `record_bytes` | A [`tf.int64`](../../../../tf#int64) scalar representing the number of bytes in each record. |
| `header_bytes` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar representing the number of bytes to skip at the start of a file. |
| `footer_bytes` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar representing the number of bytes to ignore at the end of a file. |
| `buffer_size` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar representing the number of bytes to buffer when reading. |
| `compression_type` | (Optional.) A [`tf.string`](../../../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. |
| `num_parallel_reads` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If `None`, files will be read sequentially. |
| `name` | (Optional.) A name for the tf.data operation. |
| Attributes |
| `element_spec` | The type specification of an element of this dataset.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```
For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |
| `output_classes` | Returns the class of each component of an element of this dataset. (deprecated)
|
| `output_shapes` | Returns the shape of each component of an element of this dataset. (deprecated)
|
| `output_types` | Returns the type of each component of an element of this dataset. (deprecated)
|
Methods
-------
### `apply`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276)
```
apply(
transformation_func
)
```
Applies a transformation function to this dataset.
`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`.
```
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. |
| Returns |
| `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. |
### `as_numpy_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620)
```
as_numpy_iterator()
```
Returns an iterator which converts all elements of the dataset to numpy.
Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
```
This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
```
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
```
`as_numpy_iterator()` will preserve the nested structure of dataset elements.
```
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
```
| Returns |
| An iterable over the elements of the dataset, with their tensors converted to numpy arrays. |
| Raises |
| `TypeError` | if an element contains a non-`Tensor` value. |
| `RuntimeError` | if eager execution is not enabled. |
### `batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754)
```
batch(
batch_size,
drop_remainder=False,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Combines consecutive elements of this dataset into batches.
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
```
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
```
The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
>
> **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch.
>
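The difference is visible in the static shape reported by `element_spec`:
```
dataset = tf.data.Dataset.range(8)
dataset.batch(3).element_spec
TensorSpec(shape=(None,), dtype=tf.int64, name=None)
dataset.batch(3, drop_remainder=True).element_spec
TensorSpec(shape=(3,), dtype=tf.int64, name=None)
```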
| Args |
| `batch_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `bucket_by_sequence_length`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971)
```
bucket_by_sequence_length(
element_length_func,
bucket_boundaries,
bucket_batch_sizes,
padded_shapes=None,
padding_values=None,
pad_to_bucket_boundary=False,
no_padding=False,
drop_remainder=False,
name=None
)
```
A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded and batched.
This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency.
Below is an example that bucketizes the input data into the 3 buckets `[0, 3)`, `[3, 5)`, `[5, inf)` based on sequence length, with batch size 2.
```
elements = [
[0], [1, 2, 3, 4], [5, 6, 7],
[7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
dataset = tf.data.Dataset.from_generator(
lambda: elements, tf.int64, output_shapes=[None])
dataset = dataset.bucket_by_sequence_length(
element_length_func=lambda elem: tf.shape(elem)[0],
bucket_boundaries=[3, 5],
bucket_batch_sizes=[2, 2, 2])
for elem in dataset.as_numpy_iterator():
print(elem)
[[1 2 3 4]
[5 6 7 0]]
[[ 7 8 9 10 11 0]
[13 14 15 16 19 20]]
[[ 0 0]
[21 22]]
```
| Args |
| `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. |
| `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. |
| `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. |
| `padded_shapes` | Nested structure of [`tf.TensorShape`](../../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../../../data/dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. |
| `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../../../data/dataset#padded_batch). Defaults to padding with 0. |
| `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. |
| `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor) or of same shape). |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
### `cache`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576)
```
cache(
filename='', name=None
)
```
Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
>
> **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
>
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
```
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file")
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file!
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```
>
> **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`.
>
| Args |
| `filename` | A [`tf.string`](../../../../tf#string) scalar [`tf.Tensor`](../../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `cardinality`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754)
```
cardinality()
```
Returns the cardinality of the dataset, if known.
`cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
```
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
```
| Returns |
| A scalar [`tf.int64`](../../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../../data#UNKNOWN_CARDINALITY) respectively. |
### `choose_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471)
```
@staticmethod
choose_from_datasets(
datasets, choice_dataset, stop_on_empty_dataset=True
)
```
Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../data/dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../../../data/dataset) of scalar [`tf.int64`](../../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. |
| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |
| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |
### `concatenate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289)
```
concatenate(
dataset, name=None
)
```
Creates a `Dataset` by concatenating the given dataset with this dataset.
```
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have
# compatible element specs.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
```
| Args |
| `dataset` | `Dataset` to be concatenated. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `enumerate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451)
```
enumerate(
start=0, name=None
)
```
Enumerates the elements of this dataset.
It is similar to Python's `enumerate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
```
```
# The (nested) structure of the input dataset determines the
# structure of elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
```
| Args |
| `start` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the start value for enumeration. |
| `name` | Optional. A name for the tf.data operations used by `enumerate`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `filter`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246)
```
filter(
predicate, name=None
)
```
Filters this dataset according to `predicate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
```
| Args |
| `predicate` | A function mapping a dataset element to a boolean. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `filter_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3948-L3965)
```
filter_with_legacy_function(
predicate
)
```
Filters this dataset according to `predicate`. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `filter` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `filter` as this method will be removed in V2.
>
| Args |
| `predicate` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../tf#bool) tensor. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `flat_map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092)
```
flat_map(
map_func, name=None
)
```
Maps `map_func` across this dataset and flattens the result.
#### The type signature is:
```
def flat_map(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
```
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
[`tf.data.Dataset.interleave()`](../../../data/dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../../../data/dataset#interleave).
| Args |
| `map_func` | A function mapping a dataset element to a dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_generator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173)
```
@staticmethod
from_generator(
generator,
output_types=None,
output_shapes=None,
args=None,
output_signature=None,
name=None
)
```
Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments)
>
> **Note:** The current implementation of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) uses [`tf.numpy_function`](../../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator). As a consequence, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment.
>
The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function).
The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified.
The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../../typespec) objects from `output_signature` argument:
```
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
```
There is also a deprecated way to call `from_generator`, either with the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../../tensor) objects with the types defined by `output_types` and with shapes which are either unknown or defined by `output_shapes`.
>
> **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator).
>
>
> **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth.
>
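The `args` parameter lets a single generator be parameterized at dataset-construction time; a small sketch (the generator here is illustrative):
```
def gen(n):
  for i in range(n):
    yield i
dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.int64),
    args=(5,))
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```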
| Args |
| `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. |
| `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) objects corresponding to each component of an element yielded by `generator`. |
| `args` | (Optional.) A tuple of [`tf.Tensor`](../../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. |
| `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../../typespec) objects corresponding to each component of an element yielded by `generator`. |
| `name` | (Optional.) A name for the tf.data operations used by `from_generator`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_sparse_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3732-L3743)
```
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
```
Splits each rank-N [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor) in this dataset row-wise. (deprecated)
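A brief usage sketch (the values are illustrative); each row of the input becomes one rank-1 sparse element:
```
st = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[2, 4])
# Each element of `dataset` is one row of `st`, i.e. a rank-1 sparse
# tensor with dense_shape [4].
dataset = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)
```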
| Args |
| `sparse_tensor` | A [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor). |
| Returns |
| `Dataset` | A `Dataset` of rank-(N-1) sparse tensors. |
### `from_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809)
```
@staticmethod
from_tensor_slices(
tensors, name=None
)
```
Creates a `Dataset` whose elements are slices of the given tensors.
The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
```
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
```
```
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
```
```
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
```
```
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
```
```
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3-element vector
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729)
```
@staticmethod
from_tensors(
tensors, name=None
)
```
Creates a `Dataset` with a single element, comprising the given tensors.
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead.
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
```
```
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `get_single_element`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671)
```
get_single_element(
name=None
)
```
Returns the single element of the `dataset`.
The function enables you to use a [`tf.data.Dataset`](../../../data/dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates the ease of data transformation on tensors using the optimized [`tf.data.Dataset`](../../../data/dataset) abstraction on top of them.
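As a trivial illustration (a minimal sketch):
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
dataset.get_single_element()
# <tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 2, 3], dtype=int32)>
```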
For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label.
```
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
raw_features = ... # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
```
In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../../../data/dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features.
>
> **Note:** The `dataset` should contain only one element.
>
Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `dataset.get_single_element()` method is used to skip the iterator creation process and directly output the batch of features.
This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../../../data/dataset) operations, and you want to use those transformations while serving your model.
#### Keras
```
model = ... # A pre-built or custom model
class PreprocessingModel(tf.keras.Model):
def __init__(self, model):
super().__init__(self)
self.model = model
@tf.function(input_signature=[...])
def serving_fn(self, data):
ds = tf.data.Dataset.from_tensor_slices(data)
ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
ds = ds.batch(batch_size=BATCH_SIZE)
return tf.argmax(self.model(ds.get_single_element()), axis=-1)
preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ... # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
signatures={'serving_default': preprocessing_model.serving_fn}
)
```
#### Estimator
In the case of estimators, you generally need to define a `serving_input_fn` which processes the features before they are passed to the model during inference.
```
def serving_input_fn():
raw_feature_spec = ... # Spec for the raw_features
input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = input_fn()
raw_features = serving_input_receiver.features
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
# Please note that the value of `BATCH_SIZE` should be equal to
# the size of the leading dimension of `raw_features`. This ensures
# that `dataset` has only one element, which is a prerequisite for
# using `dataset.get_single_element()`.
return tf.estimator.export.ServingInputReceiver(
processed_features, serving_input_receiver.receiver_tensors)
estimator = ... # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A nested structure of [`tf.Tensor`](../../../tensor) objects, corresponding to the single element of `dataset`. |
| Raises |
| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
### `group_by_window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824)
```
group_by_window(
key_func, reduce_func, window_size=None, window_size_func=None, name=None
)
```
Groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`.
```
dataset = tf.data.Dataset.range(10)
window_size = 5
key_func = lambda x: x%2
reduce_func = lambda key, dataset: dataset.batch(window_size)
dataset = dataset.group_by_window(
key_func=key_func,
reduce_func=reduce_func,
window_size=window_size)
for elem in dataset.as_numpy_iterator():
print(elem)
[0 2 4 6 8]
[1 3 5 7 9]
```
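Alternatively, the window size can be computed per key through `window_size_func`; a minimal sketch:
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda key, ds: ds.batch(10),
    # Even keys (0) get windows of up to 2 elements, odd keys (1) up to 4.
    window_size_func=lambda key: (key + 1) * 2)
```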
| Args |
| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../../tf#int64) tensor. |
| `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. |
| `window_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. |
| `window_size_func` | A function mapping a key to a [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. |
### `interleave`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222)
```
interleave(
map_func,
cycle_length=None,
block_length=None,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Maps `map_func` across this dataset, and interleaves the results.
#### The type signature is:
```
def interleave(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
For example, you can use [`Dataset.interleave()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) to process many input files concurrently:
```
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
```
The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../../../data/dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.
#### For example:
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
```
>
> **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined.
>
Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`.
```
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
| Args |
| `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../../../data/dataset). |
| `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. |
| `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. |
| `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `list_files`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393)
```
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None, name=None
)
```
A dataset of all files matching one or more glob patterns.
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems.
>
> **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order.
>
#### Example:
If we had the following files on our filesystem:
* /path/to/dir/a.txt
* /path/to/dir/b.py
* /path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the pattern, the dataset would produce:
* /path/to/dir/b.py
* /path/to/dir/c.py
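A hedged sketch of the corresponding call (the paths are illustrative):
```
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f.numpy())
# b'/path/to/dir/b.py'
# b'/path/to/dir/c.py'
```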
| Args |
| `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. |
| `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `name` | Optional. A name for the tf.data operations used by `list_files`. |
| Returns |
| `Dataset` | A `Dataset` of strings corresponding to file names. |
### `make_initializable_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3601-L3648)
```
make_initializable_iterator(
shared_name=None
)
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be in an uninitialized state, and you must run the `iterator.initializer` operation before using it:
>
```
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
```
| Args |
| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of this dataset. |
| Raises |
| `RuntimeError` | If eager execution is enabled. |
### `make_one_shot_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3509-L3548)
```
make_one_shot_iterator()
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see `make_initializable_iterator`.
>
#### Example:
```
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
```
| Returns |
| An [`tf.data.Iterator`](../../../data/iterator) for elements of this dataset. |
### `map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056)
```
map(
map_func, num_parallel_calls=None, deterministic=None, name=None
)
```
Maps `map_func` across the elements of this dataset.
This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
For example, `map` can be used for adding 1 to each element, or projecting a subset of element components.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
```
The input signature of `map_func` is determined by the structure of each element in this dataset.
```
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
```
```
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
```
```
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
```
The value or values returned by `map_func` determine the structure of each element in the returned dataset.
```
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
```
`map_func` can accept as arguments and return any type of dataset element.
Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options:
1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code.
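For instance, a data-dependent Python conditional that AutoGraph can convert into graph control flow (a minimal sketch):
```
def add_ten_to_large(x):
  # AutoGraph rewrites this data-dependent branch into graph ops.
  if x > 5:
    return x + 10
  return x

dataset = tf.data.Dataset.range(8).map(add_ten_to_large)
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4, 5, 16, 17]
```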
2) Use [`tf.py_function`](../../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
3) Use [`tf.numpy_function`](../../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../../py_function) accepts [`tf.Tensor`](../../../tensor) whereas [`tf.numpy_function`](../../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
Note that the use of [`tf.numpy_function`](../../../numpy_function) and [`tf.py_function`](../../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL).
Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value.
| Args |
| `map_func` | A function mapping a dataset element to another dataset element. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `map_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3872-L3920)
```
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
```
Maps `map_func` across the elements of this dataset. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `map` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map` as this method will be removed in V2.
>
| Args |
| `map_func` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another (nested) structure of tensors. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../../tf#int32) scalar [`tf.Tensor`](../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, this boolean controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| Returns |
| `Dataset` | A `Dataset`. |
### `options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4009-L4010)
```
options()
```
Returns the options for this dataset and its inputs.
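For example (a minimal sketch):
```
dataset = tf.data.Dataset.range(3)
options = tf.data.Options()
options.deterministic = False
dataset = dataset.with_options(options)
print(dataset.options().deterministic)
# False
```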
| Returns |
| A [`tf.data.Options`](../../../data/options) object representing the dataset options. |
### `padded_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889)
```
padded_batch(
batch_size,
padded_shapes=None,
padding_values=None,
drop_remainder=False,
name=None
)
```
Combines consecutive elements of this dataset into padded batches.
This transformation combines multiple consecutive elements of the input dataset into a single element.
Like [`tf.data.Dataset.batch`](../../../data/dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
Unlike [`tf.data.Dataset.batch`](../../../data/dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element:
* If the dimension is a constant, the component will be padded out to that length in that dimension.
* If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
```
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
```
See also [`tf.data.experimental.dense_to_sparse_batch`](../../../data/experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor).
| Args |
| `batch_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) or [`tf.int64`](../../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. |
| `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. |
| `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> |
### `prefetch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321)
```
prefetch(
buffer_size, name=None
)
```
Creates a `Dataset` that prefetches elements from this dataset.
Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
>
> **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each).
>
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. |
| `name` | Optional. A name for the tf.data transformation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `random`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992)
```
@staticmethod
random(
seed=None, name=None
)
```
Creates a `Dataset` of pseudorandom values.
The dataset generates a sequence of uniformly distributed integer values.
```
ds1 = tf.data.Dataset.random(seed=4).take(10)
ds2 = tf.data.Dataset.random(seed=4).take(10)
print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator()))
True
```
| Args |
| `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `range`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211)
```
@staticmethod
range(
*args, **kwargs
)
```
Creates a `Dataset` of a step-separated range of values.
```
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
```
| Args |
| `*args` | Follows the same semantics as Python's `range`. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. |
| `**kwargs` | output_type: Its expected dtype. (Optional, default: [`tf.int64`](../../../../tf#int64).) name: (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `RangeDataset`. |
| Raises |
| `ValueError` | if len(args) == 0. |
### `reduce`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544)
```
reduce(
initial_state, reduce_func, name=None
)
```
Reduces the input dataset to a single element.
The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
```
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
```
| Args |
| `initial_state` | An element representing the initial state of the transformation. |
| `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new state. The structure of `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A dataset element corresponding to the final state of the transformation. |
### `rejection_resample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272)
```
rejection_resample(
class_func, target_dist, initial_dist=None, seed=None, name=None
)
```
A transformation that resamples a dataset to a target distribution.
Let's consider the following example where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution.
```
initial_dist = [0.6, 0.4]
num_classes = len(initial_dist)
num_samples = 1000
data_np = np.random.choice(num_classes, num_samples, p=initial_dist)
dataset = tf.data.Dataset.from_tensor_slices(data_np)
```
The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution.
```
target_dist = [0.5, 0.5]
resampled_dataset = dataset.rejection_resample(
class_func=lambda x: x,
target_dist=target_dist,
initial_dist=initial_dist)
resampled_dataset = resampled_dataset.map(
lambda class_func_result, data: data)
```
The distribution of classes in the resampled dataset will now be close to the target distribution.
| Args |
| `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../../tf#int32) tensor. Values should be in `[0, num_classes)`. |
| `target_dist` | A floating point type tensor, shaped `[num_classes]`. |
| `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. |
| `seed` | (Optional.) Python integer seed for the resampler. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset` |
### `repeat`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416)
```
repeat(
count=None, name=None
)
```
Repeats this dataset so each original value is seen `count` times.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```
>
> **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements.
>
| Args |
| `count` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `sample_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412)
```
@staticmethod
sample_from_datasets(
datasets, weights=None, seed=None, stop_on_empty_dataset=False
)
```
Samples elements at random from the datasets in `datasets`.
Creates a dataset by interleaving elements of `datasets` with `weight[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets:
```
dataset1 = tf.data.Dataset.range(0, 3)
dataset2 = tf.data.Dataset.range(100, 103)
```
Suppose that we sample from these 2 datasets with the following weights:
```
sample_dataset = tf.data.Dataset.sample_from_datasets(
[dataset1, dataset2], weights=[0.5, 0.5])
```
One possible outcome of elements in `sample_dataset` is:
```
print(list(sample_dataset.as_numpy_iterator()))
# [100, 0, 1, 101, 2, 102]
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../data/dataset) objects with compatible structure. |
| `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../../../data/dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. |
| Raises |
| `TypeError` | If the `datasets` or `weights` arguments have the wrong type. |
| `ValueError` | * If `datasets` is empty, or
* If `weights` is specified and does not match the length of `datasets`.
|
### `scan`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130)
```
scan(
initial_state, scan_func, name=None
)
```
A transformation that scans a function across an input dataset.
This transformation is a stateful relative of [`tf.data.Dataset.map`](../../../data/dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`.
```
dataset = tf.data.Dataset.range(10)
initial_state = tf.constant(0, dtype=tf.int64)
scan_func = lambda state, i: (state + i, state + i)
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```
| Args |
| `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. |
| `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `shard`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685)
```
shard(
num_shards, index, name=None
)
```
Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i.
```
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
```
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can shard elements as follows:
```
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
#### Important caveats:
* Be sure to shard before you use any randomizing operator (such as shuffle).
* Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
```
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
| Args |
| `num_shards` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of shards operating in parallel. |
| `index` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the worker index. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `InvalidArgumentError` | if `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) |
### `shuffle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523)
```
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
```
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.
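As an edge case that illustrates the mechanics, a `buffer_size` of 1 leaves the order unchanged, because each element is drawn from a single-element buffer:
```
dataset = tf.data.Dataset.range(5).shuffle(buffer_size=1)
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```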
`reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 0, 2]
```
In TF 2.0, [`tf.data.Dataset`](../../../data/dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 2, 0]
```
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 0, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements from this dataset from which the new dataset will sample. |
| `seed` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../random/set_seed) for behavior. |
| `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `skip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616)
```
skip(
count, name=None
)
```
Creates a `Dataset` that skips `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
```
| Args |
| `count` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `snapshot`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099)
```
snapshot(
path,
compression='AUTO',
reader_func=None,
shard_func=None,
name=None
)
```
API to persist the output of the input dataset.
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run.
This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time.
<https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature.
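A minimal usage sketch (the directory path is illustrative):
```
dataset = tf.data.Dataset.range(100).map(lambda x: x * 2)
# The first run writes the pre-processed elements under the given
# directory; subsequent runs read them back instead of recomputing.
dataset = dataset.snapshot("/path/to/snapshot/dir")
```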
Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters.
`shard_func` is a user-specified function that maps input elements to snapshot shards.
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written.
```
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...)
dataset = dataset.map(lambda x, y: y)
```
`reader_func` is a user-specified function that accepts a single argument: a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset.
Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:
```
def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func)
```
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
| Args |
| `path` | Required. A directory to use for storing / loading the snapshot to / from. |
| `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. |
| `reader_func` | Optional. A function to control how to read data from snapshot shards. |
| `shard_func` | Optional. A function to control how to shard data when writing a snapshot. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `take`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596)
```
take(
count, name=None
)
```
Creates a `Dataset` with at most `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `count` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `take_while`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150)
```
take_while(
predicate, name=None
)
```
A transformation that stops dataset iteration based on a `predicate`.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take_while(lambda x: x < 5)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../tf#bool) tensor. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unbatch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698)
```
unbatch(
name=None
)
```
Splits elements of a dataset into multiple elements.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`.
```
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
```
>
> **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unique`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173)
```
unique(
name=None
)
```
A transformation that discards duplicate elements of a `Dataset`.
Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```
>
> **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../../tf#int32), [`tf.int64`](../../../../tf#int64) or [`tf.string`](../../../../tf#string) type.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)
```
window(
size, shift=None, stride=1, drop_remainder=False, name=None
)
```
Returns a dataset of "windows".
Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).
#### For example:
```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```
Since windows are datasets, they can be iterated over:
```
for window in dataset:
print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```
#### Shift
The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```
#### Stride
The `stride` argument determines the stride between input elements within a window.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```
#### Nested elements
When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them.
#### The type signature is:
```
def window(
self: Dataset[Nest[T]], ...
) -> Dataset[Nest[Dataset[T]]]
```
Applying `window` to a `Dataset` of tuples gives a tuple of windows:
```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
[6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```
```
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```
Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:
```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]})
dataset = dataset.window(2)
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```
#### Flatten a dataset of windows
The [`Dataset.flat_map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flat_map) and [`Dataset.interleave`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.
The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.
For example, to turn each window into a dense tensor:
```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1,
drop_remainder=True)
batched = dataset.flat_map(lambda x: x.batch(size))
for batch in batched:
print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```
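A related sketch (an illustration, not part of the canonical example set): after flattening, each dense window can be split into input/label pairs for simple autoregressive setups:
```
windows = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True)
dense = windows.flat_map(lambda w: w.batch(3))
# Split each window of 3 into (first 2 elements, last element).
pairs = dense.map(lambda win: (win[:-1], win[-1]))
for x, y in pairs:
  print(x.numpy(), y.numpy())
# [0 1] 2
# [1 2] 3
# ...
```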
| Args |
| `size` | A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. |
| `shift` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
| `stride` | (Optional.) A [`tf.int64`](../../../../tf#int64) scalar [`tf.Tensor`](../../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../tf#bool) scalar [`tf.Tensor`](../../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |
### `with_options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)
```
with_options(
options, name=None
)
```
Returns a new [`tf.data.Dataset`](../../../data/dataset) with the given options set.
The options are "global" in the sense that they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```
| Args |
| `options` | A [`tf.data.Options`](../../../data/options) that identifies the options to use. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` with the given options. |
| Raises |
| `ValueError` | when an option is set more than once to a non-default value |
### `zip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)
```
@staticmethod
zip(
datasets, name=None
)
```
Creates a `Dataset` by zipping together the given datasets.
This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```
| Args |
| `datasets` | A (nested) structure of datasets. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
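Because `datasets` may be any nested structure, a dictionary of datasets can also be zipped; a small sketch (the keys `'x'` and `'y'` are illustrative):
```
a = tf.data.Dataset.range(1, 4)
b = tf.data.Dataset.range(4, 7)
ds = tf.data.Dataset.zip({'x': a, 'y': b})
for element in ds.as_numpy_iterator():
  print(element)
# {'x': 1, 'y': 4}
# {'x': 2, 'y': 5}
# {'x': 3, 'y': 6}
```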
### `__bool__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__bool__()
```
### `__iter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4016-L4017)
```
__iter__()
```
Creates an iterator for elements of this dataset.
The returned iterator implements the Python Iterator protocol.
| Returns |
| An [`tf.data.Iterator`](../../../data/iterator) for the elements of this dataset. |
| Raises |
| `RuntimeError` | If not inside of tf.function and not executing eagerly. |
### `__len__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)
```
__len__()
```
Returns the length of the dataset if it is known and finite.
This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../../../data/dataset#cardinality) instead.
| Returns |
| An integer representing the length of the dataset. |
| Raises |
| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |
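A minimal eager-mode sketch contrasting `len` with `cardinality`:
```
ds = tf.data.Dataset.range(5)
print(len(ds))  # 5
infinite = ds.repeat()
# `len(infinite)` would raise here; use `cardinality` instead:
print((infinite.cardinality() == tf.data.INFINITE_CARDINALITY).numpy())  # True
```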
### `__nonzero__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__nonzero__()
```
tensorflow tf.compat.v1.data.Iterator tf.compat.v1.data.Iterator
==========================
Represents the state of iterating through a `Dataset`.
```
tf.compat.v1.data.Iterator(
iterator_resource, initializer, output_types, output_shapes, output_classes
)
```
| Args |
| `iterator_resource` | A [`tf.resource`](../../../../tf#resource) scalar [`tf.Tensor`](../../../tensor) representing the iterator. |
| `initializer` | A [`tf.Operation`](../../../operation) that should be run to initialize this iterator. |
| `output_types` | A (nested) structure of [`tf.DType`](../../../dtypes/dtype) objects corresponding to each component of an element of this iterator. |
| `output_shapes` | A (nested) structure of [`tf.TensorShape`](../../../tensorshape) objects corresponding to each component of an element of this iterator. |
| `output_classes` | A (nested) structure of Python `type` objects corresponding to each component of an element of this iterator. |
| Raises |
| `TypeError` | If `output_types`, `output_shapes`, or `output_classes` is not specified. |
| Attributes |
| `element_spec` | The type specification of an element of this iterator. For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |
| `initializer` | A [`tf.Operation`](../../../operation) that should be run to initialize this iterator. |
| `output_classes` | Returns the class of each component of an element of this iterator. (deprecated) The expected values are [`tf.Tensor`](../../../tensor) and [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor). |
| `output_shapes` | Returns the shape of each component of an element of this iterator. (deprecated) |
| `output_types` | Returns the type of each component of an element of this iterator. (deprecated) |
Methods
-------
### `from_string_handle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L237-L306)
```
@staticmethod
from_string_handle(
string_handle, output_types, output_shapes=None, output_classes=None
)
```
Creates a new, uninitialized `Iterator` based on the given handle.
This method allows you to define a "feedable" iterator where you can choose between concrete iterators by feeding a value in a `tf.Session.run` call. In that case, `string_handle` would be a [`tf.compat.v1.placeholder`](../placeholder), and you would feed it with the value of `tf.data.Iterator.string_handle` in each step.
For example, if you had two iterators that marked the current position in a training dataset and a test dataset, you could choose which to use in each step as follows:
```
train_iterator = tf.data.Dataset(...).make_one_shot_iterator()
train_iterator_handle = sess.run(train_iterator.string_handle())
test_iterator = tf.data.Dataset(...).make_one_shot_iterator()
test_iterator_handle = sess.run(test_iterator.string_handle())
handle = tf.compat.v1.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
handle, train_iterator.output_types)
next_element = iterator.get_next()
loss = f(next_element)
train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle})
test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle})
```
| Args |
| `string_handle` | A scalar [`tf.Tensor`](../../../tensor) of type [`tf.string`](../../../../tf#string) that evaluates to a handle produced by the `Iterator.string_handle()` method. |
| `output_types` | A (nested) structure of [`tf.DType`](../../../dtypes/dtype) objects corresponding to each component of an element of this dataset. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) objects corresponding to each component of an element of this dataset. If omitted, each component will have an unconstrained shape. |
| `output_classes` | (Optional.) A (nested) structure of Python `type` objects corresponding to each component of an element of this iterator. If omitted, each component is assumed to be of type [`tf.Tensor`](../../../tensor). |
| Returns |
| An `Iterator`. |
### `from_structure`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L143-L235)
```
@staticmethod
from_structure(
output_types, output_shapes=None, shared_name=None, output_classes=None
)
```
Creates a new, uninitialized `Iterator` with the given structure.
This iterator-constructing method can be used to create an iterator that is reusable with many different datasets.
The returned iterator is not bound to a particular dataset, and it has no `initializer`. To initialize the iterator, run the operation returned by `Iterator.make_initializer(dataset)`.
The following is an example:
```
iterator = Iterator.from_structure(tf.int64, tf.TensorShape([]))
dataset_range = Dataset.range(10)
range_initializer = iterator.make_initializer(dataset_range)
dataset_evens = dataset_range.filter(lambda x: x % 2 == 0)
evens_initializer = iterator.make_initializer(dataset_evens)
# Define a model based on the iterator; in this example, the model_fn
# is expected to take scalar tf.int64 Tensors as input (see
# the definition of 'iterator' above).
prediction, loss = model_fn(iterator.get_next())
# Train for `num_epochs`, where for each epoch, we first iterate over
# dataset_range, and then iterate over dataset_evens.
for _ in range(num_epochs):
# Initialize the iterator to `dataset_range`
sess.run(range_initializer)
while True:
try:
pred, loss_val = sess.run([prediction, loss])
except tf.errors.OutOfRangeError:
break
# Initialize the iterator to `dataset_evens`
sess.run(evens_initializer)
while True:
try:
pred, loss_val = sess.run([prediction, loss])
except tf.errors.OutOfRangeError:
break
```
| Args |
| `output_types` | A (nested) structure of [`tf.DType`](../../../dtypes/dtype) objects corresponding to each component of an element of this dataset. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../tensorshape) objects corresponding to each component of an element of this dataset. If omitted, each component will have an unconstrained shape. |
| `shared_name` | (Optional.) If non-empty, this iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| `output_classes` | (Optional.) A (nested) structure of Python `type` objects corresponding to each component of an element of this iterator. If omitted, each component is assumed to be of type [`tf.Tensor`](../../../tensor). |
| Returns |
| An `Iterator`. |
| Raises |
| `TypeError` | If the structures of `output_shapes` and `output_types` are not the same. |
### `get_next`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L389-L445)
```
get_next(
name=None
)
```
Returns the next element.
In graph mode, you should typically call this method *once* and use its result as the input to another computation. A typical loop will then call `tf.Session.run` on the result of that computation. The loop will terminate when the [`Iterator.get_next()`](https://www.tensorflow.org/api_docs/python/tf/data/Iterator#get_next) operation raises [`tf.errors.OutOfRangeError`](../../../errors/outofrangeerror). The following skeleton shows how to use this method when building a training loop:
```
dataset = ... # A `tf.data.Dataset` object.
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
# Build a TensorFlow graph that does something with each element.
loss = model_function(next_element)
optimizer = ... # A `tf.compat.v1.train.Optimizer` object.
train_op = optimizer.minimize(loss)
with tf.compat.v1.Session() as sess:
try:
while True:
sess.run(train_op)
except tf.errors.OutOfRangeError:
pass
```
>
> **Note:** It is legitimate to call [`Iterator.get_next()`](https://www.tensorflow.org/api_docs/python/tf/data/Iterator#get_next) multiple times, e.g. when you are distributing different elements to multiple devices in a single step. However, a common pitfall arises when users call [`Iterator.get_next()`](https://www.tensorflow.org/api_docs/python/tf/data/Iterator#get_next) in each iteration of their training loop. [`Iterator.get_next()`](https://www.tensorflow.org/api_docs/python/tf/data/Iterator#get_next) adds ops to the graph, and executing each op allocates resources (including threads); as a consequence, invoking it in every iteration of a training loop causes slowdown and eventual resource exhaustion. To guard against this outcome, we log a warning when the number of uses crosses a fixed threshold of suspiciousness.
>
| Args |
| `name` | (Optional.) A name for the created operation. |
| Returns |
| A (nested) structure of values matching [`tf.data.Iterator.element_spec`](../../../data/iterator#element_spec). |
### `get_next_as_optional`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L447-L456)
```
get_next_as_optional()
```
### `make_initializer`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L329-L387)
```
make_initializer(
dataset, name=None
)
```
Returns a [`tf.Operation`](../../../operation) that initializes this iterator on `dataset`.
| Args |
| `dataset` | A `Dataset` whose `element_spec` is compatible with this iterator. |
| `name` | (Optional.) A name for the created operation. |
| Returns |
| A [`tf.Operation`](../../../operation) that can be run to initialize this iterator on the given `dataset`. |
| Raises |
| `TypeError` | If `dataset` and this iterator do not have a compatible `element_spec`. |
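A minimal sketch, assuming TF1 graph mode (eager execution disabled):
```
tf.compat.v1.disable_eager_execution()  # graph mode assumed
iterator = tf.compat.v1.data.Iterator.from_structure(
    tf.int64, tf.TensorShape([]))
init_op = iterator.make_initializer(tf.data.Dataset.range(5))
with tf.compat.v1.Session() as sess:
  sess.run(init_op)
  print(sess.run(iterator.get_next()))  # 0
```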
### `string_handle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L458-L471)
```
string_handle(
name=None
)
```
Returns a string-valued [`tf.Tensor`](../../../tensor) that represents this iterator.
| Args |
| `name` | (Optional.) A name for the created operation. |
| Returns |
| A scalar [`tf.Tensor`](../../../tensor) of type [`tf.string`](../../../../tf#string). |
tensorflow tf.compat.v1.data.make_initializable_iterator tf.compat.v1.data.make\_initializable\_iterator
===============================================
Creates an iterator for elements of `dataset`.
```
tf.compat.v1.data.make_initializable_iterator(
dataset, shared_name=None
)
```
Migrate to TF2
--------------
This is a legacy API for consuming dataset elements and should only be used during transition from TF 1 to TF 2. Note that using this API should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
In TF 2 datasets are Python iterables which means you can consume their elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`.
Description
-----------
>
> **Note:** The returned iterator will be in an uninitialized state, and you must run the `iterator.initializer` operation before using it:
>
```
dataset = ...
iterator = tf.compat.v1.data.make_initializable_iterator(dataset)
# ...
sess.run(iterator.initializer)
```
| Args |
| `dataset` | A [`tf.data.Dataset`](../../../data/dataset). |
| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of `dataset`. |
| Raises |
| `RuntimeError` | If eager execution is enabled. |
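A more complete sketch of the pattern above, assuming graph mode:
```
tf.compat.v1.disable_eager_execution()  # graph mode assumed
dataset = tf.data.Dataset.range(3)
iterator = tf.compat.v1.data.make_initializable_iterator(dataset)
next_element = iterator.get_next()
with tf.compat.v1.Session() as sess:
  sess.run(iterator.initializer)
  print(sess.run(next_element))  # 0
```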
tensorflow Module: tf.compat.v1.data.experimental Module: tf.compat.v1.data.experimental
======================================
Experimental API for building input pipelines.
This module contains experimental `Dataset` sources and transformations that can be used in conjunction with the [`tf.data.Dataset`](../../../data/dataset) API. Note that the [`tf.data.experimental`](../../../data/experimental) API is not subject to the same backwards compatibility guarantees as [`tf.data`](../../../data), but we will provide deprecation advice in advance of removing existing functionality.
See [Importing Data](https://tensorflow.org/guide/datasets) for an overview.
Modules
-------
[`service`](experimental/service) module: API for using the tf.data service.
Classes
-------
[`class AutoShardPolicy`](../../../data/experimental/autoshardpolicy): Represents the type of auto-sharding to use.
[`class AutotuneAlgorithm`](../../../data/experimental/autotunealgorithm): Represents the type of autotuning algorithm to use.
[`class AutotuneOptions`](../../../data/experimental/autotuneoptions): Represents options for autotuning dataset performance.
[`class CheckpointInputPipelineHook`](../../../data/experimental/checkpointinputpipelinehook): Checkpoints input pipeline state every N steps or seconds.
[`class CsvDataset`](experimental/csvdataset): A Dataset comprising lines from one or more CSV files.
[`class DatasetInitializer`](../../../data/experimental/datasetinitializer): Creates a table initializer from a [`tf.data.Dataset`](../../../data/dataset).
[`class DatasetStructure`](../../../data/datasetspec): Type specification for [`tf.data.Dataset`](../../../data/dataset).
[`class DistributeOptions`](../../../data/experimental/distributeoptions): Represents options for distributed data processing.
[`class ExternalStatePolicy`](../../../data/experimental/externalstatepolicy): Represents how to handle external state during serialization.
[`class OptimizationOptions`](../../../data/experimental/optimizationoptions): Represents options for dataset optimizations.
[`class Optional`](../../../experimental/optional): Represents a value that may or may not be present.
[`class OptionalStructure`](../../../optionalspec): Type specification for [`tf.experimental.Optional`](../../../experimental/optional).
[`class RandomDataset`](experimental/randomdataset): A `Dataset` of pseudorandom values. (deprecated)
[`class Reducer`](../../../data/experimental/reducer): A reducer is used for reducing a set of elements.
[`class SqlDataset`](experimental/sqldataset): A `Dataset` consisting of the results from a SQL query.
[`class Structure`](../../../typespec): Specifies a TensorFlow value type.
[`class TFRecordWriter`](../../../data/experimental/tfrecordwriter): Writes a dataset to a TFRecord file. (deprecated)
[`class ThreadingOptions`](../../../data/threadingoptions): Represents options for dataset threading.
Functions
---------
[`Counter(...)`](experimental/counter): Creates a `Dataset` that counts from `start` in steps of size `step`.
[`RaggedTensorStructure(...)`](experimental/raggedtensorstructure): DEPRECATED FUNCTION
[`SparseTensorStructure(...)`](experimental/sparsetensorstructure): DEPRECATED FUNCTION
[`TensorArrayStructure(...)`](experimental/tensorarraystructure): DEPRECATED FUNCTION
[`TensorStructure(...)`](experimental/tensorstructure): DEPRECATED FUNCTION
[`assert_cardinality(...)`](../../../data/experimental/assert_cardinality): Asserts the cardinality of the input dataset.
[`bucket_by_sequence_length(...)`](../../../data/experimental/bucket_by_sequence_length): A transformation that buckets elements in a `Dataset` by length. (deprecated)
[`cardinality(...)`](../../../data/experimental/cardinality): Returns the cardinality of `dataset`, if known.
[`choose_from_datasets(...)`](experimental/choose_from_datasets): Creates a dataset that deterministically chooses elements from `datasets`. (deprecated)
[`copy_to_device(...)`](../../../data/experimental/copy_to_device): A transformation that copies dataset elements to the given `target_device`.
[`dense_to_ragged_batch(...)`](../../../data/experimental/dense_to_ragged_batch): A transformation that batches ragged elements into [`tf.RaggedTensor`](../../../raggedtensor)s.
[`dense_to_sparse_batch(...)`](../../../data/experimental/dense_to_sparse_batch): A transformation that batches ragged elements into [`tf.sparse.SparseTensor`](../../../sparse/sparsetensor)s.
[`enable_debug_mode(...)`](../../../data/experimental/enable_debug_mode): Enables debug mode for tf.data.
[`enumerate_dataset(...)`](../../../data/experimental/enumerate_dataset): A transformation that enumerates the elements of a dataset. (deprecated)
[`from_variant(...)`](../../../data/experimental/from_variant): Constructs a dataset from the given variant and (nested) structure.
[`get_next_as_optional(...)`](../../../data/experimental/get_next_as_optional): Returns a [`tf.experimental.Optional`](../../../experimental/optional) with the next element of the iterator. (deprecated)
[`get_single_element(...)`](../../../data/experimental/get_single_element): Returns the single element of the `dataset` as a nested structure of tensors. (deprecated)
[`get_structure(...)`](../../../data/experimental/get_structure): Returns the type signature for elements of the input dataset / iterator.
[`group_by_reducer(...)`](../../../data/experimental/group_by_reducer): A transformation that groups elements and performs a reduction.
[`group_by_window(...)`](../../../data/experimental/group_by_window): A transformation that groups windows of elements by key and reduces them. (deprecated)
[`ignore_errors(...)`](../../../data/experimental/ignore_errors): Creates a `Dataset` from another `Dataset` and silently ignores any errors.
[`index_table_from_dataset(...)`](../../../data/experimental/index_table_from_dataset): Returns an index lookup table based on the given dataset.
[`make_batched_features_dataset(...)`](experimental/make_batched_features_dataset): Returns a `Dataset` of feature dictionaries from `Example` protos.
[`make_csv_dataset(...)`](experimental/make_csv_dataset): Reads CSV files into a dataset.
[`make_saveable_from_iterator(...)`](../../../data/experimental/make_saveable_from_iterator): Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated)
[`map_and_batch(...)`](../../../data/experimental/map_and_batch): Fused implementation of `map` and `batch`. (deprecated)
[`map_and_batch_with_legacy_function(...)`](experimental/map_and_batch_with_legacy_function): Fused implementation of `map` and `batch`. (deprecated)
[`parallel_interleave(...)`](../../../data/experimental/parallel_interleave): A parallel version of the [`Dataset.interleave()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) transformation. (deprecated)
[`parse_example_dataset(...)`](../../../data/experimental/parse_example_dataset): A transformation that parses `Example` protos into a `dict` of tensors.
[`prefetch_to_device(...)`](../../../data/experimental/prefetch_to_device): A transformation that prefetches dataset values to the given `device`.
[`rejection_resample(...)`](../../../data/experimental/rejection_resample): A transformation that resamples a dataset to achieve a target distribution. (deprecated)
[`sample_from_datasets(...)`](experimental/sample_from_datasets): Samples elements at random from the datasets in `datasets`. (deprecated)
[`scan(...)`](../../../data/experimental/scan): A transformation that scans a function across an input dataset. (deprecated)
[`shuffle_and_repeat(...)`](../../../data/experimental/shuffle_and_repeat): Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated)
[`snapshot(...)`](../../../data/experimental/snapshot): API to persist the output of the input dataset. (deprecated)
[`table_from_dataset(...)`](../../../data/experimental/table_from_dataset): Returns a lookup table based on the given dataset.
[`take_while(...)`](../../../data/experimental/take_while): A transformation that stops dataset iteration based on a `predicate`. (deprecated)
[`to_variant(...)`](../../../data/experimental/to_variant): Returns a variant representing the given dataset.
[`unbatch(...)`](../../../data/experimental/unbatch): Splits elements of a dataset into multiple elements on the batch dimension. (deprecated)
[`unique(...)`](../../../data/experimental/unique): Creates a `Dataset` from another `Dataset`, discarding duplicates. (deprecated)
| Other Members |
| AUTOTUNE | `-1` |
| INFINITE\_CARDINALITY | `-1` |
| SHARD\_HINT | `-1` |
| UNKNOWN\_CARDINALITY | `-2` |
tensorflow tf.compat.v1.data.get_output_classes tf.compat.v1.data.get\_output\_classes
======================================
Returns the output classes for elements of the input dataset / iterator.
```
tf.compat.v1.data.get_output_classes(
dataset_or_iterator
)
```
Migrate to TF2
--------------
This is a legacy API for inspecting the type signature of dataset elements. In TF 2, you should use the [`tf.data.Dataset.element_spec`](../../../data/dataset#element_spec) attribute instead.
Description
-----------
| Args |
| `dataset_or_iterator` | A [`tf.data.Dataset`](../../../data/dataset) or [`tf.data.Iterator`](../../../data/iterator). |
| Returns |
| A (nested) structure of Python `type` objects matching the structure of the dataset / iterator elements and specifying the class of the individual components. |
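A small sketch of the legacy call next to its TF2 replacement:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
# TF1-style inspection:
print(tf.compat.v1.data.get_output_classes(dataset))
# TF2 equivalent:
print(dataset.element_spec)  # TensorSpec(shape=(), dtype=tf.int32, name=None)
```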
tensorflow tf.compat.v1.data.make_one_shot_iterator tf.compat.v1.data.make\_one\_shot\_iterator
===========================================
Creates an iterator for elements of `dataset`.
```
tf.compat.v1.data.make_one_shot_iterator(
dataset
)
```
Migrate to TF2
--------------
This is a legacy API for consuming dataset elements and should only be used during transition from TF 1 to TF 2. Note that using this API should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
In TF 2 datasets are Python iterables which means you can consume their elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`.
Description
-----------
>
> **Note:** The returned iterator will be initialized automatically. A "one-shot" iterator does not support re-initialization.
>
| Args |
| `dataset` | A [`tf.data.Dataset`](../../../data/dataset). |
| Returns |
| A [`tf.data.Iterator`](../../../data/iterator) for elements of `dataset`. |
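A minimal sketch, assuming graph mode; note that no initializer run is needed:
```
tf.compat.v1.disable_eager_execution()  # graph mode assumed
dataset = tf.data.Dataset.range(3)
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_element = iterator.get_next()
with tf.compat.v1.Session() as sess:
  for _ in range(3):
    print(sess.run(next_element))  # 0, then 1, then 2
```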
tensorflow tf.compat.v1.data.experimental.choose_from_datasets tf.compat.v1.data.experimental.choose\_from\_datasets
=====================================================
Creates a dataset that deterministically chooses elements from `datasets`. (deprecated)
```
tf.compat.v1.data.experimental.choose_from_datasets(
datasets, choice_dataset, stop_on_empty_dataset=False
)
```
For example, given the following datasets:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../../data/dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../../../../data/dataset) of scalar [`tf.int64`](../../../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |
| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |
tensorflow tf.compat.v1.data.experimental.make_batched_features_dataset tf.compat.v1.data.experimental.make\_batched\_features\_dataset
===============================================================
Returns a `Dataset` of feature dictionaries from `Example` protos.
```
tf.compat.v1.data.experimental.make_batched_features_dataset(
file_pattern,
batch_size,
features,
reader=None,
label_key=None,
reader_args=None,
num_epochs=None,
shuffle=True,
shuffle_buffer_size=10000,
shuffle_seed=None,
prefetch_buffer_size=None,
reader_num_threads=None,
parser_num_threads=None,
sloppy_ordering=False,
drop_final_batch=False
)
```
If the `label_key` argument is provided, returns a `Dataset` of tuples, each comprising a feature dictionary and a label.
#### Example:
```
serialized_examples = [
features {
feature { key: "age" value { int64_list { value: [ 0 ] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
feature { key: "kws" value { bytes_list { value: [ "code", "art" ] } } }
},
features {
feature { key: "age" value { int64_list { value: [] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
feature { key: "kws" value { bytes_list { value: [ "sports" ] } } }
}
]
```
#### We can use arguments:
```
features: {
"age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
"gender": FixedLenFeature([], dtype=tf.string),
"kws": VarLenFeature(dtype=tf.string),
}
```
And the expected output is:
```
{
"age": [[0], [-1]],
"gender": [["f"], ["f"]],
"kws": SparseTensor(
indices=[[0, 0], [0, 1], [1, 0]],
    values=["code", "art", "sports"],
dense_shape=[2, 2]),
}
```
| Args |
| `file_pattern` | List of files or patterns of file paths containing `Example` records. See [`tf.io.gfile.glob`](../../../../io/gfile/glob) for pattern rules. |
| `batch_size` | An int representing the number of records to combine in a single batch. |
| `features` | A `dict` mapping feature keys to `FixedLenFeature` or `VarLenFeature` values. See [`tf.io.parse_example`](../../../../io/parse_example). |
| `reader` | A function or class that can be called with a `filenames` tensor and (optional) `reader_args` and returns a `Dataset` of `Example` tensors. Defaults to [`tf.data.TFRecordDataset`](../../../../data/tfrecorddataset). |
| `label_key` | (Optional) A string corresponding to the key under which labels are stored in the `tf.Example`s. If provided, it must be one of the `features` keys; otherwise a `ValueError` is raised. |
| `reader_args` | Additional arguments to pass to the reader class. |
| `num_epochs` | Integer specifying the number of times to read through the dataset. If None, cycles through the dataset forever. Defaults to `None`. |
| `shuffle` | A boolean, indicates whether the input should be shuffled. Defaults to `True`. |
| `shuffle_buffer_size` | Buffer size of the ShuffleDataset. A large capacity ensures better shuffling but would increase memory usage and startup time. |
| `shuffle_seed` | Randomization seed to use for shuffling. |
| `prefetch_buffer_size` | Number of feature batches to prefetch in order to improve performance. Recommended value is the number of batches consumed per training step. Defaults to auto-tune. |
| `reader_num_threads` | Number of threads used to read `Example` records. If >1, the results will be interleaved. Defaults to `1`. |
| `parser_num_threads` | Number of threads to use for parsing `Example` tensors into a dictionary of `Feature` tensors. Defaults to `2`. |
| `sloppy_ordering` | If `True`, reading performance will be improved at the cost of non-deterministic ordering. If `False`, the order of elements produced is deterministic prior to shuffling (elements are still randomized if `shuffle=True`; note that if the seed is set, the order of elements after shuffling is deterministic). Defaults to `False`. |
| `drop_final_batch` | If `True`, and the batch size does not evenly divide the input dataset size, the final smaller batch will be dropped. Defaults to `False`. |
| Returns |
| A dataset of `dict` elements, (or a tuple of `dict` elements and label). Each `dict` maps feature keys to `Tensor` or `SparseTensor` objects. |
| Raises |
| `TypeError` | If `reader` is of the wrong type. |
| `ValueError` | If `label_key` is not one of the `features` keys. |
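A hedged end-to-end sketch; the file pattern and feature spec below are illustrative assumptions, not defaults:
```
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
    file_pattern="/data/train-*.tfrecord",  # illustrative path
    batch_size=32,
    features={
        "age": tf.io.FixedLenFeature([], tf.int64, default_value=-1),
        "gender": tf.io.FixedLenFeature([], tf.string),
        "kws": tf.io.VarLenFeature(tf.string),
    },
    num_epochs=1,
    shuffle=True)
```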
tensorflow tf.compat.v1.data.experimental.TensorStructure tf.compat.v1.data.experimental.TensorStructure
==============================================
DEPRECATED FUNCTION
```
tf.compat.v1.data.experimental.TensorStructure(
dtype, shape
)
```
tensorflow tf.compat.v1.data.experimental.SparseTensorStructure tf.compat.v1.data.experimental.SparseTensorStructure
====================================================
DEPRECATED FUNCTION
```
tf.compat.v1.data.experimental.SparseTensorStructure(
dtype, shape
)
```
tensorflow tf.compat.v1.data.experimental.Counter tf.compat.v1.data.experimental.Counter
======================================
Creates a `Dataset` that counts from `start` in steps of size `step`.
```
tf.compat.v1.data.experimental.Counter(
start=0,
step=1,
dtype=tf.dtypes.int64
)
```
Unlike [`tf.data.Dataset.range`](../../../../data/dataset#range) which will stop at some ending number, `Counter` will produce elements indefinitely.
```
dataset = tf.data.experimental.Counter().take(5)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int64, name=None)
dataset = tf.data.experimental.Counter(dtype=tf.int32)
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
dataset = tf.data.experimental.Counter(start=2).take(5)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
dataset = tf.data.experimental.Counter(start=2, step=5).take(5)
list(dataset.as_numpy_iterator())
[2, 7, 12, 17, 22]
dataset = tf.data.experimental.Counter(start=10, step=-1).take(5)
list(dataset.as_numpy_iterator())
[10, 9, 8, 7, 6]
```
| Args |
| `start` | (Optional.) The starting value for the counter. Defaults to 0. |
| `step` | (Optional.) The step size for the counter. Defaults to 1. |
| `dtype` | (Optional.) The data type for counter elements. Defaults to [`tf.int64`](../../../../../tf#int64). |
| Returns |
| A `Dataset` of scalar `dtype` elements. |
tensorflow tf.compat.v1.data.experimental.RandomDataset tf.compat.v1.data.experimental.RandomDataset
============================================
A `Dataset` of pseudorandom values. (deprecated)
Inherits From: [`Dataset`](../dataset), [`Dataset`](../../../../data/dataset)
```
tf.compat.v1.data.experimental.RandomDataset(
seed=None, name=None
)
```
| Attributes |
| `element_spec` | The type specification of an element of this dataset.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```
For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |
| `output_classes` | Returns the class of each component of an element of this dataset. (deprecated) |
| `output_shapes` | Returns the shape of each component of an element of this dataset. (deprecated) |
| `output_types` | Returns the type of each component of an element of this dataset. (deprecated) |
Methods
-------
### `apply`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276)
```
apply(
transformation_func
)
```
Applies a transformation function to this dataset.
`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`.
```
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. |
| Returns |
| `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. |
### `as_numpy_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620)
```
as_numpy_iterator()
```
Returns an iterator which converts all elements of the dataset to numpy.
Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
```
This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
```
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
```
`as_numpy_iterator()` will preserve the nested structure of dataset elements.
```
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
```
| Returns |
| An iterable over the elements of the dataset, with their tensors converted to numpy arrays. |
| Raises |
| `TypeError` | if an element contains a non-`Tensor` value. |
| `RuntimeError` | if eager execution is not enabled. |
### `batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754)
```
batch(
batch_size,
drop_remainder=False,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Combines consecutive elements of this dataset into batches.
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
```
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
```
The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
>
> **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch.
>
| Args |
| `batch_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `bucket_by_sequence_length`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971)
```
bucket_by_sequence_length(
element_length_func,
bucket_boundaries,
bucket_batch_sizes,
padded_shapes=None,
padding_values=None,
pad_to_bucket_boundary=False,
no_padding=False,
drop_remainder=False,
name=None
)
```
A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded and batched.
This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency.
Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2.
```
elements = [
[0], [1, 2, 3, 4], [5, 6, 7],
[7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
dataset = tf.data.Dataset.from_generator(
lambda: elements, tf.int64, output_shapes=[None])
dataset = dataset.bucket_by_sequence_length(
element_length_func=lambda elem: tf.shape(elem)[0],
bucket_boundaries=[3, 5],
bucket_batch_sizes=[2, 2, 2])
for elem in dataset.as_numpy_iterator():
print(elem)
[[1 2 3 4]
[5 6 7 0]]
[[ 7 8 9 10 11 0]
[13 14 15 16 19 20]]
[[ 0 0]
[21 22]]
```
| Args |
| `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. |
| `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. |
| `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. |
| `padded_shapes` | Nested structure of [`tf.TensorShape`](../../../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../../../../data/dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. |
| `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../../../../data/dataset#padded_batch). Defaults to padding with 0. |
| `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. |
| `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor) or of same shape). |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
### `cache`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576)
```
cache(
filename='', name=None
)
```
Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
>
> **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
>
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
```
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file")
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file!
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```
>
> **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`.
>
| Args |
| `filename` | A [`tf.string`](../../../../../tf#string) scalar [`tf.Tensor`](../../../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `cardinality`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754)
```
cardinality()
```
Returns the cardinality of the dataset, if known.
`cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
```
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
```
| Returns |
| A scalar [`tf.int64`](../../../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../../../data#UNKNOWN_CARDINALITY) respectively. |
### `choose_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471)
```
@staticmethod
choose_from_datasets(
datasets, choice_dataset, stop_on_empty_dataset=True
)
```
Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../../data/dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../../../../data/dataset) of scalar [`tf.int64`](../../../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. |
| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |
| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |
### `concatenate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289)
```
concatenate(
dataset, name=None
)
```
Creates a `Dataset` by concatenating the given dataset with this dataset.
```
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have
# compatible element specs.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
```
| Args |
| `dataset` | `Dataset` to be concatenated. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `enumerate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451)
```
enumerate(
start=0, name=None
)
```
Enumerates the elements of this dataset.
It is similar to Python's `enumerate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
```
```
# The (nested) structure of the input dataset determines the
# structure of elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
```
| Args |
| `start` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the start value for enumeration. |
| `name` | Optional. A name for the tf.data operations used by `enumerate`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `filter`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246)
```
filter(
predicate, name=None
)
```
Filters this dataset according to `predicate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
```
| Args |
| `predicate` | A function mapping a dataset element to a boolean. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `filter_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3948-L3965)
```
filter_with_legacy_function(
predicate
)
```
Filters this dataset according to `predicate`. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `filter` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `filter` as this method will be removed in V2.
>
| Args |
| `predicate` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../../tf#bool) tensor. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
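As a hedged migration sketch, the same predicate usually carries over to `filter` unchanged:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

def predicate(x):
  # Returns a scalar tf.bool tensor, as required.
  return tf.math.less(x, 3)

# Instead of dataset.filter_with_legacy_function(predicate), prefer:
dataset = dataset.filter(predicate)
print(list(dataset.as_numpy_iterator()))  # [1, 2]
```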
### `flat_map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092)
```
flat_map(
map_func, name=None
)
```
Maps `map_func` across this dataset and flattens the result.
#### The type signature is:
```
def flat_map(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
```
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
[`tf.data.Dataset.interleave()`](../../../../data/dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../../../../data/dataset#interleave)
| Args |
| `map_func` | A function mapping a dataset element to a dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_generator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173)
```
@staticmethod
from_generator(
generator,
output_types=None,
output_shapes=None,
args=None,
output_signature=None,
name=None
)
```
Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments)
>
> **Note:** The current implementation of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) uses [`tf.numpy_function`](../../../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator). As a consequence, using `from_generator` precludes the use of the tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, so you should not use this method if you need to serialize your model and restore it in a different environment.
>
The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function).
The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified.
The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../../../typespec) objects from the `output_signature` argument:
```
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
```
There is also a deprecated way to call `from_generator`, either with the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../../../tensor) objects with the types defined by `output_types` and with shapes which are either unknown or defined by `output_shapes`.
>
> **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator).
>
>
> **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth.
>
| Args |
| `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. |
| `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../../tensorshape) objects corresponding to each component of an element yielded by `generator`. |
| `args` | (Optional.) A tuple of [`tf.Tensor`](../../../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. |
| `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../../../typespec) objects corresponding to each component of an element yielded by `generator`. |
| `name` | (Optional.) A name for the tf.data operations used by `from_generator`. |
| Returns |
| `Dataset` | A `Dataset`. |
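The `args` parameter is easy to overlook; a minimal sketch (generator and names are illustrative, not part of the API) showing how `args` values arrive in the generator as NumPy values:
```
def gen(limit):
  # `limit` arrives as a NumPy scalar because it was passed via `args`.
  for i in range(limit):
    yield i

dataset = tf.data.Dataset.from_generator(
    gen,
    args=(3,),
    output_signature=tf.TensorSpec(shape=(), dtype=tf.int64))
print(list(dataset.as_numpy_iterator()))  # [0, 1, 2]
```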
### `from_sparse_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3732-L3743)
```
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
```
Splits each rank-N [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor) in this dataset row-wise. (deprecated)
| Args |
| `sparse_tensor` | A [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor). |
| Returns |
| `Dataset` | A `Dataset` of rank-(N-1) sparse tensors. |
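A minimal sketch of the deprecated call, alongside the non-deprecated `from_tensor_slices` path, which also slices sparse tensors row-wise:
```
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[7, 8],
                            dense_shape=[2, 3])
# Deprecated: each element is a rank-1 sparse tensor (one row of `st`).
dataset = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)
# Preferred: from_tensor_slices accepts sparse tensors as well.
dataset_v2 = tf.data.Dataset.from_tensor_slices(st)
```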
### `from_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809)
```
@staticmethod
from_tensor_slices(
tensors, name=None
)
```
Creates a `Dataset` whose elements are slices of the given tensors.
The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
```
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
```
```
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
```
```
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
```
```
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
```
```
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3-element vector
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729)
```
@staticmethod
from_tensors(
tensors, name=None
)
```
Creates a `Dataset` with a single element, comprising the given tensors.
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead.
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
```
```
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `get_single_element`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671)
```
get_single_element(
name=None
)
```
Returns the single element of the `dataset`.
The function enables you to use a [`tf.data.Dataset`](../../../../data/dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates the ease of data transformation on tensors using the optimized [`tf.data.Dataset`](../../../../data/dataset) abstraction on top of them.
For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label.
```
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
raw_features = ... # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
```
In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../../../../data/dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features.
>
> **Note:** The `dataset` should contain only one element.
>
Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `dataset.get_single_element()` function is used to skip the iterator creation process and directly output the batch of features.
This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../../../../data/dataset) operations, and you want to use those transformations while serving your model.
#### Keras
```
model = ... # A pre-built or custom model
class PreprocessingModel(tf.keras.Model):
def __init__(self, model):
super().__init__()
self.model = model
@tf.function(input_signature=[...])
def serving_fn(self, data):
ds = tf.data.Dataset.from_tensor_slices(data)
ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
ds = ds.batch(batch_size=BATCH_SIZE)
return tf.argmax(self.model(ds.get_single_element()), axis=-1)
preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ... # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
signatures={'serving_default': preprocessing_model.serving_fn}
)
```
#### Estimator
In the case of estimators, you generally need to define a `serving_input_fn` that processes the raw features into the form the model expects at inference time.
```
def serving_input_fn():
raw_feature_spec = ... # Spec for the raw_features
input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    raw_feature_spec, default_batch_size=None)
serving_input_receiver = input_fn()
raw_features = serving_input_receiver.features
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
# Please note that the value of `BATCH_SIZE` should be equal to
# the size of the leading dimension of `raw_features`. This ensures
# that `dataset` has only one element, which is a pre-requisite for
# using `dataset.get_single_element()`.
return tf.estimator.export.ServingInputReceiver(
processed_features, serving_input_receiver.receiver_tensors)
estimator = ... # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A nested structure of [`tf.Tensor`](../../../../tensor) objects, corresponding to the single element of `dataset`. |
| Raises |
| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
### `group_by_window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824)
```
group_by_window(
key_func, reduce_func, window_size=None, window_size_func=None, name=None
)
```
Groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`.
```
dataset = tf.data.Dataset.range(10)
window_size = 5
key_func = lambda x: x%2
reduce_func = lambda key, dataset: dataset.batch(window_size)
dataset = dataset.group_by_window(
key_func=key_func,
reduce_func=reduce_func,
window_size=window_size)
for elem in dataset.as_numpy_iterator():
print(elem)
[0 2 4 6 8]
[1 3 5 7 9]
```
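A minimal sketch of the `window_size_func` variant, where the window size is derived from the key (here, key 0 gets windows of 2 elements and key 1 gets windows of 3):
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.group_by_window(
    key_func=lambda x: x % 2,
    # Each window holds at most `key + 2` elements, so batching by 5
    # simply emits whatever the window contains.
    reduce_func=lambda key, window: window.batch(5),
    window_size_func=lambda key: key + 2)
for elem in dataset.as_numpy_iterator():
  print(elem)
```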
| Args |
| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../../../tf#int64) tensor. |
| `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. |
| `window_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. |
| `window_size_func` | A function mapping a key to a [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. |
### `interleave`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222)
```
interleave(
map_func,
cycle_length=None,
block_length=None,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Maps `map_func` across this dataset, and interleaves the results.
#### The type signature is:
```
def interleave(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
For example, you can use [`Dataset.interleave()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) to process many input files concurrently:
```
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
```
The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../../../../data/dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.
#### For example:
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
```
>
> **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined.
>
Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`.
```
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
| Args |
| `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../../../../data/dataset). |
| `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. |
| `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. |
| `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `list_files`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393)
```
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None, name=None
)
```
A dataset of all files matching one or more glob patterns.
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems.
>
> **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order.
>
#### Example:
If we had the following files on our filesystem:
* /path/to/dir/a.txt
* /path/to/dir/b.py
* /path/to/dir/c.py
If we pass "/path/to/dir/\*.py" as the pattern, the dataset would produce:
* /path/to/dir/b.py
* /path/to/dir/c.py
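A runnable version of this example, using a temporary directory in place of the hypothetical `/path/to/dir`:
```
import pathlib
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
for name in ("a.txt", "b.py", "c.py"):
  (tmp / name).write_text("")
dataset = tf.data.Dataset.list_files(str(tmp / "*.py"), shuffle=False)
print(sorted(p.decode() for p in dataset.as_numpy_iterator()))
# [.../b.py, .../c.py]
```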
| Args |
| `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. |
| `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `name` | Optional. A name for the tf.data operations used by `list_files`. |
| Returns |
| `Dataset` | A `Dataset` of strings corresponding to file names. |
### `make_initializable_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3601-L3648)
```
make_initializable_iterator(
shared_name=None
)
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be in an uninitialized state, and you must run the `iterator.initializer` operation before using it:
>
```
# Building the graph (TF1-style graph mode) ...
tf.compat.v1.disable_eager_execution()
dataset = tf.compat.v1.data.Dataset.range(3)
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next()  # This is a Tensor.
# ... from within a session ...
with tf.compat.v1.Session() as sess:
  sess.run(iterator.initializer)
  try:
    while True:
      value = sess.run(next_value)
  except tf.errors.OutOfRangeError:
    pass
```
| Args |
| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| Returns |
| A [`tf.data.Iterator`](../../../../data/iterator) for elements of this dataset. |
| Raises |
| `RuntimeError` | If eager execution is enabled. |
### `make_one_shot_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3509-L3548)
```
make_one_shot_iterator()
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see `make_initializable_iterator`.
>
#### Example:
```
# Building the graph (TF1-style graph mode) ...
tf.compat.v1.disable_eager_execution()
dataset = tf.compat.v1.data.Dataset.range(3)
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
with tf.compat.v1.Session() as sess:
  try:
    while True:
      value = sess.run(next_value)
  except tf.errors.OutOfRangeError:
    pass
```
| Returns |
| An [`tf.data.Iterator`](../../../../data/iterator) for elements of this dataset. |
### `map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056)
```
map(
map_func, num_parallel_calls=None, deterministic=None, name=None
)
```
Maps `map_func` across the elements of this dataset.
This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
For example, `map` can be used for adding 1 to each element, or projecting a subset of element components.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
```
The input signature of `map_func` is determined by the structure of each element in this dataset.
```
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
```
```
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
```
```
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
```
The value or values returned by `map_func` determine the structure of each element in the returned dataset.
```
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
```
`map_func` can accept as arguments and return any type of dataset element.
Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options:
1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code.
2) Use [`tf.py_function`](../../../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
3) Use [`tf.numpy_function`](../../../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../../../py_function) accepts [`tf.Tensor`](../../../../tensor) whereas [`tf.numpy_function`](../../../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
Note that the use of [`tf.numpy_function`](../../../../numpy_function) and [`tf.py_function`](../../../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL).
Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value.
| Args |
| `map_func` | A function mapping a dataset element to another dataset element. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `map_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3872-L3920)
```
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
```
Maps `map_func` across the elements of this dataset. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `map` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map` as this method will be removed in V2.
>
| Args |
| `map_func` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another (nested) structure of tensors. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../../../tf#int32) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, this boolean controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| Returns |
| `Dataset` | A `Dataset`. |
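As with `filter_with_legacy_function`, a hedged migration sketch: the same `map_func` usually ports directly to `map`:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

def map_fn(x):
  return x * 2

# Instead of dataset.map_with_legacy_function(map_fn), prefer:
dataset = dataset.map(map_fn)
print(list(dataset.as_numpy_iterator()))  # [2, 4, 6]
```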
### `options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4009-L4010)
```
options()
```
Returns the options for this dataset and its inputs.
| Returns |
| A [`tf.data.Options`](../../../../data/options) object representing the dataset options. |
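A minimal sketch showing how options set via `with_options` are reflected by `options()` (assuming the TF 2.x `tf.data.Options` attributes):
```
dataset = tf.data.Dataset.range(5)
options = tf.data.Options()
options.deterministic = False
dataset = dataset.with_options(options)
print(dataset.options().deterministic)  # False
```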
### `padded_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889)
```
padded_batch(
batch_size,
padded_shapes=None,
padding_values=None,
drop_remainder=False,
name=None
)
```
Combines consecutive elements of this dataset into padded batches.
This transformation combines multiple consecutive elements of the input dataset into a single element.
Like [`tf.data.Dataset.batch`](../../../../data/dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
Unlike [`tf.data.Dataset.batch`](../../../../data/dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element:
* If the dimension is a constant, the component will be padded out to that length in that dimension.
* If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
```
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
```
See also [`tf.data.experimental.dense_to_sparse_batch`](../../../../data/experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor).
| Args |
| `batch_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../../tensorshape) or [`tf.int64`](../../../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. |
| `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. |
| `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> |
### `prefetch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321)
```
prefetch(
buffer_size, name=None
)
```
Creates a `Dataset` that prefetches elements from this dataset.
Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
>
> **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each).
>
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. |
| `name` | Optional. A name for the tf.data transformation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `random`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992)
```
@staticmethod
random(
seed=None, name=None
)
```
Creates a `Dataset` of pseudorandom values.
The dataset generates a sequence of uniformly distributed integer values.
```
ds1 = tf.data.Dataset.random(seed=4).take(10)
ds2 = tf.data.Dataset.random(seed=4).take(10)
print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator()))
True
```
| Args |
| `seed` | (Optional.) If specified, the dataset produces a deterministic sequence of values. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `range`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211)
```
@staticmethod
range(
*args, **kwargs
)
```
Creates a `Dataset` of a step-separated range of values.
```
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
```
| Args |
| `*args` | Follows the same semantics as Python's `range`. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. |
| `**kwargs` | * `output_type`: (Optional.) The dtype of the produced elements. Defaults to [`tf.int64`](../../../../../tf#int64). * `name`: (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `RangeDataset`. |
| Raises |
| `ValueError` | if len(args) == 0. |
### `reduce`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544)
```
reduce(
initial_state, reduce_func, name=None
)
```
Reduces the input dataset to a single element.
The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
```
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
```
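Because `initial_state` can be any nested structure, a single pass can aggregate several quantities at once; a minimal sketch computing both the count and the sum:
```
dataset = tf.data.Dataset.range(1, 6)
count, total = dataset.reduce(
    (np.int64(0), np.int64(0)),
    lambda state, x: (state[0] + 1, state[1] + x))
print(count.numpy(), total.numpy())  # 5 15
```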
| Args |
| `initial_state` | An element representing the initial state of the transformation. |
| `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new state. The structure of `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A dataset element corresponding to the final state of the transformation. |
### `rejection_resample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272)
```
rejection_resample(
class_func, target_dist, initial_dist=None, seed=None, name=None
)
```
A transformation that resamples a dataset to a target distribution.
Let's consider the following example, where a dataset with an initial class distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution.
```
initial_dist = [0.6, 0.4]
num_classes = len(initial_dist)
num_samples = 1000
data_np = np.random.choice(num_classes, num_samples, p=initial_dist)
dataset = tf.data.Dataset.from_tensor_slices(data_np)
```
The class distribution of the samples in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution.
```
target_dist = [0.5, 0.5]
resampled_dataset = dataset.rejection_resample(
class_func=lambda x: x,
target_dist=target_dist,
initial_dist=initial_dist)
resampled_dataset = resampled_dataset.map(
lambda class_func_result, data: data)
```
The distribution of classes in `resampled_dataset` will now be close to the target distribution.
| Args |
| `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../../../tf#int32) tensor. Values should be in `[0, num_classes)`. |
| `target_dist` | A floating point type tensor, shaped `[num_classes]`. |
| `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. |
| `seed` | (Optional.) Python integer seed for the resampler. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset` |
### `repeat`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416)
```
repeat(
count=None, name=None
)
```
Repeats this dataset so each original value is seen `count` times.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```
>
> **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements.
>
| Args |
| `count` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `sample_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412)
```
@staticmethod
sample_from_datasets(
datasets, weights=None, seed=None, stop_on_empty_dataset=False
)
```
Samples elements at random from the datasets in `datasets`.
Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets:
```
dataset1 = tf.data.Dataset.range(0, 3)
dataset2 = tf.data.Dataset.range(100, 103)
```
Suppose that we sample from these 2 datasets with the following weights:
```
sample_dataset = tf.data.Dataset.sample_from_datasets(
[dataset1, dataset2], weights=[0.5, 0.5])
```
One possible outcome of elements in `sample_dataset` is:
```
print(list(sample_dataset.as_numpy_iterator()))
# [100, 0, 1, 101, 2, 102]
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../../data/dataset) objects with compatible structure. |
| `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../../../../data/dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. |
| Raises |
| `TypeError` | If the `datasets` or `weights` arguments have the wrong type. |
| `ValueError` | * If `datasets` is empty, or
* If `weights` is specified and does not match the length of `datasets`.
|
### `scan`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130)
```
scan(
initial_state, scan_func, name=None
)
```
A transformation that scans a function across an input dataset.
This transformation is a stateful relative of [`tf.data.Dataset.map`](../../../../data/dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`.
```
dataset = tf.data.Dataset.range(10)
initial_state = tf.constant(0, dtype=tf.int64)
scan_func = lambda state, i: (state + i, state + i)
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```
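The state may itself be a nested structure; a minimal sketch tracking a running mean via a `(sum, count)` pair:
```
dataset = tf.data.Dataset.range(1, 5)
initial_state = (tf.constant(0, dtype=tf.int64),
                 tf.constant(0, dtype=tf.int64))

def scan_func(state, x):
  total, count = state[0] + x, state[1] + 1
  return (total, count), total / count  # (new_state, output_element)

dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
print(list(dataset.as_numpy_iterator()))  # [1.0, 1.5, 2.0, 2.5]
```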
| Args |
| `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. |
| `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `shard`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685)
```
shard(
num_shards, index, name=None
)
```
Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i.
```
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
```
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can shard elements as follows:
```
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
#### Important caveats:
* Be sure to shard before you use any randomizing operator (such as shuffle).
* Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
```
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
| Args |
| `num_shards` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of shards operating in parallel. |
| `index` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the worker index. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `InvalidArgumentError` | if `num_shards` or `index` are illegal values.
**Note:** Error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing in a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
|
### `shuffle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523)
```
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
```
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.
`reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 0, 2]
```
In TF 2.0, [`tf.data.Dataset`](../../../../data/dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 2, 0]
```
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 0, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements from this dataset from which the new dataset will sample. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `skip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616)
```
skip(
count, name=None
)
```
Creates a `Dataset` that skips `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
```
| Args |
| `count` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `snapshot`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099)
```
snapshot(
path,
compression='AUTO',
reader_func=None,
shard_func=None,
name=None
)
```
API to persist the output of the input dataset.
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run.
This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time.
<https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature.
Users can specify various options to control the behavior of snapshot, including how snapshots are read and written, by passing in user-defined functions to the `reader_func` and `shard_func` parameters.
`shard_func` is a user-specified function that maps input elements to snapshot shards.
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written.
```
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...)
dataset = dataset.map(lambda x, y: y)
```
`reader_func` is a user-specified function that accepts a single argument: a `Dataset` of `Dataset`s, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a `Dataset` of elements of the original dataset.
Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:
```
def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func)
```
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
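As a minimal end-to-end sketch (the snapshot path below is hypothetical), the first full iteration writes the snapshot; later runs read the materialized data instead of re-running the preprocessing:
```
dataset = tf.data.Dataset.range(100)
dataset = dataset.map(lambda x: x * 2)  # Expensive preprocessing goes here.
dataset = dataset.snapshot("/tmp/range_snapshot")  # Hypothetical path.
for elem in dataset:  # First run writes; subsequent runs read from disk.
  pass
```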
| Args |
| `path` | Required. A directory to use for storing / loading the snapshot to / from. |
| `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. |
| `reader_func` | Optional. A function to control how to read data from snapshot shards. |
| `shard_func` | Optional. A function to control how to shard data when writing a snapshot. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `take`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596)
```
take(
count, name=None
)
```
Creates a `Dataset` with at most `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `count` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `take_while`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150)
```
take_while(
predicate, name=None
)
```
A transformation that stops dataset iteration based on a `predicate`.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take_while(lambda x: x < 5)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../../tf#bool) tensor. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unbatch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698)
```
unbatch(
name=None
)
```
Splits elements of a dataset into multiple elements.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`.
```
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
```
>
> **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unique`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173)
```
unique(
name=None
)
```
A transformation that discards duplicate elements of a `Dataset`.
Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```
>
> **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../../../tf#int32), [`tf.int64`](../../../../../tf#int64) or [`tf.string`](../../../../../tf#string) type.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)
```
window(
size, shift=None, stride=1, drop_remainder=False, name=None
)
```
Returns a dataset of "windows".
Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).
#### For example:
```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```
Since windows are datasets, they can be iterated over:
```
for window in dataset:
print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```
#### Shift
The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```
#### Stride
The `stride` argument determines the stride between input elements within a window.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```
#### Nested elements
When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure, but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them.
#### The type signature is:
```
def window(
self: Dataset[Nest[T]], ...
) -> Dataset[Nest[Dataset[T]]]
```
Applying `window` to a `Dataset` of tuples gives a tuple of windows:
```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
[6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```
```
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```
Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:
```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]})
dataset = dataset.window(2)
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```
#### Flatten a dataset of windows
The [`Dataset.flat_map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flat_map) and [`Dataset.interleave`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.
The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.
For example, to turn each window into a dense tensor:
```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1,
drop_remainder=True)
batched = dataset.flat_map(lambda x: x.batch(size))
for batch in batched:
print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```
| Args |
| `size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. |
| `shift` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
| `stride` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |
### `with_options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)
```
with_options(
options, name=None
)
```
Returns a new [`tf.data.Dataset`](../../../../data/dataset) with the given options set.
The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```
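A sketch of the merge rule: re-applying options is fine, but setting the same option to two different non-default values is an error (see Raises below). The `private_threadpool_size` option here is just an illustrative choice.
```
options_a = tf.data.Options()
options_a.threading.private_threadpool_size = 8
options_b = tf.data.Options()
options_b.threading.private_threadpool_size = 16
ds = tf.data.Dataset.range(5).with_options(options_a)
# Merging two different non-default values for the same option
# raises ValueError.
ds = ds.with_options(options_b)
```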
| Args |
| `options` | A [`tf.data.Options`](../../../../data/options) that identifies the options to use. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` with the given options. |
| Raises |
| `ValueError` | when an option is set more than once to a non-default value |
### `zip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)
```
@staticmethod
zip(
datasets, name=None
)
```
Creates a `Dataset` by zipping together the given datasets.
This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```
| Args |
| `datasets` | A (nested) structure of datasets. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `__bool__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__bool__()
```
### `__iter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4016-L4017)
```
__iter__()
```
Creates an iterator for elements of this dataset.
The returned iterator implements the Python Iterator protocol.
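For example (eager mode assumed):
```
dataset = tf.data.Dataset.range(3)
iterator = iter(dataset)
print(next(iterator).numpy())
# 0
print(next(iterator).numpy())
# 1
```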
| Returns |
| An [`tf.data.Iterator`](../../../../data/iterator) for the elements of this dataset. |
| Raises |
| `RuntimeError` | If not inside of tf.function and not executing eagerly. |
### `__len__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)
```
__len__()
```
Returns the length of the dataset if it is known and finite.
This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../../../../data/dataset#cardinality) instead.
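For example (eager mode assumed):
```
dataset = tf.data.Dataset.range(42)
print(len(dataset))
# 42
dataset = dataset.repeat()
# len(dataset) would now raise, since the dataset is infinite;
# use dataset.cardinality() instead.
```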
| Returns |
| An integer representing the length of the dataset. |
| Raises |
| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |
### `__nonzero__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__nonzero__()
```
tensorflow tf.compat.v1.data.experimental.RaggedTensorStructure tf.compat.v1.data.experimental.RaggedTensorStructure
====================================================
DEPRECATED FUNCTION
```
tf.compat.v1.data.experimental.RaggedTensorStructure(
dtype, shape, ragged_rank
)
```
tensorflow Module: tf.compat.v1.data.experimental.service Module: tf.compat.v1.data.experimental.service
==============================================
API for using the tf.data service.
#### This module contains:
1. tf.data server implementations for running the tf.data service.
2. APIs for registering datasets with the tf.data service and reading from the registered datasets.
The tf.data service provides the following benefits:
* Horizontal scaling of tf.data input pipeline processing to solve input bottlenecks.
* Data coordination for distributed training. Coordinated reads enable all replicas to train on similar-length examples across each global training step, improving step times in synchronous training.
* Dynamic balancing of data across training replicas.
```
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address))
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
service=dispatcher.target))
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
Setup
-----
This section goes over how to set up the tf.data service.
### Run tf.data servers
The tf.data service consists of one dispatch server and `n` worker servers. tf.data servers should be brought up alongside your training jobs, then brought down when the jobs are finished. Use [`tf.data.experimental.service.DispatchServer`](../../../../data/experimental/service/dispatchserver) to start a dispatch server, and [`tf.data.experimental.service.WorkerServer`](../../../../data/experimental/service/workerserver) to start worker servers. Servers can be run in the same process for testing purposes, or scaled up on separate machines.
See <https://github.com/tensorflow/ecosystem/tree/master/data_service> for an example of using Google Kubernetes Engine (GKE) to manage the tf.data service. Note that the server implementation in [tf\_std\_data\_server.py](https://github.com/tensorflow/ecosystem/blob/master/data_service/tf_std_data_server.py) is not GKE-specific, and can be used to run the tf.data service in other contexts.
### Custom ops
If your dataset uses custom ops, these ops need to be made available to tf.data servers by calling [load\_op\_library](https://www.tensorflow.org/api_docs/python/tf/load_op_library) from the dispatcher and worker processes at startup.
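A minimal sketch, assuming a custom op library compiled to a hypothetical `.so` path and reusing `dispatcher_address` from the setup example above; the library must be loaded in the server process before it starts serving data:
```
# Hypothetical path to a compiled custom op library.
tf.load_op_library("/path/to/my_custom_ops.so")
# Start the worker only after the custom ops are available.
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher_address))
```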
Usage
-----
Users interact with tf.data service by programmatically registering their datasets with tf.data service, then creating datasets that read from the registered datasets. The [register\_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/register_dataset) function registers a dataset, then the [from\_dataset\_id](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/from_dataset_id) function creates a new dataset which reads from the registered dataset. The [distribute](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/distribute) function wraps `register_dataset` and `from_dataset_id` into a single convenient transformation which registers its input dataset and then reads from it. `distribute` enables tf.data service to be used with a one-line code change. However, it assumes that the dataset is created and consumed by the same entity and this assumption might not always be valid or desirable. In particular, in certain scenarios, such as distributed training, it might be desirable to decouple the creation and consumption of the dataset (via `register_dataset` and `from_dataset_id` respectively) to avoid having to create the dataset on each of the training workers.
### Example
#### `distribute`
To use the `distribute` transformation, apply the transformation after the prefix of your input pipeline that you would like to be executed using tf.data service (typically at the end).
```
dataset = ... # Define your dataset here.
# Move dataset processing from the local machine to the tf.data service
dataset = dataset.apply(
tf.data.experimental.service.distribute(
processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
service=FLAGS.tf_data_service_address,
job_name="shared_job"))
# Any transformations added after `distribute` will be run on the local machine.
dataset = dataset.prefetch(1)
```
The above code will create a tf.data service "job", which iterates through the dataset to generate data. To share the data from a job across multiple clients (e.g. when using TPUStrategy or MultiWorkerMirroredStrategy), set a common `job_name` across all clients.
#### `register_dataset` and `from_dataset_id`
`register_dataset` registers a dataset with the tf.data service, returning a dataset id for the registered dataset. `from_dataset_id` creates a dataset that reads from the registered dataset. These APIs can be used to reduce dataset building time for distributed training. Instead of building the dataset on all training workers, we can build the dataset just once and then register the dataset using `register_dataset`. Then all workers can call `from_dataset_id` without needing to build the dataset themselves.
```
dataset = ... # Define your dataset here.
dataset_id = tf.data.experimental.service.register_dataset(
service=FLAGS.tf_data_service_address,
dataset=dataset)
# Use `from_dataset_id` to create per-worker datasets.
per_worker_datasets = {}
for worker in workers:
per_worker_datasets[worker] = tf.data.experimental.service.from_dataset_id(
processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
service=FLAGS.tf_data_service_address,
dataset_id=dataset_id,
job_name="shared_job")
```
### Processing Modes
`processing_mode` specifies how to shard a dataset among tf.data service workers. tf.data service supports `OFF`, `DYNAMIC`, `FILE`, `DATA`, `FILE_OR_DATA`, `HINT` sharding policies.
OFF: No sharding will be performed. The entire input dataset will be processed independently by each of the tf.data service workers. For this reason, it is important to shuffle data (e.g. filenames) non-deterministically, so that each worker will process the elements of the dataset in a different order. This mode can be used to distribute datasets that aren't splittable.
If a worker is added or restarted during ShardingPolicy.OFF processing, the worker will instantiate a new copy of the dataset and begin producing data from the beginning.
#### Dynamic Sharding
DYNAMIC: In this mode, tf.data service divides the dataset into two components: a source component that generates "splits" such as filenames, and a processing component that takes splits and outputs dataset elements. The source component is executed in a centralized fashion by the tf.data service dispatcher, which generates different splits of input data. The processing component is executed in a parallel fashion by the tf.data service workers, each operating on a different set of input data splits.
For example, consider the following dataset:
```
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(TFRecordDataset)
dataset = dataset.map(preprocess_fn)
dataset = dataset.batch(batch_size)
dataset = dataset.apply(
tf.data.experimental.service.distribute(
processing_mode=tf.data.experimental.service.ShardingPolicy.DYNAMIC,
...))
```
The `from_tensor_slices` will be run on the dispatcher, while the `interleave`, `map`, and `batch` will be run on tf.data service workers. The workers will pull filenames from the dispatcher for processing. To process a dataset with dynamic sharding, the dataset must have a splittable source, and all of its transformations must be compatible with splitting. While most sources and transformations support splitting, there are exceptions, such as custom datasets which may not implement the splitting API. Please file a Github issue if you would like to use distributed epoch processing for a currently unsupported dataset source or transformation.
If no workers are restarted during training, dynamic sharding mode will visit every example exactly once. If workers are restarted during training, the splits they were processing will not be fully visited. The dispatcher maintains a cursor through the dataset's splits. Assuming fault tolerance is enabled (See "Fault Tolerance" below), the dispatcher will store cursor state in write-ahead logs so that the cursor can be restored in case the dispatcher is restarted mid-training. This provides an at-most-once visitation guarantee in the presence of server restarts.
#### Static Sharding
The following are static sharding policies. The semantics are similar to [`tf.data.experimental.AutoShardPolicy`](../../../../data/experimental/autoshardpolicy). These policies require:
* The tf.data service cluster is configured with a fixed list of workers in DispatcherConfig.
* Each client only reads from the local tf.data service worker.
If a worker is restarted while performing static sharding, the worker will begin processing its shard again from the beginning.
FILE: Shards by input files (i.e. each worker will get a fixed set of files to process). When this option is selected, make sure that there are at least as many files as workers. If there are fewer input files than workers, a runtime error will be raised.
DATA: Shards by elements produced by the dataset. Each worker will process the whole dataset and discard the portion that is not for itself. Note that for this mode to correctly partition the dataset elements, the dataset needs to produce elements in a deterministic order.
FILE\_OR\_DATA: Attempts FILE-based sharding, falling back to DATA-based sharding on failure.
HINT: Looks for the presence of `shard(SHARD_HINT, ...)` which is treated as a placeholder to replace with `shard(num_workers, worker_index)`.
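A sketch of HINT sharding, assuming `FLAGS.tf_data_service_address` as in the examples above; `tf.data.experimental.SHARD_HINT` marks the placeholder `shard` call:
```
dataset = tf.data.Dataset.range(100)
# The service replaces SHARD_HINT with (num_workers, worker_index).
dataset = dataset.shard(tf.data.experimental.SHARD_HINT,
                        tf.data.experimental.SHARD_HINT)
dataset = dataset.apply(
    tf.data.experimental.service.distribute(
        processing_mode=tf.data.experimental.service.ShardingPolicy.HINT,
        service=FLAGS.tf_data_service_address))
```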
For backwards compatibility, `processing_mode` may also be set to the strings `"parallel_epochs"` or `"distributed_epoch"`, which are respectively equivalent to [`ShardingPolicy.OFF`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/ShardingPolicy#OFF) and [`ShardingPolicy.DYNAMIC`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/ShardingPolicy#DYNAMIC).
### Coordinated Data Read
By default, when multiple consumers read from the same job, they receive data on a first-come first-served basis. In some use cases, it is advantageous to coordinate the consumers. At each step, consumers read data from the same worker.
For example, the tf.data service can be used to coordinate example sizes across a cluster during synchronous training, so that during each step all replicas train on similar-sized elements. To achieve this, define a dataset which generates rounds of `num_consumers` consecutive similar-sized batches, then enable coordinated reads by setting `consumer_index` and `num_consumers`.
>
> **Note:** To keep consumers in sync, coordinated reads require that the dataset have infinite cardinality. You can get this by adding `.repeat()` at the end of the dataset definition.
>
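A hedged sketch of enabling coordinated reads for one of two consumers, following the note above by making the dataset infinite (again assuming `FLAGS.tf_data_service_address`):
```
dataset = tf.data.Dataset.range(10).repeat()  # Infinite cardinality.
dataset = dataset.apply(
    tf.data.experimental.service.distribute(
        processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
        service=FLAGS.tf_data_service_address,
        job_name="shared_job",
        consumer_index=0,  # This consumer's index, in [0, num_consumers).
        num_consumers=2))  # Total number of coordinated consumers.
```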
### Jobs
A tf.data service "job" refers to the process of reading from a dataset managed by the tf.data service, using one or more data consumers. Jobs are created when iterating over datasets that read from tf.data service. The data produced by a job is determined by (1) dataset associated with the job and (2) the job's processing mode. For example, if a job is created for the dataset [`Dataset.range(5)`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#range), and the processing mode is [`ShardingPolicy.OFF`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/ShardingPolicy#OFF), each tf.data worker will produce the elements `{0, 1, 2, 3, 4}` for the job, resulting in the job producing `5 * num_workers` elements. If the processing mode is [`ShardingPolicy.DYNAMIC`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/ShardingPolicy#DYNAMIC), the job will only produce `5` elements.
One or more consumers can consume data from a job. By default, jobs are "anonymous", meaning that only the consumer which created the job can read from it. To share the output of a job across multiple consumers, you can set a common `job_name`.
### Fault Tolerance
By default, the tf.data dispatch server stores its state in-memory, making it a single point of failure during training. To avoid this, pass `fault_tolerant_mode=True` when creating your `DispatchServer`. Dispatcher fault tolerance requires `work_dir` to be configured and accessible from the dispatcher both before and after restart (e.g. a GCS path). With fault tolerant mode enabled, the dispatcher will journal its state to the work directory so that no state is lost when the dispatcher is restarted.
WorkerServers may be freely restarted, added, or removed during training. At startup, workers will register with the dispatcher and begin processing all outstanding jobs from the beginning.
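A hedged sketch of a fault-tolerant dispatcher configuration (the work directory path is hypothetical; any path accessible across dispatcher restarts works):
```
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        port=5050,
        work_dir="gs://my-bucket/dispatcher_work_dir",  # Hypothetical path.
        fault_tolerant_mode=True))
```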
### Usage with tf.distribute
tf.distribute is the TensorFlow API for distributed training. There are several ways to use tf.data with tf.distribute: `strategy.experimental_distribute_dataset`, `strategy.distribute_datasets_from_function`, and (for PSStrategy) `coordinator.create_per_worker_dataset`. The following sections give code examples for each.
In general we recommend using `tf.data.experimental.service.{register_dataset,from_dataset_id}` over [`tf.data.experimental.service.distribute`](../../../../data/experimental/service/distribute) for two reasons:
* The dataset only needs to be constructed and optimized once, instead of once per worker. This can significantly reduce startup time, because the current `experimental_distribute_dataset` and `distribute_datasets_from_function` implementations create and optimize worker datasets sequentially.
* If a dataset depends on lookup tables or variables that are only present on one host, the dataset needs to be registered from that host. Typically this only happens when resources are placed on the chief or worker 0. Registering the dataset from the chief will avoid issues with depending on remote resources.
#### strategy.experimental\_distribute\_dataset
Nothing special is required when using `strategy.experimental_distribute_dataset`, just apply `register_dataset` and `from_dataset_id` as above, making sure to specify a `job_name` so that all workers consume from the same tf.data service job.
```
dataset = ... # Define your dataset here.
dataset_id = tf.data.experimental.service.register_dataset(
service=FLAGS.tf_data_service_address,
dataset=dataset)
dataset = tf.data.experimental.service.from_dataset_id(
processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
service=FLAGS.tf_data_service_address,
dataset_id=dataset_id,
job_name="shared_job")
dataset = strategy.experimental_distribute_dataset(dataset)
```
#### strategy.distribute\_datasets\_from\_function
First, make sure the dataset produced by the `dataset_fn` does not depend on the `input_context` for the training worker on which it is run. Instead of each worker building its own (sharded) dataset, one worker should register an unsharded dataset, and the remaining workers should consume data from that dataset.
```
dataset = dataset_fn()
dataset_id = tf.data.experimental.service.register_dataset(
service=FLAGS.tf_data_service_address,
dataset=dataset)
def new_dataset_fn(input_context):
del input_context
return tf.data.experimental.service.from_dataset_id(
processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
service=FLAGS.tf_data_service_address,
dataset_id=dataset_id,
job_name="shared_job")
dataset = strategy.distribute_datasets_from_function(new_dataset_fn)
```
#### coordinator.create\_per\_worker\_dataset
`create_per_worker_dataset` works the same as `distribute_datasets_from_function`.
```
dataset = dataset_fn()
dataset_id = tf.data.experimental.service.register_dataset(
service=FLAGS.tf_data_service_address,
dataset=dataset)
def new_dataset_fn(input_context):
del input_context
return tf.data.experimental.service.from_dataset_id(
processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
service=FLAGS.tf_data_service_address,
dataset_id=dataset_id,
job_name="shared_job")
dataset = coordinator.create_per_worker_dataset(new_dataset_fn)
```
Limitations
-----------
* Python-based data processing: Datasets which use Python-based data processing (e.g. [`tf.py_function`](../../../../py_function), [`tf.numpy_function`](../../../../numpy_function), or [`tf.data.Dataset.from_generator`](../../../../data/dataset#from_generator)) are currently not supported.
* Non-Serializable Resources: Datasets may only depend on TF resources that support serialization. Serialization is currently supported for lookup tables and variables. If your dataset depends on a TF resource that cannot be serialized, please file a Github issue.
* Remote Resources: If a dataset depends on a resource, the dataset must be registered from the same process that created the resource (e.g. the "chief" job of ParameterServerStrategy).
Classes
-------
[`class DispatcherConfig`](../../../../data/experimental/service/dispatcherconfig): Configuration class for tf.data service dispatchers.
[`class ShardingPolicy`](../../../../data/experimental/service/shardingpolicy): Specifies how to shard data among tf.data service workers.
[`class WorkerConfig`](../../../../data/experimental/service/workerconfig): Configuration class for tf.data service workers.
Functions
---------
[`distribute(...)`](../../../../data/experimental/service/distribute): A transformation that moves dataset processing to the tf.data service.
[`from_dataset_id(...)`](../../../../data/experimental/service/from_dataset_id): Creates a dataset which reads data from the tf.data service.
[`register_dataset(...)`](../../../../data/experimental/service/register_dataset): Registers a dataset with the tf.data service.
tensorflow tf.compat.v1.data.experimental.sample_from_datasets tf.compat.v1.data.experimental.sample\_from\_datasets
=====================================================
Samples elements at random from the datasets in `datasets`. (deprecated)
```
tf.compat.v1.data.experimental.sample_from_datasets(
datasets, weights=None, seed=None, stop_on_empty_dataset=False
)
```
Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets:
```
dataset1 = tf.data.Dataset.range(0, 3)
dataset2 = tf.data.Dataset.range(100, 103)
```
Suppose also that we sample from these 2 datasets with the following weights:
```
sample_dataset = tf.data.Dataset.sample_from_datasets(
[dataset1, dataset2], weights=[0.5, 0.5])
```
One possible outcome of elements in sample\_dataset is:
```
print(list(sample_dataset.as_numpy_iterator()))
# [100, 0, 1, 101, 2, 102]
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../../data/dataset) objects with compatible structure. |
| `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../../../../data/dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. |
| Raises |
| `TypeError` | If the `datasets` or `weights` arguments have the wrong type. |
| `ValueError` | * If `datasets` is empty, or
* If `weights` is specified and does not match the length of `datasets`.
|
tensorflow tf.compat.v1.data.experimental.make_csv_dataset tf.compat.v1.data.experimental.make\_csv\_dataset
=================================================
Reads CSV files into a dataset.
```
tf.compat.v1.data.experimental.make_csv_dataset(
file_pattern,
batch_size,
column_names=None,
column_defaults=None,
label_name=None,
select_columns=None,
field_delim=',',
use_quote_delim=True,
na_value='',
header=True,
num_epochs=None,
shuffle=True,
shuffle_buffer_size=10000,
shuffle_seed=None,
prefetch_buffer_size=None,
num_parallel_reads=None,
sloppy=False,
num_rows_for_inference=100,
compression_type=None,
ignore_errors=False
)
```
Reads CSV files into a dataset, where each element of the dataset is a (features, labels) tuple that corresponds to a batch of CSV rows. The features dictionary maps feature column names to `Tensor`s containing the corresponding feature data, and labels is a `Tensor` containing the batch's label data.
By default, the first rows of the CSV files are expected to be headers listing the column names. If the first rows are not headers, set `header=False` and provide the column names with the `column_names` argument.
By default, the dataset is repeated indefinitely, reshuffling the order each time. This behavior can be modified by setting the `num_epochs` and `shuffle` arguments.
For example, suppose you have a CSV file containing
| Feature\_A | Feature\_B |
| --- | --- |
| 1 | "a" |
| 2 | "b" |
| 3 | "c" |
| 4 | "d" |
```
# No label column specified
dataset = tf.data.experimental.make_csv_dataset(filename, batch_size=2)
iterator = dataset.as_numpy_iterator()
print(dict(next(iterator)))
# prints a dictionary of batched features:
# OrderedDict([('Feature_A', array([1, 4], dtype=int32)),
# ('Feature_B', array([b'a', b'd'], dtype=object))])
```
```
# Set Feature_B as label column
dataset = tf.data.experimental.make_csv_dataset(
filename, batch_size=2, label_name="Feature_B")
iterator = dataset.as_numpy_iterator()
print(next(iterator))
# prints (features, labels) tuple:
# (OrderedDict([('Feature_A', array([1, 2], dtype=int32))]),
# array([b'a', b'b'], dtype=object))
```
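To read the files exactly once, in order, disable the default repeat-and-shuffle behavior (a minimal sketch reusing `filename` from above):
```
dataset = tf.data.experimental.make_csv_dataset(
    filename, batch_size=2, num_epochs=1, shuffle=False)
```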
See the [Load CSV data guide](https://www.tensorflow.org/tutorials/load_data/csv) for more examples of using `make_csv_dataset` to read CSV data.
| Args |
| `file_pattern` | List of files or patterns of file paths containing CSV records. See [`tf.io.gfile.glob`](../../../../io/gfile/glob) for pattern rules. |
| `batch_size` | An int representing the number of records to combine in a single batch. |
| `column_names` | An optional list of strings that corresponds to the CSV columns, in order. One per column of the input record. If this is not provided, infers the column names from the first row of the records. These names will be the keys of the features dict of each dataset element. |
| `column_defaults` | An optional list of default values for the CSV fields. One item per selected column of the input record. Each item in the list is either a valid CSV dtype (float32, float64, int32, int64, or string), or a `Tensor` with one of the aforementioned types. The tensor can either be a scalar default value (if the column is optional), or an empty tensor (if the column is required). If a dtype is provided instead of a tensor, the column is also treated as required. If this list is not provided, tries to infer types based on reading the first `num_rows_for_inference` rows of the files specified, and assumes all columns are optional, defaulting to `0` for numeric values and `""` for string values. If both this and `select_columns` are specified, these must have the same lengths, and `column_defaults` is assumed to be sorted in order of increasing column index. |
| `label_name` | An optional string corresponding to the label column. If provided, the data for this column is returned as a separate `Tensor` from the features dictionary, so that the dataset complies with the format expected by a `tf.Estimator.train` or `tf.Estimator.evaluate` input function. |
| `select_columns` | An optional list of integer indices or string column names, that specifies a subset of columns of CSV data to select. If column names are provided, these must correspond to names provided in `column_names` or inferred from the file header lines. When this argument is specified, only a subset of CSV columns will be parsed and returned, corresponding to the columns specified. Using this results in faster parsing and lower memory usage. If both this and `column_defaults` are specified, these must have the same lengths, and `column_defaults` is assumed to be sorted in order of increasing column index. |
| `field_delim` | An optional `string`. Defaults to `","`. Char delimiter to separate fields in a record. |
| `use_quote_delim` | An optional bool. Defaults to `True`. If false, treats double quotation marks as regular characters inside of the string fields. |
| `na_value` | Additional string to recognize as NA/NaN. |
| `header` | A bool that indicates whether the first rows of provided CSV files correspond to header lines with column names, and should not be included in the data. |
| `num_epochs` | An int specifying the number of times this dataset is repeated. If None, cycles through the dataset forever. |
| `shuffle` | A bool that indicates whether the input should be shuffled. |
| `shuffle_buffer_size` | Buffer size to use for shuffling. A large buffer size ensures better shuffling, but increases memory usage and startup time. |
| `shuffle_seed` | Randomization seed to use for shuffling. |
| `prefetch_buffer_size` | An int specifying the number of feature batches to prefetch for performance improvement. Recommended value is the number of batches consumed per training step. Defaults to auto-tune. |
| `num_parallel_reads` | Number of threads used to read CSV records from files. If >1, the results will be interleaved. Defaults to `1`. |
| `sloppy` | If `True`, reading performance will be improved at the cost of non-deterministic ordering. If `False`, the order of elements produced is deterministic prior to shuffling (elements are still randomized if `shuffle=True`; note that if the seed is set, the order of elements after shuffling is also deterministic). Defaults to `False`. |
| `num_rows_for_inference` | Number of rows of a file to use for type inference if record\_defaults is not provided. If None, reads all the rows of all the files. Defaults to 100. |
| `compression_type` | (Optional.) A [`tf.string`](../../../../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no compression. |
| `ignore_errors` | (Optional.) If `True`, ignores errors with CSV file parsing, such as malformed data or empty lines, and moves on to the next valid CSV record. Otherwise, the dataset raises an error and stops processing when encountering any invalid records. Defaults to `False`. |
| Returns |
| A dataset, where each element is a (features, labels) tuple that corresponds to a batch of `batch_size` CSV rows. The features dictionary maps feature column names to `Tensor`s containing the corresponding column data, and labels is a `Tensor` containing the column data for the label column specified by `label_name`. |
| Raises |
| `ValueError` | If any of the arguments is malformed. |
tensorflow tf.compat.v1.data.experimental.TensorArrayStructure tf.compat.v1.data.experimental.TensorArrayStructure
===================================================
DEPRECATED FUNCTION
```
tf.compat.v1.data.experimental.TensorArrayStructure(
dtype, element_shape, dynamic_size, infer_shape
)
```
tensorflow tf.compat.v1.data.experimental.CsvDataset tf.compat.v1.data.experimental.CsvDataset
=========================================
A Dataset comprising lines from one or more CSV files.
Inherits From: [`Dataset`](../dataset), [`Dataset`](../../../../data/dataset)
```
tf.compat.v1.data.experimental.CsvDataset(
filenames,
record_defaults,
compression_type=None,
buffer_size=None,
header=False,
field_delim=',',
use_quote_delim=True,
na_value='',
select_cols=None,
exclude_cols=None
)
```
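For example, a hedged sketch for CSV files with three columns, where the second column is optional (the file pattern and defaults are assumptions):
```
dataset = tf.data.experimental.CsvDataset(
    "my_file*.csv",  # Hypothetical file pattern.
    record_defaults=[
        tf.float32,                            # Required float column.
        tf.constant([0.0], dtype=tf.float32),  # Optional, defaults to 0.0.
        tf.int32,                              # Required int column.
    ])
for f0, f1, f2 in dataset:
  print(f0.numpy(), f1.numpy(), f2.numpy())
```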
| Args |
| `filenames` | A [`tf.string`](../../../../../tf#string) tensor containing one or more filenames. |
| `record_defaults` | A list of default values for the CSV fields. Each item in the list is either a valid CSV `DType` (float32, float64, int32, int64, string), or a `Tensor` object with one of the above types. One per column of CSV data, with either a scalar `Tensor` default value for the column if it is optional, or `DType` or empty `Tensor` if required. If both this and `select_cols` are specified, these must have the same lengths, and `record_defaults` is assumed to be sorted in order of increasing column index. If both this and `exclude_cols` are specified, the sum of lengths of `record_defaults` and `exclude_cols` should equal the total number of columns in the CSV file. |
| `compression_type` | (Optional.) A [`tf.string`](../../../../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no compression. |
| `buffer_size` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar denoting the number of bytes to buffer while reading files. Defaults to 4MB. |
| `header` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar indicating whether the CSV file(s) have header line(s) that should be skipped when parsing. Defaults to `False`. |
| `field_delim` | (Optional.) A [`tf.string`](../../../../../tf#string) scalar containing the delimiter character that separates fields in a record. Defaults to `","`. |
| `use_quote_delim` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar. If `False`, treats double quotation marks as regular characters inside of string fields (ignoring RFC 4180, Section 2, Bullet 5). Defaults to `True`. |
| `na_value` | (Optional.) A [`tf.string`](../../../../../tf#string) scalar indicating a value that will be treated as NA/NaN. |
| `select_cols` | (Optional.) A sorted list of column indices to select from the input data. If specified, only this subset of columns will be parsed. Defaults to parsing all columns. At most one of `select_cols` and `exclude_cols` can be specified. |
| `exclude_cols` | (Optional.) A sorted list of column indices to exclude from the input data. If specified, only the complement of this set of columns will be parsed. Defaults to parsing all columns. At most one of `select_cols` and `exclude_cols` can be specified. |
| Attributes |
| `element_spec` | The type specification of an element of this dataset.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```
For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |
| `output_classes` | Returns the class of each component of an element of this dataset. (deprecated) |
| `output_shapes` | Returns the shape of each component of an element of this dataset. (deprecated) |
| `output_types` | Returns the type of each component of an element of this dataset. (deprecated) |
Methods
-------
### `apply`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276)
```
apply(
transformation_func
)
```
Applies a transformation function to this dataset.
`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`.
```
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. |
| Returns |
| `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. |
### `as_numpy_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620)
```
as_numpy_iterator()
```
Returns an iterator which converts all elements of the dataset to numpy.
Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
```
This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
```
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
```
`as_numpy_iterator()` will preserve the nested structure of dataset elements.
```
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
```
| Returns |
| An iterable over the elements of the dataset, with their tensors converted to numpy arrays. |
| Raises |
| `TypeError` | if an element contains a non-`Tensor` value. |
| `RuntimeError` | if eager execution is not enabled. |
### `batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754)
```
batch(
batch_size,
drop_remainder=False,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Combines consecutive elements of this dataset into batches.
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
```
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
```
The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
>
> **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch.
>
| Args |
| `batch_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `bucket_by_sequence_length`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971)
```
bucket_by_sequence_length(
element_length_func,
bucket_boundaries,
bucket_batch_sizes,
padded_shapes=None,
padding_values=None,
pad_to_bucket_boundary=False,
no_padding=False,
drop_remainder=False,
name=None
)
```
A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded and batched.
This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency.
Below is an example to bucketize the input data to the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2.
```
elements = [
[0], [1, 2, 3, 4], [5, 6, 7],
[7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
dataset = tf.data.Dataset.from_generator(
lambda: elements, tf.int64, output_shapes=[None])
dataset = dataset.bucket_by_sequence_length(
element_length_func=lambda elem: tf.shape(elem)[0],
bucket_boundaries=[3, 5],
bucket_batch_sizes=[2, 2, 2])
for elem in dataset.as_numpy_iterator():
print(elem)
[[1 2 3 4]
[5 6 7 0]]
[[ 7 8 9 10 11 0]
[13 14 15 16 19 20]]
[[ 0 0]
[21 22]]
```
| Args |
| `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. |
| `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. |
| `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. |
| `padded_shapes` | Nested structure of [`tf.TensorShape`](../../../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../../../../data/dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. |
| `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../../../../data/dataset#padded_batch). Defaults to padding with 0. |
| `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. |
| `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor) or of same shape). |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
### `cache`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576)
```
cache(
filename='', name=None
)
```
Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
>
> **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
>
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
```
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file")
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file!
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```
>
> **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`.
>
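For instance, a minimal sketch of the note above (assuming an in-memory cache): shuffling *after* `cache` reuses the cached elements while still re-randomizing order on each pass.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache()    # cache the deterministic source once
dataset = dataset.shuffle(5) # shuffle *after* cache, so each epoch reshuffles
dataset = dataset.repeat(2)
# Each pass over the cached elements may now appear in a different order.
```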
| Args |
| `filename` | A [`tf.string`](../../../../../tf#string) scalar [`tf.Tensor`](../../../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `cardinality`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754)
```
cardinality()
```
Returns the cardinality of the dataset, if known.
`cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
```
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
```
| Returns |
| A scalar [`tf.int64`](../../../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../../../data#UNKNOWN_CARDINALITY) respectively. |
### `choose_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471)
```
@staticmethod
choose_from_datasets(
datasets, choice_dataset, stop_on_empty_dataset=True
)
```
Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../../data/dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../../../../data/dataset) of scalar [`tf.int64`](../../../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. |
| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |
| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |
### `concatenate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289)
```
concatenate(
dataset, name=None
)
```
Creates a `Dataset` by concatenating the given dataset with this dataset.
```
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have
# compatible element specs.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
```
| Args |
| `dataset` | `Dataset` to be concatenated. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `enumerate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451)
```
enumerate(
start=0, name=None
)
```
Enumerates the elements of this dataset.
It is similar to Python's `enumerate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
```
```
# The (nested) structure of the input dataset determines the
# structure of elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
```
| Args |
| `start` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the start value for enumeration. |
| `name` | Optional. A name for the tf.data operations used by `enumerate`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `filter`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246)
```
filter(
predicate, name=None
)
```
Filters this dataset according to `predicate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
```
| Args |
| `predicate` | A function mapping a dataset element to a boolean. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `filter_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3948-L3965)
```
filter_with_legacy_function(
predicate
)
```
Filters this dataset according to `predicate`. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `filter` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `filter` as this method will be removed in V2.
>
| Args |
| `predicate` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../../tf#bool) tensor. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
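As a hedged migration sketch (the predicate here is illustrative), an existing use of this deprecated method can typically be rewritten with `filter` directly:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
# Before (deprecated):
# dataset = dataset.filter_with_legacy_function(lambda x: x < 3)
# After:
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
# [1, 2]
```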
### `flat_map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092)
```
flat_map(
map_func, name=None
)
```
Maps `map_func` across this dataset and flattens the result.
#### The type signature is:
```
def flat_map(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
```
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
[`tf.data.Dataset.interleave()`](../../../../data/dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../../../../data/dataset#interleave)
| Args |
| `map_func` | A function mapping a dataset element to a dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_generator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173)
```
@staticmethod
from_generator(
generator,
output_types=None,
output_shapes=None,
args=None,
output_signature=None,
name=None
)
```
Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments)
>
> **Note:** The current implementation of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) uses [`tf.numpy_function`](../../../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator). Additionally, using `from_generator` will preclude the use of the tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, so you should not use this method if you need to serialize your model and restore it in a different environment.
>
The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function).
The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified.
The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by the [`tf.TypeSpec`](../../../../typespec) objects from the `output_signature` argument:
```
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
```
There is also a deprecated way to call `from_generator`, either with the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../../../tensor) objects with the types defined by `output_types` and with shapes which are either unknown or defined by `output_shapes`.
>
> **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator).
>
>
> **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth.
>
| Args |
| `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. |
| `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../../tensorshape) objects corresponding to each component of an element yielded by `generator`. |
| `args` | (Optional.) A tuple of [`tf.Tensor`](../../../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. |
| `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../../../typespec) objects corresponding to each component of an element yielded by `generator`. |
| `name` | (Optional.) A name for the tf.data operations used by `from_generator`. |
| Returns |
| `Dataset` | A `Dataset`. |
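As an illustrative, hedged sketch of the `args` parameter (the generator and its argument are made up for this example), each value in `args` is evaluated and passed to `generator` as a NumPy-array argument:
```
def gen(limit):
  # `limit` arrives as a NumPy scalar because it is passed via `args`.
  for i in range(limit):
    yield i

dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.int64),
    args=(3,))
list(dataset.as_numpy_iterator())
# [0, 1, 2]
```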
### `from_sparse_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3732-L3743)
```
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
```
Splits each rank-N [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor) in this dataset row-wise. (deprecated)
| Args |
| `sparse_tensor` | A [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor). |
| Returns |
| `Dataset` | A `Dataset` of rank-(N-1) sparse tensors. |
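A minimal, hedged sketch of what this deprecated method does (the exact element representation may differ across versions): slicing a rank-2 sparse tensor yields one rank-1 sparse tensor per row.
```
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 1]],
                            values=[10, 20],
                            dense_shape=[2, 3])
dataset = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)
for row in dataset:  # eager iteration over the V1 dataset
  print(row)         # each `row` corresponds to one row of `st`
```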
### `from_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809)
```
@staticmethod
from_tensor_slices(
tensors, name=None
)
```
Creates a `Dataset` whose elements are slices of the given tensors.
The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
```
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
```
```
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
```
```
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
```
```
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
```
```
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729)
```
@staticmethod
from_tensors(
tensors, name=None
)
```
Creates a `Dataset` with a single element, comprising the given tensors.
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead.
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
```
```
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `get_single_element`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671)
```
get_single_element(
name=None
)
```
Returns the single element of the `dataset`.
The function enables you to use a [`tf.data.Dataset`](../../../../data/dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates the ease of data transformation on tensors using the optimized [`tf.data.Dataset`](../../../../data/dataset) abstraction on top of them.
For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label.
```
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
raw_features = ... # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
```
In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../../../../data/dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features.
>
> **Note:** The `dataset` should contain only one element.
>
Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `tf.data.get_single_element()` function is used to skip the iterator creation process and directly output the batch of features.
This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../../../../data/dataset) operations, and you want to use those transformations while serving your model.
#### Keras
```
model = ... # A pre-built or custom model
class PreprocessingModel(tf.keras.Model):
def __init__(self, model):
super().__init__(self)
self.model = model
@tf.function(input_signature=[...])
def serving_fn(self, data):
ds = tf.data.Dataset.from_tensor_slices(data)
ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
ds = ds.batch(batch_size=BATCH_SIZE)
return tf.argmax(self.model(ds.get_single_element()), axis=-1)
preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ... # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
signatures={'serving_default': preprocessing_model.serving_fn}
)
```
#### Estimator
In the case of estimators, you generally need to define a `serving_input_fn`, which specifies how the raw features are to be processed by the model at inference time.
```
def serving_input_fn():
raw_feature_spec = ... # Spec for the raw_features
input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = input_fn()
raw_features = serving_input_receiver.features
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
# Please note that the value of `BATCH_SIZE` should be equal to
# the size of the leading dimension of `raw_features`. This ensures
# that `dataset` has only one element, which is a prerequisite for
# using `dataset.get_single_element()`.
return tf.estimator.export.ServingInputReceiver(
processed_features, serving_input_receiver.receiver_tensors)
estimator = ... # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A nested structure of [`tf.Tensor`](../../../../tensor) objects, corresponding to the single element of `dataset`. |
| Raises |
| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
### `group_by_window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824)
```
group_by_window(
key_func, reduce_func, window_size=None, window_size_func=None, name=None
)
```
Groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`.
```
dataset = tf.data.Dataset.range(10)
window_size = 5
key_func = lambda x: x%2
reduce_func = lambda key, dataset: dataset.batch(window_size)
dataset = dataset.group_by_window(
key_func=key_func,
reduce_func=reduce_func,
window_size=window_size)
for elem in dataset.as_numpy_iterator():
print(elem)
[0 2 4 6 8]
[1 3 5 7 9]
```
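Alternatively, a hedged sketch of the `window_size_func` variant (the per-key window sizes here are arbitrary), where the window size depends on the key:
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda key, ds: ds.batch(5),
    window_size_func=lambda key: key + 2)  # even keys: windows of 2, odd keys: windows of 3
for elem in dataset.as_numpy_iterator():
  print(elem)
```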
| Args |
| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../../../tf#int64) tensor. |
| `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. |
| `window_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. |
| `window_size_func` | A function mapping a key to a [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. |
### `interleave`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222)
```
interleave(
map_func,
cycle_length=None,
block_length=None,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Maps `map_func` across this dataset, and interleaves the results.
#### The type signature is:
```
def interleave(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
For example, you can use [`Dataset.interleave()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) to process many input files concurrently:
```
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
```
The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../../../../data/dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.
#### For example:
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
```
>
> **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined.
>
Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`.
```
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
| Args |
| `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../../../../data/dataset). |
| `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. |
| `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. |
| `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `list_files`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393)
```
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None, name=None
)
```
A dataset of all files matching one or more glob patterns.
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems.
>
> **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order.
>
#### Example:
If we had the following files on our filesystem:
* /path/to/dir/a.txt
* /path/to/dir/b.py
* /path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the `file_pattern`, the dataset would produce:
* /path/to/dir/b.py
* /path/to/dir/c.py
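For example, a minimal sketch (the paths are illustrative, and assume those files exist) that lists the matching files in a deterministic order:
```
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f.numpy())
# b'/path/to/dir/b.py'
# b'/path/to/dir/c.py'
```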
| Args |
| `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. |
| `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `name` | Optional. A name for the tf.data operations used by `list_files`. |
| Returns |
| `Dataset` | A `Dataset` of strings corresponding to file names. |
### `make_initializable_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3601-L3648)
```
make_initializable_iterator(
shared_name=None
)
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be in an uninitialized state, and you must run the `iterator.initializer` operation before using it:
>
```
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
```
| Args |
| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| Returns |
| A [`tf.data.Iterator`](../../../../data/iterator) for elements of this dataset. |
| Raises |
| `RuntimeError` | If eager execution is enabled. |
### `make_one_shot_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3509-L3548)
```
make_one_shot_iterator()
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see `make_initializable_iterator`.
>
#### Example:
```
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
```
| Returns |
| A [`tf.data.Iterator`](../../../../data/iterator) for elements of this dataset. |
### `map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056)
```
map(
map_func, num_parallel_calls=None, deterministic=None, name=None
)
```
Maps `map_func` across the elements of this dataset.
This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
For example, `map` can be used for adding 1 to each element, or projecting a subset of element components.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
```
The input signature of `map_func` is determined by the structure of each element in this dataset.
```
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
```
```
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
```
```
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
```
The value or values returned by `map_func` determine the structure of each element in the returned dataset.
```
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
```
`map_func` can accept as arguments and return any type of dataset element.
Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options:
1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code.
2) Use [`tf.py_function`](../../../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
3) Use [`tf.numpy_function`](../../../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../../../py_function) accepts [`tf.Tensor`](../../../../tensor) whereas [`tf.numpy_function`](../../../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
Note that the use of [`tf.numpy_function`](../../../../numpy_function) and [`tf.py_function`](../../../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL).
Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value.
| Args |
| `map_func` | A function mapping a dataset element to another dataset element. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `map_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3872-L3920)
```
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
```
Maps `map_func` across the elements of this dataset. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `map` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map` as this method will be removed in V2.
>
| Args |
| `map_func` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another (nested) structure of tensors. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../../../tf#int32) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, this boolean controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| Returns |
| `Dataset` | A `Dataset`. |
### `options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4009-L4010)
```
options()
```
Returns the options for this dataset and its inputs.
| Returns |
| A [`tf.data.Options`](../../../../data/options) object representing the dataset options. |
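A minimal sketch of how `options()` relates to `with_options` (the `deterministic` option is just one example): options set via `with_options` are reflected in the value returned by `options()`.
```
dataset = tf.data.Dataset.range(5)
options = tf.data.Options()
options.deterministic = False
dataset = dataset.with_options(options)
print(dataset.options().deterministic)
# False
```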
### `padded_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889)
```
padded_batch(
batch_size,
padded_shapes=None,
padding_values=None,
drop_remainder=False,
name=None
)
```
Combines consecutive elements of this dataset into padded batches.
This transformation combines multiple consecutive elements of the input dataset into a single element.
Like [`tf.data.Dataset.batch`](../../../../data/dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
Unlike [`tf.data.Dataset.batch`](../../../../data/dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element:
* If the dimension is a constant, the component will be padded out to that length in that dimension.
* If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
```
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
```
See also [`tf.data.experimental.dense_to_sparse_batch`](../../../../data/experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor).
| Args |
| `batch_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../../tensorshape) or [`tf.int64`](../../../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. |
| `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. |
| `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> |
### `prefetch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321)
```
prefetch(
buffer_size, name=None
)
```
Creates a `Dataset` that prefetches elements from this dataset.
Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
>
> **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each).
>
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. |
| `name` | Optional. A name for the tf.data transformation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `random`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992)
```
@staticmethod
random(
seed=None, name=None
)
```
Creates a `Dataset` of pseudorandom values.
The dataset generates a sequence of uniformly distributed integer values.
```
ds1 = tf.data.Dataset.random(seed=4).take(10)
ds2 = tf.data.Dataset.random(seed=4).take(10)
print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator()))
True
```
| Args |
| `seed` | (Optional.) If specified, the dataset produces a deterministic sequence of values. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `range`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211)
```
@staticmethod
range(
*args, **kwargs
)
```
Creates a `Dataset` of a step-separated range of values.
```
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
```
| Args |
| `*args` | Follows the same semantics as Python's `range`. `len(args) == 1` -> start = 0, stop = args[0], step = 1. `len(args) == 2` -> start = args[0], stop = args[1], step = 1. `len(args) == 3` -> start = args[0], stop = args[1], step = args[2]. |
| `**kwargs` | * `output_type`: (Optional.) The expected dtype of the elements. Defaults to [`tf.int64`](../../../../../tf#int64).
* `name`: (Optional.) A name for the tf.data operation.
|
| Returns |
| `Dataset` | A `RangeDataset`. |
| Raises |
| `ValueError` | if `len(args) == 0`. |
### `reduce`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544)
```
reduce(
initial_state, reduce_func, name=None
)
```
Reduces the input dataset to a single element.
The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
```
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
```
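The state may also be a (nested) structure, as long as `reduce_func` preserves it; a hedged sketch that tracks the count and the sum together in a tuple state:
```
count_and_sum = tf.data.Dataset.range(5).reduce(
    (np.int64(0), np.int64(0)),
    lambda state, x: (state[0] + 1, state[1] + x))
print(count_and_sum[0].numpy(), count_and_sum[1].numpy())
# 5 10
```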
| Args |
| `initial_state` | An element representing the initial state of the transformation. |
| `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new state. The structure of `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A dataset element corresponding to the final state of the transformation. |
### `rejection_resample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272)
```
rejection_resample(
class_func, target_dist, initial_dist=None, seed=None, name=None
)
```
A transformation that resamples a dataset to a target distribution.
Let's consider the following example, where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution.
```
initial_dist = [0.6, 0.4]
num_classes = len(initial_dist)
num_samples = 1000
data_np = np.random.choice(num_classes, num_samples, p=initial_dist)
dataset = tf.data.Dataset.from_tensor_slices(data_np)
```
The class distribution of the samples in `dataset` will be close to `{0: 600, 1: 400}` (out of `num_samples=1000`), as per the `initial_dist` distribution.
```
target_dist = [0.5, 0.5]
resampled_dataset = dataset.rejection_resample(
class_func=lambda x: x,
target_dist=target_dist,
initial_dist=initial_dist)
resampled_dataset = resampled_dataset.map(
lambda class_func_result, data: data)
```
The distribution of classes in the resampled dataset will now be close to the target distribution.
| Args |
| `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../../../tf#int32) tensor. Values should be in `[0, num_classes)`. |
| `target_dist` | A floating point type tensor, shaped `[num_classes]`. |
| `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. |
| `seed` | (Optional.) Python integer seed for the resampler. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset` |
### `repeat`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416)
```
repeat(
count=None, name=None
)
```
Repeats this dataset so each original value is seen `count` times.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```
>
> **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements.
>
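For instance, a minimal sketch of the note above: repeating a shuffled dataset reshuffles on each repetition by default, so the two passes below may come out in different orders.
```
dataset = tf.data.Dataset.range(4).shuffle(4).repeat(2)
print(list(dataset.as_numpy_iterator()))
# e.g. [2, 0, 3, 1, 1, 3, 0, 2]
```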
| Args |
| `count` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset be repeated indefinitely. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `sample_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412)
```
@staticmethod
sample_from_datasets(
datasets, weights=None, seed=None, stop_on_empty_dataset=False
)
```
Samples elements at random from the datasets in `datasets`.
Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets:
```
dataset1 = tf.data.Dataset.range(0, 3)
dataset2 = tf.data.Dataset.range(100, 103)
```
Suppose that we sample from these 2 datasets with the following weights:
```
sample_dataset = tf.data.Dataset.sample_from_datasets(
[dataset1, dataset2], weights=[0.5, 0.5])
```
One possible outcome of the elements in `sample_dataset` is:
```
print(list(sample_dataset.as_numpy_iterator()))
# [100, 0, 1, 101, 2, 102]
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../../data/dataset) objects with compatible structure. |
| `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../../../../data/dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. |
| Raises |
| `TypeError` | If the `datasets` or `weights` arguments have the wrong type. |
| `ValueError` | * If `datasets` is empty, or
* If `weights` is specified and does not match the length of `datasets`.
|
### `scan`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130)
```
scan(
initial_state, scan_func, name=None
)
```
A transformation that scans a function across an input dataset.
This transformation is a stateful relative of [`tf.data.Dataset.map`](../../../../data/dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`.
```
dataset = tf.data.Dataset.range(10)
initial_state = tf.constant(0, dtype=tf.int64)
scan_func = lambda state, i: (state + i, state + i)
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```
| Args |
| `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. |
| `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
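`initial_state` may also be a nested structure of tensors. A hedged sketch that carries a hypothetical `(count, total)` pair as state while emitting running totals:
```
dataset = tf.data.Dataset.range(1, 5)
initial_state = (tf.constant(0, dtype=tf.int64), tf.constant(0, dtype=tf.int64))
def scan_func(state, i):
  count, total = state
  # The returned new_state must match the structure of initial_state.
  return (count + 1, total + i), total + i
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
# [1, 3, 6, 10]
```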
### `shard`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685)
```
shard(
num_shards, index, name=None
)
```
Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of `A` whose index mod `n` equals `i`.
```
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
```
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can shard elements as follows:
```
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
#### Important caveats:
* Be sure to shard before you use any randomizing operator (such as shuffle).
* Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
```
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
| Args |
| `num_shards` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of shards operating in parallel. |
| `index` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the worker index. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `InvalidArgumentError` | if `num_shards` or `index` are illegal values.
**Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing a placeholder tensor bypasses the early checking, and will instead result in an error during a `session.run` call.)
|
### `shuffle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523)
```
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
```
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.
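As an extreme illustration, `buffer_size=1` leaves the order unchanged, since each draw has only one candidate element:
```
dataset = tf.data.Dataset.range(5).shuffle(buffer_size=1)
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```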
`reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 0, 2]
```
In TF 2.0, [`tf.data.Dataset`](../../../../data/dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 2, 0]
```
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 0, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements from this dataset from which the new dataset will sample. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `skip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616)
```
skip(
count, name=None
)
```
Creates a `Dataset` that skips `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
```
| Args |
| `count` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `snapshot`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099)
```
snapshot(
path,
compression='AUTO',
reader_func=None,
shard_func=None,
name=None
)
```
API to persist the output of the input dataset.
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run.
This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time.
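A minimal sketch, assuming `/tmp/snapshot` is a writable directory: the first run executes the preprocessing and writes its output to disk; subsequent runs read the materialized data instead of recomputing it.
```
dataset = tf.data.Dataset.range(100)
dataset = dataset.map(lambda x: x * 2)  # stand-in for expensive preprocessing
dataset = dataset.snapshot("/tmp/snapshot")
```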
<https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature.
Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters.
`shard_func` is a user specified function that maps input elements to snapshot shards.
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written.
```
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...)
dataset = dataset.map(lambda x, y: y)
```
`reader_func` is a user specified function that accepts a single argument: a `Dataset` of `Dataset`s, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a `Dataset` of elements of the original dataset.
Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:
```
def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func)
```
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
| Args |
| `path` | Required. A directory to use for storing / loading the snapshot to / from. |
| `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. |
| `reader_func` | Optional. A function to control how to read data from snapshot shards. |
| `shard_func` | Optional. A function to control how to shard data when writing a snapshot. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `take`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596)
```
take(
count, name=None
)
```
Creates a `Dataset` with at most `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `count` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `take_while`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150)
```
take_while(
predicate, name=None
)
```
A transformation that stops dataset iteration based on a `predicate`.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take_while(lambda x: x < 5)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../../tf#bool) tensor. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unbatch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698)
```
unbatch(
name=None
)
```
Splits elements of a dataset into multiple elements.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`.
```
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
```
>
> **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unique`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173)
```
unique(
name=None
)
```
A transformation that discards duplicate elements of a `Dataset`.
Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```
>
> **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../../../tf#int32), [`tf.int64`](../../../../../tf#int64) or [`tf.string`](../../../../../tf#string) type.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)
```
window(
size, shift=None, stride=1, drop_remainder=False, name=None
)
```
Returns a dataset of "windows".
Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).
#### For example:
```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```
Since windows are datasets, they can be iterated over:
```
for window in dataset:
print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```
#### Shift
The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```
#### Stride
The `stride` argument determines the stride between input elements within a window.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2,
drop_remainder=True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```
#### Nested elements
When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them.
#### The type signature is:
```
def window(
self: Dataset[Nest[T]], ...
) -> Dataset[Nest[Dataset[T]]]
```
Applying `window` to a `Dataset` of tuples gives a tuple of windows:
```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
[6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```
```
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```
Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:
```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]})
dataset = dataset.window(2)
def to_numpy(ds):
return list(ds.as_numpy_iterator())
for windows in dataset:
print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```
#### Flatten a dataset of windows
The [`Dataset.flat_map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flat_map) and [`Dataset.interleave`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.
The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.
For example, to turn each window into a dense tensor:
```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1,
drop_remainder=True)
batched = dataset.flat_map(lambda x: x.batch(3))
for batch in batched:
print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```
| Args |
| `size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. |
| `shift` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
| `stride` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |
### `with_options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)
```
with_options(
options, name=None
)
```
Returns a new [`tf.data.Dataset`](../../../../data/dataset) with the given options set.
The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```
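For example, non-conflicting options set in separate calls are merged (a sketch; `experimental_slack` is just one arbitrary second option):
```
ds = tf.data.Dataset.range(5)
opts1 = tf.data.Options()
opts1.deterministic = False
opts2 = tf.data.Options()
opts2.experimental_slack = True
# Different options with non-default values merge without error.
ds = ds.with_options(opts1).with_options(opts2)
```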
| Args |
| `options` | A [`tf.data.Options`](../../../../data/options) that identifies the options to use. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` with the given options. |
| Raises |
| `ValueError` | when an option is set more than once to a non-default value |
### `zip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)
```
@staticmethod
zip(
datasets, name=None
)
```
Creates a `Dataset` by zipping together the given datasets.
This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```
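Since `datasets` can be any (nested) structure, zipping a dictionary of datasets yields dictionary elements (a small sketch reusing `a` and `b` from above):
```
ds = tf.data.Dataset.zip({'x': a, 'y': b})
list(ds.as_numpy_iterator())
# [{'x': 1, 'y': 4}, {'x': 2, 'y': 5}, {'x': 3, 'y': 6}]
```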
| Args |
| `datasets` | A (nested) structure of datasets. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `__bool__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__bool__()
```
### `__iter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4016-L4017)
```
__iter__()
```
Creates an iterator for elements of this dataset.
The returned iterator implements the Python Iterator protocol.
| Returns |
| An [`tf.data.Iterator`](../../../../data/iterator) for the elements of this dataset. |
| Raises |
| `RuntimeError` | If not inside of tf.function and not executing eagerly. |
### `__len__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)
```
__len__()
```
Returns the length of the dataset if it is known and finite.
This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../../../../data/dataset#cardinality) instead.
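For example, in eager mode:
```
dataset = tf.data.Dataset.range(42)
len(dataset)
# 42
```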
| Returns |
| An integer representing the length of the dataset. |
| Raises |
| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |
### `__nonzero__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__nonzero__()
```
tensorflow tf.compat.v1.data.experimental.SqlDataset tf.compat.v1.data.experimental.SqlDataset
=========================================
A `Dataset` consisting of the results from a SQL query.
Inherits From: [`Dataset`](../dataset), [`Dataset`](../../../../data/dataset)
```
tf.compat.v1.data.experimental.SqlDataset(
driver_name, data_source_name, query, output_types
)
```
| Args |
| `driver_name` | A 0-D [`tf.string`](../../../../../tf#string) tensor containing the database type. Currently, the only supported value is 'sqlite'. |
| `data_source_name` | A 0-D [`tf.string`](../../../../../tf#string) tensor containing a connection string to connect to the database. |
| `query` | A 0-D [`tf.string`](../../../../../tf#string) tensor containing the SQL query to execute. |
| `output_types` | A tuple of [`tf.DType`](../../../../dtypes/dtype) objects representing the types of the columns returned by `query`. |
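A hedged sketch, assuming a hypothetical SQLite database at `/tmp/example.db` containing a table `users(name TEXT, age INTEGER)`:
```
dataset = tf.compat.v1.data.experimental.SqlDataset(
    driver_name="sqlite",
    data_source_name="/tmp/example.db",
    query="SELECT name, age FROM users",
    output_types=(tf.string, tf.int32))
```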
| Attributes |
| `element_spec` | The type specification of an element of this dataset.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```
For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |
| `output_classes` | Returns the class of each component of an element of this dataset. (deprecated)
|
| `output_shapes` | Returns the shape of each component of an element of this dataset. (deprecated)
|
| `output_types` | Returns the type of each component of an element of this dataset. (deprecated)
|
Methods
-------
### `apply`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276)
```
apply(
transformation_func
)
```
Applies a transformation function to this dataset.
`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`.
```
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. |
| Returns |
| `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. |
### `as_numpy_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620)
```
as_numpy_iterator()
```
Returns an iterator which converts all elements of the dataset to numpy.
Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
```
This method requires that you are running in eager mode and the dataset's `element_spec` contains only `TensorSpec` components.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
```
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
```
`as_numpy_iterator()` will preserve the nested structure of dataset elements.
```
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
```
| Returns |
| An iterable over the elements of the dataset, with their tensors converted to numpy arrays. |
| Raises |
| `TypeError` | if an element contains a non-`Tensor` value. |
| `RuntimeError` | if eager execution is not enabled. |
### `batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754)
```
batch(
batch_size,
drop_remainder=False,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Combines consecutive elements of this dataset into batches.
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
```
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
```
The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
>
> **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch.
>
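A small sketch of asynchronous batching, letting tf.data tune the parallelism while keeping the output order deterministic:
```
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, num_parallel_calls=tf.data.AUTOTUNE,
                        deterministic=True)
```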
| Args |
| `batch_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `bucket_by_sequence_length`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971)
```
bucket_by_sequence_length(
element_length_func,
bucket_boundaries,
bucket_batch_sizes,
padded_shapes=None,
padding_values=None,
pad_to_bucket_boundary=False,
no_padding=False,
drop_remainder=False,
name=None
)
```
A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded and batched.
This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency.
Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2.
```
elements = [
[0], [1, 2, 3, 4], [5, 6, 7],
[7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
dataset = tf.data.Dataset.from_generator(
lambda: elements, tf.int64, output_shapes=[None])
dataset = dataset.bucket_by_sequence_length(
element_length_func=lambda elem: tf.shape(elem)[0],
bucket_boundaries=[3, 5],
bucket_batch_sizes=[2, 2, 2])
for elem in dataset.as_numpy_iterator():
print(elem)
[[1 2 3 4]
[5 6 7 0]]
[[ 7 8 9 10 11 0]
[13 14 15 16 19 20]]
[[ 0 0]
[21 22]]
```
| Args |
| `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. |
| `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. |
| `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. |
| `padded_shapes` | Nested structure of [`tf.TensorShape`](../../../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../../../../data/dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. |
| `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../../../../data/dataset#padded_batch). Defaults to padding with 0. |
| `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. |
| `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor) or of same shape). |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
### `cache`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576)
```
cache(
filename='', name=None
)
```
Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
>
> **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
>
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
```
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file")
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file!
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```
>
> **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`.
>
| Args |
| `filename` | A [`tf.string`](../../../../../tf#string) scalar [`tf.Tensor`](../../../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `cardinality`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754)
```
cardinality()
```
Returns the cardinality of the dataset, if known.
`cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
```
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
```
| Returns |
| A scalar [`tf.int64`](../../../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../../../data#UNKNOWN_CARDINALITY) respectively. |
### `choose_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471)
```
@staticmethod
choose_from_datasets(
datasets, choice_dataset, stop_on_empty_dataset=True
)
```
Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../../data/dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../../../../data/dataset) of scalar [`tf.int64`](../../../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. |
| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |
| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |
### `concatenate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289)
```
concatenate(
dataset, name=None
)
```
Creates a `Dataset` by concatenating the given dataset with this dataset.
```
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have
# compatible element specs.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
```
| Args |
| `dataset` | `Dataset` to be concatenated. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `enumerate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451)
```
enumerate(
start=0, name=None
)
```
Enumerates the elements of this dataset.
It is similar to Python's `enumerate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
```
```
# The (nested) structure of the input dataset determines the
# structure of elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
```
| Args |
| `start` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the start value for enumeration. |
| `name` | Optional. A name for the tf.data operations used by `enumerate`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `filter`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246)
```
filter(
predicate, name=None
)
```
Filters this dataset according to `predicate`.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
```
| Args |
| `predicate` | A function mapping a dataset element to a boolean. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `filter_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3948-L3965)
```
filter_with_legacy_function(
predicate
)
```
Filters this dataset according to `predicate`. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `filter` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `filter` as this method will be removed in V2.
>
| Args |
| `predicate` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../../tf#bool) tensor. |
| Returns |
| `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. |
### `flat_map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092)
```
flat_map(
map_func, name=None
)
```
Maps `map_func` across this dataset and flattens the result.
#### The type signature is:
```
def flat_map(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
```
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
[`tf.data.Dataset.interleave()`](../../../../data/dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../../../../data/dataset#interleave)
| Args |
| `map_func` | A function mapping a dataset element to a dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_generator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173)
```
@staticmethod
from_generator(
generator,
output_types=None,
output_shapes=None,
args=None,
output_signature=None,
name=None
)
```
Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments)
>
> **Note:** The current implementation of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) uses [`tf.numpy_function`](../../../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator). In particular, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment.
>
The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function).
The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified.
The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../../../typespec) objects from `output_signature` argument:
```
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
```
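The `args` argument can forward tensor values to `generator`, which receives them as NumPy values; a minimal sketch:
```
def gen(limit):
  for i in range(limit):
    yield i
dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.int32),
    args=(5,))
list(dataset.as_numpy_iterator())
# [0, 1, 2, 3, 4]
```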
There is also a deprecated way to call `from_generator`, using either the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../../../tensor) objects with the types defined by `output_types` and with the shapes which are either unknown or defined by `output_shapes`.
>
> **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator).
>
>
> **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth.
>
| Args |
| `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. |
| `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. |
| `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../../tensorshape) objects corresponding to each component of an element yielded by `generator`. |
| `args` | (Optional.) A tuple of [`tf.Tensor`](../../../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. |
| `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../../../typespec) objects corresponding to each component of an element yielded by `generator`. |
| `name` | (Optional.) A name for the tf.data operations used by `from_generator`. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_sparse_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3732-L3743)
```
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
```
Splits each rank-N [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor) in this dataset row-wise. (deprecated)
| Args |
| `sparse_tensor` | A [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor). |
| Returns |
| `Dataset` | A `Dataset` of rank-(N-1) sparse tensors. |
### `from_tensor_slices`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809)
```
@staticmethod
from_tensor_slices(
tensors, name=None
)
```
Creates a `Dataset` whose elements are slices of the given tensors.
The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
```
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
```
```
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
```
```
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
```
```
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
```
```
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `from_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729)
```
@staticmethod
from_tensors(
tensors, name=None
)
```
Creates a `Dataset` with a single element, comprising the given tensors.
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead.
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
```
```
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
```
Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
| Args |
| `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `get_single_element`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671)
```
get_single_element(
name=None
)
```
Returns the single element of the `dataset`.
The function enables you to use a [`tf.data.Dataset`](../../../../data/dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates data transformation on tensors using the optimized [`tf.data.Dataset`](../../../../data/dataset) abstraction on top of them.
For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label.
```
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
raw_features = ... # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
```
In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../../../../data/dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn`, and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features.
>
> **Note:** The `dataset` should contain only one element.
>
Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `get_single_element()` method is used to skip the iterator creation process and directly output the batch of features.
This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../../../../data/dataset) operations, and you want to use those transformations while serving your model.
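In its simplest form, on a dataset holding exactly one element:
```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
dataset.get_single_element()
# <tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 2, 3], dtype=int32)>
```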
#### Keras
```
model = ... # A pre-built or custom model
class PreprocessingModel(tf.keras.Model):
def __init__(self, model):
super().__init__(self)
self.model = model
@tf.function(input_signature=[...])
def serving_fn(self, data):
ds = tf.data.Dataset.from_tensor_slices(data)
ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
ds = ds.batch(batch_size=BATCH_SIZE)
return tf.argmax(self.model(ds.get_single_element()), axis=-1)
preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ... # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
signatures={'serving_default': preprocessing_model.serving_fn}
)
```
#### Estimator
In the case of estimators, you generally need to define a `serving_input_fn` which processes the features before they are passed to the model at inference time.
```
def serving_input_fn():
raw_feature_spec = ... # Spec for the raw_features
input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = input_fn()
raw_features = serving_input_receiver.features
def preprocessing_fn(raw_feature):
# ... the raw_feature is preprocessed as per the use-case
return feature
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
processed_features = dataset.get_single_element()
# Please note that the value of `BATCH_SIZE` should be equal to
# the size of the leading dimension of `raw_features`. This ensures
# that `dataset` has only one element, which is a prerequisite for
# using `dataset.get_single_element()`.
return tf.estimator.export.ServingInputReceiver(
processed_features, serving_input_receiver.receiver_tensors)
estimator = ... # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A nested structure of [`tf.Tensor`](../../../../tensor) objects, corresponding to the single element of `dataset`. |
| Raises |
| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
### `group_by_window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824)
```
group_by_window(
key_func, reduce_func, window_size=None, window_size_func=None, name=None
)
```
Groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`.
```
dataset = tf.data.Dataset.range(10)
window_size = 5
key_func = lambda x: x%2
reduce_func = lambda key, dataset: dataset.batch(window_size)
dataset = dataset.group_by_window(
key_func=key_func,
reduce_func=reduce_func,
window_size=window_size)
for elem in dataset.as_numpy_iterator():
  print(elem)
[0 2 4 6 8]
[1 3 5 7 9]
```
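A sketch of the `window_size_func` variant (illustrative: here even keys get windows of 2 elements and odd keys windows of 5; the order in which finished windows are emitted may vary):
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda key, ds: ds.batch(5),
    window_size_func=lambda key: key * 3 + 2)  # key 0 -> 2, key 1 -> 5
for elem in dataset.as_numpy_iterator():
  print(elem)
# One possible output:
# [0 2]
# [4 6]
# [1 3 5 7 9]
# [8]
```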
| Args |
| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../../../tf#int64) tensor. |
| `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. |
| `window_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. |
| `window_size_func` | A function mapping a key to a [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
| Raises |
| `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. |
### `interleave`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222)
```
interleave(
map_func,
cycle_length=None,
block_length=None,
num_parallel_calls=None,
deterministic=None,
name=None
)
```
Maps `map_func` across this dataset, and interleaves the results.
#### The type signature is:
```
def interleave(
self: Dataset[T],
map_func: Callable[[T], Dataset[S]]
) -> Dataset[S]
```
For example, you can use [`Dataset.interleave()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) to process many input files concurrently:
```
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
  return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
```
The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../../../../data/dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.
#### For example:
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
```
>
> **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined.
>
Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`.
```
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
| Args |
| `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../../../../data/dataset). |
| `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. |
| `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. |
| `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `list_files`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393)
```
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None, name=None
)
```
A dataset of all files matching one or more glob patterns.
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems.
>
> **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order.
>
#### Example:
If we had the following files on our filesystem:
* /path/to/dir/a.txt
* /path/to/dir/b.py
* /path/to/dir/c.py
If we pass "/path/to/dir/\*.py" as the `file_pattern`, the dataset would produce:
* /path/to/dir/b.py
* /path/to/dir/c.py
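A hedged sketch of that call (assuming those files exist; `shuffle=False` keeps the order deterministic):
```
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for filename in dataset:
  print(filename.numpy())
# b'/path/to/dir/b.py'
# b'/path/to/dir/c.py'
```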
| Args |
| `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. |
| `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `name` | Optional. A name for the tf.data operations used by `list_files`. |
| Returns |
| `Dataset` | A `Dataset` of strings corresponding to file names. |
### `make_initializable_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3601-L3648)
```
make_initializable_iterator(
shared_name=None
)
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be in an uninitialized state, and you must run the `iterator.initializer` operation before using it:
>
```
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
  while True:
    value = sess.run(next_value)
    ...
except tf.errors.OutOfRangeError:
  pass
```
| Args |
| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
| Returns |
| A [`tf.data.Iterator`](../../../../data/iterator) for elements of this dataset. |
| Raises |
| `RuntimeError` | If eager execution is enabled. |
### `make_one_shot_iterator`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3509-L3548)
```
make_one_shot_iterator()
```
Creates an iterator for elements of this dataset. (deprecated)
>
> **Note:** The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see `make_initializable_iterator`.
>
#### Example:
```
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
  while True:
    value = sess.run(next_value)
    ...
except tf.errors.OutOfRangeError:
  pass
```
| Returns |
| An [`tf.data.Iterator`](../../../../data/iterator) for elements of this dataset. |
### `map`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056)
```
map(
map_func, num_parallel_calls=None, deterministic=None, name=None
)
```
Maps `map_func` across the elements of this dataset.
This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
For example, `map` can be used for adding 1 to each element, or projecting a subset of element components.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
```
The input signature of `map_func` is determined by the structure of each element in this dataset.
```
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
```
```
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
```
```
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
```
The value or values returned by `map_func` determine the structure of each element in the returned dataset.
```
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
  return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
  return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
  return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
```
`map_func` can accept as arguments and return any type of dataset element.
Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options:
1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code.
2) Use [`tf.py_function`](../../../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
  return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
3) Use [`tf.numpy_function`](../../../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../../../py_function) accepts [`tf.Tensor`](../../../../tensor) whereas [`tf.numpy_function`](../../../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example:
```
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
  return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
```
Note that the use of [`tf.numpy_function`](../../../../numpy_function) and [`tf.py_function`](../../../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL).
Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`.
```
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
```
The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value.
| Args |
| `map_func` | A function mapping a dataset element to another dataset element. |
| `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `map_with_legacy_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3872-L3920)
```
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
```
Maps `map_func` across the elements of this dataset. (deprecated)
>
> **Note:** This is an escape hatch for existing uses of `map` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map` as this method will be removed in V2.
>
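A minimal sketch (illustrative only; this legacy escape hatch targets V1 graph-mode functions, and new code should call `map` directly):
```
dataset = tf.compat.v1.data.Dataset.range(5)
dataset = dataset.map_with_legacy_function(lambda x: x * 2)
# Equivalent in effect to dataset.map(lambda x: x * 2).
```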
| Args |
| `map_func` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another (nested) structure of tensors. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../../../tf#int32) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| `deterministic` | (Optional.) When `num_parallel_calls` is specified, this boolean controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../../../../data/options#deterministic) option (`True` by default) controls the behavior. |
| Returns |
| `Dataset` | A `Dataset`. |
### `options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4009-L4010)
```
options()
```
Returns the options for this dataset and its inputs.
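For example, options set with `with_options` are reflected here (a minimal sketch):
```
ds = tf.data.Dataset.range(5)
opts = tf.data.Options()
opts.deterministic = False
ds = ds.with_options(opts)
print(ds.options().deterministic)  # False
```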
| Returns |
| A [`tf.data.Options`](../../../../data/options) object representing the dataset options. |
### `padded_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889)
```
padded_batch(
batch_size,
padded_shapes=None,
padding_values=None,
drop_remainder=False,
name=None
)
```
Combines consecutive elements of this dataset into padded batches.
This transformation combines multiple consecutive elements of the input dataset into a single element.
Like [`tf.data.Dataset.batch`](../../../../data/dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.
Unlike [`tf.data.Dataset.batch`](../../../../data/dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element:
* If the dimension is a constant, the component will be padded out to that length in that dimension.
* If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
```
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
  print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
  print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
  print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
  print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
```
See also [`tf.data.experimental.dense_to_sparse_batch`](../../../../data/experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../../../sparse/sparsetensor).
| Args |
| `batch_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../../../tensorshape) or [`tf.int64`](../../../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. |
| `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. |
| `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> |
### `prefetch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321)
```
prefetch(
buffer_size, name=None
)
```
Creates a `Dataset` that prefetches elements from this dataset.
Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
>
> **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each).
>
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. |
| `name` | Optional. A name for the tf.data transformation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `random`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992)
```
@staticmethod
random(
seed=None, name=None
)
```
Creates a `Dataset` of pseudorandom values.
The dataset generates a sequence of uniformly distributed integer values.
```
ds1 = tf.data.Dataset.random(seed=4).take(10)
ds2 = tf.data.Dataset.random(seed=4).take(10)
print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator()))
True
```
| Args |
| `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `range`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211)
```
@staticmethod
range(
*args, **kwargs
)
```
Creates a `Dataset` of a step-separated range of values.
```
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
```
| Args |
| `*args` | follows the same semantics as python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. |
| `**kwargs` | * output\_type: The expected dtype of the elements. (Optional, default: [`tf.int64`](../../../../../tf#int64)).
* name: (Optional.) A name for the tf.data operation.
|
| Returns |
| `Dataset` | A `RangeDataset`. |
| Raises |
| `ValueError` | if len(args) == 0. |
### `reduce`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544)
```
reduce(
initial_state, reduce_func, name=None
)
```
Reduces the input dataset to a single element.
The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
```
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
```
| Args |
| `initial_state` | An element representing the initial state of the transformation. |
| `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new element. The structure of `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A dataset element corresponding to the final state of the transformation. |
### `rejection_resample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272)
```
rejection_resample(
class_func, target_dist, initial_dist=None, seed=None, name=None
)
```
A transformation that resamples a dataset to a target distribution.
Let's consider the following example, where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution.
```
initial_dist = [0.6, 0.4]
num_classes = len(initial_dist)
num_samples = 1000
data_np = np.random.choice(num_classes, num_samples, p=initial_dist)
dataset = tf.data.Dataset.from_tensor_slices(data_np)
```
The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution.
```
target_dist = [0.5, 0.5]
resampled_dataset = dataset.rejection_resample(
class_func=lambda x: x,
target_dist=target_dist,
initial_dist=initial_dist)
resampled_dataset = resampled_dataset.map(
lambda class_func_result, data: data)
```
The distribution of classes in `resampled_dataset` will now be close to the target distribution.
| Args |
| `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../../../tf#int32) tensor. Values should be in `[0, num_classes)`. |
| `target_dist` | A floating point type tensor, shaped `[num_classes]`. |
| `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. |
| `seed` | (Optional.) Python integer seed for the resampler. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset` |
### `repeat`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416)
```
repeat(
count=None, name=None
)
```
Repeats this dataset so each original value is seen `count` times.
```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```
>
> **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements.
>
| Args |
| `count` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset be repeated indefinitely. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `sample_from_datasets`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412)
```
@staticmethod
sample_from_datasets(
datasets, weights=None, seed=None, stop_on_empty_dataset=False
)
```
Samples elements at random from the datasets in `datasets`.
Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets:
```
dataset1 = tf.data.Dataset.range(0, 3)
dataset2 = tf.data.Dataset.range(100, 103)
```
Suppose that we sample from these 2 datasets with the following weights:
```
sample_dataset = tf.data.Dataset.sample_from_datasets(
[dataset1, dataset2], weights=[0.5, 0.5])
```
One possible outcome of elements in sample\_dataset is:
```
print(list(sample_dataset.as_numpy_iterator()))
# [100, 0, 1, 101, 2, 102]
```
| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../../../../data/dataset) objects with compatible structure. |
| `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../../../../data/dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |
| Returns |
| A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. |
| Raises |
| `TypeError` | If the `datasets` or `weights` arguments have the wrong type. |
| `ValueError` | * If `datasets` is empty, or
* If `weights` is specified and does not match the length of `datasets`.
|
### `scan`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130)
```
scan(
initial_state, scan_func, name=None
)
```
A transformation that scans a function across an input dataset.
This transformation is a stateful relative of [`tf.data.Dataset.map`](../../../../data/dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`.
```
dataset = tf.data.Dataset.range(10)
initial_state = tf.constant(0, dtype=tf.int64)
scan_func = lambda state, i: (state + i, state + i)
dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func)
list(dataset.as_numpy_iterator())
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```
| Args |
| `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. |
| `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `shard`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685)
```
shard(
num_shards, index, name=None
)
```
Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i.
```
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
```
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can shard elements as follows:
```
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
#### Important caveats:
* Be sure to shard before you use any randomizing operator (such as shuffle).
* Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
```
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
| Args |
| `num_shards` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of shards operating in parallel. |
| `index` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the worker index. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
| Raises |
| `InvalidArgumentError` | if `num_shards` or `index` are illegal values.
**Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
|
### `shuffle`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523)
```
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
```
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.
`reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2)
# [1, 0, 2, 1, 0, 2]
```
In TF 2.0, [`tf.data.Dataset`](../../../../data/dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration:
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 2, 0]
```
```
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator())
# [1, 0, 2]
list(dataset.as_numpy_iterator())
# [1, 0, 2]
```
| Args |
| `buffer_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements from this dataset from which the new dataset will sample. |
| `seed` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../../../random/set_seed) for behavior. |
| `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `skip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616)
```
skip(
count, name=None
)
```
Creates a `Dataset` that skips `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
```
| Args |
| `count` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `snapshot`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099)
```
snapshot(
path,
compression='AUTO',
reader_func=None,
shard_func=None,
name=None
)
```
API to persist the output of the input dataset.
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run.
This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time.
<https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature.
Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters.
`shard_func` is a user-specified function that maps input elements to snapshot shards.
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written.
```
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...)
dataset = dataset.map(lambda x, y: y)
```
`reader_func` is a user-specified function that accepts a single argument: a `Dataset` of `Datasets`, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in `shard_func` (see above). The function should return a `Dataset` of elements of the original dataset.
Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:
```
def user_reader_func(datasets):
  # shuffle the datasets splits
  datasets = datasets.shuffle(NUM_CORES)
  # read datasets in parallel and interleave their elements
  return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)

dataset = dataset.snapshot("/path/to/snapshot/dir",
                           reader_func=user_reader_func)
```
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
| Args |
| `path` | Required. A directory to use for storing / loading the snapshot to / from. |
| `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. |
| `reader_func` | Optional. A function to control how to read data from snapshot shards. |
| `shard_func` | Optional. A function to control how to shard data when writing a snapshot. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `take`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596)
```
take(
count, name=None
)
```
Creates a `Dataset` with at most `count` elements from this dataset.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
```
| Args |
| `count` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `take_while`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150)
```
take_while(
predicate, name=None
)
```
A transformation that stops dataset iteration based on a `predicate`.
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.take_while(lambda x: x < 5)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```
| Args |
| `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../../../tf#bool) tensor. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unbatch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698)
```
unbatch(
name=None
)
```
Splits elements of a dataset into multiple elements.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`.
```
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
```
>
> **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `unique`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173)
```
unique(
name=None
)
```
A transformation that discards duplicate elements of a `Dataset`.
Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example:
```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```
>
> **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../../../tf#int32), [`tf.int64`](../../../../../tf#int64) or [`tf.string`](../../../../../tf#string) type.
>
| Args |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| A `Dataset`. |
### `window`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)
```
window(
size, shift=None, stride=1, drop_remainder=False, name=None
)
```
Returns a dataset of "windows".
Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).
#### For example:
```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
  print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```
Since windows are datasets, they can be iterated over:
```
for window in dataset:
  print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```
#### Shift
The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1,
drop_remainder=True)
for window in dataset:
  print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```
#### Stride
The `stride` argument determines the stride between input elements within a window.
```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2,
drop_remainder=True)
for window in dataset:
  print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```
#### Nested elements
When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure, but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows, as opposed to inside of them.
#### The type signature is:
```
def window(
self: Dataset[Nest[T]], ...
) -> Dataset[Nest[Dataset[T]]]
```
Applying `window` to a `Dataset` of tuples gives a tuple of windows:
```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
[6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```
```
def to_numpy(ds):
  return list(ds.as_numpy_iterator())

for windows in dataset:
  print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```
Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:
```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]})
dataset = dataset.window(2)
def to_numpy(ds):
  return list(ds.as_numpy_iterator())

for windows in dataset:
  print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```
#### Flatten a dataset of windows
The [`Dataset.flat_map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flat_map) and [`Dataset.interleave`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.
The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.
For example, to turn each window into a dense tensor:
```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1,
drop_remainder=True)
batched = dataset.flat_map(lambda x: x.batch(3))
for batch in batched:
  print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```
| Args |
| `size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. |
| `shift` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
| `stride` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |
### `with_options`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)
```
with_options(
options, name=None
)
```
Returns a new [`tf.data.Dataset`](../../../../data/dataset) with the given options set.
The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```
| Args |
| `options` | A [`tf.data.Options`](../../../../data/options) that identifies the options to use. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset` with the given options. |
| Raises |
| `ValueError` | when an option is set more than once to a non-default value |
### `zip`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)
```
@staticmethod
zip(
datasets, name=None
)
```
Creates a `Dataset` by zipping together the given datasets.
This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).
```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
  print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```
| Args |
| `datasets` | A (nested) structure of datasets. |
| `name` | (Optional.) A name for the tf.data operation. |
| Returns |
| `Dataset` | A `Dataset`. |
### `__bool__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__bool__()
```
### `__iter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4016-L4017)
```
__iter__()
```
Creates an iterator for elements of this dataset.
The returned iterator implements the Python Iterator protocol.
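For example (a minimal sketch in eager mode):
```
ds = tf.data.Dataset.range(3)
for elem in ds:  # implicitly calls ds.__iter__()
  print(elem.numpy())
# 0
# 1
# 2
```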
| Returns |
| An [`tf.data.Iterator`](../../../../data/iterator) for the elements of this dataset. |
| Raises |
| `RuntimeError` | If not inside of tf.function and not executing eagerly. |
### `__len__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)
```
__len__()
```
Returns the length of the dataset if it is known and finite.
This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../../../../data/dataset#cardinality) instead.
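For example (a minimal sketch in eager mode):
```
ds = tf.data.Dataset.range(42)
print(len(ds))  # 42
ds = ds.repeat()  # now infinite; len(ds) would raise a RuntimeError
print(ds.cardinality() == tf.data.INFINITE_CARDINALITY)  # tf.Tensor(True, ...)
```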
| Returns |
| An integer representing the length of the dataset. |
| Raises |
| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |
### `__nonzero__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)
```
__nonzero__()
```
tensorflow tf.compat.v1.data.experimental.map_and_batch_with_legacy_function tf.compat.v1.data.experimental.map\_and\_batch\_with\_legacy\_function
======================================================================
Fused implementation of `map` and `batch`. (deprecated)
```
tf.compat.v1.data.experimental.map_and_batch_with_legacy_function(
map_func,
batch_size,
num_parallel_batches=None,
drop_remainder=False,
num_parallel_calls=None
)
```
>
> **Note:** This is an escape hatch for existing uses of `map_and_batch` that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to `map_and_batch` as this method will be removed in V2.
>
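A minimal usage sketch (illustrative only; new code should use `dataset.map(...).batch(...)` instead):
```
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(
    tf.compat.v1.data.experimental.map_and_batch_with_legacy_function(
        map_func=lambda x: x * 2, batch_size=5))
# Produces two batches: [0 2 4 6 8] and [10 12 14 16 18].
```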
| Args |
| `map_func` | A function mapping a nested structure of tensors to another nested structure of tensors. |
| `batch_size` | A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `num_parallel_batches` | (Optional.) A [`tf.int64`](../../../../../tf#int64) scalar [`tf.Tensor`](../../../../tensor), representing the number of batches to create in parallel. On one hand, higher values can help mitigate the effect of stragglers. On the other hand, higher values can increase contention if CPU is scarce. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../../../tf#bool) scalar [`tf.Tensor`](../../../../tensor), representing whether the last batch should be dropped in case its size is smaller than desired; the default behavior is not to drop the smaller batch. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../../../tf#int32) scalar [`tf.Tensor`](../../../../tensor), representing the number of elements to process in parallel. If not specified, `batch_size * num_parallel_batches` elements will be processed in parallel. If the value [`tf.data.AUTOTUNE`](../../../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. |
| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../../../../data/dataset#apply). |
| Raises |
| `ValueError` | If both `num_parallel_batches` and `num_parallel_calls` are specified. |
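A minimal usage sketch (the input pipeline below is illustrative, not from the original docs):
```
import tensorflow as tf

ds = tf.data.Dataset.range(10)
ds = ds.apply(
    tf.compat.v1.data.experimental.map_and_batch_with_legacy_function(
        map_func=lambda x: x * 2,  # double each element
        batch_size=4,
        drop_remainder=True))
# ==> [ [0, 2, 4, 6], [8, 10, 12, 14] ]; the remainder [16, 18] is dropped.
```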
tensorflow tf.compat.v1.resource_loader.get_path_to_datafile tf.compat.v1.resource\_loader.get\_path\_to\_datafile
=====================================================
Get the path to the specified file in the data dependencies.
```
tf.compat.v1.resource_loader.get_path_to_datafile(
path
)
```
The path is relative to tensorflow/.
| Args |
| `path` | a string resource path relative to tensorflow/ |
| Returns |
| The path to the specified file present in the data attribute of py\_test or py\_binary. |
| Raises |
| `IOError` | If the path is not found, or the resource can't be opened. |
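For example (the resource path below is a hypothetical file listed in the `data` attribute):
```
import tensorflow as tf

# Hypothetical path; the file must be listed in the data attribute of
# the enclosing py_test or py_binary for this to resolve.
path = tf.compat.v1.resource_loader.get_path_to_datafile('testdata/example.txt')
with open(path) as f:
  contents = f.read()
```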
tensorflow tf.compat.v1.resource_loader.get_data_files_path tf.compat.v1.resource\_loader.get\_data\_files\_path
====================================================
Get a direct path to the data files colocated with the script.
```
tf.compat.v1.resource_loader.get_data_files_path()
```
| Returns |
| The directory where files specified in data attribute of py\_test and py\_binary are stored. |
tensorflow tf.compat.v1.resource_loader.load_resource tf.compat.v1.resource\_loader.load\_resource
============================================
Load the resource at given path, where path is relative to tensorflow/.
```
tf.compat.v1.resource_loader.load_resource(
path
)
```
| Args |
| `path` | a string resource path relative to tensorflow/. |
| Returns |
| The contents of that resource. |
| Raises |
| `IOError` | If the path is not found, or the resource can't be opened. |
tensorflow tf.compat.v1.resource_loader.get_root_dir_with_all_resources tf.compat.v1.resource\_loader.get\_root\_dir\_with\_all\_resources
==================================================================
Get a root directory containing all the data attributes in the build rule.
```
tf.compat.v1.resource_loader.get_root_dir_with_all_resources()
```
| Returns |
| The path to the specified file present in the data attribute of py\_test or py\_binary. Falls back to returning the same as get\_data\_files\_path if it fails to detect a bazel runfiles directory. |
tensorflow tf.compat.v1.resource_loader.readahead_file_path tf.compat.v1.resource\_loader.readahead\_file\_path
===================================================
Readahead files not implemented; simply returns given path.
```
tf.compat.v1.resource_loader.readahead_file_path(
path, readahead='128M'
)
```
tensorflow tf.compat.v1.logging.log_every_n tf.compat.v1.logging.log\_every\_n
==================================
Log 'msg % args' at level 'level' once per 'n' times.
```
tf.compat.v1.logging.log_every_n(
level, msg, n, *args
)
```
Logs the 1st call, (N+1)st call, (2N+1)st call, etc. Not threadsafe.
| Args |
| `level` | The level at which to log. |
| `msg` | The message to be logged. |
| `n` | The number of times this should be called before it is logged. |
| `*args` | The args to be substituted into the msg. |
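For example, a minimal sketch; with `n = 5`, only the 1st and 6th calls (i.e. `step` 0 and 5) are logged:
```
import tensorflow as tf

tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
for step in range(10):
  tf.compat.v1.logging.log_every_n(
      tf.compat.v1.logging.INFO, 'step = %d', 5, step)
```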
tensorflow tf.compat.v1.logging.warn tf.compat.v1.logging.warn
=========================
```
tf.compat.v1.logging.warn(
msg, *args, **kwargs
)
```
tensorflow tf.compat.v1.logging.warning tf.compat.v1.logging.warning
============================
```
tf.compat.v1.logging.warning(
msg, *args, **kwargs
)
```
tensorflow tf.compat.v1.logging.vlog tf.compat.v1.logging.vlog
=========================
```
tf.compat.v1.logging.vlog(
level, msg, *args, **kwargs
)
```
tensorflow tf.compat.v1.logging.fatal tf.compat.v1.logging.fatal
==========================
```
tf.compat.v1.logging.fatal(
msg, *args, **kwargs
)
```
tensorflow tf.compat.v1.logging.info tf.compat.v1.logging.info
=========================
```
tf.compat.v1.logging.info(
msg, *args, **kwargs
)
```
tensorflow tf.compat.v1.logging.log tf.compat.v1.logging.log
========================
```
tf.compat.v1.logging.log(
level, msg, *args, **kwargs
)
```
tensorflow tf.compat.v1.logging.debug tf.compat.v1.logging.debug
==========================
```
tf.compat.v1.logging.debug(
msg, *args, **kwargs
)
```
tensorflow tf.compat.v1.logging.TaskLevelStatusMessage tf.compat.v1.logging.TaskLevelStatusMessage
===========================================
```
tf.compat.v1.logging.TaskLevelStatusMessage(
msg
)
```
tensorflow tf.compat.v1.logging.log_first_n tf.compat.v1.logging.log\_first\_n
==================================
Log 'msg % args' at level 'level' only the first 'n' times.
```
tf.compat.v1.logging.log_first_n(
level, msg, n, *args
)
```
Not threadsafe.
| Args |
| `level` | The level at which to log. |
| `msg` | The message to be logged. |
| `n` | The number of times this should be called before it is logged. |
| `*args` | The args to be substituted into the msg. |
tensorflow tf.compat.v1.logging.set_verbosity tf.compat.v1.logging.set\_verbosity
===================================
Sets the threshold for what messages will be logged.
```
tf.compat.v1.logging.set_verbosity(
v
)
```
tensorflow tf.compat.v1.logging.get_verbosity tf.compat.v1.logging.get\_verbosity
===================================
Return how much logging output will be produced.
```
tf.compat.v1.logging.get_verbosity()
```
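For example (the verbosity constants mirror the standard Python `logging` levels, so `ERROR` is 40):
```
import tensorflow as tf

tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
tf.compat.v1.logging.get_verbosity()  # ==> 40
```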
tensorflow tf.compat.v1.logging.error tf.compat.v1.logging.error
==========================
```
tf.compat.v1.logging.error(
msg, *args, **kwargs
)
```
tensorflow tf.compat.v1.logging.log_if tf.compat.v1.logging.log\_if
============================
Log 'msg % args' at level 'level' only if condition is fulfilled.
```
tf.compat.v1.logging.log_if(
level, msg, condition, *args
)
```
tensorflow tf.compat.v1.distributions.DirichletMultinomial tf.compat.v1.distributions.DirichletMultinomial
===============================================
Dirichlet-Multinomial compound distribution.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.DirichletMultinomial(
total_count,
concentration,
validate_args=False,
allow_nan_stats=True,
name='DirichletMultinomial'
)
```
The Dirichlet-Multinomial distribution is parameterized by a (batch of) length-`K` `concentration` vectors (`K > 1`) and a `total_count` number of trials, i.e., the number of trials per draw from the DirichletMultinomial. It is defined over a (batch of) length-`K` vector `counts` such that `tf.reduce_sum(counts, -1) = total_count`. The Dirichlet-Multinomial is identically the Beta-Binomial distribution when `K = 2`.
#### Mathematical Details
The Dirichlet-Multinomial is a distribution over `K`-class counts, i.e., a length-`K` vector of non-negative integer `counts = n = [n_0, ..., n_{K-1}]`.
The probability mass function (pmf) is,
```
pmf(n; alpha, N) = Beta(alpha + n) / (prod_j n_j!) / Z
Z = Beta(alpha) / N!
```
where:
* `concentration = alpha = [alpha_0, ..., alpha_{K-1}]`, `alpha_j > 0`,
* `total_count = N`, `N` a positive integer,
* `N!` is `N` factorial, and,
* `Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j)` is the [multivariate beta function](https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function), and,
* `Gamma` is the [gamma function](https://en.wikipedia.org/wiki/Gamma_function).
Dirichlet-Multinomial is a [compound distribution](https://en.wikipedia.org/wiki/Compound_probability_distribution), i.e., its samples are generated as follows.
1. Choose class probabilities: `probs = [p_0,...,p_{K-1}] ~ Dir(concentration)`
2. Draw integers: `counts = [n_0,...,n_{K-1}] ~ Multinomial(total_count, probs)`
The last `concentration` dimension parametrizes a single Dirichlet-Multinomial distribution. When calling distribution functions (e.g., `dist.prob(counts)`), `concentration`, `total_count` and `counts` are broadcast to the same shape. The last dimension of `counts` corresponds to a single Dirichlet-Multinomial distribution.
Distribution parameters are automatically broadcast in all functions; see examples for details.
#### Pitfalls
The number of classes, `K`, must not exceed:
* the largest integer representable by `self.dtype`, i.e., `2**(mantissa_bits+1)` (IEEE 754),
* the maximum `Tensor` index, i.e., `2**31-1`.
In other words,
```
K <= min(2**31-1, {
tf.float16: 2**11,
tf.float32: 2**24,
tf.float64: 2**53 }[param.dtype])
```
>
> **Note:** This condition is validated only when `self.validate_args = True`.
>
#### Examples
```
alpha = [1., 2., 3.]
n = 2.
dist = DirichletMultinomial(n, alpha)
```
Creates a 3-class distribution, with the 3rd class being the most likely to be drawn. The distribution functions can be evaluated on counts.
```
# counts same shape as alpha.
counts = [0., 0., 2.]
dist.prob(counts) # Shape []
# alpha will be broadcast to [[1., 2., 3.], [1., 2., 3.]] to match counts.
counts = [[1., 1., 0.], [1., 0., 1.]]
dist.prob(counts) # Shape [2]
# alpha will be broadcast to shape [5, 7, 3] to match counts.
counts = [[...]] # Shape [5, 7, 3]
dist.prob(counts) # Shape [5, 7]
```
Creates a 2-batch of 3-class distributions.
```
alpha = [[1., 2., 3.], [4., 5., 6.]] # Shape [2, 3]
n = [3., 3.]
dist = DirichletMultinomial(n, alpha)
# counts will be broadcast to [[2., 1., 0.], [2., 1., 0.]] to match alpha.
counts = [2., 1., 0.]
dist.prob(counts) # Shape [2]
```
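Samples follow the same conventions; a minimal sketch using the scalar-batch distribution above:
```
import tensorflow as tf

dist = tf.compat.v1.distributions.DirichletMultinomial(2., [1., 2., 3.])
dist.mean()  # ==> [0.333, 0.667, 1.0], i.e. total_count * alpha / sum(alpha)
counts = dist.sample(4)  # Shape [4, 3]; each row sums to total_count.
```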
| Args |
| `total_count` | Non-negative floating point tensor, whose dtype is the same as `concentration`. The shape is broadcastable to `[N1,..., Nm]` with `m >= 0`. Defines this as a batch of `N1 x ... x Nm` different Dirichlet multinomial distributions. Its components should be equal to integer values. |
| `concentration` | Positive floating point tensor, whose dtype is the same as `n` with shape broadcastable to `[N1,..., Nm, K]` `m >= 0`. Defines this as a batch of `N1 x ... x Nm` different `K` class Dirichlet multinomial distributions. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `concentration` | Concentration parameter; expected prior counts for that coordinate. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `total_concentration` | Sum of last dim of concentration parameter. |
| `total_count` | Number of trials used to construct a sample. |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
Additional documentation from `DirichletMultinomial`:
The covariance for each batch member is defined as the following:
```
Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) *
(n + alpha_0) / (1 + alpha_0)
```
where `concentration = alpha` and `total_concentration = alpha_0 = sum_j alpha_j`.
The covariance between elements in a batch is defined as:
```
Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0 ** 2 *
(n + alpha_0) / (1 + alpha_0)
```
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shanon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shanon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shanon) cross entropy, and `H[.]` denotes (Shanon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
Additional documentation from `DirichletMultinomial`:
For each batch of counts, `value = [n_0, ..., n_{K-1}]`, `P[value]` is the probability that after sampling `self.total_count` draws from this Dirichlet-Multinomial distribution, the number of draws falling in class `j` is `n_j`. Since this definition is [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables), different sequences have the same counts, so the probability includes a combinatorial coefficient.
>
> **Note:** `value` must be a non-negative tensor with dtype `self.dtype`, have no fractional components, and such that `tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable with `self.concentration` and `self.total_count`.
>
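For example, a minimal sketch with the 3-class distribution from the class docstring:
```
import tensorflow as tf

dist = tf.compat.v1.distributions.DirichletMultinomial(2., [1., 2., 3.])
# Probability that both draws fall in class 2; counts sum to total_count.
dist.prob([0., 0., 2.])      # Scalar.
dist.log_prob([0., 0., 2.])  # The same value in log-space.
```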
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
Additional documentation from `DirichletMultinomial`:
For each batch of counts, `value = [n_0, ..., n_{K-1}]`, `P[value]` is the probability that after sampling `self.total_count` draws from this Dirichlet-Multinomial distribution, the number of draws falling in class `j` is `n_j`. Since this definition is [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables), different sequences have the same counts, so the probability includes a combinatorial coefficient.
>
> **Note:** `value` must be a non-negative tensor with dtype `self.dtype`, have no fractional components, and such that `tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable with `self.concentration` and `self.total_count`.
>
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.Exponential tf.compat.v1.distributions.Exponential
======================================
Exponential distribution.
Inherits From: [`Gamma`](gamma), [`Distribution`](distribution)
```
tf.compat.v1.distributions.Exponential(
rate,
validate_args=False,
allow_nan_stats=True,
name='Exponential'
)
```
The Exponential distribution is parameterized by an event `rate` parameter.
#### Mathematical Details
The probability density function (pdf) is,
```
pdf(x; lambda, x > 0) = exp(-lambda x) / Z
Z = 1 / lambda
```
where `rate = lambda` and `Z` is the normalizing constant.
The Exponential distribution is a special case of the Gamma distribution, i.e.,
```
Exponential(rate) = Gamma(concentration=1., rate)
```
The Exponential distribution uses a `rate` parameter, or "inverse scale", which can be intuited as,
```
X ~ Exponential(rate=1)
Y = X / rate
```
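For example, a minimal construction sketch:
```
import tensorflow as tf

dist = tf.compat.v1.distributions.Exponential(rate=2.0)
dist.mean()    # ==> 0.5, i.e. 1 / rate
dist.stddev()  # ==> 0.5; for the Exponential, stddev equals the mean.
```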
| Args |
| `rate` | Floating point tensor, equivalent to `1 / mean`. Must contain only positive values. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `concentration` | Concentration parameter. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `rate` | Rate parameter. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
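For the Exponential distribution this has the closed form `cdf(x) = 1 - exp(-rate * x)`; for example:
```
import tensorflow as tf

dist = tf.compat.v1.distributions.Exponential(rate=1.0)
dist.cdf(1.0)  # ==> 1 - e**-1 ~= 0.632
```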
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shanon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shanon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shanon) cross entropy, and `H[.]` denotes (Shanon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
Additional documentation from `Gamma`:
The mode of a gamma distribution is `(shape - 1) / rate` when `shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`, an exception will be raised rather than returning `NaN`.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.Categorical tf.compat.v1.distributions.Categorical
======================================
Categorical distribution.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Categorical(
logits=None,
probs=None,
dtype=tf.dtypes.int32,
validate_args=False,
allow_nan_stats=True,
name='Categorical'
)
```
The Categorical distribution is parameterized by either probabilities or log-probabilities of a set of `K` classes. It is defined over the integers `{0, 1, ..., K-1}`.
The Categorical distribution is closely related to the `OneHotCategorical` and `Multinomial` distributions. The Categorical distribution can be intuited as generating samples according to `argmax{ OneHotCategorical(probs) }`, which is itself identical to `argmax{ Multinomial(probs, total_count=1) }`.
#### Mathematical Details
The probability mass function (pmf) is,
```
pmf(k; pi) = prod_j pi_j**[k == j]
```
#### Pitfalls
The number of classes, `K`, must not exceed:
* the largest integer representable by `self.dtype`, i.e., `2**(mantissa_bits+1)` (IEEE 754),
* the maximum `Tensor` index, i.e., `2**31-1`.
In other words,
```
K <= min(2**31-1, {
tf.float16: 2**11,
tf.float32: 2**24,
tf.float64: 2**53 }[param.dtype])
```
>
> **Note:** This condition is validated only when `self.validate_args = True`.
>
#### Examples
Creates a 3-class distribution with the 2nd class being most likely.
```
dist = Categorical(probs=[0.1, 0.5, 0.4])
n = 1e4
empirical_prob = tf.cast(
tf.histogram_fixed_width(
dist.sample(int(n)),
[0., 2],
nbins=3),
dtype=tf.float32) / n
# ==> array([ 0.1005, 0.5037, 0.3958], dtype=float32)
```
Creates a 3-class distribution with the 2nd class being most likely. Parameterized by [logits](https://en.wikipedia.org/wiki/Logit) rather than probabilities.
```
dist = Categorical(logits=np.log([0.1, 0.5, 0.4]))
n = 1e4
empirical_prob = tf.cast(
tf.histogram_fixed_width(
dist.sample(int(n)),
[0., 2],
nbins=3),
dtype=tf.float32) / n
# ==> array([0.1045, 0.5047, 0.3908], dtype=float32)
```
Creates a 3-class distribution with the 3rd class being most likely. The distribution functions can be evaluated on counts.
```
# counts is a scalar.
p = [0.1, 0.4, 0.5]
dist = Categorical(probs=p)
dist.prob(0) # Shape []
# p will be broadcast to [[0.1, 0.4, 0.5], [0.1, 0.4, 0.5]] to match counts.
counts = [1, 0]
dist.prob(counts) # Shape [2]
# p will be broadcast to shape [3, 5, 7, 3] to match counts.
counts = [[...]] # Shape [5, 7, 3]
dist.prob(counts) # Shape [5, 7, 3]
```
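Sampling draws integer class indices in `{0, ..., K-1}`; a minimal sketch (the sampled values are illustrative):
```
import tensorflow as tf

dist = tf.compat.v1.distributions.Categorical(probs=[0.1, 0.5, 0.4])
dist.sample(5)  # e.g. [1, 2, 1, 1, 0]; dtype int32, values in {0, 1, 2}.
```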
| Args |
| `logits` | An N-D `Tensor`, `N >= 1`, representing the log probabilities of a set of Categorical distributions. The first `N - 1` dimensions index into a batch of independent distributions and the last dimension represents a vector of logits for each class. Only one of `logits` or `probs` should be passed in. |
| `probs` | An N-D `Tensor`, `N >= 1`, representing the probabilities of a set of Categorical distributions. The first `N - 1` dimensions index into a batch of independent distributions and the last dimension represents a vector of probabilities for each class. Only one of `logits` or `probs` should be passed in. |
| `dtype` | The type of the event samples (default: int32). |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `event_size` | Scalar `int32` tensor: the number of classes. |
| `logits` | Vector of coordinatewise logits. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `probs` | Vector of coordinatewise probabilities. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shanon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shanon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
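The identity `KL[p, q] = H[p, q] - H[p]` can be checked numerically; the following sketch (illustrative only, same assumptions as the example above) uses two `Normal` distributions:
```
import tensorflow.compat.v1 as tf

p = tf.distributions.Normal(loc=0., scale=1.)
q = tf.distributions.Normal(loc=1., scale=2.)
kl = p.kl_divergence(q)  # analytic KL[p || q], approximately 0.443
# The same value via the cross-entropy identity KL[p, q] = H[p, q] - H[p]:
kl_check = p.cross_entropy(q) - p.entropy()
```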
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
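A concrete sketch of why this matters (illustrative, using the `Normal` class from this module): far in the left tail, `cdf(x)` underflows to `0` in floating point, so taking its logarithm yields `-inf`, while `log_cdf(x)` stays finite:
```
import tensorflow.compat.v1 as tf

p = tf.distributions.Normal(loc=0., scale=1.)
# In float32, p.cdf(-40.) underflows to 0, so its log is -inf, whereas the
# dedicated log_cdf implementation remains finite (approximately -804.6).
naive = tf.math.log(p.cdf(-40.))  # -inf
stable = p.log_cdf(-40.)          # finite
```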
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than computing `Log[ 1 - cdf(x) ]` directly when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
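A brief sketch of typical usage (assuming the `Normal` class from this module, which is parameterized by `loc` and `scale`; not part of the original reference):
```
import tensorflow.compat.v1 as tf

# Ask what parameter shapes are needed so that sample() returns shape [100].
shapes = tf.distributions.Normal.param_shapes([100])
# A dict mapping 'loc' and 'scale' to shape Tensors, each equal to [100].
```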
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
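For example (a sketch, assuming a distribution that implements the quantile computation, as the `Normal` class in this module does):
```
import tensorflow.compat.v1 as tf

p = tf.distributions.Normal(loc=3., scale=2.)
median = p.quantile(0.5)  # 3.0, the median of a Normal is its loc
p95 = p.quantile(0.95)    # approximately 3. + 2. * 1.645 = 6.29
```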
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
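The shape bookkeeping follows `sample_shape + batch_shape + event_shape`, sketched here with a batched `Normal` (illustrative, not part of the original reference):
```
import tensorflow.compat.v1 as tf

p = tf.distributions.Normal(loc=[0., 1.], scale=[1., 1.])  # batch_shape [2]
p.sample()        # shape [2]: one draw per batch member
p.sample(5)       # shape [5, 2]
p.sample([3, 4])  # shape [3, 4, 2]
```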
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
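A small sketch (same assumptions as the earlier examples): for a batch of two Normals, `stddev()` has the same shape as `mean()`:
```
import tensorflow.compat.v1 as tf

p = tf.distributions.Normal(loc=[0., 1.], scale=[1., 2.])
p.stddev()    # [1., 2.], shape batch_shape + event_shape = [2]
p.variance()  # [1., 4.]
```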
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tf.compat.v1.distributions.Bernoulli
====================================
Bernoulli distribution.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Bernoulli(
logits=None,
probs=None,
dtype=tf.dtypes.int32,
validate_args=False,
allow_nan_stats=True,
name='Bernoulli'
)
```
The Bernoulli distribution with `probs` parameter, i.e., the probability of a `1` outcome (vs a `0` outcome).
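For example (a minimal sketch, not part of the original page), a batch of independent Bernoullis built from probabilities:
```
import tensorflow.compat.v1 as tf

# Three independent Bernoulli distributions: batch_shape [3], event_shape [].
dist = tf.distributions.Bernoulli(probs=[0.1, 0.5, 0.9])
samples = dist.sample(4)            # shape [4, 3], dtype int32 by default
log_pmf = dist.log_prob([0, 1, 1])  # shape [3]
```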
| Args |
| `logits` | An N-D `Tensor` representing the log-odds of a `1` event. Each entry in the `Tensor` parametrizes an independent Bernoulli distribution where the probability of an event is sigmoid(logits). Only one of `logits` or `probs` should be passed in. |
| `probs` | An N-D `Tensor` representing the probability of a `1` event. Each entry in the `Tensor` parameterizes an independent Bernoulli distribution. Only one of `logits` or `probs` should be passed in. |
| `dtype` | The type of the event samples. Default: `int32`. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Raises |
| `ValueError` | If both `probs` and `logits` are passed, or if neither is passed. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `logits` | Log-odds of a `1` outcome (vs `0`). |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `probs` | Probability of a `1` outcome (vs `0`). |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than computing `Log[ 1 - cdf(x) ]` directly when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
Additional documentation from `Bernoulli`:
Returns `1` if `prob > 0.5` and `0` otherwise.
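A quick sketch of that rule (same assumptions as the other examples on this page):
```
import tensorflow.compat.v1 as tf

b = tf.distributions.Bernoulli(probs=[0.3, 0.7])
b.mode()  # [0, 1]: returns 1 wherever probs > 0.5, else 0
```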
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tf.compat.v1.distributions.Uniform
==================================
Uniform distribution with `low` and `high` parameters.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Uniform(
low=0.0,
high=1.0,
validate_args=False,
allow_nan_stats=True,
name='Uniform'
)
```
#### Mathematical Details
The probability density function (pdf) is,
```
pdf(x; a, b) = I[a <= x < b] / Z
Z = b - a
```
where
* `low = a`,
* `high = b`,
* `Z` is the normalizing constant, and
* `I[predicate]` is the [indicator function](https://en.wikipedia.org/wiki/Indicator_function) for `predicate`.
The parameters `low` and `high` must be shaped in a way that supports broadcasting (e.g., `high - low` is a valid operation).
#### Examples
```
# Without broadcasting:
u1 = Uniform(low=3.0, high=4.0) # a single uniform distribution [3, 4]
u2 = Uniform(low=[1.0, 2.0],
high=[3.0, 4.0]) # 2 distributions [1, 3], [2, 4]
u3 = Uniform(low=[[1.0, 2.0],
[3.0, 4.0]],
high=[[1.5, 2.5],
[3.5, 4.5]]) # 4 distributions
```
```
# With broadcasting:
u1 = Uniform(low=3.0, high=[5.0, 6.0, 7.0]) # 3 distributions
```
| Args |
| `low` | Floating point tensor, lower boundary of the output interval. Must have `low < high`. |
| `high` | Floating point tensor, upper boundary of the output interval. Must have `low < high`. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Raises |
| `InvalidArgumentError` | if `low >= high` and `validate_args=False`. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `high` | Upper boundary of the output interval. |
| `low` | Lower boundary of the output interval. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than computing `Log[ 1 - cdf(x) ]` directly when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `range`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/uniform.py#L145-L148)
```
range(
name='range'
)
```
`high - low`.
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tf.compat.v1.distributions.Dirichlet
====================================
Dirichlet distribution.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Dirichlet(
concentration,
validate_args=False,
allow_nan_stats=True,
name='Dirichlet'
)
```
The Dirichlet distribution is defined over the [`(k-1)`-simplex](https://en.wikipedia.org/wiki/Simplex) using a positive, length-`k` vector `concentration` (`k > 1`). The Dirichlet is identically the Beta distribution when `k = 2`.
#### Mathematical Details
The Dirichlet is a distribution over the open `(k-1)`-simplex, i.e.,
```
S^{k-1} = { (x_0, ..., x_{k-1}) in R^k : sum_j x_j = 1 and all_j x_j > 0 }.
```
The probability density function (pdf) is,
```
pdf(x; alpha) = prod_j x_j**(alpha_j - 1) / Z
Z = prod_j Gamma(alpha_j) / Gamma(sum_j alpha_j)
```
where:
* `x in S^{k-1}`, i.e., the `(k-1)`-simplex,
* `concentration = alpha = [alpha_0, ..., alpha_{k-1}]`, `alpha_j > 0`,
* `Z` is the normalization constant aka the [multivariate beta function](https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function), and,
* `Gamma` is the [gamma function](https://en.wikipedia.org/wiki/Gamma_function).
The `concentration` represents mean total counts of class occurrence, i.e.,
```
concentration = alpha = mean * total_concentration
```
where `mean` in `S^{k-1}` and `total_concentration` is a positive real number representing a mean total count.
Distribution parameters are automatically broadcast in all functions; see examples for details.
Samples of this distribution are reparameterized (pathwise differentiable). The derivatives are computed using the approach described in (Figurnov et al., 2018).
#### Examples
```
import tensorflow_probability as tfp
tfd = tfp.distributions
# Create a single trivariate Dirichlet, with the 3rd class being three times
# more frequent than the first. I.e., batch_shape=[], event_shape=[3].
alpha = [1., 2, 3]
dist = tfd.Dirichlet(alpha)
dist.sample([4, 5]) # shape: [4, 5, 3]
# x has one sample, one batch, three classes:
x = [.2, .3, .5] # shape: [3]
dist.prob(x) # shape: []
# x has two samples from one batch:
x = [[.1, .4, .5],
[.2, .3, .5]]
dist.prob(x) # shape: [2]
# alpha will be broadcast to shape [5, 7, 3] to match x.
x = [[...]] # shape: [5, 7, 3]
dist.prob(x) # shape: [5, 7]
```
```
# Create batch_shape=[2], event_shape=[3]:
alpha = [[1., 2, 3],
[4, 5, 6]] # shape: [2, 3]
dist = tfd.Dirichlet(alpha)
dist.sample([4, 5]) # shape: [4, 5, 2, 3]
x = [.2, .3, .5]
# x will be broadcast as [[.2, .3, .5],
# [.2, .3, .5]],
# thus matching batch_shape [2, 3].
dist.prob(x) # shape: [2]
```
Compute the gradients of samples w.r.t. the parameters:
```
alpha = tf.constant([1.0, 2.0, 3.0])
dist = tfd.Dirichlet(alpha)
samples = dist.sample(5) # Shape [5, 3]
loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function
# Unbiased stochastic gradients of the loss function
grads = tf.gradients(loss, alpha)
```
#### References:
Implicit Reparameterization Gradients: [Figurnov et al., 2018](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))
| Args |
| `concentration` | Positive floating-point `Tensor` indicating mean number of class occurrences; aka "alpha". Implies `self.dtype`, and `self.batch_shape`, `self.event_shape`, i.e., if `concentration.shape = [N1, N2, ..., Nm, k]` then `batch_shape = [N1, N2, ..., Nm]` and `event_shape = [k]`. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `concentration` | Concentration parameter; expected counts for that coordinate. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `total_concentration` | Sum of last dim of concentration parameter. |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
Additional documentation from `Dirichlet`:
>
> **Note:** `value` must be a non-negative tensor with dtype `self.dtype` and be in the `(self.event_shape() - 1)`-simplex, i.e., `tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with `self.batch_shape() + self.event_shape()`.
>
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
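To make the simplex constraint concrete, here is a minimal sketch (assuming `tensorflow_probability`) evaluating `log_prob` on points in the 2-simplex:
```
import tensorflow_probability as tfp

tfd = tfp.distributions

dist = tfd.Dirichlet(concentration=[1., 2., 3.])

# Each row is non-negative and sums to 1 along the last axis, as the
# note above requires.
value = [[0.2, 0.3, 0.5],
         [0.6, 0.2, 0.2]]
log_p = dist.log_prob(value)  # Shape [2] = sample_shape + batch_shape
```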
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
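The numerical point can be seen directly. In the sketch below (assuming `tensorflow_probability`; a standard Normal is used only because its tail underflows quickly in `float32`), the naive computation saturates while `log_survival_function` stays finite:
```
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=1.)

x = 10.
# 1 - cdf(10) underflows to 0 in float32, so its log is -inf ...
naive = tf.math.log(1. - dist.cdf(x))
# ... while the dedicated method remains finite (about -53.2).
stable = dist.log_survival_function(x)
```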
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
Additional documentation from `Dirichlet`:
>
> **Note:** The mode is undefined when any `concentration <= 1`. If `self.allow_nan_stats` is `True`, `NaN` is used for undefined modes. If `self.allow_nan_stats` is `False`, an exception is raised when one or more modes are undefined.
>
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
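A minimal sketch of both class methods, using `tf.compat.v1.distributions.Normal` as the example class (any `Distribution` subclass implementing `_param_shapes` behaves the same way):
```
import tensorflow.compat.v1 as tf

tfd = tf.distributions

# Parameters shaped [5, 2] make a no-argument sample() return shape [5, 2].
shapes = tfd.Normal.param_shapes(sample_shape=[5, 2])
# {'loc': <int32 Tensor [5 2]>, 'scale': <int32 Tensor [5 2]>}

static = tfd.Normal.param_static_shapes(sample_shape=[5, 2])
# {'loc': TensorShape([5, 2]), 'scale': TensorShape([5, 2])}
```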
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
Additional documentation from `Dirichlet`:
>
> **Note:** `value` must be a non-negative tensor with dtype `self.dtype` and be in the `(self.event_shape() - 1)`-simplex, i.e., `tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with `self.batch_shape() + self.event_shape()`.
>
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
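For distributions that implement it (not all do; a `NotImplementedError` is raised otherwise), `quantile` inverts `cdf`. A sketch with a standard Normal from `tensorflow_probability`:
```
import tensorflow_probability as tfp

tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=1.)

dist.quantile(0.975)          # ~1.96, the usual two-sided 95% cutoff
dist.cdf(dist.quantile(0.5))  # ~0.5: quantile is the inverse of cdf
```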
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
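The shape contract is `sample_shape + batch_shape + event_shape`. A sketch (assuming `tensorflow_probability`) with a batch of two Dirichlets over 3-class simplexes:
```
import tensorflow_probability as tfp

tfd = tfp.distributions

# batch_shape = [2], event_shape = [3].
dist = tfd.Dirichlet(concentration=[[1., 1., 1.], [2., 3., 4.]])

dist.sample()        # Shape [2, 3]
dist.sample(5)       # Shape [5, 2, 3]
dist.sample([4, 5])  # Shape [4, 5, 2, 3]
```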
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.RegisterKL tf.compat.v1.distributions.RegisterKL
=====================================
Decorator to register a KL divergence implementation function.
```
tf.compat.v1.distributions.RegisterKL(
dist_cls_a, dist_cls_b
)
```
#### Usage:
```
@distributions.RegisterKL(distributions.Normal, distributions.Normal)
def _kl_normal_mvn(norm_a, norm_b):
  # Return KL(norm_a || norm_b)
```
| Args |
| `dist_cls_a` | the class of the first argument of the KL divergence. |
| `dist_cls_b` | the class of the second argument of the KL divergence. |
Methods
-------
### `__call__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/kullback_leibler.py#L188-L209)
```
__call__(
kl_fn
)
```
Perform the KL registration.
| Args |
| `kl_fn` | The function to use for the KL divergence. |
| Returns |
| kl\_fn |
| Raises |
| `TypeError` | if kl\_fn is not a callable. |
| `ValueError` | if a KL divergence function has already been registered for the given argument classes. |
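A more complete sketch is below. `MyNormal` and `_kl_my_normal` are hypothetical names used only for illustration; note that registering a pair that already has a KL function (e.g. `Normal`/`Normal`) raises the `ValueError` documented above, which is why a fresh subclass is used here:
```
import tensorflow.compat.v1 as tf

tfd = tf.distributions

class MyNormal(tfd.Normal):
  """Hypothetical Normal subclass, used only for illustration."""

@tfd.RegisterKL(MyNormal, MyNormal)
def _kl_my_normal(a, b, name=None):
  # Closed-form KL[a || b] between two scalar Normals.
  var_a, var_b = tf.square(a.scale), tf.square(b.scale)
  return 0.5 * (var_a / var_b
                + tf.square(b.loc - a.loc) / var_b
                - 1. + tf.math.log(var_b / var_a))

p = MyNormal(loc=0., scale=1.)
q = MyNormal(loc=1., scale=2.)
kl = p.kl_divergence(q)  # Dispatches to _kl_my_normal.
```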
tensorflow tf.compat.v1.distributions.Gamma tf.compat.v1.distributions.Gamma
================================
Gamma distribution.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Gamma(
concentration,
rate,
validate_args=False,
allow_nan_stats=True,
name='Gamma'
)
```
The Gamma distribution is defined over positive real numbers using parameters `concentration` (aka "alpha") and `rate` (aka "beta").
#### Mathematical Details
The probability density function (pdf) is,
```
pdf(x; alpha, beta, x > 0) = x**(alpha - 1) exp(-x beta) / Z
Z = Gamma(alpha) beta**(-alpha)
```
where:
* `concentration = alpha`, `alpha > 0`,
* `rate = beta`, `beta > 0`,
* `Z` is the normalizing constant, and,
* `Gamma` is the [gamma function](https://en.wikipedia.org/wiki/Gamma_function).
The cumulative distribution function (cdf) is,
```
cdf(x; alpha, beta, x > 0) = GammaInc(alpha, beta x) / Gamma(alpha)
```
where `GammaInc` is the [lower incomplete Gamma function](https://en.wikipedia.org/wiki/Incomplete_gamma_function).
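Since `tf.math.igamma` computes exactly this regularized lower incomplete gamma function, the cdf can be cross-checked by hand. A sketch, assuming `tensorflow_probability`:
```
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

alpha, beta = 3.0, 2.0
dist = tfd.Gamma(concentration=alpha, rate=beta)

x = tf.constant([0.5, 1.0, 2.0])
cdf_direct = dist.cdf(x)
# tf.math.igamma(a, z) = GammaInc(a, z) / Gamma(a), so this matches.
cdf_manual = tf.math.igamma(alpha, beta * x)
```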
The parameters can be intuited via their relationship to mean and stddev,
```
concentration = alpha = (mean / stddev)**2
rate = beta = mean / stddev**2 = concentration / mean
```
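Solving these relations for `alpha` and `beta` gives a direct way to construct a Gamma with a target mean and stddev; a small sketch:
```
import tensorflow_probability as tfp

tfd = tfp.distributions

mean, stddev = 1.5, 0.75
concentration = (mean / stddev) ** 2  # alpha = 4.0
rate = mean / stddev ** 2             # beta = 4.0 / 1.5
dist = tfd.Gamma(concentration=concentration, rate=rate)
# dist.mean() ~= 1.5 and dist.stddev() ~= 0.75 by construction.
```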
Distribution parameters are automatically broadcast in all functions; see examples for details.
Samples of this distribution are reparameterized (pathwise differentiable). The derivatives are computed using the approach described in (Figurnov et al., 2018).
#### Examples
```
import tensorflow_probability as tfp
tfd = tfp.distributions
dist = tfd.Gamma(concentration=3.0, rate=2.0)
dist2 = tfd.Gamma(concentration=[3.0, 4.0], rate=[2.0, 3.0])
```
Compute the gradients of samples w.r.t. the parameters:
```
import tensorflow.compat.v1 as tf  # tf.gradients requires graph (non-eager) mode

concentration = tf.constant(3.0)
rate = tf.constant(2.0)
dist = tfd.Gamma(concentration, rate)
samples = dist.sample(5)  # Shape [5]
loss = tf.reduce_mean(tf.square(samples))  # Arbitrary loss function
# Unbiased stochastic gradients of the loss function
grads = tf.gradients(loss, [concentration, rate])
```
#### References:
Implicit Reparameterization Gradients: [Figurnov et al., 2018](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))
| Args |
| `concentration` | Floating point tensor, the concentration params of the distribution(s). Must contain only positive values. |
| `rate` | Floating point tensor, the inverse scale params of the distribution(s). Must contain only positive values. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Raises |
| `TypeError` | if `concentration` and `rate` are different dtypes. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `concentration` | Concentration parameter. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `rate` | Rate parameter. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
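For example (a sketch assuming `tensorflow_probability`), a single constructor argument can be overridden while the rest are reused:
```
import tensorflow_probability as tfp

tfd = tfp.distributions

dist = tfd.Gamma(concentration=3.0, rate=2.0)
dist2 = dist.copy(rate=5.0)
# dist2.concentration == 3.0 (reused), dist2.rate == 5.0 (overridden).
```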
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
Additional documentation from `Gamma`:
The mode of a gamma distribution is `(concentration - 1) / rate` when `concentration > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`, an exception will be raised rather than returning `NaN`.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.StudentT tf.compat.v1.distributions.StudentT
===================================
Student's t-distribution.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.StudentT(
df,
loc,
scale,
validate_args=False,
allow_nan_stats=True,
name='StudentT'
)
```
This distribution has parameters: degrees of freedom `df`, location `loc`, and scale `scale`.
#### Mathematical details
The probability density function (pdf) is,
```
pdf(x; df, mu, sigma) = (1 + y**2 / df)**(-0.5 (df + 1)) / Z
where,
y = (x - mu) / sigma
Z = abs(sigma) sqrt(df pi) Gamma(0.5 df) / Gamma(0.5 (df + 1))
```
where:
* `loc = mu`,
* `scale = sigma`,
* `Z` is the normalization constant, and,
* `Gamma` is the [gamma function](https://en.wikipedia.org/wiki/Gamma_function).
The StudentT distribution is a member of the [location-scale family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be constructed as,
```
X ~ StudentT(df, loc=0, scale=1)
Y = loc + scale * X
```
Notice that `scale` has semantics more similar to standard deviation than variance. However, it is not actually the standard deviation; the Student's t-distribution standard deviation is `scale sqrt(df / (df - 2))` when `df > 2`.
Samples of this distribution are reparameterized (pathwise differentiable). The derivatives are computed using the approach described in (Figurnov et al., 2018).
#### Examples
Examples of initialization of one or a batch of distributions.
```
import tensorflow_probability as tfp
tfd = tfp.distributions
# Define a single scalar Student t distribution.
single_dist = tfd.StudentT(df=3., loc=0., scale=1.)  # loc and scale are required
# Evaluate the pdf at 1, returning a scalar Tensor.
single_dist.prob(1.)
# Define a batch of two scalar valued Student t's.
# The first has degrees of freedom 2, mean 1, and scale 11.
# The second 3, 2 and 22.
multi_dist = tfd.StudentT(df=[2, 3], loc=[1, 2.], scale=[11, 22.])
# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
# returning a length two tensor.
multi_dist.prob([0, 1.5])
# Get 3 samples, returning a 3 x 2 tensor.
multi_dist.sample(3)
```
Arguments are broadcast when possible.
```
# Define a batch of two Student's t distributions.
# Both have df 2 and mean 1, but different scales.
dist = tfd.StudentT(df=2, loc=1, scale=[11, 22.])
# Evaluate the pdf of both distributions on the same point, 3.0,
# returning a length 2 tensor.
dist.prob(3.0)
```
Compute the gradients of samples w.r.t. the parameters:
```
import tensorflow.compat.v1 as tf  # tf.gradients requires graph (non-eager) mode

df = tf.constant(2.0)
loc = tf.constant(2.0)
scale = tf.constant(11.0)
dist = tfd.StudentT(df=df, loc=loc, scale=scale)
samples = dist.sample(5)  # Shape [5]
loss = tf.reduce_mean(tf.square(samples))  # Arbitrary loss function
# Unbiased stochastic gradients of the loss function
grads = tf.gradients(loss, [df, loc, scale])
```
#### References:
Implicit Reparameterization Gradients: [Figurnov et al., 2018](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))
| Args |
| `df` | Floating-point `Tensor`. The degrees of freedom of the distribution(s). `df` must contain only positive values. |
| `loc` | Floating-point `Tensor`. The mean(s) of the distribution(s). |
| `scale` | Floating-point `Tensor`. The scaling factor(s) for the distribution(s). Note that `scale` is not technically the standard deviation of this distribution but has semantics more similar to standard deviation than variance. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Raises |
| `TypeError` | if loc and scale are different dtypes. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `df` | Degrees of freedom in these Student's t distribution(s). |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `loc` | Locations of these Student's t distribution(s). |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `scale` | Scaling factors of these Student's t distribution(s). |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
Additional documentation from `StudentT`:
The mean of Student's T equals `loc` if `df > 1`, otherwise it is `NaN`. If `self.allow_nan_stats=False`, then an exception will be raised rather than returning `NaN`.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
Additional documentation from `StudentT`:
The variance for Student's T equals
```
scale**2 * df / (df - 2), when df > 2
infinity, when 1 < df <= 2
NaN, when df <= 1
```
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.ReparameterizationType tf.compat.v1.distributions.ReparameterizationType
=================================================
Instances of this class represent how sampling is reparameterized.
```
tf.compat.v1.distributions.ReparameterizationType(
rep_type
)
```
Two static instances exist in the distributions library, signifying one of two possible properties for samples from a distribution:
`FULLY_REPARAMETERIZED`: Samples from the distribution are fully reparameterized, and straight-through gradients are supported.
`NOT_REPARAMETERIZED`: Samples from the distribution are not fully reparameterized, and straight-through gradients are either partially unsupported or are not supported at all. In this case, for purposes of e.g. RL or variational inference, it is generally safest to wrap the sample results in a `stop_gradient` call and use policy gradients / surrogate loss instead.
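The attribute is typically consumed with a simple equality check. A sketch using two distributions from this module (Gamma samples are pathwise differentiable, Bernoulli samples are not):
```
import tensorflow.compat.v1 as tf

tfd = tf.distributions

gamma = tfd.Gamma(concentration=3.0, rate=2.0)
bern = tfd.Bernoulli(probs=0.5)

gamma.reparameterization_type == tfd.FULLY_REPARAMETERIZED  # True
bern.reparameterization_type == tfd.NOT_REPARAMETERIZED     # True
```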
Methods
-------
### `__eq__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L241-L253)
```
__eq__(
other
)
```
Determine if this `ReparameterizationType` is equal to another.
Since ReparameterizationType instances are constant static global instances, equality checks if two instances' id() values are equal.
| Args |
| `other` | Object to compare against. |
| Returns |
| `self is other`. |
tensorflow tf.compat.v1.distributions.Multinomial tf.compat.v1.distributions.Multinomial
======================================
Multinomial distribution.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Multinomial(
total_count,
logits=None,
probs=None,
validate_args=False,
allow_nan_stats=True,
name='Multinomial'
)
```
This Multinomial distribution is parameterized by `probs`, a (batch of) length-`K` probability vectors (`K > 1`) such that `tf.reduce_sum(probs, -1) = 1`, and a `total_count` number of trials, i.e., the number of trials per draw from the Multinomial. It is defined over a (batch of) length-`K` vector `counts` such that `tf.reduce_sum(counts, -1) = total_count`. The Multinomial is identically the Binomial distribution when `K = 2`.
#### Mathematical Details
The Multinomial is a distribution over `K`-class counts, i.e., a length-`K` vector of non-negative integer `counts = n = [n_0, ..., n_{K-1}]`.
The probability mass function (pmf) is,
```
pmf(n; pi, N) = prod_j (pi_j)**n_j / Z
Z = (prod_j n_j!) / N!
```
where:
* `probs = pi = [pi_0, ..., pi_{K-1}]`, `pi_j > 0`, `sum_j pi_j = 1`,
* `total_count = N`, `N` a positive integer,
* `Z` is the normalization constant, and,
* `N!` denotes `N` factorial.
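To make the pmf above concrete, the sketch below (assuming `tensorflow_probability`) evaluates it both via `prob` and directly from the formula, using log-space factorials for the coefficient:
```
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

pi = tf.constant([0.2, 0.3, 0.5])  # probs
n = tf.constant([1., 0., 3.])      # counts, summing to total_count
N = 4.

dist = tfd.Multinomial(total_count=N, probs=pi)
pmf = dist.prob(n)  # 0.1

# Same value from the formula: (N! / prod_j n_j!) * prod_j pi_j**n_j.
log_coeff = tf.math.lgamma(N + 1.) - tf.reduce_sum(tf.math.lgamma(n + 1.))
pmf_manual = tf.exp(log_coeff + tf.reduce_sum(n * tf.math.log(pi)))
```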
Distribution parameters are automatically broadcast in all functions; see examples for details.
#### Pitfalls
The number of classes, `K`, must not exceed:
* the largest integer representable by `self.dtype`, i.e., `2**(mantissa_bits+1)` (IEEE 754),
* the maximum `Tensor` index, i.e., `2**31-1`.
In other words,
```
K <= min(2**31-1, {
tf.float16: 2**11,
tf.float32: 2**24,
tf.float64: 2**53 }[param.dtype])
```
>
> **Note:** This condition is validated only when `self.validate_args = True`.
>
#### Examples
Create a 3-class distribution using logits, where the 3rd class is the most likely to be drawn.
```
logits = [-50., -43, 0]
dist = Multinomial(total_count=4., logits=logits)
```
Create a 3-class distribution using probabilities, where the 3rd class is the most likely to be drawn.
```
p = [.2, .3, .5]
dist = Multinomial(total_count=4., probs=p)
```
The distribution functions can be evaluated on counts.
```
# counts same shape as p.
counts = [1., 0, 3]
dist.prob(counts) # Shape []
# p will be broadcast to [[.2, .3, .5], [.2, .3, .5]] to match counts.
counts = [[1., 2, 1], [2, 2, 0]]
dist.prob(counts) # Shape [2]
# p will be broadcast to shape [5, 7, 3] to match counts.
counts = [[...]] # Shape [5, 7, 3]
dist.prob(counts) # Shape [5, 7]
```
Create a 2-batch of 3-class distributions.
```
p = [[.1, .2, .7], [.3, .3, .4]] # Shape [2, 3]
dist = Multinomial(total_count=[4., 5], probs=p)
counts = [[2., 1, 1], [3, 1, 1]]
dist.prob(counts) # Shape [2]
dist.sample(5) # Shape [5, 2, 3]
```
| Args |
| `total_count` | Non-negative floating point tensor with shape broadcastable to `[N1,..., Nm]` with `m >= 0`. Defines this as a batch of `N1 x ... x Nm` different Multinomial distributions. Its components should be equal to integer values. |
| `logits` | Floating point tensor representing unnormalized log-probabilities of a positive event with shape broadcastable to `[N1,..., Nm, K]` `m >= 0`, and the same dtype as `total_count`. Defines this as a batch of `N1 x ... x Nm` different `K` class Multinomial distributions. Only one of `logits` or `probs` should be passed in. |
| `probs` | Positive floating point tensor with shape broadcastable to `[N1,..., Nm, K]` `m >= 0` and same dtype as `total_count`. Defines this as a batch of `N1 x ... x Nm` different `K` class Multinomial distributions. `probs`'s components in the last portion of its shape should sum to `1`. Only one of `logits` or `probs` should be passed in. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `logits` | Vector of coordinatewise logits. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `probs` | Probability of drawing a `1` in that coordinate. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `total_count` | Number of trials used to construct a sample. |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
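As an illustrative sketch (not part of the original reference), a Multinomial over `k` classes has event size `k`, so `covariance()` yields a `k x k` matrix per batch member; the comment below uses the standard Multinomial covariance formula:
```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

# Single batch member, k = 3 classes.
dist = tfd.Multinomial(total_count=4., probs=[.2, .3, .5])
dist.covariance() # Shape [3, 3]: total_count * (diag(p) - p p^T).
```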
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the log cumulative distribution function `log_cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
Additional documentation from `Multinomial`:
For each batch of counts, `value = [n_0, ... ,n_{k-1}]`, `P[value]` is the probability that after sampling `self.total_count` draws from this Multinomial distribution, the number of draws falling in class `j` is `n_j`. Since this definition is [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables), different draw sequences with the same counts have the same probability, so the probability includes a combinatorial coefficient.
>
> **Note:** `value` must be a non-negative tensor with dtype `self.dtype`, have no fractional components, and such that `tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable with `self.probs` and `self.total_count`.
>
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
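A minimal sketch of the constraint in the note above: counts must be non-negative whole numbers that sum to `total_count` along the last axis.
```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

dist = tfd.Multinomial(total_count=4., probs=[.5, .5])
# Valid value: whole counts with 2. + 2. == total_count.
dist.log_prob([2., 2.]) # log(C(4, 2) * .5**2 * .5**2) = log(0.375)
```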
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than computing `log(1 - cdf(x))` directly when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
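For example (a sketch mirroring the shape conventions above), `sample_shape` is prepended to `batch_shape + event_shape`:
```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

dist = tfd.Multinomial(total_count=4., probs=[.2, .3, .5])
dist.sample() # A single draw: shape [3], entries summing to 4.
dist.sample([5], seed=42) # Shape [5, 3]: sample_shape [5] prepended.
```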
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.Beta tf.compat.v1.distributions.Beta
===============================
Beta distribution.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Beta(
concentration1=None,
concentration0=None,
validate_args=False,
allow_nan_stats=True,
name='Beta'
)
```
The Beta distribution is defined over the `(0, 1)` interval using parameters `concentration1` (aka "alpha") and `concentration0` (aka "beta").
#### Mathematical Details
The probability density function (pdf) is,
```
pdf(x; alpha, beta) = x**(alpha - 1) (1 - x)**(beta - 1) / Z
Z = Gamma(alpha) Gamma(beta) / Gamma(alpha + beta)
```
where:
* `concentration1 = alpha`,
* `concentration0 = beta`,
* `Z` is the normalization constant, and,
* `Gamma` is the [gamma function](https://en.wikipedia.org/wiki/Gamma_function).
The concentration parameters represent mean total counts of a `1` or a `0`, i.e.,
```
concentration1 = alpha = mean * total_concentration
concentration0 = beta = (1. - mean) * total_concentration
```
where `mean` is in `(0, 1)` and `total_concentration` is a positive real number representing a mean `total_count = concentration1 + concentration0`.
Distribution parameters are automatically broadcast in all functions; see examples for details.
Samples of this distribution are reparameterized (pathwise differentiable). The derivatives are computed using the approach described in (Figurnov et al., 2018).
#### Examples
```
import tensorflow_probability as tfp
tfd = tfp.distributions
# Create a batch of three Beta distributions.
alpha = [1, 2, 3]
beta = [1, 2, 3]
dist = tfd.Beta(alpha, beta)
dist.sample([4, 5]) # Shape [4, 5, 3]
# `x` has three batch entries, each with two samples.
x = [[.1, .4, .5],
[.2, .3, .5]]
# Calculate the probability of each pair of samples under the corresponding
# distribution in `dist`.
dist.prob(x) # Shape [2, 3]
```
```
# Create batch_shape=[2, 3] via parameter broadcast:
alpha = [[1.], [2]] # Shape [2, 1]
beta = [3., 4, 5] # Shape [3]
dist = tfd.Beta(alpha, beta)
# alpha broadcast as: [[1., 1, 1,],
# [2, 2, 2]]
# beta broadcast as: [[3., 4, 5],
# [3, 4, 5]]
# batch_shape [2, 3]
dist.sample([4, 5]) # Shape [4, 5, 2, 3]
x = [.2, .3, .5]
# x will be broadcast as [[.2, .3, .5],
# [.2, .3, .5]],
# thus matching batch_shape [2, 3].
dist.prob(x) # Shape [2, 3]
```
Compute the gradients of samples w.r.t. the parameters:
```
alpha = tf.constant(1.0)
beta = tf.constant(2.0)
dist = tfd.Beta(alpha, beta)
samples = dist.sample(5) # Shape [5]
loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function
# Unbiased stochastic gradients of the loss function
grads = tf.gradients(loss, [alpha, beta])
```
#### References:
Implicit Reparameterization Gradients: [Figurnov et al., 2018](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))
| Args |
| `concentration1` | Positive floating-point `Tensor` indicating mean number of successes; aka "alpha". Implies `self.dtype` and `self.batch_shape`, i.e., `concentration1.shape = [N1, N2, ..., Nm] = self.batch_shape`. |
| `concentration0` | Positive floating-point `Tensor` indicating mean number of failures; aka "beta". Otherwise has same semantics as `concentration1`. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `concentration0` | Concentration parameter associated with a `0` outcome. |
| `concentration1` | Concentration parameter associated with a `1` outcome. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `total_concentration` | Sum of concentration parameters. |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
Additional documentation from `Beta`:
>
> **Note:** `x` must have dtype `self.dtype` and be in `[0, 1].` It must have a shape compatible with `self.batch_shape()`.
>
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
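A hedged sketch of the identity above for two Beta distributions (assuming, as in this module, that a Beta-to-Beta KL is registered, so the call is analytic):
```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

p = tfd.Beta(concentration1=1., concentration0=2.)
q = tfd.Beta(concentration1=3., concentration0=4.)
kl = p.kl_divergence(q)
# Consistent with the identity KL[p, q] = H[p, q] - H[p]:
# kl == p.cross_entropy(q) - p.entropy()
```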
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the log cumulative distribution function `log_cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
Additional documentation from `Beta`:
>
> **Note:** `x` must have dtype `self.dtype` and be in `[0, 1].` It must have a shape compatible with `self.batch_shape()`.
>
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
Additional documentation from `Beta`:
>
> **Note:** `x` must have dtype `self.dtype` and be in `[0, 1].` It must have a shape compatible with `self.batch_shape()`.
>
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than computing `log(1 - cdf(x))` directly when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
Additional documentation from `Beta`:
>
> **Note:** The mode is undefined when `concentration1 <= 1` or `concentration0 <= 1`. If `self.allow_nan_stats` is `True`, `NaN` is used for undefined modes. If `self.allow_nan_stats` is `False` an exception is raised when one or more modes are undefined.
>
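A short sketch of that behavior; the closed-form mode `(concentration1 - 1) / (concentration1 + concentration0 - 2)` exists only when both concentrations exceed 1:
```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

tfd.Beta(2., 3.).mode() # (2 - 1) / (2 + 3 - 2) = 1/3
tfd.Beta(0.5, 3.).mode() # NaN: mode undefined, allow_nan_stats=True.
# With allow_nan_stats=False the same statistic raises when evaluated:
# tfd.Beta(0.5, 3., allow_nan_stats=False).mode()
```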
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
Additional documentation from `Beta`:
>
> **Note:** `x` must have dtype `self.dtype` and be in `[0, 1].` It must have a shape compatible with `self.batch_shape()`.
>
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.Laplace tf.compat.v1.distributions.Laplace
==================================
The Laplace distribution with location `loc` and `scale` parameters.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Laplace(
loc,
scale,
validate_args=False,
allow_nan_stats=True,
name='Laplace'
)
```
#### Mathematical details
The probability density function (pdf) of this distribution is,
```
pdf(x; mu, sigma) = exp(-|x - mu| / sigma) / Z
Z = 2 sigma
```
where `loc = mu`, `scale = sigma`, and `Z` is the normalization constant.
Note that the Laplace distribution can be thought of as two exponential distributions spliced together "back-to-back."
The Laplace distribution is a member of the [location-scale family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be constructed as,
```
X ~ Laplace(loc=0, scale=1)
Y = loc + scale * X
```
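As a sketch of this construction (the sample count is arbitrary, for illustration only): shifting and scaling a standard Laplace sample is distributionally identical to parameterizing directly.
```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

loc, scale = 1., 2.
standard = tfd.Laplace(loc=0., scale=1.)
y = loc + scale * standard.sample([10000]) # ~ Laplace(loc=1., scale=2.)
# Same distribution as sampling tfd.Laplace(loc=loc, scale=scale) directly.
```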
| Args |
| `loc` | Floating point tensor which characterizes the location (center) of the distribution. |
| `scale` | Positive floating point tensor which characterizes the spread of the distribution. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Raises |
| `TypeError` | if `loc` and `scale` are of different dtype. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `loc` | Distribution parameter for the location. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `scale` | Distribution parameter for scale. |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copy distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the log cumulative distribution function `log_cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than computing `log(1 - cdf(x))` directly when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
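A numerical sketch of that point (assuming a direct log-survival implementation, as this distribution has): far in the tail, `1 - cdf(x)` underflows to zero, while the direct computation stays finite.
```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

dist = tfd.Laplace(loc=0., scale=1.)
x = 50.
dist.log_survival_function(x) # Approximately log(0.5) - 50 = -50.69.
tf.math.log(1. - dist.cdf(x)) # 1 - cdf(x) underflows to 0 => -inf.
```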
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
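For a loc-scale distribution such as this one, a sketch of the returned mapping (assuming the standard loc/scale `_param_shapes` override):
```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

shapes = tfd.Laplace.param_shapes([100])
# {'loc': <shape-[100] Tensor>, 'scale': <shape-[100] Tensor>}:
# with loc and scale of shape [100], sample() returns shape [100].
```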
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function. Aka "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.Normal tf.compat.v1.distributions.Normal
=================================
The Normal distribution with location `loc` and `scale` parameters.
Inherits From: [`Distribution`](distribution)
```
tf.compat.v1.distributions.Normal(
loc,
scale,
validate_args=False,
allow_nan_stats=True,
name='Normal'
)
```
#### Mathematical details
The probability density function (pdf) is,
```
pdf(x; mu, sigma) = exp(-0.5 (x - mu)**2 / sigma**2) / Z
Z = (2 pi sigma**2)**0.5
```
where `loc = mu` is the mean, `scale = sigma` is the standard deviation, and `Z` is the normalization constant.
The Normal distribution is a member of the [location-scale family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be constructed as,
```
X ~ Normal(loc=0, scale=1)
Y = loc + scale * X
```
#### Examples
Examples of initialization of one or a batch of distributions.
```
import tensorflow_probability as tfp
tfd = tfp.distributions
# Define a single scalar Normal distribution.
dist = tfd.Normal(loc=0., scale=3.)
# Evaluate the cdf at 1, returning a scalar.
dist.cdf(1.)
# Define a batch of two scalar valued Normals.
# The first has mean 1 and standard deviation 11, the second 2 and 22.
dist = tfd.Normal(loc=[1, 2.], scale=[11, 22.])
# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
# returning a length two tensor.
dist.prob([0, 1.5])
# Get 3 samples, returning a 3 x 2 tensor.
dist.sample([3])
```
Arguments are broadcast when possible.
```
# Define a batch of two scalar valued Normals.
# Both have mean 1, but different standard deviations.
dist = tfd.Normal(loc=1., scale=[11, 22.])
# Evaluate the pdf of both distributions on the same point, 3.0,
# returning a length 2 tensor.
dist.prob(3.0)
```
| Args |
| `loc` | Floating point tensor; the means of the distribution(s). |
| `scale` | Floating point tensor; the stddevs of the distribution(s). Must contain only positive values. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Raises |
| `TypeError` | if `loc` and `scale` have different `dtype`. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `loc` | Distribution parameter for the mean. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `scale` | Distribution parameter for standard deviation. |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copied distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
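For illustration, `copy` clones the distribution and lets you override individual constructor arguments. A minimal sketch, using `tf.compat.v1.distributions.Normal` as a concrete subclass:

```
import tensorflow.compat.v1 as tf

base = tf.distributions.Normal(loc=0., scale=1.)
# Same class and parameters as `base`, except `scale` is overridden.
wider = base.copy(scale=2.)
```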
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function, also known as the "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
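As a shape sanity check, here is a minimal sketch, assuming a scalar-event distribution parameterized by `loc` and `scale` such as `tf.compat.v1.distributions.Normal`:

```
import tensorflow.compat.v1 as tf

dist = tf.distributions.Normal(loc=0., scale=[1., 2.])  # batch_shape=[2]
x = dist.sample([5, 3])
# x.shape == [5, 3, 2]: sample_shape [5, 3] + batch_shape [2] + event_shape [].
```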
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.distributions.kl_divergence tf.compat.v1.distributions.kl\_divergence
=========================================
Get the KL-divergence KL(distribution\_a || distribution\_b). (deprecated)
```
tf.compat.v1.distributions.kl_divergence(
distribution_a, distribution_b, allow_nan_stats=True, name=None
)
```
If there is no KL method registered specifically for `type(distribution_a)` and `type(distribution_b)`, then the class hierarchies of these types are searched.
If one KL method is registered between any pairs of classes in these two parent hierarchies, it is used.
If more than one such registered method exists, the method whose registered classes have the shortest sum MRO paths to the input types is used.
If more than one such shortest path exists, the first method identified in the search is used (favoring a shorter MRO distance to `type(distribution_a)`).
| Args |
| `distribution_a` | The first distribution. |
| `distribution_b` | The second distribution. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `name` | Python `str` name prefixed to Ops created by this class. |
| Returns |
| A Tensor with the batchwise KL-divergence between `distribution_a` and `distribution_b`. |
| Raises |
| `NotImplementedError` | If no KL method is defined for distribution types of `distribution_a` and `distribution_b`. |
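A minimal usage sketch with two `Normal` distributions, for which a KL method is registered in `tf.compat.v1.distributions`:

```
import tensorflow.compat.v1 as tf

a = tf.distributions.Normal(loc=0., scale=1.)
b = tf.distributions.Normal(loc=1., scale=2.)
# Batchwise KL(a || b); a scalar Tensor here because batch_shape is [].
kl = tf.distributions.kl_divergence(a, b)
```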
tensorflow tf.compat.v1.distributions.Distribution tf.compat.v1.distributions.Distribution
=======================================
A generic probability distribution base class.
```
tf.compat.v1.distributions.Distribution(
dtype,
reparameterization_type,
validate_args,
allow_nan_stats,
parameters=None,
graph_parents=None,
name=None
)
```
`Distribution` is a base class for constructing and organizing properties (e.g., mean, variance) of random variables (e.g., Bernoulli, Gaussian).
#### Subclassing
Subclasses are expected to implement a leading-underscore version of the same-named function. The argument signature should be identical except for the omission of `name="..."`. For example, to enable `log_prob(value, name="log_prob")` a subclass should implement `_log_prob(value)`.
Subclasses can append to public-level docstrings by providing docstrings for their method specializations. For example:
```
@util.AppendDocstring("Some other details.")
def _log_prob(self, value):
...
```
would add the string "Some other details." to the `log_prob` function docstring. This is implemented as a simple decorator to avoid python linter complaining about missing Args/Returns/Raises sections in the partial docstrings.
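To make the subclassing contract concrete, here is a minimal sketch of a hypothetical subclass that implements only `_log_prob`; the class name and density are illustrative, not part of the API:

```
import tensorflow.compat.v1 as tf
tfd = tf.distributions

class StandardGumbel(tfd.Distribution):  # hypothetical example subclass
  def __init__(self, name="StandardGumbel"):
    super(StandardGumbel, self).__init__(
        dtype=tf.float32,
        reparameterization_type=tfd.FULLY_REPARAMETERIZED,
        validate_args=False,
        allow_nan_stats=True,
        name=name)

  def _log_prob(self, value):
    # Standard Gumbel log density: log p(x) = -x - exp(-x).
    return -value - tf.exp(-value)

# The public wrapper `log_prob(value, name="log_prob")` is inherited and
# dispatches to `_log_prob`.
```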
#### Broadcasting, batching, and shapes
All distributions support batches of independent distributions of that type. The batch shape is determined by broadcasting together the parameters.
The shape of arguments to `__init__`, `cdf`, `log_cdf`, `prob`, and `log_prob` reflect this broadcasting, as does the return value of `sample` and `sample_n`.
`sample_n_shape = [n] + batch_shape + event_shape`, where `sample_n_shape` is the shape of the `Tensor` returned from `sample_n`, `n` is the number of samples, `batch_shape` defines how many independent distributions there are, and `event_shape` defines the shape of samples from each of those independent distributions. Samples are independent along the `batch_shape` dimensions, but not necessarily so along the `event_shape` dimensions (depending on the particulars of the underlying distribution).
Using the `Uniform` distribution as an example:
```
minval = 3.0
maxval = [[4.0, 6.0],
[10.0, 12.0]]
# Broadcasting:
# This instance represents 4 Uniform distributions. Each has a lower bound at
# 3.0 as the `minval` parameter was broadcasted to match `maxval`'s shape.
u = Uniform(minval, maxval)
# `event_shape` is `TensorShape([])`.
event_shape = u.event_shape
# `event_shape_t` is a `Tensor` which will evaluate to [].
event_shape_t = u.event_shape_tensor()
# Sampling returns a sample per distribution. `samples` has shape
# [5, 2, 2], which is [n] + batch_shape + event_shape, where n=5,
# batch_shape=[2, 2], and event_shape=[].
samples = u.sample_n(5)
# The broadcasting holds across methods. Here we use `cdf` as an example. The
# same holds for `log_cdf` and the likelihood functions.
# `cum_prob` has shape [2, 2] as the `value` argument was broadcasted to the
# shape of the `Uniform` instance.
cum_prob_broadcast = u.cdf(4.0)
# `cum_prob`'s shape is [2, 2], one per distribution. No broadcasting
# occurred.
cum_prob_per_dist = u.cdf([[4.0, 5.0],
[6.0, 7.0]])
# INVALID as the `value` argument is not broadcastable to the distribution's
# shape.
cum_prob_invalid = u.cdf([4.0, 5.0, 6.0])
```
#### Shapes
There are three important concepts associated with TensorFlow Distributions shapes:
* Event shape describes the shape of a single draw from the distribution; it may be dependent across dimensions. For scalar distributions, the event shape is `[]`. For a 5-dimensional MultivariateNormal, the event shape is `[5]`.
* Batch shape describes independent, not identically distributed draws, aka a "collection" or "bunch" of distributions.
* Sample shape describes independent, identically distributed draws of batches from the distribution family.
The event shape and the batch shape are properties of a Distribution object, whereas the sample shape is associated with a specific call to `sample` or `log_prob`.
For detailed usage examples of TensorFlow Distributions shapes, see [this tutorial](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb)
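A short sketch of the three shape concepts using `tf.compat.v1.distributions.Normal`:

```
import tensorflow.compat.v1 as tf

# batch_shape=[3] (three independent Normals), event_shape=[] (scalar events).
d = tf.distributions.Normal(loc=[0., 1., 2.], scale=1.)
samples = d.sample(5)  # sample_shape=[5] -> samples.shape == [5, 3]
log_probs = d.log_prob(samples)  # shape [5, 3]: sample_shape + batch_shape
```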
#### Parameter values leading to undefined statistics or distributions.
Some distributions do not have well-defined statistics for all initialization parameter values. For example, the beta distribution is parameterized by positive real numbers `concentration1` and `concentration0`, and does not have well-defined mode if `concentration1 < 1` or `concentration0 < 1`.
The user is given the option of raising an exception or returning `NaN`.
```
a = tf.exp(tf.matmul(logits, weights_a))
b = tf.exp(tf.matmul(logits, weights_b))
# Will raise exception if ANY batch member has a < 1 or b < 1.
dist = distributions.beta(a, b, allow_nan_stats=False)
mode = dist.mode().eval()
# Will return NaN for batch members with either a < 1 or b < 1.
dist = distributions.beta(a, b, allow_nan_stats=True) # Default behavior
mode = dist.mode().eval()
```
In all cases, an exception is raised if *invalid* parameters are passed, e.g.
```
# Will raise an exception if any Op is run.
negative_a = -1.0 * a # beta distribution by definition has a > 0.
dist = distributions.beta(negative_a, b, allow_nan_stats=True)
dist.mean().eval()
```
| Args |
| `dtype` | The type of the event samples. `None` implies no type-enforcement. |
| `reparameterization_type` | Instance of `ReparameterizationType`. If [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED), this `Distribution` can be reparameterized in terms of some standard distribution with a function whose Jacobian is constant for the support of the standard distribution. If [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED), then no such reparameterization is available. |
| `validate_args` | Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. |
| `allow_nan_stats` | Python `bool`, default `True`. When `True`, statistics (e.g., mean, mode, variance) use the value "`NaN`" to indicate the result is undefined. When `False`, an exception is raised if one or more of the statistic's batch members are undefined. |
| `parameters` | Python `dict` of parameters used to instantiate this `Distribution`. |
| `graph_parents` | Python `list` of graph prerequisites of this `Distribution`. |
| `name` | Python `str` name prefixed to Ops created by this class. Default: subclass name. |
| Raises |
| `ValueError` | if any member of graph\_parents is `None` or not a `Tensor`. |
| Attributes |
| `allow_nan_stats` | Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)\*\*2] is also undefined. |
| `batch_shape` | Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. |
| `dtype` | The `DType` of `Tensor`s handled by this `Distribution`. |
| `event_shape` | Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. |
| `name` | Name prepended to all ops created by this `Distribution`. |
| `parameters` | Dictionary of parameters used to instantiate this `Distribution`. |
| `reparameterization_type` | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances [`distributions.FULLY_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#FULLY_REPARAMETERIZED) or [`distributions.NOT_REPARAMETERIZED`](https://www.tensorflow.org/probability/oryx/api_docs/python/oryx/distributions#NOT_REPARAMETERIZED). |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
Methods
-------
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L630-L647)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args |
| `name` | name to give to the op |
| Returns |
| `batch_shape` | `Tensor`. |
### `cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L874-L891)
```
cdf(
value, name='cdf'
)
```
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
cdf(x) := P[X <= x]
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `copy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L608-L624)
```
copy(
**override_parameters_kwargs
)
```
Creates a deep copy of the distribution.
>
> **Note:** the copied distribution may continue to depend on the original initialization arguments.
>
| Args |
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |
| Returns |
| `distribution` | A new instance of `type(self)` initialized from the union of self.parameters and override\_parameters\_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
### `covariance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1087-L1124)
```
covariance(
name='covariance'
)
```
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-`k`, vector-valued distribution, it is calculated as,
```
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```
where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,
```
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```
where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
### `cross_entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1139-L1162)
```
cross_entropy(
other, name='cross_entropy'
)
```
Computes the (Shannon) cross entropy.
Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:
```
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```
where `F` denotes the support of the random variable `X ~ P`.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
### `entropy`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L975-L978)
```
entropy(
name='entropy'
)
```
Shannon entropy in nats.
### `event_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L670-L684)
```
event_shape_tensor(
name='event_shape_tensor'
)
```
Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
| Args |
| `name` | name to give to the op |
| Returns |
| `event_shape` | `Tensor`. |
### `is_scalar_batch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L714-L726)
```
is_scalar_batch(
name='is_scalar_batch'
)
```
Indicates that `batch_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_batch` | `bool` scalar `Tensor`. |
### `is_scalar_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L700-L712)
```
is_scalar_event(
name='is_scalar_event'
)
```
Indicates that `event_shape == []`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `is_scalar_event` | `bool` scalar `Tensor`. |
### `kl_divergence`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1168-L1194)
```
kl_divergence(
other, name='kl_divergence'
)
```
Computes the Kullback--Leibler divergence.
Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:
```
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
```
where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.
| Args |
| `other` | [`tfp.distributions.Distribution`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution) instance. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
### `log_cdf`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L835-L856)
```
log_cdf(
value, name='log_cdf'
)
```
Log cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```
log_cdf(x) := Log[ P[X <= x] ]
```
Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L777-L788)
```
log_prob(
value, name='log_prob'
)
```
Log probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `log_survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L910-L932)
```
log_survival_function(
value, name='log_survival_function'
)
```
Log survival function.
Given random variable `X`, the survival function is defined:
```
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
```
Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `mean`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L984-L987)
```
mean(
name='mean'
)
```
Mean.
### `mode`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1130-L1133)
```
mode(
name='mode'
)
```
Mode.
### `param_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L490-L509)
```
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
```
Shapes of parameters given the desired shape of a call to `sample()`.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.
Subclasses should override class method `_param_shapes`.
| Args |
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |
| Returns |
| `dict` of parameter name to `Tensor` shapes. |
### `param_static_shapes`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L511-L548)
```
@classmethod
param_static_shapes(
sample_shape
)
```
param\_shapes with static (i.e. `TensorShape`) shapes.
This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.
Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.
| Args |
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |
| Returns |
| `dict` of parameter name to `TensorShape`. |
| Raises |
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
### `prob`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L806-L817)
```
prob(
value, name='prob'
)
```
Probability density/mass function.
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `quantile`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L999-L1016)
```
quantile(
value, name='quantile'
)
```
Quantile function, also known as the "inverse cdf" or "percent point function".
Given random variable `X` and `p in [0, 1]`, the `quantile` is:
```
quantile(p) := x such that P[X <= x] == p
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `sample`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L745-L759)
```
sample(
sample_shape=(), seed=None, name='sample'
)
```
Generate samples of the specified shape.
Note that a call to `sample()` without arguments will generate a single sample.
| Args |
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer seed for RNG |
| `name` | name to give to the op. |
| Returns |
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
### `stddev`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1054-L1081)
```
stddev(
name='stddev'
)
```
Standard deviation.
Standard deviation is defined as,
```
stddev = E[(X - E[X])**2]**0.5
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
### `survival_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L950-L969)
```
survival_function(
value, name='survival_function'
)
```
Survival function.
Given random variable `X`, the survival function is defined:
```
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
```
| Args |
| `value` | `float` or `double` `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
### `variance`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/distributions/distribution.py#L1022-L1048)
```
variance(
name='variance'
)
```
Variance.
Variance is defined as,
```
Var = E[(X - E[X])**2]
```
where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.
| Args |
| `name` | Python `str` prepended to names of ops created by this function. |
| Returns |
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
tensorflow tf.compat.v1.summary.scalar tf.compat.v1.summary.scalar
===========================
Outputs a `Summary` protocol buffer containing a single scalar value.
```
tf.compat.v1.summary.scalar(
name, tensor, collections=None, family=None
)
```
Migrate to TF2
--------------
For compatibility purposes, when invoked in TF2 where the outermost context is eager mode, this API will check if there is a suitable TF2 summary writer context available, and if so will forward this call to that writer instead. A "suitable" writer context means that the writer is set as the default writer, and there is an associated non-empty value for `step` (see [`tf.summary.SummaryWriter.as_default`](../../../summary/summarywriter#as_default), [`tf.summary.experimental.set_step`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/set_step) or alternatively [`tf.compat.v1.train.create_global_step`](../train/create_global_step)). For the forwarded call, the arguments here will be passed to the TF2 implementation of [`tf.summary.scalar`](../../../summary/scalar), and the return value will be an empty bytestring tensor, to avoid duplicate summary writing. This forwarding is best-effort and not all arguments will be preserved.
To migrate to TF2, please use [`tf.summary.scalar`](../../../summary/scalar) instead. Please check [Migrating tf.summary usage to TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x) for concrete steps for migration. [`tf.summary.scalar`](../../../summary/scalar) can also log training metrics in Keras; see [Logging training metrics in Keras](https://www.tensorflow.org/tensorboard/scalars_and_keras) for details.
#### How to Map Arguments
| TF1 Arg Name | TF2 Arg Name | Note |
| --- | --- | --- |
| `name` | `name` | - |
| `tensor` | `data` | - |
| - | `step` | Explicit int64-castable monotonic step value. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step). |
| `collections` | Not Supported | - |
| `family` | Removed | Please use [`tf.name_scope`](../../../name_scope) instead to manage summary name prefix. |
| - | `description` | Optional long-form `str` description for the summary. Markdown is supported. Defaults to empty. |
Description
-----------
The generated Summary has a Tensor.proto containing the input Tensor.
| Args |
| `name` | A name for the generated node. Will also serve as the series name in TensorBoard. |
| `tensor` | A real numeric Tensor containing a single value. |
| `collections` | Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. |
| `family` | Optional; if provided, used as the prefix of the summary tag name, which controls the tab name used for display on Tensorboard. |
| Returns |
| A scalar `Tensor` of type `string`. Which contains a `Summary` protobuf. |
| Raises |
| `ValueError` | If tensor has the wrong shape or type. |
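A minimal TF1 graph-mode sketch; the log directory path is illustrative:

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

loss = tf.placeholder(tf.float32, [])
loss_summary = tf.summary.scalar("loss", loss)
writer = tf.summary.FileWriter("/tmp/scalar_example")
with tf.Session() as sess:
  for step in range(10):
    summ = sess.run(loss_summary, feed_dict={loss: 1.0 / (step + 1)})
    writer.add_summary(summ, global_step=step)
writer.close()
```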
tensorflow tf.compat.v1.summary.FileWriterCache tf.compat.v1.summary.FileWriterCache
====================================
Cache for file writers.
This class caches file writers, one per directory.
Methods
-------
### `clear`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer_cache.py#L36-L44)
```
@staticmethod
clear()
```
Clear cached summary writers. Currently only used for unit tests.
### `get`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer_cache.py#L46-L60)
```
@staticmethod
get(
logdir
)
```
Returns the FileWriter for the specified directory.
| Args |
| `logdir` | str, name of the directory. |
| Returns |
| A `FileWriter`. |
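A short sketch of the caching behavior; the log directory is illustrative:

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w1 = tf.summary.FileWriterCache.get("/tmp/logs")
w2 = tf.summary.FileWriterCache.get("/tmp/logs")
assert w1 is w2  # one cached FileWriter per directory
```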
tensorflow tf.compat.v1.summary.text tf.compat.v1.summary.text
=========================
Summarizes textual data.
```
tf.compat.v1.summary.text(
name, tensor, collections=None
)
```
Migrate to TF2
--------------
For compatibility purposes, when invoked in TF2 where the outermost context is eager mode, this API will check if there is a suitable TF2 summary writer context available, and if so will forward this call to that writer instead. A "suitable" writer context means that the writer is set as the default writer, and there is an associated non-empty value for `step` (see [`tf.summary.SummaryWriter.as_default`](../../../summary/summarywriter#as_default), [`tf.summary.experimental.set_step`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/set_step) or alternatively [`tf.compat.v1.train.create_global_step`](../train/create_global_step)). For the forwarded call, the arguments here will be passed to the TF2 implementation of [`tf.summary.text`](../../../summary/text), and the return value will be an empty bytestring tensor, to avoid duplicate summary writing. This forwarding is best-effort and not all arguments will be preserved.
To migrate to TF2, please use [`tf.summary.text`](../../../summary/text) instead. Please check [Migrating tf.summary usage to TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x) for concrete steps for migration.
#### How to Map Arguments
| TF1 Arg Name | TF2 Arg Name | Note |
| --- | --- | --- |
| `name` | `name` | - |
| `tensor` | `data` | - |
| - | `step` | Explicit int64-castable monotonic step value. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step). |
| `collections` | Not Supported | - |
| - | `description` | Optional long-form `str` description for the summary. Markdown is supported. Defaults to empty. |
Description
-----------
Text data summarized via this plugin will be visible in the Text Dashboard in TensorBoard. The standard TensorBoard Text Dashboard will render markdown in the strings, and will automatically organize 1d and 2d tensors into tables. If a tensor with more than 2 dimensions is provided, a 2d subarray will be displayed along with a warning message. (Note that this behavior is not intrinsic to the text summary api, but rather to the default TensorBoard text plugin.)
| Args |
| `name` | A name for the generated node. Will also serve as a series name in TensorBoard. |
| `tensor` | a string-type Tensor to summarize. |
| `collections` | Optional list of ops.GraphKeys. The collections to add the summary to. Defaults to [\_ops.GraphKeys.SUMMARIES] |
| Returns |
| A TensorSummary op that is configured so that TensorBoard will recognize that it contains textual data. The TensorSummary is a scalar `Tensor` of type `string` which contains `Summary` protobufs. |
| Raises |
| `ValueError` | If tensor has the wrong type. |
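A minimal graph-mode sketch; the log directory path is illustrative:

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

notes = tf.constant("Markdown *is* rendered, e.g. **bold** text and tables.")
text_summary = tf.summary.text("notes", notes)
with tf.Session() as sess:
  writer = tf.summary.FileWriter("/tmp/text_example")
  writer.add_summary(sess.run(text_summary), global_step=0)
  writer.close()
```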
tensorflow tf.compat.v1.summary.image tf.compat.v1.summary.image
==========================
Outputs a `Summary` protocol buffer with images.
```
tf.compat.v1.summary.image(
name, tensor, max_outputs=3, collections=None, family=None
)
```
Migrate to TF2
--------------
For compatibility purposes, when invoked in TF2 where the outermost context is eager mode, this API will check if there is a suitable TF2 summary writer context available, and if so will forward this call to that writer instead. A "suitable" writer context means that the writer is set as the default writer, and there is an associated non-empty value for `step` (see [`tf.summary.SummaryWriter.as_default`](../../../summary/summarywriter#as_default), [`tf.summary.experimental.set_step`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/set_step) or alternatively [`tf.compat.v1.train.create_global_step`](../train/create_global_step)). For the forwarded call, the arguments here will be passed to the TF2 implementation of [`tf.summary.image`](../../../summary/image), and the return value will be an empty bytestring tensor, to avoid duplicate summary writing. This forwarding is best-effort and not all arguments will be preserved. Additionally:
* The TF2 op does not do any of the normalization steps described below. Rather than rescaling data that's outside the expected range, it simply clips it.
* The TF2 op just outputs the data under a single tag that contains multiple samples, rather than multiple tags (i.e. no "/0" or "/1" suffixes).
To migrate to TF2, please use [`tf.summary.image`](../../../summary/image) instead. Please check [Migrating tf.summary usage to TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x) for concrete steps for migration.
#### How to Map Arguments
| TF1 Arg Name | TF2 Arg Name | Note |
| --- | --- | --- |
| `name` | `name` | - |
| `tensor` | `data` | - |
| - | `step` | Explicit int64-castable monotonic step value. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step). |
| `max_outputs` | `max_outputs` | - |
| `collections` | Not Supported | - |
| `family` | Removed | Please use [`tf.name_scope`](../../../name_scope) instead to manage summary name prefix. |
| - | `description` | Optional long-form `str` description for the summary. Markdown is supported. Defaults to empty. |
Description
-----------
The summary has up to `max_outputs` summary values containing images. The images are built from `tensor` which must be 4-D with shape `[batch_size, height, width, channels]` and where `channels` can be:
* 1: `tensor` is interpreted as Grayscale.
* 3: `tensor` is interpreted as RGB.
* 4: `tensor` is interpreted as RGBA.
The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range `[0, 255]`. `uint8` values are unchanged. The op uses two different normalization algorithms:
* If the input values are all positive, they are rescaled so the largest one is 255.
* If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.
The `tag` in the outputted Summary.Value protobufs is generated based on the name, with a suffix depending on the max\_outputs setting:
* If `max_outputs` is 1, the summary value tag is '*name*/image'.
* If `max_outputs` is greater than 1, the summary value tags are generated sequentially as '*name*/image/0', '*name*/image/1', etc.
| Args |
| `name` | A name for the generated node. Will also serve as a series name in TensorBoard. |
| `tensor` | A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height, width, channels]` where `channels` is 1, 3, or 4. |
| `max_outputs` | Max number of batch elements to generate images for. |
| `collections` | Optional list of ops.GraphKeys. The collections to add the summary to. Defaults to [\_ops.GraphKeys.SUMMARIES] |
| `family` | Optional; if provided, used as the prefix of the summary tag name, which controls the tab name used for display on Tensorboard. |
| Returns |
| A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer. |
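A minimal graph-mode sketch with random RGB image data:

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A batch of 8 random RGB images; only the first `max_outputs` are summarized.
images = tf.random.uniform([8, 64, 64, 3])
image_summary = tf.summary.image("inputs", images, max_outputs=3)
```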
tensorflow tf.compat.v1.summary.merge tf.compat.v1.summary.merge
==========================
Merges summaries.
```
tf.compat.v1.summary.merge(
inputs, collections=None, name=None
)
```
Migrate to TF2
--------------
This API is not compatible with eager execution or [`tf.function`](../../../function). To migrate to TF2, this API can be omitted entirely, because in TF2 individual summary ops, like [`tf.summary.scalar()`](../../../summary/scalar), write directly to the default summary writer if one is active. Thus, it's not necessary to merge summaries or to manually add the resulting merged summary output to the writer. See the usage example shown below.
For a comprehensive [`tf.summary`](../../../summary) migration guide, please follow [Migrating tf.summary usage to TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x).
#### TF1 & TF2 Usage Example
TF1:
```
dist = tf.compat.v1.placeholder(tf.float32, [100])
tf.compat.v1.summary.histogram(name="distribution", values=dist)
writer = tf.compat.v1.summary.FileWriter("/tmp/tf1_summary_example")
summaries = tf.compat.v1.summary.merge_all()
sess = tf.compat.v1.Session()
for step in range(100):
mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])
summ = sess.run(summaries, feed_dict={dist: mean_moving_normal})
writer.add_summary(summ, global_step=step)
```
TF2:
```
writer = tf.summary.create_file_writer("/tmp/tf2_summary_example")
for step in range(100):
mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])
with writer.as_default(step=step):
tf.summary.histogram(name='distribution', data=mean_moving_normal)
```
Description
-----------
This op creates a [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) protocol buffer that contains the union of all the values in the input summaries.
When the Op is run, it reports an `InvalidArgument` error if multiple values in the summaries to merge use the same tag.
| Args |
| `inputs` | A list of `string` `Tensor` objects containing serialized `Summary` protocol buffers. |
| `collections` | Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer resulting from the merging. |
| Raises |
| `RuntimeError` | If called with eager mode enabled. |
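A minimal sketch of merging two explicit summary ops instead of using `merge_all` (graph mode only):

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

loss = tf.placeholder(tf.float32, [])
acc = tf.placeholder(tf.float32, [])
merged = tf.summary.merge([
    tf.summary.scalar("loss", loss),
    tf.summary.scalar("accuracy", acc),
])
with tf.Session() as sess:
  summ = sess.run(merged, feed_dict={loss: 0.5, acc: 0.9})
```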
tensorflow tf.compat.v1.summary.initialize tf.compat.v1.summary.initialize
===============================
Initializes summary writing for graph execution mode.
```
tf.compat.v1.summary.initialize(
graph=None, session=None
)
```
This operation is a no-op when executing eagerly.
This helper method provides a higher-level alternative to using `tf.contrib.summary.summary_writer_initializer_op` and `tf.contrib.summary.graph`.
Most users will also want to call [`tf.compat.v1.train.create_global_step`](../train/create_global_step) which can happen before or after this function is called.
| Args |
| `graph` | A [`tf.Graph`](../../../graph) or [`tf.compat.v1.GraphDef`](../graphdef) to output to the writer. This function will not write the default graph by default. When writing to an event log file, the associated step will be zero. |
| `session` | A session this method can use to call `tf.Session.run`. Defaults to [`tf.compat.v1.get_default_session`](../get_default_session). |
| Raises |
| `RuntimeError` | If the current thread has no default `tf.contrib.summary.SummaryWriter`. |
| `ValueError` | If session wasn't passed and no default session. |
tensorflow tf.compat.v1.summary.audio tf.compat.v1.summary.audio
==========================
Outputs a `Summary` protocol buffer with audio.
```
tf.compat.v1.summary.audio(
name, tensor, sample_rate, max_outputs=3, collections=None, family=None
)
```
Migrate to TF2
--------------
For compatibility purposes, when invoked in TF2 where the outermost context is eager mode, this API will check if there is a suitable TF2 summary writer context available, and if so will forward this call to that writer instead. A "suitable" writer context means that the writer is set as the default writer, and there is an associated non-empty value for `step` (see [`tf.summary.SummaryWriter.as_default`](../../../summary/summarywriter#as_default), [`tf.summary.experimental.set_step`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/set_step) or alternatively [`tf.compat.v1.train.create_global_step`](../train/create_global_step)). For the forwarded call, the arguments here will be passed to the TF2 implementation of [`tf.summary.audio`](../../../summary/audio), and the return value will be an empty bytestring tensor, to avoid duplicate summary writing. This forwarding is best-effort and not all arguments will be preserved. Additionally:
* The TF2 op just outputs the data under a single tag that contains multiple samples, rather than multiple tags (i.e. no "/0" or "/1" suffixes).
To migrate to TF2, please use [`tf.summary.audio`](../../../summary/audio) instead. Please check [Migrating tf.summary usage to TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x) for concrete steps for migration.
#### How to Map Arguments
| TF1 Arg Name | TF2 Arg Name | Note |
| --- | --- | --- |
| `name` | `name` | - |
| `tensor` | `data` | Input for this argument now must be three-dimensional `[k, t, c]`, where `k` is the number of audio clips, `t` is the number of frames, and `c` is the number of channels. Two-dimensional input is no longer supported. |
| `sample_rate` | `sample_rate` | - |
| - | `step` | Explicit int64-castable monotonic step value. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step). |
| `max_outputs` | `max_outputs` | - |
| `collections` | Not Supported | - |
| `family` | Removed | Please use [`tf.name_scope`](../../../name_scope) instead to manage summary name prefix. |
| - | `encoding` | Optional constant `str` for the desired encoding. Check the docs for [`tf.summary.audio`](../../../summary/audio) for the latest supported audio formats. |
| - | `description` | Optional long-form `str` description for the summary. Markdown is supported. Defaults to empty. |
Description
-----------
The summary has up to `max_outputs` summary values containing audio. The audio is built from `tensor` which must be 3-D with shape `[batch_size, frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`.
The `tag` in the outputted Summary.Value protobufs is generated based on the name, with a suffix depending on the max\_outputs setting:
* If `max_outputs` is 1, the summary value tag is '*name*/audio'.
* If `max_outputs` is greater than 1, the summary value tags are generated sequentially as '*name*/audio/0', '*name*/audio/1', etc.
| Args |
| `name` | A name for the generated node. Will also serve as a series name in TensorBoard. |
| `tensor` | A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]` or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`. |
| `sample_rate` | A scalar `float32` `Tensor` indicating the sample rate of the signal in hertz. |
| `max_outputs` | Max number of batch elements to generate audio for. |
| `collections` | Optional list of ops.GraphKeys. The collections to add the summary to. Defaults to [\_ops.GraphKeys.SUMMARIES] |
| `family` | Optional; if provided, used as the prefix of the summary tag name, which controls the tab name used for display on Tensorboard. |
| Returns |
| A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer. |
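A minimal graph-mode sketch logging one second of a synthetic sine tone:

```
import math
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# One 440 Hz clip, one second at 44.1 kHz, values in [-1.0, 1.0].
t = tf.linspace(0.0, 1.0, 44100)
waveform = tf.reshape(tf.sin(2.0 * math.pi * 440.0 * t), [1, 44100])
audio_summary = tf.summary.audio("tone", waveform, sample_rate=44100.0)
```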
tensorflow tf.compat.v1.summary.FileWriter tf.compat.v1.summary.FileWriter
===============================
Writes `Summary` protocol buffers to event files.
```
tf.compat.v1.summary.FileWriter(
logdir,
graph=None,
max_queue=10,
flush_secs=120,
graph_def=None,
filename_suffix=None,
session=None
)
```
Migrate to TF2
--------------
This API is not compatible with eager execution or [`tf.function`](../../../function). To migrate to TF2, please use [`tf.summary.create_file_writer`](../../../summary/create_file_writer) instead for summary management. To specify the summary step, you can manage the context with [`tf.summary.SummaryWriter`](../../../summary/summarywriter), which is returned by [`tf.summary.create_file_writer()`](../../../summary/create_file_writer). Or, you can also use the `step` argument of summary functions such as [`tf.summary.histogram`](../../../summary/histogram). See the usage example shown below.
For a comprehensive [`tf.summary`](../../../summary) migration guide, please follow [Migrating tf.summary usage to TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x).
#### How to Map Arguments
| TF1 Arg Name | TF2 Arg Name | Note |
| --- | --- | --- |
| `logdir` | `logdir` | - |
| `graph` | Not supported | - |
| `max_queue` | `max_queue` | - |
| `flush_secs` | `flush_millis` | The unit of time is changed from seconds to milliseconds. |
| `graph_def` | Not supported | - |
| `filename_suffix` | `filename_suffix` | - |
| `name` | `name` | - |
#### TF1 & TF2 Usage Example
TF1:
```
import numpy as np
import tensorflow as tf

dist = tf.compat.v1.placeholder(tf.float32, [100])
tf.compat.v1.summary.histogram(name="distribution", values=dist)
writer = tf.compat.v1.summary.FileWriter("/tmp/tf1_summary_example")
summaries = tf.compat.v1.summary.merge_all()
sess = tf.compat.v1.Session()
for step in range(100):
mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])
summ = sess.run(summaries, feed_dict={dist: mean_moving_normal})
writer.add_summary(summ, global_step=step)
```
TF2:
```
import numpy as np
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/tf2_summary_example")
for step in range(100):
mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])
with writer.as_default(step=step):
tf.summary.histogram(name='distribution', data=mean_moving_normal)
```
Description
-----------
The `FileWriter` class provides a mechanism to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously. This allows a training program to call methods to add data to the file directly from the training loop, without slowing down training.
When constructed with a [`tf.compat.v1.Session`](../session) parameter, a `FileWriter` instead forms a compatibility layer over new graph-based summaries to facilitate the use of new summary writing with pre-existing code that expects a `FileWriter` instance.
This class is not thread-safe.
| Args |
| `logdir` | A string. Directory where event file will be written. |
| `graph` | A `Graph` object, such as `sess.graph`. |
| `max_queue` | Integer. Size of the queue for pending events and summaries. |
| `flush_secs` | Number. How often, in seconds, to flush the pending events and summaries to disk. |
| `graph_def` | DEPRECATED: Use the `graph` argument instead. |
| `filename_suffix` | A string. Every event file's name is suffixed with `suffix`. |
| `session` | A [`tf.compat.v1.Session`](../session) object. See details above. |
| Raises |
| `RuntimeError` | If called with eager execution enabled. |
Methods
-------
### `add_event`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L446-L453)
```
add_event(
event
)
```
Adds an event to the event file.
| Args |
| `event` | An `Event` protocol buffer. |
### `add_graph`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L159-L210)
```
add_graph(
graph, global_step=None, graph_def=None
)
```
Adds a `Graph` to the event file.
The graph described by the protocol buffer will be displayed by TensorBoard. Most users pass a graph in the constructor instead.
| Args |
| `graph` | A `Graph` object, such as `sess.graph`. |
| `global_step` | Number. Optional global step counter to record with the graph. |
| `graph_def` | DEPRECATED. Use the `graph` parameter instead. |
| Raises |
| `ValueError` | If both graph and graph\_def are passed to the method. |
### `add_meta_graph`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L225-L245)
```
add_meta_graph(
meta_graph_def, global_step=None
)
```
Adds a `MetaGraphDef` to the event file.
The `MetaGraphDef` allows running the given graph via `saver.import_meta_graph()`.
| Args |
| `meta_graph_def` | A `MetaGraphDef` object, often as returned by `saver.export_meta_graph()`. |
| `global_step` | Number. Optional global step counter to record with the graph. |
| Raises |
| `TypeError` | If `meta_graph_def` is not an instance of `MetaGraphDef`. |
### `add_run_metadata`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L247-L269)
```
add_run_metadata(
run_metadata, tag, global_step=None
)
```
Adds metadata information for a single session.run() call.
| Args |
| `run_metadata` | A `RunMetadata` protobuf object. |
| `tag` | The tag name for this metadata. |
| `global_step` | Number. Optional global step counter to record with the StepStats. |
| Raises |
| `ValueError` | If the provided tag was already used for this type of event. |
### `add_session_log`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L140-L152)
```
add_session_log(
session_log, global_step=None
)
```
Adds a `SessionLog` protocol buffer to the event file.
This method wraps the provided session in an `Event` protocol buffer and adds it to the event file.
| Args |
| `session_log` | A `SessionLog` protocol buffer. |
| `global_step` | Number. Optional global step value to record with the summary. |
### `add_summary`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L97-L138)
```
add_summary(
summary, global_step=None
)
```
Adds a `Summary` protocol buffer to the event file.
This method wraps the provided summary in an `Event` protocol buffer and adds it to the event file.
You can pass the result of evaluating any summary op, using `tf.Session.run` or [`tf.Tensor.eval`](../../../tensor#eval), to this function. Alternatively, you can pass a [`tf.compat.v1.Summary`](../summary) protocol buffer that you populate with your own data. The latter is commonly done to report evaluation results in event files.
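As a minimal sketch of the second pattern (graph mode assumed; the tag and values here are illustrative), a `Summary` proto can be populated directly and handed to `add_summary`:
```
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
writer = tf.compat.v1.summary.FileWriter("/tmp/eval_logs")
# Populate a Summary proto by hand to report an evaluation metric.
summary = tf.compat.v1.Summary(value=[
    tf.compat.v1.Summary.Value(tag="eval/accuracy", simple_value=0.91)])
writer.add_summary(summary, global_step=100)
writer.flush()
```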
| Args |
| `summary` | A `Summary` protocol buffer, optionally serialized as a string. |
| `global_step` | Number. Optional global step value to record with the summary. |
### `close`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L466-L472)
```
close()
```
Flushes the event file to disk and closes the file.
Call this method when you do not need the summary writer anymore.
### `flush`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L455-L464)
```
flush()
```
Flushes the event file to disk.
Call this method to make sure that all pending events have been written to disk.
### `get_logdir`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L432-L434)
```
get_logdir()
```
Returns the directory where the event file will be written.
### `reopen`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L474-L483)
```
reopen()
```
Reopens the EventFileWriter.
Can be called after `close()` to add more events in the same directory. The events will go into a new events file.
Does nothing if the EventFileWriter was not closed.
### `__enter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L424-L426)
```
__enter__()
```
Make usable with "with" statement.
### `__exit__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/summary/writer/writer.py#L428-L430)
```
__exit__(
unused_type, unused_value, unused_traceback
)
```
Make usable with "with" statement.
tensorflow tf.compat.v1.summary.SummaryDescription tf.compat.v1.summary.SummaryDescription
=======================================
A ProtocolMessage
| Attributes |
| `type_hint` | `string type_hint` |
tensorflow tf.compat.v1.summary.tensor_summary tf.compat.v1.summary.tensor\_summary
====================================
Outputs a `Summary` protocol buffer with a serialized tensor.proto.
```
tf.compat.v1.summary.tensor_summary(
name,
tensor,
summary_description=None,
collections=None,
summary_metadata=None,
family=None,
display_name=None
)
```
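A minimal graph-mode usage sketch (the names and values here are illustrative, not from the original page):
```
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
summ = tf.compat.v1.summary.tensor_summary("activations", t)
with tf.compat.v1.Session() as sess:
  serialized = sess.run(summ)  # bytes: a serialized Summary protocol buffer
```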
| Args |
| `name` | A name for the generated node. If display\_name is not set, it will also serve as the tag name in TensorBoard. (In that case, the tag name will inherit tf name scopes.) |
| `tensor` | A tensor of any type and shape to serialize. |
| `summary_description` | A long description of the summary sequence. Markdown is supported. |
| `collections` | Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. |
| `summary_metadata` | Optional SummaryMetadata proto (which describes which plugins may use the summary value). |
| `family` | Optional; if provided, used as the prefix of the summary tag, which controls the name used for display on TensorBoard when display\_name is not set. |
| `display_name` | A string used to name this data in TensorBoard. If this is not set, then the node name will be used instead. |
| Returns |
| A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer. |
tensorflow tf.compat.v1.summary.all_v2_summary_ops tf.compat.v1.summary.all\_v2\_summary\_ops
==========================================
Returns all V2-style summary ops defined in the current default graph.
```
tf.compat.v1.summary.all_v2_summary_ops()
```
This includes ops from TF 2.0 tf.summary and TF 1.x tf.contrib.summary (except for `tf.contrib.summary.graph` and `tf.contrib.summary.import_event`), but does *not* include TF 1.x tf.summary ops.
| Returns |
| List of summary ops, or None if called under eager execution. |
tensorflow tf.compat.v1.summary.TaggedRunMetadata tf.compat.v1.summary.TaggedRunMetadata
======================================
A ProtocolMessage
| Attributes |
| `run_metadata` | `bytes run_metadata` |
| `tag` | `string tag` |
tensorflow tf.compat.v1.summary.histogram tf.compat.v1.summary.histogram
==============================
Outputs a `Summary` protocol buffer with a histogram.
```
tf.compat.v1.summary.histogram(
name, values, collections=None, family=None
)
```
Migrate to TF2
--------------
For compatibility purposes, when invoked in TF2 where the outermost context is eager mode, this API will check if there is a suitable TF2 summary writer context available, and if so will forward this call to that writer instead. A "suitable" writer context means that the writer is set as the default writer, and there is an associated non-empty value for `step` (see [`tf.summary.SummaryWriter.as_default`](../../../summary/summarywriter#as_default), [`tf.summary.experimental.set_step`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/set_step) or alternatively [`tf.compat.v1.train.create_global_step`](../train/create_global_step)). For the forwarded call, the arguments here will be passed to the TF2 implementation of [`tf.summary.histogram`](../../../summary/histogram), and the return value will be an empty bytestring tensor, to avoid duplicate summary writing. This forwarding is best-effort and not all arguments will be preserved.
To migrate to TF2, please use [`tf.summary.histogram`](../../../summary/histogram) instead. Please check [Migrating tf.summary usage to TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x) for concrete steps for migration.
#### How to Map Arguments
| TF1 Arg Name | TF2 Arg Name | Note |
| --- | --- | --- |
| `name` | `name` | - |
| `values` | `data` | - |
| - | `step` | Explicit int64-castable monotonic step value. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step). |
| - | `buckets` | Optional positive `int` specifying the histogram bucket number. |
| `collections` | Not Supported | - |
| `family` | Removed | Please use [`tf.name_scope`](../../../name_scope) instead to manage summary name prefix. |
| - | `description` | Optional long-form `str` description for the summary. Markdown is supported. Defaults to empty. |
Description
-----------
Adding a histogram summary makes it possible to visualize your data's distribution in TensorBoard. You can see a detailed explanation of the TensorBoard histogram dashboard [here](https://www.tensorflow.org/get_started/tensorboard_histograms).
The generated [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) has one summary value containing a histogram for `values`.
This op reports an `InvalidArgument` error if any value is not finite.
| Args |
| `name` | A name for the generated node. Will also serve as a series name in TensorBoard. |
| `values` | A real numeric `Tensor`. Any shape. Values to use to build the histogram. |
| `collections` | Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. |
| `family` | Optional; if provided, used as the prefix of the summary tag name, which controls the tab name used for display on Tensorboard. |
| Returns |
| A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer. |
tensorflow tf.compat.v1.summary.merge_all tf.compat.v1.summary.merge\_all
===============================
Merges all summaries collected in the default graph.
```
tf.compat.v1.summary.merge_all(
key=_ops.GraphKeys.SUMMARIES, scope=None, name=None
)
```
Migrate to TF2
--------------
This API is not compatible with eager execution or [`tf.function`](../../../function). To migrate to TF2, this API can be omitted entirely, because in TF2 individual summary ops, like [`tf.summary.scalar()`](../../../summary/scalar), write directly to the default summary writer if one is active. Thus, it's not necessary to merge summaries or to manually add the resulting merged summary output to the writer. See the usage example shown below.
For a comprehensive [`tf.summary`](../../../summary) migration guide, please follow [Migrating tf.summary usage to TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x).
#### TF1 & TF2 Usage Example
TF1:
```
import numpy as np
import tensorflow as tf

dist = tf.compat.v1.placeholder(tf.float32, [100])
tf.compat.v1.summary.histogram(name="distribution", values=dist)
writer = tf.compat.v1.summary.FileWriter("/tmp/tf1_summary_example")
summaries = tf.compat.v1.summary.merge_all()
sess = tf.compat.v1.Session()
for step in range(100):
mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])
summ = sess.run(summaries, feed_dict={dist: mean_moving_normal})
writer.add_summary(summ, global_step=step)
```
TF2:
```
import numpy as np
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/tf2_summary_example")
for step in range(100):
mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])
with writer.as_default(step=step):
tf.summary.histogram(name='distribution', data=mean_moving_normal)
```
Description
-----------
| Args |
| `key` | `GraphKey` used to collect the summaries. Defaults to `GraphKeys.SUMMARIES`. |
| `scope` | Optional scope used to filter the summary ops, using `re.match`. |
| `name` | A name for the operation (optional). |
| Returns |
| If no summaries were collected, returns None. Otherwise returns a scalar `Tensor` of type `string` containing the serialized `Summary` protocol buffer resulting from the merging. |
| Raises |
| `RuntimeError` | If called with eager execution enabled. |
tensorflow tf.compat.v1.summary.get_summary_description tf.compat.v1.summary.get\_summary\_description
==============================================
Given a TensorSummary node\_def, retrieve its SummaryDescription.
```
tf.compat.v1.summary.get_summary_description(
node_def
)
```
When a Summary op is instantiated, a SummaryDescription of associated metadata is stored in its NodeDef. This method retrieves the description.
| Args |
| `node_def` | the node\_def\_pb2.NodeDef of a TensorSummary op |
| Returns |
| a summary\_pb2.SummaryDescription |
| Raises |
| `ValueError` | if the node is not a summary op. |
eager compatibility
-------------------
Not compatible with eager execution. To write TensorBoard summaries under eager execution, use `tf.contrib.summary` instead.
tensorflow tf.compat.v1.initializers.lecun_normal tf.compat.v1.initializers.lecun\_normal
=======================================
LeCun normal initializer.
```
tf.compat.v1.initializers.lecun_normal(
seed=None
)
```
It draws samples from a truncated normal distribution centered on 0 with standard deviation (after truncation) given by `stddev = sqrt(1 / fan_in)` where `fan_in` is the number of input units in the weight tensor.
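A minimal usage sketch (the shape and variable name are illustrative); the same pattern applies to the other `he_*` and `lecun_*` initializers below:
```
import tensorflow as tf

init = tf.compat.v1.initializers.lecun_normal(seed=42)
# Draws from a truncated normal with stddev = sqrt(1 / fan_in) = sqrt(1 / 784).
w = tf.Variable(init(shape=[784, 256]))
```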
| Args |
| `seed` | A Python integer. Used to seed the random generator. |
| Returns |
| An initializer. |
#### References:
* Self-Normalizing Neural Networks, [Klambauer et al., 2017](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks)
([pdf](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf))
* Efficient Backprop, [Lecun et al., 1998](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)
tensorflow tf.compat.v1.initializers.he_uniform tf.compat.v1.initializers.he\_uniform
=====================================
He uniform variance scaling initializer.
```
tf.compat.v1.initializers.he_uniform(
seed=None
)
```
It draws samples from a uniform distribution within [-limit, limit] where `limit` is `sqrt(6 / fan_in)` where `fan_in` is the number of input units in the weight tensor.
| Args |
| `seed` | A Python integer. Used to seed the random generator. |
| Returns |
| An initializer. |
#### References:
[He et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html)
([pdf](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf))
tensorflow tf.compat.v1.initializers.lecun_uniform tf.compat.v1.initializers.lecun\_uniform
========================================
LeCun uniform initializer.
```
tf.compat.v1.initializers.lecun_uniform(
seed=None
)
```
It draws samples from a uniform distribution within [-limit, limit] where `limit` is `sqrt(3 / fan_in)` where `fan_in` is the number of input units in the weight tensor.
| Args |
| `seed` | A Python integer. Used to seed the random generator. |
| Returns |
| An initializer. |
#### References:
* Self-Normalizing Neural Networks, [Klambauer et al., 2017](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks)
([pdf](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf))
* Efficient Backprop, [Lecun et al., 1998](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)
tensorflow tf.compat.v1.initializers.he_normal tf.compat.v1.initializers.he\_normal
====================================
He normal initializer.
```
tf.compat.v1.initializers.he_normal(
seed=None
)
```
It draws samples from a truncated normal distribution centered on 0 with standard deviation (after truncation) given by `stddev = sqrt(2 / fan_in)` where `fan_in` is the number of input units in the weight tensor.
| Args |
| `seed` | A Python integer. Used to seed the random generator. |
| Returns |
| An initializer. |
#### References:
[He et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html)
([pdf](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf))
tensorflow Module: tf.types.experimental Module: tf.types.experimental
=============================
Public API for tf.types.experimental namespace.
Classes
-------
[`class Callable`](experimental/callable): Base class for TF callables like those created by tf.function.
[`class ConcreteFunction`](experimental/concretefunction): Base class for graph functions.
[`class GenericFunction`](experimental/genericfunction): Base class for polymorphic graph functions.
[`class SupportsTracingProtocol`](experimental/supportstracingprotocol): A protocol allowing custom classes to control tf.function retracing.
[`class TraceType`](experimental/tracetype): Represents the type of object(s) for tf.function tracing purposes.
[`class TracingContext`](experimental/tracingcontext): Contains information scoped to the tracing of multiple objects.
Type Aliases
------------
[`TensorLike`](experimental/tensorlike)
tensorflow tf.types.experimental.Callable tf.types.experimental.Callable
==============================
Base class for TF callables like those created by tf.function.
>
> **Note:** Callables are conceptually very similar to [`tf.Operation`](../../operation): a [`tf.Operation`](../../operation) is a kind of callable.
>
Methods
-------
### `__call__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/core.py#L87-L102)
```
__call__(
*args, **kwargs
)
```
Executes this callable.
This behaves like a regular op - in eager mode, it immediately starts execution, returning results. In graph mode, it creates ops which return symbolic TensorFlow values (like [`tf.Tensor`](../../tensor), [`tf.data.Dataset`](../../data/dataset), etc.). For example, [`tf.function`](../../function) callables typically generate a [`tf.raw_ops.PartitionedCall`](../../raw_ops/partitionedcall) op, but not always - the exact operations being generated are an internal implementation detail.
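For example, a minimal sketch of the eager-mode behavior (the function here is illustrative):
```
import tensorflow as tf

@tf.function
def double(x):
  return x * 2

# In eager mode the call executes immediately and returns a concrete Tensor.
print(double(tf.constant(3)))  # tf.Tensor(6, shape=(), dtype=int32)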
| Args |
| `*args` | positional argument for this call |
| `**kwargs` | keyword arguments for this call |
| Returns |
| The execution results. |
tensorflow tf.types.experimental.ConcreteFunction tf.types.experimental.ConcreteFunction
======================================
Base class for graph functions.
Inherits From: [`Callable`](callable)
A `ConcreteFunction` encapsulates a single graph function definition and is differentiable under [`tf.GradientTape`](../../gradienttape) contexts.
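A minimal sketch of differentiating through a concrete function (the function here is illustrative):
```
import tensorflow as tf

@tf.function
def square(x):
  return x * x

concrete = square.get_concrete_function(tf.TensorSpec([], tf.float32))
x = tf.constant(3.0)
with tf.GradientTape() as tape:
  tape.watch(x)  # x is a constant, so watch it explicitly
  y = concrete(x)
print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)
```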
Methods
-------
### `__call__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/core.py#L87-L102)
```
__call__(
*args, **kwargs
)
```
Executes this callable.
This behaves like a regular op - in eager mode, it immediately starts execution, returning results. In graph mode, it creates ops which return symbolic TensorFlow values (like [`tf.Tensor`](../../tensor), [`tf.data.Dataset`](../../data/dataset), etc.). For example, [`tf.function`](../../function) callables typically generate a [`tf.raw_ops.PartitionedCall`](../../raw_ops/partitionedcall) op, but not always - the exact operations being generated are an internal implementation detail.
| Args |
| `*args` | positional argument for this call |
| `**kwargs` | keyword arguments for this call |
| Returns |
| The execution results. |
tensorflow tf.types.experimental.GenericFunction tf.types.experimental.GenericFunction
=====================================
Base class for polymorphic graph functions.
Inherits From: [`Callable`](callable)
Graph functions are Python callable objects that dispatch calls to a TensorFlow graph. Polymorphic graph functions can be backed by multiple TF graphs, and automatically select the appropriate specialization based on the type of input they were called with. They may also create specializations on the fly if necessary, for example by tracing.
Also see [`tf.function`](../../function).
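A minimal sketch of that polymorphism (the function here is illustrative):
```
import tensorflow as tf

@tf.function
def identity(x):
  return x

identity(tf.constant(1))    # traces an int32 specialization
identity(tf.constant(1.0))  # traces a separate float32 specialization
identity(tf.constant(2.0))  # reuses the float32 specialization
```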
Methods
-------
### `experimental_get_compiler_ir`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/core.py#L175-L237)
```
experimental_get_compiler_ir(
*args, **kwargs
)
```
Returns compiler IR for the compiled function.
This API is intended *only* for debugging as there are no guarantees on backwards compatibility of returned IR or the allowed values of `stage`.
| Args |
| `*args` | Arguments used for compilation; same arguments as used for calling the function. Need to be eager tensors. |
| `**kwargs` | Keyword arguments used for compilation. |
| Returns |
| Function callable with the following kwargs: * `stage` at which the compiler IR should be serialized. Allowed values are:
+ `hlo`: HLO output after conversion from TF (<https://www.tensorflow.org/xla/operation_semantics>).
+ `hlo_serialized`: Like stage=`hlo`, but the output is a serialized HLO module proto (a bytes object).
+ `optimized_hlo`: HLO after compiler optimizations.
+ `optimized_hlo_serialized`: Like stage=`optimized_hlo`, but the output is a serialized HLO module proto (a bytes object).
+ `optimized_hlo_dot`: optimized HLO in DOT format suitable for Graphviz.
* `device_name` can be either None, in which case the preferred device is used for compilation, or a device name. It can be a full device name, or a partial one, e.g., `/device:CPU:0`.
For example, for
```
@tf.function(jit_compile=True)
def f(x):
return x + 1
f.experimental_get_compiler_ir(tf.random.normal([10, 10]))(stage='hlo')
```
the output is:
```
HloModule a_inference_f_13__.9
ENTRY %a_inference_f_13__.9 (arg0.1: f32[10,10]) -> f32[10,10] {
%arg0.1 = f32[10,10]{1,0} parameter(0), parameter_replication={false}
%reshape.2 = f32[10,10]{1,0} reshape(f32[10,10]{1,0} %arg0.1)
%constant.3 = f32[] constant(1)
%broadcast.4 = f32[10,10]{1,0} broadcast(f32[] %constant.3)
%add.5 = f32[10,10]{1,0} add(f32[10,10]{1,0} %reshape.2,
f32[10,10]{1,0} %broadcast.4)
%reshape.6 = f32[10,10]{1,0} reshape(f32[10,10]{1,0} %add.5)
%tuple.7 = (f32[10,10]{1,0}) tuple(f32[10,10]{1,0} %reshape.6)
ROOT %get-tuple-element.8 = f32[10,10]{1,0}
get-tuple-element((f32[10,10]{1,0}) %tuple.7), index=0
}
```
|
| Raises |
| `ValueError` | If an invalid `stage` is selected or if applied to a function which is not compiled (`jit_compile=True` is not set). |
| `TypeError` | When called with input in graph mode. |
### `get_concrete_function`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/core.py#L128-L173)
```
get_concrete_function(
*args, **kwargs
) -> tf.types.experimental.ConcreteFunction
```
Returns a `ConcreteFunction` specialized to input types.
The arguments specified by `args` and `kwargs` follow normal function call rules. The returned `ConcreteFunction` has the same set of positional and keyword arguments as `self`, but their types are compatible to the types specified by `args` and `kwargs` (though not necessarily equal).
```
@tf.function
def f(x):
return x
f_concrete = f.get_concrete_function(tf.constant(1.0))
f_concrete = f.get_concrete_function(x=tf.constant(1.0))
```
Unlike normal calls, `get_concrete_function` allow type specifiers instead of TensorFlow objects, so for example [`tf.Tensor`](../../tensor)s may be replaced with [`tf.TensorSpec`](../../tensorspec)s.
```
@tf.function
def f(x):
return x
f_concrete = f.get_concrete_function(tf.TensorSpec([], tf.float64))
```
If the function definition allows only one specialization, `args` and `kwargs` may be omitted altogether.
```
@tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])
def f(x):
return x
f_concrete = f.get_concrete_function()
```
The returned `ConcreteFunction` can be called normally:
```
f_concrete(tf.constant(1.0))
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
f_concrete(x=tf.constant(1.0))
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
```
| Args |
| `*args` | inputs to specialize on. |
| `**kwargs` | inputs to specialize on. |
| Returns |
| A `ConcreteFunction`. |
### `__call__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/core.py#L87-L102)
```
__call__(
*args, **kwargs
)
```
Executes this callable.
This behaves like a regular op - in eager mode, it immediately starts execution, returning results. In graph mode, it creates ops which return symbolic TensorFlow values (like [`tf.Tensor`](../../tensor), [`tf.data.Dataset`](../../data/dataset), etc.). For example, [`tf.function`](../../function) callables typically generate a [`tf.raw_ops.PartitionedCall`](../../raw_ops/partitionedcall) op, but not always - the exact operations being generated are an internal implementation detail.
| Args |
| `*args` | positional argument for this call |
| `**kwargs` | keyword arguments for this call |
| Returns |
| The execution results. |
tensorflow tf.types.experimental.TraceType tf.types.experimental.TraceType
===============================
Represents the type of object(s) for tf.function tracing purposes.
`TraceType` is an abstract class that other classes might inherit from to provide information regarding associated class(es) for the purposes of tf.function tracing. The typing logic provided through this mechanism will be used to make decisions regarding usage of cached concrete functions and retracing.
For example, if we have the following tf.function and classes:
```
@tf.function
def get_mixed_flavor(fruit_a, fruit_b):
return fruit_a.flavor + fruit_b.flavor
class Fruit:
flavor = tf.constant([0, 0])
class Apple(Fruit):
flavor = tf.constant([1, 2])
class Mango(Fruit):
flavor = tf.constant([3, 4])
```
tf.function does not know when to reuse an existing concrete function for the `Fruit` class, so it naively retraces for every new instance.
```
get_mixed_flavor(Apple(), Mango()) # Traces a new concrete function
get_mixed_flavor(Apple(), Mango()) # Traces a new concrete function again
```
However, we, as the designers of the `Fruit` class, know that each subclass has a fixed flavor and we can reuse an existing traced concrete function if it was the same subclass. Avoiding such unnecessary tracing of concrete functions can have significant performance benefits.
```
class FruitTraceType(tf.types.experimental.TraceType):
def __init__(self, fruit_type):
self.fruit_type = fruit_type
def is_subtype_of(self, other):
return (type(other) is FruitTraceType and
self.fruit_type is other.fruit_type)
def most_specific_common_supertype(self, others):
    return self if all(self == other for other in others) else None
  def __eq__(self, other):
    # TraceType also requires __eq__ (with a matching __hash__) so that
    # cached signatures can be looked up.
    return (isinstance(other, FruitTraceType) and
            self.fruit_type is other.fruit_type)
  def __hash__(self):
    return hash(self.fruit_type)
class Fruit:
def __tf_tracing_type__(self, context):
return FruitTraceType(type(self))
```
Now if we try calling it again:
```
get_mixed_flavor(Apple(), Mango()) # Traces a new concrete function
get_mixed_flavor(Apple(), Mango()) # Re-uses the traced concrete function
```
Methods
-------
### `is_subtype_of`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/trace.py#L99-L123)
```
@abc.abstractmethod
is_subtype_of(
other: 'TraceType'
) -> bool
```
Returns True if `self` is a subtype of `other`.
For example, [`tf.function`](../../function) uses subtyping for dispatch: if `a.is_subtype_of(b)` is True, then an argument of `TraceType` `a` can be used as an argument to a `ConcreteFunction` traced with `TraceType` `b`.
| Args |
| `other` | A TraceType object to be compared against. |
#### Example:
```
class Dimension(TraceType):
def __init__(self, value: Optional[int]):
self.value = value
def is_subtype_of(self, other):
# Either the value is the same or other has a generalized value that
# can represent any specific ones.
return (self.value == other.value) or (other.value is None)
```
### `most_specific_common_supertype`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/trace.py#L125-L154)
```
@abc.abstractmethod
most_specific_common_supertype(
others: Sequence['TraceType']
) -> Optional['TraceType']
```
Returns the most specific supertype of `self` and `others`, if exists.
The returned `TraceType` is a supertype of `self` and `others`, that is, they are all subtypes (see `is_subtype_of`) of it. It is also most specific, that is, it has no subtype that is also a common supertype of `self` and `others`.
If `self` and `others` have no common supertype, this returns `None`.
| Args |
| `others` | A sequence of TraceTypes. |
#### Example:
```
class Dimension(TraceType):
def __init__(self, value: Optional[int]):
self.value = value
  def most_specific_common_supertype(self, others):
    # Either every value is the same, or we generalize to a Dimension
    # that can represent any specific ones.
    if all(self.value == other.value for other in others):
      return Dimension(self.value)
    else:
      return Dimension(None)
```
### `__eq__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/trace.py#L196-L198)
```
@abc.abstractmethod
__eq__(
other
) -> bool
```
Return self==value.
tensorflow tf.types.experimental.SupportsTracingProtocol tf.types.experimental.SupportsTracingProtocol
=============================================
A protocol allowing custom classes to control tf.function retracing.
```
tf.types.experimental.SupportsTracingProtocol(
*args, **kwargs
)
```
Methods
-------
### `__tf_tracing_type__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/types/trace.py#L218-L235)
```
@abc.abstractmethod
__tf_tracing_type__(
context: tf.types.experimental.TracingContext
) -> tf.types.experimental.TraceType
```
Returns the tracing type of this object.
The tracing type is used to build the signature of a tf.function when traced, and to match arguments with existing signatures. When a Function object is called, tf.function looks at the tracing type of the call arguments. If an existing signature of matching type exists, it will be used. Otherwise, a new function is traced, and its signature will use the tracing type of the call arguments.
| Args |
| `context` | a context object created for each function call for tracking information about the call arguments as a whole |
| Returns |
| The tracing type of this object. |
tensorflow tf.types.experimental.TracingContext tf.types.experimental.TracingContext
====================================
Contains information scoped to the tracing of multiple objects.
`TracingContext` is a container class for flags and variables that have any kind of influence on the tracing behaviour of the class implementing `__tf_tracing_type__`. This context will be shared across all `__tf_tracing_type__` calls while constructing the TraceType for a particular set of objects.
tensorflow tf.types.experimental.TensorLike tf.types.experimental.TensorLike
================================
This symbol is a **type alias**.
Union of all types that can be converted to a [`tf.Tensor`](../../tensor) by [`tf.convert_to_tensor`](../../convert_to_tensor).
#### Source:
```
TensorLike = Union[
tensorflow.python.types.core.Tensor,
tensorflow.python.types.core.TensorProtocol,
int,
float,
bool,
str,
bytes,
complex,
tuple,
list,
numpy.ndarray,
numpy.generic
]
```
This definition may be used in user code. Additional types may be added in the future as more input types are supported.
#### Example:
```
def foo(x: TensorLike):
pass
```
This definition passes static type verification for:
```
foo(tf.constant([1, 2, 3]))
foo([1, 2, 3])
foo(np.array([1, 2, 3]))
```
tensorflow tf.saved_model.Asset tf.saved\_model.Asset
=====================
Represents a file asset to hermetically include in a SavedModel.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.saved_model.Asset`](https://www.tensorflow.org/api_docs/python/tf/saved_model/Asset)
```
tf.saved_model.Asset(
path
)
```
A SavedModel can include arbitrary files, called assets, that are needed for its use. For example, a vocabulary file used to initialize a lookup table.
When a trackable object is exported via [`tf.saved_model.save()`](save), all the `Asset`s reachable from it are copied into the SavedModel assets directory. Upon loading, the assets and the serialized functions that depend on them will refer to the correct filepaths inside the SavedModel directory.
#### Example:
```
filename = tf.saved_model.Asset("file.txt")
@tf.function(input_signature=[])
def func():
return tf.io.read_file(filename)
trackable_obj = tf.train.Checkpoint()
trackable_obj.func = func
trackable_obj.filename = filename
tf.saved_model.save(trackable_obj, "/tmp/saved_model")
# The created SavedModel is hermetic, it does not depend on
# the original file and can be moved to another path.
tf.io.gfile.remove("file.txt")
tf.io.gfile.rename("/tmp/saved_model", "/tmp/new_location")
reloaded_obj = tf.saved_model.load("/tmp/new_location")
print(reloaded_obj.func())
```
| Attributes |
| `asset_path` | A path, or a 0-D [`tf.string`](../../tf#string) tensor with path to the asset. |
tensorflow tf.saved_model.load tf.saved\_model.load
====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/load.py#L689-L783) |
Load a SavedModel from `export_dir`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.saved_model.load_v2`](https://www.tensorflow.org/api_docs/python/tf/saved_model/load)
```
tf.saved_model.load(
export_dir, tags=None, options=None
)
```
Signatures associated with the SavedModel are available as functions:
```
imported = tf.saved_model.load(path)
f = imported.signatures["serving_default"]
print(f(x=tf.constant([[1.]])))
```
Objects exported with [`tf.saved_model.save`](save) additionally have trackable objects and functions assigned to attributes:
```
exported = tf.train.Checkpoint(v=tf.Variable(3.))
exported.f = tf.function(
lambda x: exported.v * x,
input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
tf.saved_model.save(exported, path)
imported = tf.saved_model.load(path)
assert 3. == imported.v.numpy()
assert 6. == imported.f(x=tf.constant(2.)).numpy()
```
*Loading Keras models*
Keras models are trackable, so they can be saved to SavedModel. The object returned by [`tf.saved_model.load`](load) is not a Keras object (i.e. doesn't have `.fit`, `.predict`, etc. methods). A few attributes and functions are still available: `.variables`, `.trainable_variables` and `.__call__`.
```
model = tf.keras.Model(...)
tf.saved_model.save(model, path)
imported = tf.saved_model.load(path)
outputs = imported(inputs)
```
Use [`tf.keras.models.load_model`](../keras/models/load_model) to restore the Keras model.
*Importing SavedModels from TensorFlow 1.x*
SavedModels from [`tf.estimator.Estimator`](../estimator/estimator) or 1.x SavedModel APIs have a flat graph instead of [`tf.function`](../function) objects. These SavedModels will be loaded with the following attributes:
* `.signatures`: A dictionary mapping signature names to functions.
* `.prune(feeds, fetches)`: A method which allows you to extract functions for new subgraphs. This is equivalent to importing the SavedModel and naming feeds and fetches in a Session from TensorFlow 1.x.
```
imported = tf.saved_model.load(path_to_v1_saved_model)
pruned = imported.prune("x:0", "out:0")
pruned(tf.ones([]))
```
See [`tf.compat.v1.wrap_function`](../compat/v1/wrap_function) for details.
* `.variables`: A list of imported variables.
* `.graph`: The whole imported graph.
* `.restore(save_path)`: A function that restores variables from a checkpoint saved from `tf.compat.v1.Saver`.
*Consuming SavedModels asynchronously*
When consuming SavedModels asynchronously (the producer is a separate process), the SavedModel directory will appear before all files have been written, and [`tf.saved_model.load`](load) will fail if pointed at an incomplete SavedModel. Rather than checking for the directory, check for "saved\_model\_dir/saved\_model.pb". This file is written atomically as the last [`tf.saved_model.save`](save) file operation.
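A minimal polling sketch for that pattern (the directory and polling interval here are illustrative):
```
import os
import time
import tensorflow as tf

export_dir = "/tmp/produced_model"
# saved_model.pb is written atomically as the last save operation.
while not tf.io.gfile.exists(os.path.join(export_dir, "saved_model.pb")):
  time.sleep(1)
model = tf.saved_model.load(export_dir)
```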
| Args |
| `export_dir` | The SavedModel directory to load from. |
| `tags` | A tag or sequence of tags identifying the MetaGraph to load. Optional if the SavedModel contains a single MetaGraph, as for those exported from [`tf.saved_model.save`](save). |
| `options` | [`tf.saved_model.LoadOptions`](loadoptions) object that specifies options for loading. |
| Returns |
| A trackable object with a `signatures` attribute mapping from signature keys to functions. If the SavedModel was exported by [`tf.saved_model.save`](save), it also points to the trackable objects, functions, and debug info with which it was saved. |
| Raises |
| `ValueError` | If `tags` don't match a MetaGraph in the SavedModel. |
tensorflow tf.saved_model.LoadOptions tf.saved\_model.LoadOptions
===========================
Options for loading a SavedModel.
```
tf.saved_model.LoadOptions(
allow_partial_checkpoint=False,
experimental_io_device=None,
experimental_skip_checkpoint=False
)
```
This function may be used in the `options` argument in functions that load a SavedModel ([`tf.saved_model.load`](load), [`tf.keras.models.load_model`](../keras/models/load_model)).
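A minimal sketch (the path and device string here are illustrative):
```
import tensorflow as tf

options = tf.saved_model.LoadOptions(experimental_io_device="/job:localhost")
model = tf.saved_model.load("/tmp/saved_model", options=options)
```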
| Args |
| `allow_partial_checkpoint` | bool. Defaults to `False`. When enabled, allows the SavedModel checkpoint to not entirely match the loaded object. |
| `experimental_io_device` | string. Applies in a distributed setting. Tensorflow device to use to access the filesystem. If `None` (default) then for each variable the filesystem is accessed from the CPU:0 device of the host where that variable is assigned. If specified, the filesystem is instead accessed from that device for all variables. This is for example useful if you want to load from a local directory, such as "/tmp" when running in a distributed setting. In that case pass a device for the host where the "/tmp" directory is accessible. |
| `experimental_skip_checkpoint` | bool. Defaults to `False`. If set to `True`, checkpoints will not be restored. Note that this in the majority of cases will generate an unusable model. |
| Attributes |
| `allow_partial_checkpoint` | |
| `experimental_io_device` | |
| `experimental_skip_checkpoint` | |
tensorflow tf.saved_model.save tf.saved\_model.save
====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/save.py#L1106-L1291) |
Exports a [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) (and subclasses) `obj` to [SavedModel format](https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.saved_model.experimental.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save), [`tf.compat.v1.saved_model.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save)
```
tf.saved_model.save(
obj, export_dir, signatures=None, options=None
)
```
The `obj` must inherit from the [`Trackable` class](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/tracking/base.py#L591).
#### Example usage:
```
class Adder(tf.Module):
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
def add(self, x):
return x + x
```
```
model = Adder()
tf.saved_model.save(model, '/tmp/adder')
```
The resulting SavedModel is then servable with an input named "x", a scalar with dtype float32.
*Signatures*
Signatures define the input and output types for a computation. The optional save `signatures` argument controls which methods in `obj` will be available to programs which consume `SavedModel`s, for example, serving APIs. Python functions may be decorated with [`@tf.function(input_signature=...)`](../function) and passed as signatures directly, or lazily with a call to `get_concrete_function` on the method decorated with [`@tf.function`](../function).
#### Example:
```
class Adder(tf.Module):
@tf.function
def add(self, x):
return x + x
```
```
model = Adder()
tf.saved_model.save(
model, '/tmp/adder',signatures=model.add.get_concrete_function(
tf.TensorSpec([], tf.float32)))
```
If a [`@tf.function`](../function) does not have an input signature and `get_concrete_function` is not called on that method, the function will not be directly callable in the restored SavedModel.
#### Example:
```
class Adder(tf.Module):
@tf.function
def add(self, x):
return x + x
```
```
model = Adder()
tf.saved_model.save(model, '/tmp/adder')
restored = tf.saved_model.load('/tmp/adder')
restored.add(1.)
Traceback (most recent call last):
ValueError: Found zero restored functions for caller function.
```
If the `signatures` argument is omitted, `obj` will be searched for [`@tf.function`](../function)-decorated methods. If exactly one traced [`@tf.function`](../function) is found, that method will be used as the default signature for the SavedModel. Else, any [`@tf.function`](../function) attached to `obj` or its dependencies will be exported for use with [`tf.saved_model.load`](load).
When invoking a signature in an exported SavedModel, `Tensor` arguments are identified by name. These names will come from the Python function's argument names by default. They may be overridden by specifying a `name=...` argument in the corresponding [`tf.TensorSpec`](../tensorspec) object. Explicit naming is required if multiple `Tensor`s are passed through a single argument to the Python function.
The outputs of functions used as `signatures` must either be flat lists, in which case outputs will be numbered, or a dictionary mapping string keys to `Tensor`, in which case the keys will be used to name outputs.
Signatures are available in objects returned by [`tf.saved_model.load`](load) as a `.signatures` attribute. This is a reserved attribute: [`tf.saved_model.save`](save) on an object with a custom `.signatures` attribute will raise an exception.
*Using [`tf.saved_model.save`](save) with Keras models*
While Keras has its own [saving and loading API](https://www.tensorflow.org/guide/keras/save_and_serialize), this function can be used to export Keras models. For example, exporting with a signature specified:
```
class Adder(tf.keras.Model):
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])
def concat(self, x):
return x + x
```
```
model = Adder()
tf.saved_model.save(model, '/tmp/adder')
```
Exporting from a function without a fixed signature:
```
class Adder(tf.keras.Model):
@tf.function
def concat(self, x):
return x + x
```
```
model = Adder()
tf.saved_model.save(
model, '/tmp/adder',
signatures=model.concat.get_concrete_function(
tf.TensorSpec(shape=[], dtype=tf.string, name="string_input")))
```
[`tf.keras.Model`](../keras/model) instances constructed from inputs and outputs already have a signature and so do not require a [`@tf.function`](../function) decorator or a `signatures` argument. If neither are specified, the model's forward pass is exported.
```
x = tf.keras.layers.Input((4,), name="x")
y = tf.keras.layers.Dense(5, name="out")(x)
model = tf.keras.Model(x, y)
tf.saved_model.save(model, '/tmp/saved_model/')
```
The exported SavedModel takes "x" with shape [None, 4] and returns "out" with shape [None, 5].
*Variables and Checkpoints*
Variables must be tracked by assigning them to an attribute of a tracked object or to an attribute of `obj` directly. TensorFlow objects (e.g. layers from [`tf.keras.layers`](../keras/layers), optimizers from [`tf.train`](../train)) track their variables automatically. This is the same tracking scheme that [`tf.train.Checkpoint`](../train/checkpoint) uses, and an exported `Checkpoint` object may be restored as a training checkpoint by pointing [`tf.train.Checkpoint.restore`](../train/checkpoint#restore) to the SavedModel's "variables/" subdirectory.
[`tf.function`](../function) does not hard-code device annotations from outside the function body, instead using the calling context's device. This means, for example, that exporting a model that runs on a GPU and serving it on a CPU will generally work, with some exceptions:
* [`tf.device`](../device) annotations inside the body of the function will be hard-coded in the exported model; this type of annotation is discouraged.
* Device-specific operations, e.g. with "cuDNN" in the name or with device-specific layouts, may cause issues.
* For `ConcreteFunctions`, active distribution strategies will cause device placements to be hard-coded in the function.
SavedModels exported with [`tf.saved_model.save`](save) [strip default-valued attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes) automatically, which removes one source of incompatibilities when the consumer of a SavedModel is running an older TensorFlow version than the producer. There are however other sources of incompatibilities which are not handled automatically, such as when the exported model contains operations which the consumer does not have definitions for.
| Args |
| `obj` | A trackable object (e.g. tf.Module or tf.train.Checkpoint) to export. |
| `export_dir` | A directory in which to write the SavedModel. |
| `signatures` | Optional, one of three types: * a [`tf.function`](../function) with an input signature specified, which will use the default serving signature key,
* the result of `f.get_concrete_function` on a [`@tf.function`](../function)-decorated function `f`, in which case `f` will be used to generate a signature for the SavedModel under the default serving signature key,
* a dictionary, which maps signature keys to either [`tf.function`](../function) instances with input signatures or concrete functions. Keys of such a dictionary may be arbitrary strings, but will typically be from the `tf.saved_model.signature_constants` module.
|
| `options` | [`tf.saved_model.SaveOptions`](saveoptions) object for configuring save options. |
| Raises |
| `ValueError` | If `obj` is not trackable. |
eager compatibility
-------------------
Not well supported when graph building. From TensorFlow 1.x, [`tf.compat.v1.enable_eager_execution()`](../compat/v1/enable_eager_execution) should run first. Calling tf.saved\_model.save in a loop when graph building from TensorFlow 1.x will add new save operations to the default graph each iteration.
May not be called from within a function body.
| programming_docs |
tensorflow Module: tf.saved_model.experimental Module: tf.saved\_model.experimental
====================================
Public API for tf.saved\_model.experimental namespace.
Classes
-------
[`class TrackableResource`](experimental/trackableresource): Holds a Tensor which a tf.function can capture.
[`class VariablePolicy`](experimental/variablepolicy): Enum defining options for variable handling when saving.
tensorflow tf.saved_model.SaveOptions tf.saved\_model.SaveOptions
===========================
Options for saving to SavedModel.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.saved_model.SaveOptions`](https://www.tensorflow.org/api_docs/python/tf/saved_model/SaveOptions)
```
tf.saved_model.SaveOptions(
namespace_whitelist=None,
save_debug_info=False,
function_aliases=None,
experimental_io_device=None,
experimental_variable_policy=None,
experimental_custom_gradients=True
)
```
This function may be used in the `options` argument in functions that save a SavedModel ([`tf.saved_model.save`](save), [`tf.keras.models.save_model`](../keras/models/save_model)).
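A minimal sketch (the module and path here are illustrative):
```
import tensorflow as tf

module = tf.Module()
module.v = tf.Variable(1.0)
options = tf.saved_model.SaveOptions(save_debug_info=True)
tf.saved_model.save(module, "/tmp/module_with_debug_info", options=options)
```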
| Args |
| `namespace_whitelist` | List of strings containing op namespaces to whitelist when saving a model. Saving an object that uses namespaced ops must explicitly add all namespaces to the whitelist. The namespaced ops must be registered into the framework when loading the SavedModel. If no whitelist is provided, all namespaced ops will be allowed. |
| `save_debug_info` | Boolean indicating whether debug information is saved. If True, then a debug/saved\_model\_debug\_info.pb file will be written with the contents of a GraphDebugInfo binary protocol buffer containing stack trace information for all ops and functions that are saved. |
| `function_aliases` | Python dict. Mapping from string to object returned by @tf.function. A single tf.function can generate many ConcreteFunctions. If a downstream tool wants to refer to all concrete functions generated by a single tf.function you can use the `function_aliases` argument to store a map from the alias name to all concrete function names. E.g.
```
class Adder(tf.Module):
@tf.function
def double(self, x):
return x + x
```
```
model = Adder()
model.double.get_concrete_function(
tf.TensorSpec(shape=[], dtype=tf.float32, name="float_input"))
model.double.get_concrete_function(
tf.TensorSpec(shape=[], dtype=tf.string, name="string_input"))
```
```
options = tf.saved_model.SaveOptions(
function_aliases={'double': model.double})
tf.saved_model.save(model, '/tmp/adder', options=options)
```
|
| `experimental_io_device` | string. Applies in a distributed setting. Tensorflow device to use to access the filesystem. If `None` (default) then for each variable the filesystem is accessed from the CPU:0 device of the host where that variable is assigned. If specified, the filesystem is instead accessed from that device for all variables. This is for example useful if you want to save to a local directory, such as "/tmp" when running in a distributed setting. In that case pass a device for the host where the "/tmp" directory is accessible. |
| `experimental_variable_policy` | The policy to apply to variables when saving. This is either a [`saved_model.experimental.VariablePolicy`](experimental/variablepolicy) enum instance or one of its value strings (case is not important). See that enum documentation for details. A value of `None` corresponds to the default policy. |
| `experimental_custom_gradients` | Boolean. When True, will save traced gradient functions for the functions decorated by [`tf.custom_gradient`](../custom_gradient). Defaults to `True`. |
| Attributes |
| `experimental_custom_gradients` | |
| `experimental_io_device` | |
| `experimental_variable_policy` | |
| `function_aliases` | |
| `namespace_whitelist` | |
| `save_debug_info` | |
tensorflow tf.saved_model.contains_saved_model tf.saved\_model.contains\_saved\_model
======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/loader_impl.py#L249-L267) |
Checks whether the provided export directory could contain a SavedModel.
```
tf.saved_model.contains_saved_model(
export_dir
)
```
Note that the method does not load any data by itself. If the method returns `false`, the export directory definitely does not contain a SavedModel. If the method returns `true`, the export directory may contain a SavedModel but provides no guarantee that it can be loaded.
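A minimal sketch (the path here is illustrative):
```
import tensorflow as tf

if tf.saved_model.contains_saved_model("/tmp/adder"):
  model = tf.saved_model.load("/tmp/adder")
```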
| Args |
| `export_dir` | Absolute path to possible export location. For example, '/my/foo/model'. |
| Returns |
| True if the export directory contains SavedModel files, False otherwise. |
tensorflow tf.saved_model.experimental.VariablePolicy tf.saved\_model.experimental.VariablePolicy
===========================================
Enum defining options for variable handling when saving.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.saved_model.experimental.VariablePolicy`](https://www.tensorflow.org/api_docs/python/tf/saved_model/experimental/VariablePolicy)
`NONE`: No policy applied: Distributed variables are saved as one variable, with no device attached.
`SAVE_VARIABLE_DEVICES`: When saving variables, also save their device assignment. This is useful if one wants to hardcode devices in saved models, but it also makes them non-portable if soft device placement is disabled (more details in [`tf.config.set_soft_device_placement`](../../config/set_soft_device_placement)). This is currently not fully supported by [`saved_model.load`](../load), and is mainly intended to be used when one will be reading the saved model at a lower API level. In the example below, the graph saved by the call to [`saved_model.save`](../save) will have the variable devices correctly specified:
```
exported = tf.train.Checkpoint()
with tf.device('/GPU:0'):
exported.x_gpu = tf.Variable(1.0)
with tf.device('/CPU:0'):
exported.x_cpu = tf.Variable(1.0)
tf.saved_model.save(exported, export_dir,
options = tf.saved_model.SaveOptions(
experimental_variable_policy=
tf.saved_model.experimental.VariablePolicy.SAVE_VARIABLE_DEVICES))
```
Distributed variables are still saved as one variable under this policy.
`EXPAND_DISTRIBUTED_VARIABLES`: Distributed variables will be saved with information about their components, allowing for their restoration on load. Also, the saved graph will contain references to those variables. This is useful when one wants to use the model for training in environments where the original distribution strategy is not available.
| Class Variables |
| EXPAND\_DISTRIBUTED\_VARIABLES | `<VariablePolicy.EXPAND_DISTRIBUTED_VARIABLES: 'expand_distributed_variables'>` |
| NONE | `<VariablePolicy.NONE: None>` |
| SAVE\_VARIABLE\_DEVICES | `<VariablePolicy.SAVE_VARIABLE_DEVICES: 'save_variable_devices'>` |
tensorflow tf.saved_model.experimental.TrackableResource tf.saved\_model.experimental.TrackableResource
==============================================
Holds a Tensor which a tf.function can capture.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.saved_model.experimental.TrackableResource`](https://www.tensorflow.org/api_docs/python/tf/saved_model/experimental/TrackableResource)
```
tf.saved_model.experimental.TrackableResource(
device=''
)
```
A TrackableResource is most useful for stateful Tensors that require initialization, such as [`tf.lookup.StaticHashTable`](../../lookup/statichashtable). `TrackableResource`s are discovered by traversing the graph of object attributes, e.g. during [`tf.saved_model.save`](../save).
A TrackableResource has three methods to override:
* `_create_resource` should create the resource tensor handle.
* `_initialize` should initialize the resource held at `self.resource_handle`.
* `_destroy_resource` is called upon a `TrackableResource`'s destruction and should decrement the resource's ref count. For most resources, this should be done with a call to [`tf.raw_ops.DestroyResourceOp`](../../raw_ops/destroyresourceop).
#### Example usage:
```
class DemoResource(tf.saved_model.experimental.TrackableResource):
  def __init__(self):
    super().__init__()
    self._initialize()

  def _create_resource(self):
    return tf.raw_ops.VarHandleOp(dtype=tf.float32, shape=[2])

  def _initialize(self):
    tf.raw_ops.AssignVariableOp(
        resource=self.resource_handle, value=tf.ones([2]))

  def _destroy_resource(self):
    tf.raw_ops.DestroyResourceOp(resource=self.resource_handle)

class DemoModule(tf.Module):
  def __init__(self):
    self.resource = DemoResource()

  def increment(self, tensor):
    return tensor + tf.raw_ops.ReadVariableOp(
        resource=self.resource.resource_handle, dtype=tf.float32)

demo = DemoModule()
demo.increment([5, 1])
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([6., 2.], dtype=float32)>
```
| Args |
| `device` | A string indicating a required placement for this resource, e.g. "CPU" if this resource must be created on a CPU device. A blank device allows the user to place resource creation, so generally this should be blank unless the resource only makes sense on one device. |
| Attributes |
| `resource_handle` | Returns the resource handle associated with this Resource. |
tensorflow tf.linalg.LinearOperatorInversion tf.linalg.LinearOperatorInversion
=================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_inversion.py#L27-L216) |
`LinearOperator` representing the inverse of another operator.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorInversion`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorInversion)
```
tf.linalg.LinearOperatorInversion(
operator,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name=None
)
```
This operator represents the inverse of another operator.
```
# Create a 2 x 2 linear operator.
operator = LinearOperatorFullMatrix([[1., 0.], [0., 2.]])
operator_inv = LinearOperatorInversion(operator)
operator_inv.to_dense()
==> [[1., 0.]
[0., 0.5]]
operator_inv.shape
==> [2, 2]
operator_inv.log_abs_determinant()
==> - log(2)
x = ... Shape [2, 4] Tensor
operator_inv.matmul(x)
==> Shape [2, 4] Tensor, equal to operator.solve(x)
```
#### Performance
The performance of `LinearOperatorInversion` depends on the performance of the underlying operator: `solve` and `matmul` are swapped, and the determinant is inverted.
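A minimal runnable sketch of this swap, checking that `matmul` on the inverse agrees with `solve` on the base operator (the `is_non_singular=True` hint is supplied so inversion is permitted):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorFullMatrix(
    [[1., 0.], [0., 2.]], is_non_singular=True)
operator_inv = tf.linalg.LinearOperatorInversion(operator)

x = tf.ones([2, 4])
# matmul by the inverse dispatches to a solve on the base operator.
tf.debugging.assert_near(operator_inv.matmul(x), operator.solve(x))
```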
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `operator` | `LinearOperator` object. If `operator.is_non_singular == False`, an exception is raised. We do allow `operator.is_non_singular == None`, in which case this operator will have `is_non_singular == None`. Similarly for `is_self_adjoint` and `is_positive_definite`. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. Default is `operator.name + "_inv"`. |
| Raises |
| `ValueError` | If `operator.is_non_singular` is False. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `operator` | The operator before inversion. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix, meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.matvec tf.linalg.matvec
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L3717-L3814) |
Multiplies matrix `a` by vector `b`, producing `a * b`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.matvec`](https://www.tensorflow.org/api_docs/python/tf/linalg/matvec)
```
tf.linalg.matvec(
a,
b,
transpose_a=False,
adjoint_a=False,
a_is_sparse=False,
b_is_sparse=False,
name=None
)
```
The matrix `a` must, following any transpositions, be a tensor of rank >= 2, with `shape(a)[-1] == shape(b)[-1]`, and `shape(a)[:-2]` able to broadcast with `shape(b)[:-1]`.
Both `a` and `b` must be of the same type. The supported types are: `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.
Matrix `a` can be transposed or adjointed (conjugated and transposed) on the fly by setting the corresponding flag to `True`. These are `False` by default.
If one or both of the inputs contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. This optimization is only available for plain matrices/vectors (rank-2/1 tensors) with datatypes `bfloat16` or `float32`.
#### For example:
```
import numpy as np
import tensorflow as tf

# 2-D tensor `a`
# [[1, 2, 3],
#  [4, 5, 6]]
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])

# 1-D tensor `b`
# [7, 9, 11]
b = tf.constant([7, 9, 11], shape=[3])

# `a` * `b`
# [ 58, 139]
c = tf.linalg.matvec(a, b)

# 3-D tensor `a`
# [[[ 1,  2,  3],
#   [ 4,  5,  6]],
#  [[ 7,  8,  9],
#   [10, 11, 12]]]
a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])

# 2-D tensor `b`
# [[13, 14, 15],
#  [16, 17, 18]]
b = tf.constant(np.arange(13, 19, dtype=np.int32), shape=[2, 3])

# `a` * `b`
# [[ 86, 212],
#  [410, 563]]
c = tf.linalg.matvec(a, b)
```
| Args |
| `a` | `Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128` and rank > 1. |
| `b` | `Tensor` with same type as `a` and compatible dimensions. |
| `transpose_a` | If `True`, `a` is transposed before multiplication. |
| `adjoint_a` | If `True`, `a` is conjugated and transposed before multiplication. |
| `a_is_sparse` | If `True`, `a` is treated as a sparse matrix. |
| `b_is_sparse` | If `True`, `b` is treated as a sparse matrix. |
| `name` | Name for the operation (optional). |
| Returns |
| A `Tensor` of the same type as `a` and `b` where each inner-most vector is the product of the corresponding matrices in `a` and vectors in `b`, e.g. if all transpose or adjoint attributes are `False`: `output[..., i] = sum_k (a[..., i, k] * b[..., k])`, for all indices `i`. |
| `Note` | This is a matrix-vector product, not an element-wise product. |
| Raises |
| `ValueError` | If `transpose_a` and `adjoint_a` are both set to `True`. |
tensorflow tf.linalg.lu_solve tf.linalg.lu\_solve
===================
Solves systems of linear eqns `A X = RHS`, given LU factorizations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.lu_solve`](https://www.tensorflow.org/api_docs/python/tf/linalg/lu_solve)
```
tf.linalg.lu_solve(
lower_upper, perm, rhs, validate_args=False, name=None
)
```
>
> **Note:** this function does not verify that the implied matrix is actually invertible, and this condition is not checked even when `validate_args=True`.
>
| Args |
| `lower_upper` | `lu` as returned by [`tf.linalg.lu`](lu), i.e., if `matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`. |
| `perm` | `p` as returned by [`tf.linalg.lu`](lu), i.e., if `matmul(P, matmul(L, U)) = X` then `perm = argmax(P)`. |
| `rhs` | Matrix-shaped float `Tensor` representing targets for which to solve; `A X = RHS`. To handle vector cases, use: `lu_solve(..., rhs[..., tf.newaxis])[..., 0]`. |
| `validate_args` | Python `bool` indicating whether arguments should be checked for correctness. Note: this function does not verify the implied matrix is actually invertible, even when `validate_args=True`. Default value: `False` (i.e., don't validate arguments). |
| `name` | Python `str` name given to ops managed by this object. Default value: `None` (i.e., 'lu_solve'). |
| Returns |
| `x` | The `X` in `A @ X = RHS`. |
#### Examples
```
import tensorflow as tf

x = [[[1., 2],
      [3, 4]],
     [[7, 8],
      [3, 4]]]
inv_x = tf.linalg.lu_solve(*tf.linalg.lu(x), rhs=tf.eye(2))
tf.debugging.assert_near(tf.linalg.inv(x), inv_x)  # passes
```
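The LU factorization can also be reused for additional right-hand sides without refactorizing. A minimal sketch continuing the example above (the batched `rhs` shape is an illustrative assumption):
```
lu, perm = tf.linalg.lu(x)
rhs = tf.ones([2, 2, 1])  # one right-hand-side column per batch member
sol = tf.linalg.lu_solve(lu, perm, rhs)
tf.debugging.assert_near(tf.matmul(x, sol), rhs)  # A @ X == RHS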
tensorflow tf.linalg.LinearOperatorBlockLowerTriangular tf.linalg.LinearOperatorBlockLowerTriangular
============================================
Combines `LinearOperators` into a blockwise lower-triangular matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorBlockLowerTriangular)
```
tf.linalg.LinearOperatorBlockLowerTriangular(
operators,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorBlockLowerTriangular'
)
```
This operator is initialized with a nested list of linear operators, which are combined into a new `LinearOperator` whose underlying matrix representation is square and has each operator on or below the main diagonal, and zeros elsewhere. Each element of the outer list is a list of `LinearOperators` corresponding to a row-partition of the blockwise structure. The number of `LinearOperator`s in the `i`th row-partition (counting from one) must be equal to `i`.
For example, a blockwise `3 x 3` `LinearOperatorBlockLowerTriangular` is initialized with the list `[[op_00], [op_10, op_11], [op_20, op_21, op_22]]`, where the `op_ij`, `i < 3, j <= i`, are `LinearOperator` instances. The `LinearOperatorBlockLowerTriangular` behaves as the following blockwise matrix, where `0` represents appropriately-sized [batch] matrices of zeros:
```
[[op_00, 0, 0],
[op_10, op_11, 0],
[op_20, op_21, op_22]]
```
Each `op_jj` on the diagonal is required to represent a square matrix, and hence will have shape `batch_shape_j + [M_j, M_j]`. `LinearOperator`s in row `j` of the blockwise structure must have `range_dimension` equal to that of `op_jj`, and `LinearOperators` in column `j` must have `domain_dimension` equal to that of `op_jj`.
If each `op_jj` on the diagonal has shape `batch_shape_j + [M_j, M_j]`, then the combined operator has shape `broadcast_batch_shape + [sum M_j, sum M_j]`, where `broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`, `j = 0, 1, ..., J`, assuming the intermediate batch shapes broadcast. Even if the combined shape is well defined, the combined operator's methods may fail due to lack of broadcasting ability in the defining operators' methods.
For example, to create a 4 x 4 linear operator composed of three 2 x 2 operators:
```
>>> operator_0 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
>>> operator_1 = tf.linalg.LinearOperatorFullMatrix([[1., 0.], [0., 1.]])
>>> operator_2 = tf.linalg.LinearOperatorLowerTriangular([[5., 6.], [7., 8]])
>>> operator = tf.linalg.LinearOperatorBlockLowerTriangular(
... [[operator_0], [operator_1, operator_2]])
```
```
operator.to_dense()
<tf.Tensor: shape=(4, 4), dtype=float32, numpy=
array([[1., 2., 0., 0.],
[3., 4., 0., 0.],
[1., 0., 5., 0.],
[0., 1., 7., 8.]], dtype=float32)>
```
```
operator.shape
TensorShape([4, 4])
```
```
operator.log_abs_determinant()
<tf.Tensor: shape=(), dtype=float32, numpy=4.3820267>
```
```
x0 = [[1., 6.], [-3., 4.]]
x1 = [[0., 2.], [4., 0.]]
x = tf.concat([x0, x1], 0) # Shape [4, 2] Tensor
operator.matmul(x)
<tf.Tensor: shape=(4, 2), dtype=float32, numpy=
array([[-5., 14.],
[-9., 34.],
[ 1., 16.],
[29., 18.]], dtype=float32)>
```
The above `matmul` is equivalent to:
```
>>> tf.concat([operator_0.matmul(x0),
... operator_1.matmul(x0) + operator_2.matmul(x1)], axis=0)
<tf.Tensor: shape=(4, 2), dtype=float32, numpy=
array([[-5., 14.],
[-9., 34.],
[ 1., 16.],
[29., 18.]], dtype=float32)>
```
#### Shape compatibility
This operator acts on [batch] matrices with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [M, N], with b >= 0
x.shape = [B1,...,Bb] + [N, R], with R >= 0.
```
#### For example:
Create a [2, 3] batch of 4 x 4 linear operators:
```
>>> matrix_44 = tf.random.normal(shape=[2, 3, 4, 4])
>>> operator_44 = tf.linalg.LinearOperatorFullMatrix(matrix_44)
```
Create a [1, 3] batch of 5 x 4 linear operators:
```
>>> matrix_54 = tf.random.normal(shape=[1, 3, 5, 4])
>>> operator_54 = tf.linalg.LinearOperatorFullMatrix(matrix_54)
```
Create a [1, 3] batch of 5 x 5 linear operators:
```
>>> matrix_55 = tf.random.normal(shape=[1, 3, 5, 5])
>>> operator_55 = tf.linalg.LinearOperatorFullMatrix(matrix_55)
```
Combine to create a [2, 3] batch of 9 x 9 operators:
```
>>> operator_99 = tf.linalg.LinearOperatorBlockLowerTriangular(
... [[operator_44], [operator_54, operator_55]])
>>> operator_99.shape
TensorShape([2, 3, 9, 9])
```
Create a shape [2, 1, 9] batch of vectors and apply the operator to it.
```
>>> x = tf.random.normal(shape=[2, 1, 9])
>>> y = operator_99.matvec(x)
>>> y.shape
TensorShape([2, 3, 9])
```
Create a blockwise list of vectors and apply the operator to it. A blockwise list is returned.
```
>>> x4 = tf.random.normal(shape=[2, 1, 4])
>>> x5 = tf.random.normal(shape=[2, 3, 5])
>>> y_blockwise = operator_99.matvec([x4, x5])
>>> y_blockwise[0].shape
TensorShape([2, 3, 4])
>>> y_blockwise[1].shape
TensorShape([2, 3, 5])
```
#### Performance
Suppose `operator` is a `LinearOperatorBlockLowerTriangular` consisting of `D` row-partitions and `D` column-partitions, such that the total number of operators is `N = D * (D + 1) // 2`.
* `operator.matmul` has complexity equal to the sum of the `matmul` complexities of the individual operators.
* `operator.solve` has complexity equal to the sum of the `solve` complexities of the operators on the diagonal and the `matmul` complexities of the operators off the diagonal.
* `operator.determinant` has complexity equal to the sum of the `determinant` complexities of the operators on the diagonal.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `operators` | Iterable of iterables of `LinearOperator` objects, each with the same `dtype`. Each element of `operators` corresponds to a row-partition, in top-to-bottom order. The operators in each row-partition are filled in left-to-right. For example, `operators = [[op_0], [op_1, op_2], [op_3, op_4, op_5]]` creates a `LinearOperatorBlockLowerTriangular` with full block structure `[[op_0, 0, 0], [op_1, op_2, 0], [op_3, op_4, op_5]]`. The number of operators in the `i`th row (counting from one) must be equal to `i`, such that each operator falls on or below the diagonal of the blockwise structure. `LinearOperator`s that fall on the diagonal (the last elements of each row) must be square. The other `LinearOperator`s must have domain dimension equal to the domain dimension of the `LinearOperator`s in the same column-partition, and range dimension equal to the range dimension of the `LinearOperator`s in the same row-partition. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. This will raise a `ValueError` if set to `False`. |
| `name` | A name for this `LinearOperator`. |
| Raises |
| `TypeError` | If all operators do not have the same `dtype`. |
| `ValueError` | If `operators` is empty, contains an erroneous number of elements, or contains operators with incompatible shapes. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `operators` | |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_lower_triangular.py#L398-L460)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator`, `Tensor` with compatible shape and same `dtype` as `self`, or a blockwise iterable of `LinearOperator`s or `Tensor`s. See class docstring for definition of shape compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`, or if `x` is blockwise, a list of `Tensor`s with shapes that concatenate to `[..., M, R]`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_lower_triangular.py#L526-L575)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`, or an iterable of `Tensor`s. `Tensor`s are treated as [batch] vectors, meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_lower_triangular.py#L591-L764)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
Given the blockwise `n + 1`-by-`n + 1` linear operator:
```
op = [[A_00,    0, ...,    0, ..., 0],
      [A_10, A_11, ...,    0, ..., 0],
      ...,
      [A_k0, A_k1, ..., A_kk, ..., 0],
      ...,
      [A_n0, A_n1, ..., A_nk, ..., A_nn]]
```
we find `x = op.solve(y)` by observing that
`y_k = A_k0.matmul(x_0) + A_k1.matmul(x_1) + ... + A_kk.matmul(x_k)`
and therefore
`x_k = A_kk.solve(y_k - A_k0.matmul(x_0) - ... - A_k(k-1).matmul(x_(k-1)))`
where `x_k` and `y_k` are the `k`th blocks obtained by decomposing `x` and `y` along their appropriate axes.
We first solve `x_0 = A_00.solve(y_0)`. Proceeding inductively, we solve for `x_k`, `k = 1..n`, given `x_0..x_(k-1)`.
The adjoint case is solved similarly, beginning with `x_n = A_nn.solve(y_n, adjoint=True)` and proceeding backwards.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
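As a concrete check of the forward substitution described above, a minimal sketch with one 2 x 2 block per partition (the block values are illustrative assumptions):
```
import tensorflow as tf

op_00 = tf.linalg.LinearOperatorFullMatrix([[2., 0.], [0., 3.]],
                                           is_non_singular=True)
op_10 = tf.linalg.LinearOperatorFullMatrix([[1., 1.], [0., 1.]])
op_11 = tf.linalg.LinearOperatorLowerTriangular([[4., 0.], [2., 5.]],
                                                is_non_singular=True)
operator = tf.linalg.LinearOperatorBlockLowerTriangular(
    [[op_00], [op_10, op_11]])

rhs = tf.ones([4, 1])
x = operator.solve(rhs)  # solved blockwise, one row-partition at a time
tf.debugging.assert_near(operator.matmul(x), rhs)
```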
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape, or a list of `Tensor`s. `Tensor`s are treated like [batch] matrices, meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_lower_triangular.py#L766-L826)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator, or list of `Tensor`s (for blockwise operators). `Tensor`s are treated as [batch] vectors, meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.tridiagonal_matmul tf.linalg.tridiagonal\_matmul
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linalg_impl.py#L667-L745) |
Multiplies tridiagonal matrix by matrix.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.tridiagonal_matmul`](https://www.tensorflow.org/api_docs/python/tf/linalg/tridiagonal_matmul)
```
tf.linalg.tridiagonal_matmul(
diagonals, rhs, diagonals_format='compact', name=None
)
```
`diagonals` is a representation of a tridiagonal matrix; its layout depends on `diagonals_format`.
In `matrix` format, `diagonals` must be a tensor of shape `[..., M, M]`, with the two inner-most dimensions representing the square tridiagonal matrices. Elements outside of the three diagonals are ignored.
In `sequence` format, `diagonals` is a list or tuple of three tensors, `[superdiag, maindiag, subdiag]`, each of shape `[..., M]`. The last element of `superdiag` and the first element of `subdiag` are ignored.
In `compact` format, the three diagonals are brought together into one tensor of shape `[..., 3, M]`, with the last two dimensions containing the superdiagonals, diagonals, and subdiagonals, in that order. As in `sequence` format, elements `diagonals[..., 0, M-1]` and `diagonals[..., 2, 0]` are ignored.
The `sequence` format is recommended as the one with the best performance.
`rhs` is the matrix on the right-hand side of the multiplication. It has shape `[..., M, N]`.
#### Example:
```
superdiag = tf.constant([-1, -1, 0], dtype=tf.float64)
maindiag = tf.constant([2, 2, 2], dtype=tf.float64)
subdiag = tf.constant([0, -1, -1], dtype=tf.float64)
diagonals = [superdiag, maindiag, subdiag]
rhs = tf.constant([[1, 1], [1, 1], [1, 1]], dtype=tf.float64)
x = tf.linalg.tridiagonal_matmul(diagonals, rhs, diagonals_format='sequence')
```
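The same product can be computed in `compact` format by stacking the three diagonals along a new axis. A minimal sketch continuing the example above:
```
diagonals_compact = tf.stack([superdiag, maindiag, subdiag], axis=-2)
y = tf.linalg.tridiagonal_matmul(diagonals_compact, rhs,
                                 diagonals_format='compact')
tf.debugging.assert_near(x, y)  # agrees with the `sequence` result
```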
| Args |
| `diagonals` | A `Tensor` or tuple of `Tensor`s describing left-hand sides. The shape depends on `diagonals_format`; see the description above. Must be `float32`, `float64`, `complex64`, or `complex128`. |
| `rhs` | A `Tensor` of shape [..., M, N] and with the same dtype as `diagonals`. |
| `diagonals_format` | one of `matrix`, `sequence`, or `compact`. Default is `compact`. |
| `name` | A name to give this `Op` (optional). |
| Returns |
| A `Tensor` of shape [..., M, N] containing the result of multiplication. |
| Raises |
| `ValueError` | An unsupported type is provided as input, or when the input tensors have incorrect shapes. |
tensorflow tf.linalg.LinearOperatorHouseholder tf.linalg.LinearOperatorHouseholder
===================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_householder.py#L33-L271) |
`LinearOperator` acting like a [batch] of Householder transformations.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorHouseholder`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorHouseholder)
```
tf.linalg.LinearOperatorHouseholder(
reflection_axis,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorHouseholder'
)
```
This operator acts like a [batch] of Householder reflections with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is an `N x N` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
`LinearOperatorHouseholder` is initialized with a (batch) vector.
A Householder reflection is defined by a vector `v`: it reflects points in `R^n` about the hyperplane that passes through the origin and is orthogonal to `v`.
```
# Create a 2 x 2 householder transform.
vec = [1 / np.sqrt(2), 1. / np.sqrt(2)]
operator = LinearOperatorHouseholder(vec)
operator.to_dense()
==> [[0., -1.]
[-1., -0.]]
operator.shape
==> [2, 2]
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> Shape [2, 4] Tensor
```
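Because a Householder reflection is an involution (reflecting twice returns every point to where it started), `H @ H = I`. A minimal runnable sketch:
```
import numpy as np
import tensorflow as tf

vec = tf.constant([1. / np.sqrt(2.), 1. / np.sqrt(2.)])
operator = tf.linalg.LinearOperatorHouseholder(vec)
# Applying the reflection to its own dense matrix recovers the identity.
tf.debugging.assert_near(operator.matmul(operator.to_dense()), tf.eye(2))
```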
#### Shape compatibility
This operator acts on [batch] matrices with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, N], with b >= 0
x.shape = [C1,...,Cc] + [N, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
```
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `reflection_axis` | Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0`, `N >= 0`. The vector defining the hyperplane to reflect about. Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. This is autoset to `True`. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> This is autoset to `False`. |
| `is_square` | Expect that this operator acts like square [batch] matrices. This is autoset to `True`. |
| `name` | A name for this `LinearOperator`. |
| Raises |
| `ValueError` | If `is_self_adjoint` is not `True`, `is_positive_definite` is not `False`, or `is_square` is not `True`. |
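Since these hints are fixed for a reflection, contradicting them fails at construction time; a small illustration (values chosen here, not from the original docs):
```
import tensorflow as tf

# A Householder reflection is always self-adjoint, so overriding the
# autoset hint raises the ValueError documented above.
try:
  tf.linalg.LinearOperatorHouseholder([1., 0.], is_self_adjoint=False)
except ValueError as e:
  print(e)
```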
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `reflection_axis` | |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < 1 / (max{100, range_dimension, domain_dimension} * eps),
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite and self-adjoint, return `L`, where `A = L L^T`, i.e. the Cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
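For this class, `matvec` applies the reflection itself; a small illustration (values chosen here for clarity, not from the original docs):
```
import tensorflow as tf

# Reflect about the hyperplane orthogonal to the x-axis: H = [[-1, 0], [0, 1]].
operator = tf.linalg.LinearOperatorHouseholder([1., 0.])
y = operator.matvec(tf.constant([3., 4.]))
print(y)  # [-3., 4.]: only the component along the reflection axis flips sign.
```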
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
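A real Householder matrix is symmetric and orthogonal, hence its own inverse, so for this operator `solve` should agree with `matmul`; a minimal check under that assumption:
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorHouseholder([1., 0.])
rhs = tf.constant([[3.], [4.]])

# H @ H = I, so the solution of H X = rhs is simply H @ rhs.
tf.debugging.assert_near(operator.solve(rhs), operator.matmul(rhs))
```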
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.LinearOperatorTridiag tf.linalg.LinearOperatorTridiag
===============================
`LinearOperator` acting like a [batch] square tridiagonal matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorTridiag`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorTridiag)
```
tf.linalg.LinearOperatorTridiag(
diagonals,
diagonals_format='compact',
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorTridiag'
)
```
This operator acts like a [batch] square tridiagonal matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is an `N x N` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
#### Example usage:
Create a 3 x 3 tridiagonal linear operator.
```
superdiag = [3., 4., 5.]
diag = [1., -1., 2.]
subdiag = [6., 7., 8.]
operator = tf.linalg.LinearOperatorTridiag(
[superdiag, diag, subdiag],
diagonals_format='sequence')
operator.to_dense()
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[ 1., 3., 0.],
[ 7., -1., 4.],
[ 0., 8., 2.]], dtype=float32)>
operator.shape
TensorShape([3, 3])
```
Scalar Tensor output.
```
operator.log_abs_determinant()
<tf.Tensor: shape=(), dtype=float32, numpy=4.3307333>
```
Create a [2, 3] batch of 4 x 4 linear operators.
```
diagonals = tf.random.normal(shape=[2, 3, 3, 4])
operator = tf.linalg.LinearOperatorTridiag(
diagonals,
diagonals_format='compact')
```
Create a shape [2, 1, 4, 2] batch of right-hand sides. Note that this shape is compatible, since the batch dimensions, [2, 1], broadcast with `operator.batch_shape = [2, 3]`.
```
y = tf.random.normal(shape=[2, 1, 4, 2])
x = operator.solve(y)
x
<tf.Tensor: shape=(2, 3, 4, 2), dtype=float32, numpy=...>
```
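A consistency check of the snippet above (not in the original docs): applying the operator to the solution should recover `y`, broadcast to the full batch shape. A looser tolerance may be needed for ill-conditioned random draws.
```
import tensorflow as tf

diagonals = tf.random.normal(shape=[2, 3, 3, 4])
operator = tf.linalg.LinearOperatorTridiag(diagonals, diagonals_format='compact')
y = tf.random.normal(shape=[2, 1, 4, 2])
x = operator.solve(y)

# Applying the operator to the solution recovers y, broadcast to the
# full batch shape [2, 3].
tf.debugging.assert_near(
    operator.matmul(x), tf.broadcast_to(y, [2, 3, 4, 2]), atol=1e-3)
```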
#### Shape compatibility
This operator acts on a [batch] matrix with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, N], with b >= 0
x.shape = [C1,...,Cc] + [N, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb].
```
#### Performance
Suppose `operator` is a `LinearOperatorTridiag` of shape `[N, N]`, and `x.shape = [N, R]`. Then
* `operator.matmul(x)` will take O(N \* R) time.
* `operator.solve(x)` will take O(N \* R) time.
If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `diagonals` | `Tensor` or list of `Tensor`s, depending on `diagonals_format`. If `diagonals_format=sequence`, this is a list of three `Tensor`s, each with shape `[B1, ..., Bb, N]`, `b >= 0`, `N >= 0`, representing the superdiagonal, diagonal and subdiagonal in that order. Note the superdiagonal is padded with an element in the last position, and the subdiagonal is padded with an element in the front. If `diagonals_format=matrix`, this is a `[B1, ..., Bb, N, N]` shaped `Tensor` representing the full tridiagonal matrix. If `diagonals_format=compact`, this is a `[B1, ..., Bb, 3, N]` shaped `Tensor` with the second-to-last dimension indexing the superdiagonal, diagonal and subdiagonal in that order. Note the superdiagonal is padded with an element in the last position, and the subdiagonal is padded with an element in the front. In every case, these `Tensor`s must have a floating-point dtype. |
| `diagonals_format` | one of `matrix`, `sequence`, or `compact`. Default is `compact`. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. If `diag.dtype` is real, this is auto-set to `True`. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. |
| Raises |
| `TypeError` | If `diag.dtype` is not an allowed type. |
| `ValueError` | If `diag.dtype` is real, and `is_self_adjoint` is not `True`. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `diagonals` | |
| `diagonals_format` | |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < 1 / (max{100, range_dimension, domain_dimension} * eps),
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite and self-adjoint, return `L`, where `A = L L^T`, i.e. the Cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.LinearOperatorCirculant3D tf.linalg.LinearOperatorCirculant3D
===================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L968-L1124) |
`LinearOperator` acting like a nested block circulant matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorCirculant3D`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorCirculant3D)
```
tf.linalg.LinearOperatorCirculant3D(
spectrum,
input_output_dtype=tf.dtypes.complex64,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=True,
name='LinearOperatorCirculant3D'
)
```
This operator acts like a block circulant matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is an `N x N` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
#### Description in terms of block circulant matrices
If `A` is nested block circulant, with block sizes `N0, N1, N2` (`N0 * N1 * N2 = N`): `A` has a block structure, composed of `N0 x N0` blocks, with each block an `N1 x N1` block circulant matrix.
For example, with `W`, `X`, `Y`, `Z` each block circulant,
```
A = |W Z Y X|
|X W Z Y|
|Y X W Z|
|Z Y X W|
```
Note that `A` itself will not in general be circulant.
#### Description in terms of the frequency spectrum
There is an equivalent description in terms of the [batch] spectrum `H` and Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch dimensions.
If `H.shape = [N0, N1, N2]`, (`N0 * N1 * N2 = N`): Loosely speaking, matrix multiplication is equal to the action of a Fourier multiplier: `A u = IDFT3[ H DFT3[u] ]`. Precisely speaking, given `[N, R]` matrix `u`, let `DFT3[u]` be the `[N0, N1, N2, R]` `Tensor` defined by re-shaping `u` to `[N0, N1, N2, R]` and taking a three dimensional DFT across the first three dimensions. Let `IDFT3` be the inverse of `DFT3`. Matrix multiplication may be expressed columnwise:
`(A u)_r = IDFT3[ H * (DFT3[u])_r ]`
#### Operator properties deduced from the spectrum.
* This operator is positive definite if and only if `Real{H} > 0`.
A general property of Fourier transforms is the correspondence between Hermitian functions and real-valued transforms.
Suppose `H.shape = [B1,...,Bb, N0, N1, N2]`. We say that `H` is a Hermitian spectrum if, with `%` meaning modulus division,
```
H[..., n0 % N0, n1 % N1, n2 % N2]
= ComplexConjugate[ H[..., (-n0) % N0, (-n1) % N1, (-n2) % N2] ].
```
* This operator corresponds to a real matrix if and only if `H` is Hermitian.
* This operator is self-adjoint if and only if `H` is real.
See e.g. "Discrete-Time Signal Processing", Oppenheim and Schafer.
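The Hermitian-spectrum condition above can be checked directly in numpy; a small sketch using an arbitrary real-valued kernel (whose DFT is Hermitian by construction):
```
import numpy as np

N0, N1, N2 = 2, 3, 4
H = np.fft.fftn(np.random.rand(N0, N1, N2))  # DFT of a real kernel

n0, n1, n2 = np.meshgrid(
    np.arange(N0), np.arange(N1), np.arange(N2), indexing='ij')
is_hermitian = np.allclose(
    H[n0, n1, n2],
    np.conj(H[(-n0) % N0, (-n1) % N1, (-n2) % N2]))
print(is_hermitian)  # True
```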
### Examples
See `LinearOperatorCirculant` and `LinearOperatorCirculant2D` for examples.
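In addition, the Fourier-multiplier description above can be verified directly; a minimal sketch with illustrative sizes (`N0 = N1 = N2 = 2`, so `N = 8`):
```
import numpy as np
import tensorflow as tf

spectrum = tf.cast(tf.random.uniform([2, 2, 2]) + 1., tf.complex64)
operator = tf.linalg.LinearOperatorCirculant3D(spectrum)

u = tf.cast(tf.random.normal([8, 1]), tf.complex64)
y = operator.matmul(u)

# Same product via (A u)_r = IDFT3[ H * (DFT3[u])_r ].
u_blocks = tf.reshape(u[:, 0], [2, 2, 2])
expected = tf.reshape(
    tf.signal.ifft3d(spectrum * tf.signal.fft3d(u_blocks)), [8, 1])
np.testing.assert_allclose(y.numpy(), expected.numpy(), atol=1e-4)
```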
#### Performance
Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`, and `x.shape = [N, R]`. Then
* `operator.matmul(x)` is `O(R*N*Log[N])`
* `operator.solve(x)` is `O(R*N*Log[N])`
* `operator.determinant()` involves a size `N` `reduce_prod`.
If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `spectrum` | Shape `[B1,...,Bb, N0, N1, N2]` `Tensor` (`N0 * N1 * N2 = N`). Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. Type can be different from `input_output_dtype`. |
| `input_output_dtype` | `dtype` for input/output. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. If `spectrum` is real, this will always be true. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the real part of all eigenvalues is positive. We do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name to prepend to all ops created by this class. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `block_depth` | Depth of recursively defined circulant blocks defining this `Operator`. With `A` the dense representation of this `Operator`, `block_depth = 1` means `A` is symmetric circulant. For example,
```
A = |w z y x|
|x w z y|
|y x w z|
|z y x w|
```
`block_depth = 2` means `A` is block symmetric circulant with symmetric circulant blocks. For example, with `W`, `X`, `Y`, `Z` symmetric circulant,
```
A = |W Z Y X|
|X W Z Y|
|Y X W Z|
|Z Y X W|
```
`block_depth = 3` means `A` is block symmetric circulant with block symmetric circulant blocks. |
| `block_shape` | |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `spectrum` | |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_hermitian_spectrum`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L335-L355)
```
assert_hermitian_spectrum(
name='assert_hermitian_spectrum'
)
```
Returns an `Op` that asserts this operator has Hermitian spectrum.
This operator corresponds to a real-valued matrix if and only if its spectrum is Hermitian.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Op` that asserts this operator has Hermitian spectrum. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < 1 / (max{100, range_dimension, domain_dimension} * eps),
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `block_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L175-L179)
```
block_shape_tensor()
```
Shape of the block dimensions of `self.spectrum`.
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite and self-adjoint, return `L`, where `A = L L^T`, i.e. the Cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `convolution_kernel`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L290-L304)
```
convolution_kernel(
name='convolution_kernel'
)
```
Convolution kernel corresponding to `self.spectrum`.
The `D` dimensional DFT of this kernel is the frequency domain spectrum of this operator.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| `Tensor` with `dtype` `self.dtype`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
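A runnable sketch (values illustrative):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
x = tf.constant([1., 1.])
operator.matvec(x)  # ==> [3., 7.], each row of A summed against x
```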
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
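A runnable sketch (diagonal values chosen so the solution is easy to verify):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorFullMatrix(
    [[2., 0.], [0., 4.]], is_non_singular=True)
rhs = tf.constant([[2.], [8.]])
x = operator.solve(rhs)  # ==> [[1.], [2.]]
# Round trip: A X should recover rhs.
tf.debugging.assert_near(operator.matmul(x), rhs)
```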
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
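A runnable sketch (values illustrative):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
operator.trace()  # ==> 5., the sum of the diagonal entries 1. and 4.
```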
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.LinearOperatorAdjoint tf.linalg.LinearOperatorAdjoint
===============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_adjoint.py#L30-L231) |
`LinearOperator` representing the adjoint of another operator.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorAdjoint`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorAdjoint)
```
tf.linalg.LinearOperatorAdjoint(
operator,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name=None
)
```
This operator represents the adjoint of another operator.
```
# Create a 2 x 2 linear operator.
operator = LinearOperatorFullMatrix([[1. - 1.j, 3.], [0., 1. + 1.j]])
operator_adjoint = LinearOperatorAdjoint(operator)
operator_adjoint.to_dense()
==> [[1. + 1.j, 0.]
[3., 1. - 1.j]]
operator_adjoint.shape
==> [2, 2]
operator_adjoint.log_abs_determinant()
==> log(2)
x = ... Shape [2, 4] Tensor
operator_adjoint.matmul(x)
==> Shape [2, 4] Tensor, equal to operator.matmul(x, adjoint=True)
```
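A runnable version of the sketch above, assuming `LinearOperatorFullMatrix` with `complex64` entries (the values are illustrative):
```
import tensorflow as tf

matrix = tf.constant([[1 - 1j, 3], [0, 1 + 1j]], dtype=tf.complex64)
operator = tf.linalg.LinearOperatorFullMatrix(matrix)
operator_adjoint = tf.linalg.LinearOperatorAdjoint(operator)

operator_adjoint.to_dense()
# ==> [[1.+1.j, 0.+0.j], [3.+0.j, 1.-1.j]]

x = tf.ones([2, 4], dtype=tf.complex64)
# matmul on the adjoint agrees with adjoint=True on the base operator.
tf.debugging.assert_near(
    tf.abs(operator_adjoint.matmul(x) - operator.matmul(x, adjoint=True)),
    tf.zeros([2, 4]))
```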
#### Performance
The performance of `LinearOperatorAdjoint` depends on the performance of the underlying operator.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `operator` | `LinearOperator` object. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. Default is `operator.name + "_adjoint"`. |
| Raises |
| `ValueError` | If `operator.is_non_singular` is False. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on whether this operator is square. |
| `operator` | The operator before taking the adjoint. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
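A runnable sketch (a positive-definite matrix chosen for illustration); both hints are required for `cholesky` to be allowed:
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorFullMatrix(
    [[4., 2.], [2., 5.]], is_self_adjoint=True, is_positive_definite=True)
chol = operator.cholesky()  # a lower-triangular LinearOperator
l = chol.to_dense()         # ==> [[2., 0.], [1., 2.]]
# Check the factorization: L L^H should recover A.
tf.debugging.assert_near(tf.matmul(l, l, adjoint_b=True), operator.to_dense())
```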
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.cross tf.linalg.cross
===============
Compute the pairwise cross product.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.cross`](https://www.tensorflow.org/api_docs/python/tf/linalg/cross), [`tf.compat.v1.linalg.cross`](https://www.tensorflow.org/api_docs/python/tf/linalg/cross)
```
tf.linalg.cross(
a, b, name=None
)
```
`a` and `b` must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.
| Args |
| `a` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. A tensor containing 3-element vectors. |
| `b` | A `Tensor`. Must have the same type as `a`. Another tensor, of same type and shape as `a`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `a`. |
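A short worked example (values illustrative):
```
import tensorflow as tf

a = tf.constant([1., 0., 0.])
b = tf.constant([0., 1., 0.])
tf.linalg.cross(a, b)  # ==> [0., 0., 1.]

# Batched: each innermost 3-element pair is crossed independently.
u = tf.constant([[1., 0., 0.], [0., 1., 0.]])
v = tf.constant([[0., 1., 0.], [0., 0., 1.]])
tf.linalg.cross(u, v)  # ==> [[0., 0., 1.], [1., 0., 0.]]
```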
tensorflow tf.linalg.lu_matrix_inverse tf.linalg.lu\_matrix\_inverse
=============================
Computes the inverse given the LU decomposition(s) of one or more matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.lu_matrix_inverse`](https://www.tensorflow.org/api_docs/python/tf/linalg/lu_matrix_inverse)
```
tf.linalg.lu_matrix_inverse(
lower_upper, perm, validate_args=False, name=None
)
```
This op is conceptually identical to,
```
inv_X = tf.linalg.lu_matrix_inverse(*tf.linalg.lu(X))
tf.debugging.assert_near(tf.linalg.inv(X), inv_X)
# ==> True
```
>
> **Note:** this function does not verify the implied matrix is actually invertible nor is this condition checked even when `validate_args=True`.
>
| Args |
| `lower_upper` | `lu` as returned by [`tf.linalg.lu`](lu), i.e., if `matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`. |
| `perm` | `p` as returned by [`tf.linalg.lu`](lu), i.e., if `matmul(P, matmul(L, U)) = X` then `perm = argmax(P)`. |
| `validate_args` | Python `bool` indicating whether arguments should be checked for correctness. Note: this function does not verify the implied matrix is actually invertible, even when `validate_args=True`. Default value: `False` (i.e., don't validate arguments). |
| `name` | Python `str` name given to ops managed by this object. Default value: `None` (i.e., 'lu\_matrix\_inverse'). |
| Returns |
| `inv_x` | The matrix inverse, i.e., `tf.linalg.inv(tf.linalg.lu_reconstruct(lower_upper, perm))`. |
#### Examples
```
import tensorflow as tf

x = [[[3., 4], [1, 2]],
     [[7., 8], [3, 4]]]
inv_x = tf.linalg.lu_matrix_inverse(*tf.linalg.lu(x))
tf.debugging.assert_near(tf.linalg.inv(x), inv_x)
# ==> True
```
tensorflow tf.linalg.triangular_solve tf.linalg.triangular\_solve
===========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg_ops.py#L80-L140) |
Solve systems of linear equations with upper or lower triangular matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.triangular_solve`](https://www.tensorflow.org/api_docs/python/tf/linalg/triangular_solve), [`tf.compat.v1.matrix_triangular_solve`](https://www.tensorflow.org/api_docs/python/tf/linalg/triangular_solve)
```
tf.linalg.triangular_solve(
matrix, rhs, lower=True, adjoint=False, name=None
)
```
`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. If `lower` is `True` then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If `lower` is `False` then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. `rhs` is a tensor of shape `[..., M, N]`.
The output is a tensor of shape `[..., M, N]`. If `adjoint` is `False` then the innermost matrices in output satisfy matrix equations `sum_k matrix[..., i, k] * output[..., k, j] = rhs[..., i, j]`. If `adjoint` is `True` then the innermost matrices in output satisfy matrix equations `sum_k adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.
#### Example:
```
a = tf.constant([[3, 0, 0, 0],
[2, 1, 0, 0],
[1, 0, 1, 0],
[1, 1, 1, 1]], dtype=tf.float32)
```
```
b = tf.constant([[4], [2], [4], [2]], dtype=tf.float32)
x = tf.linalg.triangular_solve(a, b, lower=True)
x
<tf.Tensor: shape=(4, 1), dtype=float32, numpy=
array([[ 1.3333334 ],
[-0.66666675],
[ 2.6666665 ],
[-1.3333331 ]], dtype=float32)>
tf.matmul(a, x)
<tf.Tensor: shape=(4, 1), dtype=float32, numpy=
array([[4.],
[2.],
[4.],
[2.]], dtype=float32)>
```
| Args |
| `matrix` | A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`. |
| `rhs` | A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M, N]`. |
| `lower` | An optional `bool`. Defaults to `True`. Boolean indicating whether the innermost matrices in matrix are lower or upper triangular. |
| `adjoint` | An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with matrix or its (block-wise) adjoint. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as matrix, and shape is `[..., M, N]`. |
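As a small check of the `adjoint` flag (values illustrative), the solve targets the adjoint system without materializing the transpose:
```
import tensorflow as tf

a = tf.constant([[3., 0.], [2., 1.]])  # lower triangular
b = tf.constant([[4.], [2.]])
x = tf.linalg.triangular_solve(a, b, lower=True, adjoint=True)
# x solves a^T x = b, i.e. [[3., 2.], [0., 1.]] @ x = b.
tf.debugging.assert_near(tf.matmul(a, x, adjoint_a=True), b)
```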
tensorflow tf.linalg.det tf.linalg.det
=============
Computes the determinant of one or more square matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.det`](https://www.tensorflow.org/api_docs/python/tf/linalg/det), [`tf.compat.v1.matrix_determinant`](https://www.tensorflow.org/api_docs/python/tf/linalg/det)
```
tf.linalg.det(
input, name=None
)
```
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices `[..., :, :]`.
| Args |
| `input` | A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. Shape is `[..., M, M]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
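A short worked example (values illustrative):
```
import tensorflow as tf

x = tf.constant([[[1., 2.], [3., 4.]],
                 [[2., 0.], [0., 2.]]])
tf.linalg.det(x)  # ==> [-2., 4.], one determinant per batch member
```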
tensorflow tf.linalg.LinearOperatorLowerTriangular tf.linalg.LinearOperatorLowerTriangular
=======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_lower_triangular.py#L32-L216) |
`LinearOperator` acting like a [batch] square lower triangular matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorLowerTriangular`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorLowerTriangular)
```
tf.linalg.LinearOperatorLowerTriangular(
tril,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorLowerTriangular'
)
```
This operator acts like a [batch] lower triangular matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is an `N x N` matrix.
`LinearOperatorLowerTriangular` is initialized with a `Tensor` having dimensions `[B1,...,Bb, N, N]`. The upper triangle of the last two dimensions is ignored.
```
# Create a 2 x 2 lower-triangular linear operator.
tril = [[1., 2.], [3., 4.]]
operator = LinearOperatorLowerTriangular(tril)
# The upper triangle is ignored.
operator.to_dense()
==> [[1., 0.]
[3., 4.]]
operator.shape
==> [2, 2]
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> Shape [2, 4] Tensor
# Create a [2, 3] batch of 4 x 4 linear operators.
tril = tf.random.normal(shape=[2, 3, 4, 4])
operator = LinearOperatorLowerTriangular(tril)
```
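A runnable version of the 2 x 2 case above (values illustrative):
```
import tensorflow as tf

tril = tf.constant([[1., 2.], [3., 4.]])
operator = tf.linalg.LinearOperatorLowerTriangular(tril)
operator.to_dense()             # ==> [[1., 0.], [3., 4.]]; upper triangle ignored
operator.log_abs_determinant()  # ==> log(4.), since det = 1. * 4.

rhs = tf.constant([[1.], [7.]])
x = operator.solve(rhs)         # back-substitution gives [[1.], [1.]]
tf.debugging.assert_near(operator.matmul(x), rhs)
```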
#### Shape compatibility
This operator acts on [batch] matrices with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, N], with b >= 0
x.shape = [B1,...,Bb] + [N, R], with R >= 0.
```
#### Performance
Suppose `operator` is a `LinearOperatorLowerTriangular` of shape `[N, N]`, and `x.shape = [N, R]`. Then
* `operator.matmul(x)` involves `N^2 * R` multiplications.
* `operator.solve(x)` involves `N * R` size `N` back-substitutions.
* `operator.determinant()` involves a size `N` `reduce_prod`.
If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `tril` | Shape `[B1,...,Bb, N, N]` with `b >= 0`, `N >= 0`. The lower triangular part of `tril` defines this operator. The strictly upper triangle is ignored. |
| `is_non_singular` | Expect that this operator is non-singular. This operator is non-singular if and only if its diagonal elements are all non-zero. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. This operator is self-adjoint only if it is diagonal with real-valued diagonal entries. In this case it is advised to use `LinearOperatorDiag`. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. |
| Raises |
| `ValueError` | If `is_square` is `False`. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on whether this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
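A minimal sketch of the `non_singular` hint in action (values illustrative):
```
import tensorflow as tf

tril = tf.constant([[2., 0.], [1., 1.]])
operator = tf.linalg.LinearOperatorLowerTriangular(tril, is_non_singular=True)
inv = operator.inverse()  # represented lazily; no dense inverse is formed here
# A^-1 A should be the identity.
tf.debugging.assert_near(inv.matmul(operator.to_dense()), tf.eye(2))
```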
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.LinearOperatorZeros tf.linalg.LinearOperatorZeros
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_zeros.py#L40-L479) |
`LinearOperator` acting like a [batch] zero matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorZeros`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorZeros)
```
tf.linalg.LinearOperatorZeros(
num_rows,
num_columns=None,
batch_shape=None,
dtype=None,
is_non_singular=False,
is_self_adjoint=True,
is_positive_definite=False,
is_square=True,
assert_proper_shapes=False,
name='LinearOperatorZeros'
)
```
This operator acts like a [batch] zero matrix `A` with shape `[B1,...,Bb, N, M]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is an `N x M` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
`LinearOperatorZeros` is initialized with `num_rows`, and optionally `num_columns`, `batch_shape`, and `dtype` arguments. If `num_columns` is `None`, then this operator will be initialized as a square matrix. If `batch_shape` is `None`, this operator efficiently passes through all arguments. If `batch_shape` is provided, broadcasting may occur, which will require making copies.
```
# Create a 2 x 2 zero matrix.
operator = LinearOperatorZeros(num_rows=2, dtype=tf.float32)
operator.to_dense()
==> [[0., 0.]
[0., 0.]]
operator.shape
==> [2, 2]
operator.determinant()
==> 0.
x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> Shape [2, 4] Tensor, same as x.
# Create a 2-batch of 2x2 zero matrices
operator = LinearOperatorZeros(num_rows=2, batch_shape=[2])
operator.to_dense()
==> [[[0., 0.]
[0., 0.]],
[[0., 0.]
[0., 0.]]]
# Here, even though the operator has a batch shape, the input is the same as
# the output, so x can be passed through without a copy. The operator is able
# to detect that no broadcast is necessary because both x and the operator
# have statically defined shape.
x = ... Shape [2, 2, 3]
operator.matmul(x)
==> Shape [2, 2, 3] Tensor, same as tf.zeros_like(x)
# Here the operator and x have different batch_shape, and are broadcast.
# This requires a copy, since the output is different size than the input.
x = ... Shape [1, 2, 3]
operator.matmul(x)
==> Shape [2, 2, 3] Tensor, equal to tf.zeros_like([x, x])
```
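A runnable version of the square case above:
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorZeros(num_rows=2, dtype=tf.float32)
operator.to_dense()     # ==> [[0., 0.], [0., 0.]]
operator.determinant()  # ==> 0.

x = tf.ones([2, 3])
operator.matmul(x)      # ==> zeros with shape [2, 3]
```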
### Shape compatibility
This operator acts on [batch] matrices with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, M], with b >= 0
x.shape = [C1,...,Cc] + [M, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
```
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `num_rows` | Scalar non-negative integer `Tensor`. Number of rows in the corresponding zero matrix. |
| `num_columns` | Scalar non-negative integer `Tensor`. Number of columns in the corresponding zero matrix. If `None`, defaults to the value of `num_rows`. |
| `batch_shape` | Optional `1-D` integer `Tensor`. The shape of the leading dimensions. If `None`, this operator has no leading dimensions. |
| `dtype` | Data type of the matrix that this operator represents. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `assert_proper_shapes` | Python `bool`. If `False`, only perform static checks that initialization and method arguments have proper shape. If `True`, and static checks are inconclusive, add asserts to the graph. |
| `name` | A name for this `LinearOperator` |
| Raises |
| `ValueError` | If `num_rows` is determined statically to be non-scalar, or negative. |
| `ValueError` | If `num_columns` is determined statically to be non-scalar, or negative. |
| `ValueError` | If `batch_shape` is determined statically to not be 1-D, or negative. |
| `ValueError` | If any of the following is not `True`: `{is_self_adjoint, is_non_singular, is_positive_definite}`. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_zeros.py#L349-L359)
```
add_to_tensor(
mat, name='add_to_tensor'
)
```
Add matrix represented by this operator to `mat`. Since this operator represents the zero matrix, this is equivalent to `mat`.
| Args |
| `mat` | `Tensor` with same `dtype` and shape broadcastable to `self`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
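A minimal usage sketch (assuming eager execution; the values are hypothetical): since every entry of this operator is zero, the result is just `mat`, possibly broadcast to the operator's batch shape.
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorZeros(num_rows=2, dtype=tf.float32)
mat = tf.constant([[1., 2.], [3., 4.]])
# Adding the zero matrix leaves `mat` unchanged.
operator.add_to_tensor(mat)
# ==> [[1., 2.], [3., 4.]]
```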
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e., the Cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.tensor_diag_part tf.linalg.tensor\_diag\_part
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L2770-L2812)
Returns the diagonal part of the tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.diag_part`](https://www.tensorflow.org/api_docs/python/tf/linalg/tensor_diag_part), [`tf.compat.v1.linalg.tensor_diag_part`](https://www.tensorflow.org/api_docs/python/tf/linalg/tensor_diag_part)
```
tf.linalg.tensor_diag_part(
input, name=None
)
```
This operation returns a tensor with the `diagonal` part of the `input`. The `diagonal` part is computed as follows:
Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a tensor of rank `k` with dimensions `[D1,..., Dk]` where:
`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.
For a rank 2 tensor, [`linalg.diag_part`](diag_part) and [`linalg.tensor_diag_part`](tensor_diag_part) produce the same result. For rank 3 and higher, [`linalg.diag_part`](diag_part) extracts the diagonal of each inner-most matrix in the tensor. An example where they differ is given below.
```
x = [[[[1111,1112],[1121,1122]],
[[1211,1212],[1221,1222]]],
[[[2111, 2112], [2121, 2122]],
[[2211, 2212], [2221, 2222]]]
]
tf.linalg.tensor_diag_part(x)
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[1111, 1212],
[2121, 2222]], dtype=int32)>
tf.linalg.diag_part(x).shape
TensorShape([2, 2, 2])
```
| Args |
| `input` | A `Tensor` with rank `2k`. |
| `name` | A name for the operation (optional). |
| Returns |
| A Tensor containing diagonals of `input`. Has the same type as `input`, and rank `k`. |
tensorflow tf.linalg.eigh tf.linalg.eigh
==============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg_ops.py#L436-L457)
Computes the eigen decomposition of a batch of self-adjoint matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.eigh`](https://www.tensorflow.org/api_docs/python/tf/linalg/eigh), [`tf.compat.v1.self_adjoint_eig`](https://www.tensorflow.org/api_docs/python/tf/linalg/eigh)
```
tf.linalg.eigh(
tensor, name=None
)
```
Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in `tensor` such that `tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1.
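For illustration, a minimal self-adjoint example (assuming eager execution; eigenvector signs may differ by convention):
```
import tensorflow as tf

x = tf.constant([[2.0, 1.0],
                 [1.0, 2.0]])
e, v = tf.linalg.eigh(x)
# e ==> [1., 3.]  (eigenvalues, in non-decreasing order)
# v ==> columns are the corresponding eigenvectors,
#       here [1, -1]/sqrt(2) and [1, 1]/sqrt(2), up to sign.
```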
| Args |
| `tensor` | `Tensor` of shape `[..., N, N]`. Only the lower triangular part of each inner-most matrix is referenced. |
| `name` | string, optional name of the operation. |
| Returns |
| `e` | Eigenvalues. Shape is `[..., N]`. Sorted in non-decreasing order. |
| `v` | Eigenvectors. Shape is `[..., N, N]`. The columns of the inner most matrices contain eigenvectors of the corresponding matrices in `tensor` |
tensorflow tf.linalg.global_norm tf.linalg.global\_norm
======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/clip_ops.py#L236-L285)
Computes the global norm of multiple tensors.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.global_norm`](https://www.tensorflow.org/api_docs/python/tf/linalg/global_norm), [`tf.compat.v1.linalg.global_norm`](https://www.tensorflow.org/api_docs/python/tf/linalg/global_norm)
```
tf.linalg.global_norm(
t_list, name=None
)
```
Given a tuple or list of tensors `t_list`, this operation returns the global norm of the elements in all tensors in `t_list`. The global norm is computed as:
`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`
Any entries in `t_list` that are of type None are ignored.
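A small worked example of the formula above (assuming eager execution):
```
import tensorflow as tf

t_list = [tf.constant([3.0, 4.0]), tf.constant([12.0])]
# sqrt(3**2 + 4**2 + 12**2) = sqrt(169) = 13
tf.linalg.global_norm(t_list)
# ==> 13.0
```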
| Args |
| `t_list` | A tuple or list of mixed `Tensors`, `IndexedSlices`, or None. |
| `name` | A name for the operation (optional). |
| Returns |
| A 0-D (scalar) `Tensor` of type `float`. |
| Raises |
| `TypeError` | If `t_list` is not a sequence. |
tensorflow tf.linalg.cholesky_solve tf.linalg.cholesky\_solve
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg_ops.py#L143-L189)
Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.cholesky_solve`](https://www.tensorflow.org/api_docs/python/tf/linalg/cholesky_solve), [`tf.compat.v1.linalg.cholesky_solve`](https://www.tensorflow.org/api_docs/python/tf/linalg/cholesky_solve)
```
tf.linalg.cholesky_solve(
chol, rhs, name=None
)
```
Specifically, returns `X` from `A X = RHS`, where `A = L L^T`, `L` is the `chol` arg and `RHS` is the `rhs` arg.
```
# Solve 10 separate 2x2 linear systems:
A = ... # shape 10 x 2 x 2
RHS = ... # shape 10 x 2 x 1
chol = tf.linalg.cholesky(A) # shape 10 x 2 x 2
X = tf.linalg.cholesky_solve(chol, RHS) # shape 10 x 2 x 1
# tf.matmul(A, X) ~ RHS
X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]
# Solve five linear systems (K = 5) for every member of the length 10 batch.
A = ... # shape 10 x 2 x 2
RHS = ... # shape 10 x 2 x 5
...
X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
```
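A fully worked variant of the first system, with hypothetical values chosen so the answer is easy to verify by hand (assuming eager execution):
```
import tensorflow as tf

A = tf.constant([[4.0, 0.0],
                 [0.0, 9.0]])
RHS = tf.constant([[8.0],
                   [27.0]])
chol = tf.linalg.cholesky(A)             # ==> [[2., 0.], [0., 3.]]
X = tf.linalg.cholesky_solve(chol, RHS)
# X ==> [[2.], [3.]], since 4 * 2 = 8 and 9 * 3 = 27
```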
| Args |
| `chol` | A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`. Cholesky factorization of `A`, e.g. `chol = tf.linalg.cholesky(A)`. For that reason, only the lower triangular parts (including the diagonal) of the last two dimensions of `chol` are used. The strictly upper part is assumed to be zero and not accessed. |
| `rhs` | A `Tensor`, same type as `chol`, shape is `[..., M, K]`. |
| `name` | A name to give this `Op`. Defaults to `cholesky_solve`. |
| Returns |
| Solution to `A x = rhs`, shape `[..., M, K]`. |
tensorflow tf.linalg.sqrtm tf.linalg.sqrtm
===============
Computes the matrix square root of one or more square matrices:
#### View aliases
**Main aliases**
[`tf.matrix_square_root`](https://www.tensorflow.org/api_docs/python/tf/linalg/sqrtm)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.sqrtm`](https://www.tensorflow.org/api_docs/python/tf/linalg/sqrtm), [`tf.compat.v1.matrix_square_root`](https://www.tensorflow.org/api_docs/python/tf/linalg/sqrtm)
```
tf.linalg.sqrtm(
input, name=None
)
```
```
matmul(sqrtm(A), sqrtm(A)) = A
```
The input matrix should be invertible. If the input matrix is real, it should have no eigenvalues which are real and negative (pairs of complex conjugate eigenvalues are allowed).
The matrix square root is computed by first reducing the matrix to quasi-triangular form with the real Schur decomposition. The square root of the quasi-triangular matrix is then computed directly. Details of the algorithm can be found in: Nicholas J. Higham, "Computing real square roots of a real matrix", Linear Algebra Appl., 1987.
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the matrix square root for all input submatrices `[..., :, :]`.
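A minimal diagonal example (assuming eager execution), where the square root is easy to verify:
```
import tensorflow as tf

a = tf.constant([[4.0, 0.0],
                 [0.0, 9.0]])
x = tf.linalg.sqrtm(a)
# x ==> [[2., 0.], [0., 3.]]
tf.matmul(x, x)
# ==> [[4., 0.], [0., 9.]], recovering `a`
```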
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.linalg.LinearOperatorIdentity tf.linalg.LinearOperatorIdentity
================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_identity.py#L99-L488)
`LinearOperator` acting like a [batch] square identity matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorIdentity`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorIdentity)
```
tf.linalg.LinearOperatorIdentity(
num_rows,
batch_shape=None,
dtype=None,
is_non_singular=True,
is_self_adjoint=True,
is_positive_definite=True,
is_square=True,
assert_proper_shapes=False,
name='LinearOperatorIdentity'
)
```
This operator acts like a [batch] identity matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is an `N x N` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
`LinearOperatorIdentity` is initialized with `num_rows` and, optionally, `batch_shape` and `dtype` arguments. If `batch_shape` is `None`, this operator efficiently passes through all arguments. If `batch_shape` is provided, broadcasting may occur, which will require making copies.
```
# Create a 2 x 2 identity matrix.
operator = LinearOperatorIdentity(num_rows=2, dtype=tf.float32)
operator.to_dense()
==> [[1., 0.]
[0., 1.]]
operator.shape
==> [2, 2]
operator.log_abs_determinant()
==> 0.
x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> Shape [2, 4] Tensor, same as x.
y = tf.random.normal(shape=[3, 2, 4])
# Note that y.shape is compatible with operator.shape because operator.shape
# is broadcast to [3, 2, 2].
# This broadcast does NOT require copying data, since we can infer that y
# will be passed through without changing shape. We are always able to infer
# this if the operator has no batch_shape.
x = operator.solve(y)
==> Shape [3, 2, 4] Tensor, same as y.
# Create a 2-batch of 2x2 identity matrices
operator = LinearOperatorIdentity(num_rows=2, batch_shape=[2])
operator.to_dense()
==> [[[1., 0.]
[0., 1.]],
[[1., 0.]
[0., 1.]]]
# Here, even though the operator has a batch shape, the input is the same as
# the output, so x can be passed through without a copy. The operator is able
# to detect that no broadcast is necessary because both x and the operator
# have statically defined shape.
x = ... Shape [2, 2, 3]
operator.matmul(x)
==> Shape [2, 2, 3] Tensor, same as x
# Here the operator and x have different batch_shape, and are broadcast.
# This requires a copy, since the output is different size than the input.
x = ... Shape [1, 2, 3]
operator.matmul(x)
==> Shape [2, 2, 3] Tensor, equal to [x, x]
```
### Shape compatibility
This operator acts on [batch] matrices with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, N], with b >= 0
x.shape = [C1,...,Cc] + [N, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
```
### Performance
If `batch_shape` initialization arg is `None`:
* `operator.matmul(x)` is `O(1)`
* `operator.solve(x)` is `O(1)`
* `operator.determinant()` is `O(1)`
If `batch_shape` initialization arg is provided, and static checks cannot rule out the need to broadcast:
* `operator.matmul(x)` is `O(D1*...*Dd*N*R)`
* `operator.solve(x)` is `O(D1*...*Dd*N*R)`
* `operator.determinant()` is `O(B1*...*Bb)`
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `num_rows` | Scalar non-negative integer `Tensor`. Number of rows in the corresponding identity matrix. |
| `batch_shape` | Optional `1-D` integer `Tensor`. The shape of the leading dimensions. If `None`, this operator has no leading dimensions. |
| `dtype` | Data type of the matrix that this operator represents. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `assert_proper_shapes` | Python `bool`. If `False`, only perform static checks that initialization and method arguments have proper shape. If `True`, and static checks are inconclusive, add asserts to the graph. |
| `name` | A name for this `LinearOperator` |
| Raises |
| `ValueError` | If `num_rows` is determined statically to be non-scalar, or negative. |
| `ValueError` | If `batch_shape` is determined statically to not be 1-D, or negative. |
| `ValueError` | If any of the following is not `True`: `{is_self_adjoint, is_non_singular, is_positive_definite}`. |
| `TypeError` | If `num_rows` or `batch_shape` is ref-type (e.g. Variable). |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_identity.py#L395-L409)
```
add_to_tensor(
mat, name='add_to_tensor'
)
```
Add matrix represented by this operator to `mat`. Equiv to `I + mat`.
| Args |
| `mat` | `Tensor` with same `dtype` and shape broadcastable to `self`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e., the Cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.tensor_diag tf.linalg.tensor\_diag
======================
Returns a diagonal tensor with given diagonal values.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.diag`](https://www.tensorflow.org/api_docs/python/tf/linalg/tensor_diag), [`tf.compat.v1.linalg.tensor_diag`](https://www.tensorflow.org/api_docs/python/tf/linalg/tensor_diag)
```
tf.linalg.tensor_diag(
diagonal, name=None
)
```
Given a `diagonal`, this operation returns a tensor with the `diagonal` and everything else padded with zeros. The diagonal is computed as follows:
Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.
#### For example:
```
# 'diagonal' is [1, 2, 3, 4]
tf.linalg.tensor_diag(diagonal) ==> [[1, 0, 0, 0]
                                     [0, 2, 0, 0]
                                     [0, 0, 3, 0]
                                     [0, 0, 0, 4]]
```
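As a round-trip check (assuming eager execution), [`linalg.tensor_diag_part`](tensor_diag_part) recovers the original diagonal:
```
import tensorflow as tf

diagonal = tf.constant([1, 2, 3, 4])
out = tf.linalg.tensor_diag(diagonal)
tf.linalg.tensor_diag_part(out)
# ==> [1, 2, 3, 4]
```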
| Args |
| `diagonal` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. Rank k tensor where k is at most 1. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `diagonal`. |
tensorflow tf.linalg.LinearOperatorCirculant2D tf.linalg.LinearOperatorCirculant2D
===================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L780-L963)
`LinearOperator` acting like a block circulant matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorCirculant2D`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorCirculant2D)
```
tf.linalg.LinearOperatorCirculant2D(
spectrum,
input_output_dtype=tf.dtypes.complex64,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=True,
name='LinearOperatorCirculant2D'
)
```
This operator acts like a block circulant matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is an `N x N` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
#### Description in terms of block circulant matrices
If `A` is block circulant, with block sizes `N0, N1` (`N0 * N1 = N`): `A` has a block circulant structure, composed of `N0 x N0` blocks, with each block an `N1 x N1` circulant matrix.
For example, with `W`, `X`, `Y`, `Z` each circulant,
```
A = |W Z Y X|
|X W Z Y|
|Y X W Z|
|Z Y X W|
```
Note that `A` itself will not in general be circulant.
#### Description in terms of the frequency spectrum
There is an equivalent description in terms of the [batch] spectrum `H` and Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch dimensions.
If `H.shape = [N0, N1]`, (`N0 * N1 = N`): Loosely speaking, matrix multiplication is equal to the action of a Fourier multiplier: `A u = IDFT2[ H DFT2[u] ]`. Precisely speaking, given `[N, R]` matrix `u`, let `DFT2[u]` be the `[N0, N1, R]` `Tensor` defined by re-shaping `u` to `[N0, N1, R]` and taking a two dimensional DFT across the first two dimensions. Let `IDFT2` be the inverse of `DFT2`. Matrix multiplication may be expressed columnwise:
`(A u)_r = IDFT2[ H * (DFT2[u])_r ]`
#### Operator properties deduced from the spectrum.
* This operator is positive definite if and only if `Real{H} > 0`.
A general property of Fourier transforms is the correspondence between Hermitian functions and real valued transforms.
Suppose `H.shape = [B1,...,Bb, N0, N1]`, we say that `H` is a Hermitian spectrum if, with `%` indicating modulus division,
```
H[..., n0 % N0, n1 % N1] = ComplexConjugate[ H[..., (-n0) % N0, (-n1) % N1] ].
```
* This operator corresponds to a real matrix if and only if `H` is Hermitian.
* This operator is self-adjoint if and only if `H` is real.
See e.g. "Discrete-Time Signal Processing", Oppenheim and Schafer.
### Example of a self-adjoint positive definite operator
```
# spectrum is real ==> operator is self-adjoint
# spectrum is positive ==> operator is positive definite
spectrum = [[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]]
operator = LinearOperatorCirculant2D(spectrum)
# IFFT[spectrum]
operator.convolution_kernel()
==> [[5.0+0.0j, -0.5-.3j, -0.5+.3j],
[-1.5-.9j, 0, 0],
[-1.5+.9j, 0, 0]]
operator.to_dense()
==> Complex self adjoint 9 x 9 matrix.
```
#### Example of defining in terms of a real convolution kernel
```
# convolution_kernel is real ==> spectrum is Hermitian.
convolution_kernel = [[1., 2., 1.], [5., -1., 1.]]
spectrum = tf.signal.fft2d(tf.cast(convolution_kernel, tf.complex64))
# spectrum is shape [2, 3] ==> operator is shape [6, 6]
# spectrum is Hermitian ==> operator is real.
operator = LinearOperatorCirculant2D(spectrum, input_output_dtype=tf.float32)
```
#### Performance
Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`, and `x.shape = [N, R]`. Then
* `operator.matmul(x)` is `O(R*N*Log[N])`
* `operator.solve(x)` is `O(R*N*Log[N])`
* `operator.determinant()` involves a size `N` `reduce_prod`.
If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `spectrum` | Shape `[B1,...,Bb, N]` `Tensor`. Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. Type can be different than `input_output_dtype` |
| `input_output_dtype` | `dtype` for input/output. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. If `spectrum` is real, this will always be true. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name to prepend to all ops created by this class. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `block_depth` | Depth of recursively defined circulant blocks defining this `Operator`. With `A` the dense representation of this `Operator`, `block_depth = 1` means `A` is symmetric circulant. For example,
```
A = |w z y x|
|x w z y|
|y x w z|
|z y x w|
```
`block_depth = 2` means `A` is block symmetric circulant with symmetric circulant blocks. For example, with `W`, `X`, `Y`, `Z` symmetric circulant,
```
A = |W Z Y X|
|X W Z Y|
|Y X W Z|
|Z Y X W|
```
`block_depth = 3` means `A` is block symmetric circulant with block symmetric circulant blocks. |
| `block_shape` | |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `spectrum` | |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_hermitian_spectrum`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L335-L355)
```
assert_hermitian_spectrum(
name='assert_hermitian_spectrum'
)
```
Returns an `Op` that asserts this operator has Hermitian spectrum.
This operator corresponds to a real-valued matrix if and only if its spectrum is Hermitian.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Op` that asserts this operator has Hermitian spectrum. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `block_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L175-L179)
```
block_shape_tensor()
```
Shape of the block dimensions of `self.spectrum`.
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
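For intuition, a sketch using a diagonal operator, where the condition number reduces to the ratio of the largest to smallest absolute diagonal entry:
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([4., 2.])
operator.cond()
==> 2.0  # largest singular value / smallest singular value
```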
### `convolution_kernel`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L290-L304)
```
convolution_kernel(
name='convolution_kernel'
)
```
Convolution kernel corresponding to `self.spectrum`.
The `D` dimensional DFT of this kernel is the frequency domain spectrum of this operator.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| `Tensor` with `dtype` `self.dtype`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
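For example, for a diagonal operator the eigenvalues are simply the diagonal entries (a sketch; a real diagonal makes the operator self-adjoint automatically):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([1., 2., 3.])
operator.eigvals()
==> [1., 2., 3.]
```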
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.LinearOperatorToeplitz tf.linalg.LinearOperatorToeplitz
================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_toeplitz.py#L34-L280) |
`LinearOperator` acting like a [batch] of Toeplitz matrices.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorToeplitz`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorToeplitz)
```
tf.linalg.LinearOperatorToeplitz(
col,
row,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorToeplitz'
)
```
This operator acts like a [batch] Toeplitz matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is an `N x N` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
#### Description in terms of Toeplitz matrices
Toeplitz means that `A` has constant diagonals. Hence, `A` can be generated with two vectors. One represents the first column of the matrix, and the other represents the first row.
Below is a 4 x 4 example:
```
A = |a b c d|
|e a b c|
|f e a b|
|g f e a|
```
#### Example of a Toeplitz operator.
```
# Create a 3 x 3 Toeplitz operator.
col = [1., 2., 3.]
row = [1., 4., -9.]
operator = LinearOperatorToeplitz(col, row)
operator.to_dense()
==> [[1., 4., -9.],
[2., 1., 4.],
[3., 2., 1.]]
operator.shape
==> [3, 3]
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [3, 4] Tensor
operator.matmul(x)
==> Shape [3, 4] Tensor
```
#### Shape compatibility
This operator acts on [batch] matrix with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, N], with b >= 0
x.shape = [C1,...,Cc] + [N, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
```
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
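A minimal sketch of supplying these hints at construction time; per the above, the hint is recorded but never verified at runtime:
```
import tensorflow as tf

col = tf.constant([2., 1., 0.])
row = tf.constant([2., 1., 0.])  # col == row, so the matrix is symmetric.
operator = tf.linalg.LinearOperatorToeplitz(col, row, is_self_adjoint=True)
operator.is_self_adjoint
==> True  # as hinted; no runtime check was performed
```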
| Args |
| `col` | Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0` `N >= 0`. The first column of the operator. Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. Note that the first entry of `col` is assumed to be the same as the first entry of `row`. |
| `row` | Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0` `N >= 0`. The first row of the operator. Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. Note that the first entry of `row` is assumed to be the same as the first entry of `col`. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `col` | |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on whether this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `row` | |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.slogdet tf.linalg.slogdet
=================
Computes the sign and the log of the absolute value of the determinant of one or more square matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.slogdet`](https://www.tensorflow.org/api_docs/python/tf/linalg/slogdet)
```
tf.linalg.slogdet(
input, name=None
)
```
The input is a tensor of shape `[N, M, M]` whose inner-most 2 dimensions form square matrices. The outputs are two tensors containing the signs and the logs of the absolute values of the determinants for all N input submatrices `[..., :, :]`, such that `determinant = sign*exp(log_abs_determinant)`. With `LU` the `LU` decomposition of the input and `P` the corresponding permutation matrix, `log_abs_determinant` is computed as `sum(log(abs(diag(LU))))`, and `sign` combines `det(P)` with the signs of `diag(LU)`.
| Args |
| `input` | A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. Shape is `[N, M, M]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (sign, log\_abs\_determinant). |
| `sign` | A `Tensor`. Has the same type as `input`. |
| `log_abs_determinant` | A `Tensor`. Has the same type as `input`. |
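A minimal sketch recovering the determinant from the two outputs, for a batch of one `2 x 2` matrix (values are illustrative):
```
import tensorflow as tf

x = tf.constant([[[1., 2.],
                  [3., 4.]]])  # det = -2
sign, log_abs_det = tf.linalg.slogdet(x)
sign * tf.exp(log_abs_det)
==> [-2.]  # up to float rounding
```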
tensorflow tf.linalg.eig tf.linalg.eig
=============
Computes the eigen decomposition of a batch of matrices.
#### View aliases
**Main aliases**
[`tf.eig`](https://www.tensorflow.org/api_docs/python/tf/linalg/eig)
```
tf.linalg.eig(
tensor, name=None
)
```
The eigenvalues and eigenvectors for a non-Hermitian matrix in general are complex. The eigenvectors are not guaranteed to be linearly independent.
Computes the eigenvalues and right eigenvectors of the innermost N-by-N matrices in `tensor` such that `tensor[..., :, :] * v[..., :, i] = e[..., i] * v[..., :, i]`, for i=0...N-1.
| Args |
| `tensor` | `Tensor` of shape `[..., N, N]`. Each inner matrix is referenced in full. |
| `name` | string, optional name of the operation. |
| Returns |
| `e` | Eigenvalues. Shape is `[..., N]`. |
| `v` | Eigenvectors. Shape is `[..., N, N]`. The columns of the inner most matrices contain eigenvectors of the corresponding matrices in `tensor` |
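For instance, a real rotation matrix has purely imaginary eigenvalues, which are returned as complex tensors (a sketch):
```
import tensorflow as tf

a = tf.constant([[0., -1.],
                 [1., 0.]])  # 90-degree rotation
e, v = tf.linalg.eig(a)
# e ==> [0.-1.j, 0.+1.j] (up to ordering); the columns of v are the
# corresponding eigenvectors.
```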
tensorflow tf.linalg.set_diag tf.linalg.set\_diag
===================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L2815-L2949) |
Returns a batched matrix tensor with new batched diagonal values.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.set_diag`](https://www.tensorflow.org/api_docs/python/tf/linalg/set_diag), [`tf.compat.v1.matrix_set_diag`](https://www.tensorflow.org/api_docs/python/tf/linalg/set_diag)
```
tf.linalg.set_diag(
input,
diagonal,
name='set_diag',
k=0,
align='RIGHT_LEFT'
)
```
Given `input` and `diagonal`, this operation returns a tensor with the same shape and values as `input`, except for the specified diagonals of the innermost matrices. These will be overwritten by the values in `diagonal`.
`input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or `k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`. Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`. `num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`. `max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`, `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`
The output is a tensor of rank `r+1` with dimensions `[I, J, ..., L, M, N]`. If `k` is scalar or `k[0] == k[1]`:
```
output[i, j, ..., l, m, n]
= diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]
input[i, j, ..., l, m, n] ; otherwise
```
Otherwise,
```
output[i, j, ..., l, m, n]
= diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
input[i, j, ..., l, m, n] ; otherwise
```
where `d = n - m`, `diag_index = k[1] - d`, and `index_in_diag = n - max(d, 0) + offset`.
`offset` is zero except when the alignment of the diagonal is to the right.
```
offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
and `d >= 0`) or
(`align` in {LEFT_RIGHT, RIGHT_RIGHT}
and `d <= 0`)
0 ; otherwise
```
where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.
#### For example:
```
# The main diagonal.
input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4)
[7, 7, 7, 7],
[7, 7, 7, 7]],
[[7, 7, 7, 7],
[7, 7, 7, 7],
[7, 7, 7, 7]]])
diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3)
[4, 5, 6]])
tf.matrix_set_diag(input, diagonal)
==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4)
[7, 2, 7, 7],
[7, 7, 3, 7]],
[[4, 7, 7, 7],
[7, 5, 7, 7],
[7, 7, 6, 7]]]
# A superdiagonal (per batch).
tf.matrix_set_diag(input, diagonal, k = 1)
==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4)
[7, 7, 2, 7],
[7, 7, 7, 3]],
[[7, 4, 7, 7],
[7, 7, 5, 7],
[7, 7, 7, 6]]]
# A band of diagonals.
diagonals = np.array([[[9, 1, 0], # Diagonal shape: (2, 4, 3)
[6, 5, 8],
[1, 2, 3],
[0, 4, 5]],
[[1, 2, 0],
[5, 6, 4],
[6, 1, 2],
[0, 3, 4]]])
tf.matrix_set_diag(input, diagonals, k = (-1, 2))
==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)
[4, 2, 5, 1],
[7, 5, 3, 8]],
[[6, 5, 1, 7],
[3, 1, 6, 2],
[7, 4, 2, 4]]]
# RIGHT_LEFT alignment.
diagonals = np.array([[[0, 9, 1], # Diagonal shape: (2, 4, 3)
[6, 5, 8],
[1, 2, 3],
[4, 5, 0]],
[[0, 1, 2],
[5, 6, 4],
[6, 1, 2],
[3, 4, 0]]])
tf.matrix_set_diag(input, diagonals, k = (-1, 2), align="RIGHT_LEFT")
==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)
[4, 2, 5, 1],
[7, 5, 3, 8]],
[[6, 5, 1, 7],
[3, 1, 6, 2],
[7, 4, 2, 4]]]
```
| Args |
| `input` | A `Tensor` with rank `k + 1`, where `k >= 1`. |
| `diagonal` | A `Tensor` with rank `k`, when `d_lower == d_upper`, or `k + 1`, otherwise. `k >= 1`. |
| `name` | A name for the operation (optional). |
| `k` | Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. `k` can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. `k[0]` must not be larger than `k[1]`. |
| `align` | Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. There are four possible alignments: "RIGHT\_LEFT" (default), "LEFT\_RIGHT", "LEFT\_LEFT", and "RIGHT\_RIGHT". "RIGHT\_LEFT" aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row). It is the packing format LAPACK uses. cuSPARSE uses "LEFT\_RIGHT", which is the opposite alignment. |
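The examples above use the older `tf.matrix_set_diag` alias; a minimal sketch of the main-diagonal case with the current name:
```
import tensorflow as tf

input = tf.fill([2, 3, 4], 7)                   # Input shape: (2, 3, 4)
diagonal = tf.constant([[1, 2, 3], [4, 5, 6]])  # Diagonal shape: (2, 3)
tf.linalg.set_diag(input, diagonal)
# ==> same output as the first tf.matrix_set_diag example above.
```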
tensorflow tf.linalg.matrix_rank tf.linalg.matrix\_rank
======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linalg_impl.py#L765-L801) |
Compute the matrix rank of one or more matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.matrix_rank`](https://www.tensorflow.org/api_docs/python/tf/linalg/matrix_rank)
```
tf.linalg.matrix_rank(
a, tol=None, validate_args=False, name=None
)
```
| Args |
| `a` | (Batch of) `float`-like matrix-shaped `Tensor`(s) whose rank is to be computed. |
| `tol` | Threshold below which the singular value is counted as 'zero'. Default value: `None` (i.e., `eps * max(rows, cols) * max(singular_val)`). |
| `validate_args` | When `True`, additional assertions might be embedded in the graph. Default value: `False` (i.e., no graph assertions are added). |
| `name` | Python `str` prefixed to ops created by this function. Default value: 'matrix\_rank'. |
| Returns |
| `matrix_rank` | (Batch of) `int32` scalars representing the number of non-zero singular values. |
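For example, a `3 x 3` matrix with two linearly dependent rows has rank 2 (a sketch):
```
import tensorflow as tf

a = tf.constant([[1., 2., 3.],
                 [2., 4., 6.],   # 2 * the first row
                 [0., 0., 1.]])
tf.linalg.matrix_rank(a)
==> 2
```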
tensorflow tf.linalg.svd tf.linalg.svd
=============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg_ops.py#L484-L552) |
Computes the singular value decompositions of one or more matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.svd`](https://www.tensorflow.org/api_docs/python/tf/linalg/svd), [`tf.compat.v1.svd`](https://www.tensorflow.org/api_docs/python/tf/linalg/svd)
```
tf.linalg.svd(
tensor, full_matrices=False, compute_uv=True, name=None
)
```
Computes the SVD of each inner matrix in `tensor` such that `tensor[..., :, :] = u[..., :, :] * diag(s[..., :]) * transpose(conj(v[..., :, :]))`
```
# a is a tensor.
# s is a tensor of singular values.
# u is a tensor of left singular vectors.
# v is a tensor of right singular vectors.
s, u, v = svd(a)
s = svd(a, compute_uv=False)
```
| Args |
| `tensor` | `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and `N`. |
| `full_matrices` | If true, compute full-sized `u` and `v`. If false (the default), compute only the leading `P` singular vectors. Ignored if `compute_uv` is `False`. |
| `compute_uv` | If `True` then left and right singular vectors will be computed and returned in `u` and `v`, respectively. Otherwise, only the singular values will be computed, which can be significantly faster. |
| `name` | string, optional name of the operation. |
| Returns |
| `s` | Singular values. Shape is `[..., P]`. The values are sorted in reverse order of magnitude, so s[..., 0] is the largest value, s[..., 1] is the second largest, etc. |
| `u` | Left singular vectors. If `full_matrices` is `False` (default) then shape is `[..., M, P]`; if `full_matrices` is `True` then shape is `[..., M, M]`. Not returned if `compute_uv` is `False`. |
| `v` | Right singular vectors. If `full_matrices` is `False` (default) then shape is `[..., N, P]`. If `full_matrices` is `True` then shape is `[..., N, N]`. Not returned if `compute_uv` is `False`. |
numpy compatibility
-------------------
Mostly equivalent to numpy.linalg.svd, except that
* The order of output arguments here is `s`, `u`, `v` when `compute_uv` is `True`, as opposed to `u`, `s`, `v` for numpy.linalg.svd.
* full\_matrices is `False` by default as opposed to `True` for numpy.linalg.svd.
* tf.linalg.svd uses the standard definition of the SVD \(A = U \Sigma V^H\), such that the left singular vectors of `a` are the columns of `u`, while the right singular vectors of `a` are the columns of `v`. On the other hand, numpy.linalg.svd returns the adjoint \(V^H\) as the third output argument.
```
import tensorflow as tf
import numpy as np
s, u, v = tf.linalg.svd(a)
tf_a_approx = tf.matmul(u, tf.matmul(tf.linalg.diag(s), v, adjoint_b=True))
u, s, v_adj = np.linalg.svd(a, full_matrices=False)
np_a_approx = np.dot(u, np.dot(np.diag(s), v_adj))
# tf_a_approx and np_a_approx should be numerically close.
```
tensorflow tf.linalg.LinearOperatorDiag tf.linalg.LinearOperatorDiag
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_diag.py#L31-L266) |
`LinearOperator` acting like a [batch] square diagonal matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorDiag`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorDiag)
```
tf.linalg.LinearOperatorDiag(
diag,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorDiag'
)
```
This operator acts like a [batch] diagonal matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is an `N x N` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
`LinearOperatorDiag` is initialized with a (batch) vector.
```
# Create a 2 x 2 diagonal linear operator.
diag = [1., -1.]
operator = LinearOperatorDiag(diag)
operator.to_dense()
==> [[1., 0.]
[0., -1.]]
operator.shape
==> [2, 2]
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> Shape [2, 4] Tensor
# Create a [2, 3] batch of 4 x 4 linear operators.
diag = tf.random.normal(shape=[2, 3, 4])
operator = LinearOperatorDiag(diag)
# Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible
# since the batch dimensions, [2, 1], are broadcast to
# operator.batch_shape = [2, 3].
y = tf.random.normal(shape=[2, 1, 4, 2])
x = operator.solve(y)
==> operator.matmul(x) = y
```
#### Shape compatibility
This operator acts on [batch] matrix with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, N], with b >= 0
x.shape = [C1,...,Cc] + [N, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
```
#### Performance
Suppose `operator` is a `LinearOperatorDiag` of shape `[N, N]`, and `x.shape = [N, R]`. Then
* `operator.matmul(x)` involves `N * R` multiplications.
* `operator.solve(x)` involves `N` divisions and `N * R` multiplications.
* `operator.determinant()` involves a size `N` `reduce_prod`.
If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
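To make these costs concrete, a minimal sketch with `N = 3` and `R = 2`:
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([1., 2., 4.])
x = tf.ones([3, 2])
operator.matmul(x)   # row-wise scaling: N * R = 6 multiplications
operator.solve(x)    # row-wise division: N divisions, N * R multiplications
operator.determinant()
==> 8.  # reduce_prod over the diagonal
```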
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `diag` | Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0` `N >= 0`. The diagonal of the operator. Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. If `diag.dtype` is real, this is auto-set to `True`. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. |
| Raises |
| `TypeError` | If `diag.dtype` is not an allowed type. |
| `ValueError` | If `diag.dtype` is real, and `is_self_adjoint` is not `True`. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `diag` | |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on whether this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
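A minimal sketch, assuming a self-adjoint operator as the note requires (`LinearOperatorFullMatrix` used for illustration):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorFullMatrix(
    [[2., 1.], [1., 2.]], is_self_adjoint=True)
operator.eigvals()  # ==> [1., 3.], the eigenvalues of [[2, 1], [1, 2]].
```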
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
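A concrete sketch with a diagonal system, where the solution can be read off directly (`LinearOperatorFullMatrix` used for illustration):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorFullMatrix(
    [[2., 0.], [0., 4.]], is_non_singular=True, is_square=True)
rhs = tf.constant([[2.], [8.]])
operator.solve(rhs)  # ==> [[1.], [2.]], since 2 * 1 = 2 and 4 * 2 = 8.
```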
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.LinearOperatorFullMatrix tf.linalg.LinearOperatorFullMatrix
==================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_full_matrix.py#L30-L195) |
`LinearOperator` that wraps a [batch] matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorFullMatrix`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorFullMatrix)
```
tf.linalg.LinearOperatorFullMatrix(
matrix,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorFullMatrix'
)
```
This operator wraps a [batch] matrix `A` (which is a `Tensor`) with shape `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is an `M x N` matrix.
```
# Create a 2 x 2 linear operator.
matrix = [[1., 2.], [3., 4.]]
operator = LinearOperatorFullMatrix(matrix)
operator.to_dense()
==> [[1., 2.]
[3., 4.]]
operator.shape
==> [2, 2]
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> Shape [2, 4] Tensor
# Create a [2, 3] batch of 4 x 4 linear operators.
matrix = tf.random.normal(shape=[2, 3, 4, 4])
operator = LinearOperatorFullMatrix(matrix)
```
#### Shape compatibility
This operator acts on [batch] matrix with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [M, N], with b >= 0
x.shape = [B1,...,Bb] + [N, R], with R >= 0.
```
#### Performance
`LinearOperatorFullMatrix` has exactly the same performance as would be achieved by using standard `TensorFlow` matrix ops. Intelligent choices are made based on the following initialization hints.
* If `dtype` is real, and `is_self_adjoint` and `is_positive_definite`, a Cholesky factorization is used for the determinant and solve.
In all cases, suppose `operator` is a `LinearOperatorFullMatrix` of shape `[M, N]`, and `x.shape = [N, R]`. Then
* `operator.matmul(x)` is `O(M * N * R)`.
* If `M=N`, `operator.solve(x)` is `O(N^3 * R)`.
* If `M=N`, `operator.determinant()` is `O(N^3)`.
If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
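To illustrate the Cholesky hint above, a sketch with a small positive-definite matrix (the hints are promises, not runtime checks):
```
import tensorflow as tf

matrix = tf.constant([[4., 2.], [2., 3.]])
operator = tf.linalg.LinearOperatorFullMatrix(
    matrix, is_self_adjoint=True, is_positive_definite=True)
operator.determinant()                     # ==> 8.0 (= 4 * 3 - 2 * 2)
operator.solve(tf.constant([[4.], [2.]]))  # ==> [[1.], [0.]]
```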
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `matrix` | Shape `[B1,...,Bb, M, N]` with `b >= 0`, `M, N >= 0`. Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. |
| Raises |
| `TypeError` | If `matrix.dtype` is not an allowed type. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < 1 / (max{100, range_dimension, domain_dimension} * eps),
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.inv tf.linalg.inv
=============
Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.inv`](https://www.tensorflow.org/api_docs/python/tf/linalg/inv), [`tf.compat.v1.matrix_inverse`](https://www.tensorflow.org/api_docs/python/tf/linalg/inv)
```
tf.linalg.inv(
input, adjoint=False, name=None
)
```
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices `[..., :, :]`.
The op uses LU decomposition with partial pivoting to compute the inverses.
If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`. |
| `adjoint` | An optional `bool`. Defaults to `False`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
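For example, with a diagonal matrix the inverse can be checked by hand (assuming eager execution):
```
import tensorflow as tf

a = tf.constant([[2., 0.], [0., 4.]])
tf.linalg.inv(a)                # ==> [[0.5, 0.], [0., 0.25]]
tf.linalg.inv(a, adjoint=True)  # Same result here, since a is real symmetric.
```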
tensorflow tf.linalg.matrix_transpose tf.linalg.matrix\_transpose
===========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L2378-L2455) |
Transposes last two dimensions of tensor `a`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.matrix_transpose`](https://www.tensorflow.org/api_docs/python/tf/linalg/matrix_transpose), [`tf.compat.v1.linalg.transpose`](https://www.tensorflow.org/api_docs/python/tf/linalg/matrix_transpose), [`tf.compat.v1.matrix_transpose`](https://www.tensorflow.org/api_docs/python/tf/linalg/matrix_transpose)
```
tf.linalg.matrix_transpose(
a, name='matrix_transpose', conjugate=False
)
```
#### For example:
```
x = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.linalg.matrix_transpose(x) # [[1, 4],
# [2, 5],
# [3, 6]]
x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],
[4 + 4j, 5 + 5j, 6 + 6j]])
tf.linalg.matrix_transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j],
# [2 - 2j, 5 - 5j],
# [3 - 3j, 6 - 6j]]
# Matrix with two batch dimensions.
# x.shape is [1, 2, 3, 4]
# tf.linalg.matrix_transpose(x) is shape [1, 2, 4, 3]
```
Note that [`tf.matmul`](matmul) provides kwargs allowing for transpose of arguments. This is done with minimal cost, and is preferable to using this function. E.g.
```
# Good! Transpose is taken at minimal additional cost.
tf.matmul(matrix, b, transpose_b=True)
# Inefficient!
tf.matmul(matrix, tf.linalg.matrix_transpose(b))
```
| Args |
| `a` | A `Tensor` with `rank >= 2`. |
| `name` | A name for the operation (optional). |
| `conjugate` | Optional bool. Setting it to `True` is mathematically equivalent to tf.math.conj(tf.linalg.matrix\_transpose(input)). |
| Returns |
| A transposed batch matrix `Tensor`. |
| Raises |
| `ValueError` | If `a` is determined statically to have `rank < 2`. |
numpy compatibility
-------------------
In `numpy` transposes are memory-efficient constant time operations as they simply return a new view of the same data with adjusted `strides`.
TensorFlow does not support strides, so [`linalg.matrix_transpose`](matrix_transpose) returns a new tensor with the items permuted.
tensorflow tf.linalg.qr tf.linalg.qr
============
Computes the QR decompositions of one or more matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.qr`](https://www.tensorflow.org/api_docs/python/tf/linalg/qr), [`tf.compat.v1.qr`](https://www.tensorflow.org/api_docs/python/tf/linalg/qr)
```
tf.linalg.qr(
input, full_matrices=False, name=None
)
```
Computes the QR decomposition of each inner matrix in `tensor` such that `tensor[..., :, :] = q[..., :, :] * r[..., :, :]`.
Currently, the gradient for the QR decomposition is well-defined only when the first `P` columns of the inner matrix are linearly independent, where `P` is the minimum of `M` and `N`, the 2 inner-most dimensions of `tensor`.
```
# a is a tensor.
# q is a tensor of orthonormal matrices.
# r is a tensor of upper triangular matrices.
q, r = qr(a)
q_full, r_full = qr(a, full_matrices=True)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. A tensor of shape `[..., M, N]` whose inner-most 2 dimensions form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`. |
| `full_matrices` | An optional `bool`. Defaults to `False`. If true, compute full-sized `q` and `r`. If false (the default), compute only the leading `P` columns of `q`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (q, r). |
| `q` | A `Tensor`. Has the same type as `input`. |
| `r` | A `Tensor`. Has the same type as `input`. |
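A short sketch of the shapes involved (assuming eager execution):
```
import tensorflow as tf

a = tf.random.normal([3, 2])
q, r = tf.linalg.qr(a)  # q.shape == [3, 2], r.shape == [2, 2]
q_full, r_full = tf.linalg.qr(a, full_matrices=True)  # [3, 3] and [3, 2]
tf.reduce_max(tf.abs(tf.matmul(q, r) - a))  # ==> ~0, since a = q r
```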
tensorflow tf.linalg.eigvalsh tf.linalg.eigvalsh
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg_ops.py#L460-L481) |
Computes the eigenvalues of one or more self-adjoint matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.eigvalsh`](https://www.tensorflow.org/api_docs/python/tf/linalg/eigvalsh), [`tf.compat.v1.self_adjoint_eigvals`](https://www.tensorflow.org/api_docs/python/tf/linalg/eigvalsh)
```
tf.linalg.eigvalsh(
tensor, name=None
)
```
>
> **Note:** If your program backpropagates through this function, you should replace it with a call to tf.linalg.eigh (possibly ignoring the second output) to avoid computing the eigen decomposition twice. This is because the eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See \_SelfAdjointEigV2Grad in linalg\_grad.py.
>
| Args |
| `tensor` | `Tensor` of shape `[..., N, N]`. |
| `name` | string, optional name of the operation. |
| Returns |
| `e` | Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N` eigenvalues of `tensor[..., :, :]`. |
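For example, a sketch with a small symmetric matrix whose eigenvalues are 1 and 3:
```
import tensorflow as tf

a = tf.constant([[2., 1.], [1., 2.]])  # Self-adjoint.
tf.linalg.eigvalsh(a)  # ==> [1., 3.]
```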
tensorflow Module: tf.linalg.experimental Module: tf.linalg.experimental
==============================
Public API for tf.linalg.experimental namespace.
Functions
---------
[`conjugate_gradient(...)`](experimental/conjugate_gradient): Conjugate gradient solver.
tensorflow tf.linalg.diag tf.linalg.diag
==============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L2458-L2624) |
Returns a batched diagonal tensor with given batched diagonal values.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.diag`](https://www.tensorflow.org/api_docs/python/tf/linalg/diag), [`tf.compat.v1.matrix_diag`](https://www.tensorflow.org/api_docs/python/tf/linalg/diag)
```
tf.linalg.diag(
diagonal,
name='diag',
k=0,
num_rows=-1,
num_cols=-1,
padding_value=0,
align='RIGHT_LEFT'
)
```
Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th diagonals of a matrix, with everything else padded with `padding_value`. `num_rows` and `num_cols` specify the dimension of the innermost matrix of the output. If neither is specified, the op assumes the innermost matrix is square and infers its size from `k` and the innermost dimension of `diagonal`. If only one of them is specified, the op assumes the unspecified value is the smallest possible based on other criteria.
Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor has rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only one diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has rank `r` with shape `[I, J, ..., L, num_rows, num_cols]`.
The second innermost dimension of `diagonal` has double meaning. When `k` is scalar or `k[0] == k[1]`, `M` is part of the batch size [I, J, ..., M], and the output tensor is:
```
output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper
    padding_value                             ; otherwise
```
Otherwise, `M` is treated as the number of diagonals for the matrix in the same batch (`M = k[1]-k[0]+1`), and the output tensor is:
```
output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    padding_value                                     ; otherwise
```
where `d = n - m`, `diag_index = k[1] - d`, and `index_in_diag = n - max(d, 0) + offset`.
`offset` is zero except when the alignment of the diagonal is to the right.
```
offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
                                          and `d >= 0`) or
                                         (`align` in {LEFT_RIGHT, RIGHT_RIGHT}
                                          and `d <= 0`)
         0                          ; otherwise
```
where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.
#### For example:
```
# The main diagonal.
diagonal = np.array([[1, 2, 3, 4],            # Input shape: (2, 4)
                     [5, 6, 7, 8]])
tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0],  # Output shape: (2, 4, 4)
                               [0, 2, 0, 0],
                               [0, 0, 3, 0],
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0],
                               [0, 6, 0, 0],
                               [0, 0, 7, 0],
                               [0, 0, 0, 8]]]

# A superdiagonal (per batch).
diagonal = np.array([[1, 2, 3],  # Input shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_diag(diagonal, k = 1)
  ==> [[[0, 1, 0, 0],  # Output shape: (2, 4, 4)
        [0, 0, 2, 0],
        [0, 0, 0, 3],
        [0, 0, 0, 0]],
       [[0, 4, 0, 0],
        [0, 0, 5, 0],
        [0, 0, 0, 6],
        [0, 0, 0, 0]]]

# A tridiagonal band (per batch), with the default RIGHT_LEFT alignment.
diagonals = np.array([[[0, 8, 9],  # Input shape: (2, 2, 3)
                       [1, 2, 3],
                       [4, 5, 0]],
                      [[0, 2, 3],
                       [6, 7, 9],
                       [9, 1, 0]]])
tf.matrix_diag(diagonals, k = (-1, 1))
  ==> [[[1, 8, 0],  # Output shape: (2, 3, 3)
        [4, 2, 9],
        [0, 5, 3]],
       [[6, 2, 0],
        [9, 7, 3],
        [0, 1, 9]]]

# LEFT_RIGHT alignment.
diagonals = np.array([[[8, 9, 0],  # Input shape: (2, 2, 3)
                       [1, 2, 3],
                       [0, 4, 5]],
                      [[2, 3, 0],
                       [6, 7, 9],
                       [0, 9, 1]]])
tf.matrix_diag(diagonals, k = (-1, 1), align="LEFT_RIGHT")
  ==> [[[1, 8, 0],  # Output shape: (2, 3, 3)
        [4, 2, 9],
        [0, 5, 3]],
       [[6, 2, 0],
        [9, 7, 3],
        [0, 1, 9]]]

# Rectangular matrix.
diagonal = np.array([1, 2])  # Input shape: (2)
tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)
  ==> [[0, 0, 0, 0],  # Output shape: (3, 4)
       [1, 0, 0, 0],
       [0, 2, 0, 0]]

# Rectangular matrix with inferred num_cols and padding_value = 9.
tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)
  ==> [[9, 9],  # Output shape: (3, 2)
       [1, 9],
       [9, 2]]
```
| Args |
| `diagonal` | A `Tensor` with `rank k >= 1`. |
| `name` | A name for the operation (optional). |
| `k` | Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. `k` can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. `k[0]` must not be larger than `k[1]`. |
| `num_rows` | The number of rows of the output matrix. If it is not provided, the op assumes the output matrix is a square matrix and infers the matrix size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`. |
| `num_cols` | The number of columns of the output matrix. If it is not provided, the op assumes the output matrix is a square matrix and infers the matrix size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`. |
| `padding_value` | The value to fill the area outside the specified diagonal band with. Default is 0. |
| `align` | Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. There are four possible alignments: "RIGHT\_LEFT" (default), "LEFT\_RIGHT", "LEFT\_LEFT", and "RIGHT\_RIGHT". "RIGHT\_LEFT" aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row). It is the packing format LAPACK uses. cuSPARSE uses "LEFT\_RIGHT", which is the opposite alignment. |
| Returns |
| A Tensor. Has the same type as `diagonal`. |
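A short runnable sketch using the public `tf.linalg.diag` name (assuming eager execution):
```
import tensorflow as tf

diagonal = tf.constant([1, 2, 3])
tf.linalg.diag(diagonal)
# ==> [[1, 0, 0],
#      [0, 2, 0],
#      [0, 0, 3]]
tf.linalg.diag(diagonal, k=1)  # 4 x 4 matrix with [1, 2, 3] on the superdiagonal.
```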
tensorflow tf.linalg.LinearOperatorBlockDiag tf.linalg.LinearOperatorBlockDiag
=================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_diag.py#L34-L738) |
Combines one or more `LinearOperators` into a Block Diagonal matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorBlockDiag`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorBlockDiag)
```
tf.linalg.LinearOperatorBlockDiag(
operators,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=True,
name=None
)
```
This operator combines one or more linear operators `[op1,...,opJ]`, building a new `LinearOperator`, whose underlying matrix representation has each operator `opi` on the main diagonal, and zeros elsewhere.
#### Shape compatibility
If `opj` acts like a [batch] matrix `Aj`, then `op_combined` acts like the [batch] matrix formed by having each matrix `Aj` on the main diagonal.
Each `opj` is required to represent a matrix, and hence will have shape `batch_shape_j + [M_j, N_j]`.
If `opj` has shape `batch_shape_j + [M_j, N_j]`, then the combined operator has shape `broadcast_batch_shape + [sum M_j, sum N_j]`, where `broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`, `j = 1,...,J`, assuming the intermediate batch shapes broadcast.
Arguments to `matmul`, `matvec`, `solve`, and `solvevec` may either be single `Tensor`s or lists of `Tensor`s that are interpreted as blocks. The `j`th element of a blockwise list of `Tensor`s must have dimensions that match `opj` for the given method. If a list of blocks is input, then a list of blocks is returned as well.
When the `opj` are not guaranteed to be square, this operator's methods might fail due to the combined operator not being square and/or lack of efficient methods.
```
# Create a 4 x 4 linear operator combined of two 2 x 2 operators.
operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
operator_2 = LinearOperatorFullMatrix([[1., 0.], [0., 1.]])
operator = LinearOperatorBlockDiag([operator_1, operator_2])
operator.to_dense()
==> [[1., 2., 0., 0.],
[3., 4., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]]
operator.shape
==> [4, 4]
operator.log_abs_determinant()
==> scalar Tensor
x1 = ... # Shape [2, 2] Tensor
x2 = ... # Shape [2, 2] Tensor
x = tf.concat([x1, x2], 0) # Shape [4, 2] Tensor
operator.matmul(x)
==> tf.concat([operator_1.matmul(x1), operator_2.matmul(x2)], 0)
# Create a 5 x 4 linear operator combining three blocks.
operator_1 = LinearOperatorFullMatrix([[1.], [3.]])
operator_2 = LinearOperatorFullMatrix([[1., 6.]])
operator_3 = LinearOperatorFullMatrix([[2.], [7.]])
operator = LinearOperatorBlockDiag([operator_1, operator_2, operator_3])
operator.to_dense()
==> [[1., 0., 0., 0.],
     [3., 0., 0., 0.],
     [0., 1., 6., 0.],
     [0., 0., 0., 2.],
     [0., 0., 0., 7.]]
operator.shape
==> [5, 4]
# Create a [2, 3] batch of 4 x 4 linear operators.
matrix_44 = tf.random.normal(shape=[2, 3, 4, 4])
operator_44 = LinearOperatorFullMatrix(matrix_44)
# Create a [1, 3] batch of 5 x 5 linear operators.
matrix_55 = tf.random.normal(shape=[1, 3, 5, 5])
operator_55 = LinearOperatorFullMatrix(matrix_55)
# Combine to create a [2, 3] batch of 9 x 9 operators.
operator_99 = LinearOperatorBlockDiag([operator_44, operator_55])
# Create a shape [2, 3, 9] vector.
x = tf.random.normal(shape=[2, 3, 9])
operator_99.matmul(x)
==> Shape [2, 3, 9] Tensor
# Create a blockwise list of vectors.
x = [tf.random.normal(shape=[2, 3, 4]), tf.random.normal(shape=[2, 3, 5])]
operator_99.matmul(x)
==> [Shape [2, 3, 4] Tensor, Shape [2, 3, 5] Tensor]
```
#### Performance
The cost of any operation on `LinearOperatorBlockDiag` is the sum of the cost of that operation on each of the individual operators.
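For instance, a sketch combining a full matrix with an identity block (`LinearOperatorIdentity` chosen only for brevity):
```
import tensorflow as tf

op1 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
op2 = tf.linalg.LinearOperatorIdentity(num_rows=2)
block = tf.linalg.LinearOperatorBlockDiag([op1, op2])
block.shape                    # ==> [4, 4]
block.matmul(tf.ones([4, 1]))  # ==> [[3.], [7.], [1.], [1.]]
# Blockwise input: a list whose pieces match the block sizes.
block.matmul([tf.ones([2, 1]), tf.ones([2, 1])])  # ==> list of two Tensors
```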
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `operators` | Iterable of `LinearOperator` objects, each with the same `dtype` and composable shape. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. This is true by default, and will raise a `ValueError` otherwise. |
| `name` | A name for this `LinearOperator`. Default is the individual operators names joined with `_o_`. |
| Raises |
| `TypeError` | If all operators do not have the same `dtype`. |
| `ValueError` | If `operators` is empty or are non-square. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `operators` | |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < 1 / (max{100, range_dimension, domain_dimension} * eps),
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_diag.py#L296-L378)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator`, `Tensor` with compatible shape and same `dtype` as `self`, or a blockwise iterable of `LinearOperator`s or `Tensor`s. See class docstring for definition of shape compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`, or if `x` is blockwise, a list of `Tensor`s with shapes that concatenate to `[..., M, R]`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_diag.py#L410-L459)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`, or an iterable of `Tensor`s (for blockwise operators). `Tensor`s are treated as [batch] vectors, meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_diag.py#L473-L596)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape, or a list of `Tensor`s (for blockwise operators). `Tensor`s are treated like [batch] matrices, meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_block_diag.py#L598-L659)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator, or list of `Tensor`s (for blockwise operators). `Tensor`s are treated as [batch] vectors, meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
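For example (a minimal sketch; the diagonal operator is illustrative):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([1., 2., 3.])
operator.trace()
# ==> 6.0
```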
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.eigh_tridiagonal tf.linalg.eigh\_tridiagonal
===========================
Computes the eigenvalues of a Hermitian tridiagonal matrix.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.eigh_tridiagonal`](https://www.tensorflow.org/api_docs/python/tf/linalg/eigh_tridiagonal)
```
tf.linalg.eigh_tridiagonal(
alpha,
beta,
eigvals_only=True,
select='a',
select_range=None,
tol=None,
name=None
)
```
| Args |
| `alpha` | A real or complex tensor of shape (n), the diagonal elements of the matrix. NOTE: If alpha is complex, the imaginary part is ignored (assumed zero) to satisfy the requirement that the matrix be Hermitian. |
| `beta` | A real or complex tensor of shape (n-1), containing the elements of the first super-diagonal of the matrix. If beta is complex, the first sub-diagonal of the matrix is assumed to be the conjugate of beta to satisfy the requirement that the matrix be Hermitian. |
| `eigvals_only` | If False, both eigenvalues and corresponding eigenvectors are computed. If True, only eigenvalues are computed. Default is True. |
| `select` | Optional string with values in {'a', 'v', 'i'} (default is 'a') that determines which eigenvalues to calculate: 'a': all eigenvalues. 'v': eigenvalues in the interval (min, max] given by `select_range`. 'i': eigenvalues with indices min <= i <= max. |
| `select_range` | Size 2 tuple or list or tensor specifying the range of eigenvalues to compute together with select. If select is 'a', select\_range is ignored. |
| `tol` | Optional scalar. The absolute tolerance to which each eigenvalue is required. An eigenvalue (or cluster) is considered to have converged if it lies in an interval of this width. If tol is None (default), the value eps\*|T|\_2 is used where eps is the machine precision, and |T|\_2 is the 2-norm of the matrix T. |
| `name` | Optional name of the op. |
| Returns |
| `eig_vals` | The eigenvalues of the matrix in non-decreasing order. |
| `eig_vectors` | If `eigvals_only` is False the eigenvectors are returned in the second output argument. |
| Raises |
| `ValueError` | If input values are invalid. |
| `NotImplementedError` | Computing eigenvectors for `eigvals_only` = False is not implemented yet. |
This op implements a subset of the functionality of scipy.linalg.eigh\_tridiagonal.
>
> **Note:** The result is undefined if the input contains +/-inf or NaN, or if any value in beta has a magnitude greater than `numpy.sqrt(numpy.finfo(beta.dtype.as_numpy_dtype).max)`.
>
Support for outer batch dimensions has not yet been added.
#### Examples
```
import numpy
eigvals = tf.linalg.eigh_tridiagonal([0.0, 0.0, 0.0], [1.0, 1.0])
eigvals_expected = [-numpy.sqrt(2.0), 0.0, numpy.sqrt(2.0)]
tf.assert_near(eigvals_expected, eigvals)
# ==> True
```
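The `select` and `select_range` arguments can restrict the computation; a sketch using the same matrix as above:
```
# Compute only the two smallest eigenvalues (indices 0 and 1).
smallest = tf.linalg.eigh_tridiagonal(
    [0.0, 0.0, 0.0], [1.0, 1.0], select='i', select_range=(0, 1))
# ==> approximately [-numpy.sqrt(2.0), 0.0]
```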
tensorflow tf.linalg.logdet tf.linalg.logdet
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linalg_impl.py#L65-L96) |
Computes log of the determinant of a hermitian positive definite matrix.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.logdet`](https://www.tensorflow.org/api_docs/python/tf/linalg/logdet)
```
tf.linalg.logdet(
matrix, name=None
)
```
```
# Compute the determinant of a matrix while reducing the chance of over- or
# underflow:
A = ... # shape 10 x 10
det = tf.exp(tf.linalg.logdet(A)) # scalar
```
| Args |
| `matrix` | A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`, or `complex128` with shape `[..., M, M]`. |
| `name` | A name to give this `Op`. Defaults to `logdet`. |
| Returns |
| The natural log of the determinant of `matrix`. |
numpy compatibility
-------------------
Equivalent to numpy.linalg.slogdet, although no sign is returned since only hermitian positive definite matrices are supported.
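A sketch of that correspondence (the construction of the positive definite matrix here is illustrative):
```
import numpy as np
import tensorflow as tf

# Build a symmetric positive definite matrix A = B B^T + I.
b = tf.random.normal([4, 4], dtype=tf.float64)
a = tf.matmul(b, b, transpose_b=True) + tf.eye(4, dtype=tf.float64)
sign, expected = np.linalg.slogdet(a.numpy())  # sign is +1 for SPD input
np.testing.assert_allclose(tf.linalg.logdet(a).numpy(), expected)
```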
tensorflow tf.linalg.lstsq tf.linalg.lstsq
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg_ops.py#L240-L375) |
Solves one or more linear least-squares problems.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.lstsq`](https://www.tensorflow.org/api_docs/python/tf/linalg/lstsq), [`tf.compat.v1.matrix_solve_ls`](https://www.tensorflow.org/api_docs/python/tf/linalg/lstsq)
```
tf.linalg.lstsq(
matrix, rhs, l2_regularizer=0.0, fast=True, name=None
)
```
`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions form `M`-by-`N` matrices. `rhs` is a tensor of shape `[..., M, K]` whose inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `N`-by-`K` matrices that solve the equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares sense.
Below we will use the following notation for each pair of matrix and right-hand sides in the batch:
`matrix`=\(A \in \Re^{m \times n}\), `rhs`=\(B \in \Re^{m \times k}\), `output`=\(X \in \Re^{n \times k}\), `l2_regularizer`=\(\lambda\).
If `fast` is `True`, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}\_{Z \in \Re^{n \times k} } ||A Z - B||\_F^2 + \lambda ||Z||\_F^2\). If \(m \lt n\) then `output` is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}\_{Z \in \Re^{n \times k} } ||Z||\_F^2 \), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon\_{mach} } }\) or \(\lambda\) is sufficiently large.
If `fast` is `False` an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If `fast` is `False` then `l2_regularizer` is ignored.
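For example, a minimal sketch of the fast path fitting a line in the least-squares sense (the data are illustrative):
```
import tensorflow as tf

# Fit y = 2x + 1 from four noiseless samples.
x = tf.constant([[0.], [1.], [2.], [3.]])
matrix = tf.concat([x, tf.ones_like(x)], axis=1)  # shape [4, 2]
rhs = 2. * x + 1.                                 # shape [4, 1]
tf.linalg.lstsq(matrix, rhs)
# ==> approximately [[2.], [1.]]
```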
| Args |
| `matrix` | `Tensor` of shape `[..., M, N]`. |
| `rhs` | `Tensor` of shape `[..., M, K]`. |
| `l2_regularizer` | 0-D `double` `Tensor`. Ignored if `fast=False`. |
| `fast` | bool. Defaults to `True`. |
| `name` | string, optional name of the operation. |
| Returns |
| `output` | `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `N`-by-`K` matrices that solve the equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares sense. |
| Raises |
| `NotImplementedError` | linalg.lstsq is currently disabled for complex128 and l2\_regularizer != 0 due to poor accuracy. |
tensorflow tf.linalg.pinv tf.linalg.pinv
==============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linalg_impl.py#L804-L931) |
Compute the Moore-Penrose pseudo-inverse of one or more matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.pinv`](https://www.tensorflow.org/api_docs/python/tf/linalg/pinv)
```
tf.linalg.pinv(
a, rcond=None, validate_args=False, name=None
)
```
Calculate the [generalized inverse of a matrix](https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its singular-value decomposition (SVD) and including all large singular values.
The pseudo-inverse of a matrix `A` is defined as 'the matrix that solves [the least-squares problem] `A @ x = b`,' i.e., if `x_hat` is a solution, then `A_pinv` is the matrix such that `x_hat = A_pinv @ b`. It can be shown that if `U @ Sigma @ V.T = A` is the singular value decomposition of `A`, then `A_pinv = V @ inv(Sigma) @ U.T`. [(Strang, 1980)][1]
This function is analogous to [`numpy.linalg.pinv`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.pinv.html). It differs only in default value of `rcond`. In `numpy.linalg.pinv`, the default `rcond` is `1e-15`. Here the default is `10. * max(num_rows, num_cols) * np.finfo(dtype).eps`.
| Args |
| `a` | (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be pseudo-inverted. |
| `rcond` | `Tensor` of small singular value cutoffs. Singular values smaller (in modulus) than `rcond` \* largest\_singular\_value (again, in modulus) are set to zero. Must broadcast against `tf.shape(a)[:-2]`. Default value: `10. * max(num_rows, num_cols) * np.finfo(a.dtype).eps`. |
| `validate_args` | When `True`, additional assertions might be embedded in the graph. Default value: `False` (i.e., no graph assertions are added). |
| `name` | Python `str` prefixed to ops created by this function. Default value: 'pinv'. |
| Returns |
| `a_pinv` | (Batch of) pseudo-inverse of input `a`. Has same shape as `a` except rightmost two dimensions are transposed. |
| Raises |
| `TypeError` | if input `a` does not have `float`-like `dtype`. |
| `ValueError` | if input `a` has fewer than 2 dimensions. |
#### Examples
```
import tensorflow as tf

a = tf.constant([[1., 0.4, 0.5],
                 [0.4, 0.2, 0.25],
                 [0.5, 0.25, 0.35]])
tf.matmul(tf.linalg.pinv(a), a)
# ==> array([[1., 0., 0.],
#            [0., 1., 0.],
#            [0., 0., 1.]], dtype=float32)

a = tf.constant([[1., 0.4, 0.5, 1.],
                 [0.4, 0.2, 0.25, 2.],
                 [0.5, 0.25, 0.35, 3.]])
tf.matmul(tf.linalg.pinv(a), a)
# ==> array([[ 0.76, 0.37, 0.21, -0.02],
#            [ 0.37, 0.43, -0.33, 0.02],
#            [ 0.21, -0.33, 0.81, 0.01],
#            [-0.02, 0.02, 0.01, 1. ]], dtype=float32)
```
#### References
[1]: G. Strang. 'Linear Algebra and Its Applications, 2nd Ed.' Academic Press, Inc., 1980, pp. 139-142.
tensorflow tf.linalg.cholesky tf.linalg.cholesky
==================
Computes the Cholesky decomposition of one or more square matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.cholesky`](https://www.tensorflow.org/api_docs/python/tf/linalg/cholesky), [`tf.compat.v1.linalg.cholesky`](https://www.tensorflow.org/api_docs/python/tf/linalg/cholesky)
```
tf.linalg.cholesky(
input, name=None
)
```
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices.
The input has to be symmetric and positive definite. Only the lower-triangular part of the input will be used for this operation. The upper-triangular part will not be read.
The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices `[..., :, :]`.
>
> **Note:** The gradient computation on GPU is faster for large matrices but not for large batch dimensions when the submatrices are small. In this case it might be faster to use the CPU.
>
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
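For example (a minimal sketch; the matrix is illustrative):
```
import tensorflow as tf

x = tf.constant([[4., 2.], [2., 3.]])  # symmetric positive definite
chol = tf.linalg.cholesky(x)           # lower-triangular L with x = L L^T
tf.matmul(chol, chol, transpose_b=True)
# ==> [[4., 2.], [2., 3.]]
```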
tensorflow tf.linalg.logm tf.linalg.logm
==============
Computes the matrix logarithm of one or more square matrices:
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.logm`](https://www.tensorflow.org/api_docs/python/tf/linalg/logm)
```
tf.linalg.logm(
input, name=None
)
```
\(log(exp(A)) = A\)
This op is only defined for complex matrices. If A is positive-definite and real, then casting to a complex matrix, taking the logarithm and casting back to a real matrix will give the correct result.
This function computes the matrix logarithm using the Schur-Parlett algorithm. Details of the algorithm can be found in Section 11.6.2 of: Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008. ISBN 978-0-898716-46-7.
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the matrix logarithm for all input submatrices `[..., :, :]`.
| Args |
| `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. Shape is `[..., M, M]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
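A sketch of the cast-to-complex pattern described above, checked by inverting with `tf.linalg.expm` (the matrix is illustrative):
```
import tensorflow as tf

a = tf.constant([[2., 0.], [0., 3.]])            # real and positive-definite
log_a = tf.linalg.logm(tf.cast(a, tf.complex64))
tf.linalg.expm(log_a)                            # recovers a, as complex64
# ==> [[2.+0.j, 0.+0.j], [0.+0.j, 3.+0.j]]
```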
tensorflow tf.linalg.band_part tf.linalg.band\_part
====================
Copy a tensor setting everything outside a central band in each innermost matrix to zero.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.band_part`](https://www.tensorflow.org/api_docs/python/tf/linalg/band_part), [`tf.compat.v1.matrix_band_part`](https://www.tensorflow.org/api_docs/python/tf/linalg/band_part)
```
tf.linalg.band_part(
input, num_lower, num_upper, name=None
)
```
The `band` part is computed as follows: Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a tensor with the same shape where
`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.
The indicator function
`in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper)`.
#### For example:
```
# if 'input' is [[ 0, 1, 2, 3]
# [-1, 0, 1, 2]
# [-2, -1, 0, 1]
# [-3, -2, -1, 0]],
tf.linalg.band_part(input, 1, -1) ==> [[ 0, 1, 2, 3]
[-1, 0, 1, 2]
[ 0, -1, 0, 1]
[ 0, 0, -1, 0]],
tf.linalg.band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]
[-1, 0, 1, 0]
[-2, -1, 0, 1]
[ 0, -2, -1, 0]]
```
#### Useful special cases:
```
tf.linalg.band_part(input, 0, -1) ==> Upper triangular part.
tf.linalg.band_part(input, -1, 0) ==> Lower triangular part.
tf.linalg.band_part(input, 0, 0) ==> Diagonal.
```
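For instance, the lower-triangular special case is a common way to build a causal mask (a usage sketch):
```
import tensorflow as tf

mask = tf.linalg.band_part(tf.ones([4, 4]), -1, 0)
# ==> [[1., 0., 0., 0.],
#      [1., 1., 0., 0.],
#      [1., 1., 1., 0.],
#      [1., 1., 1., 1.]]
```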
| Args |
| `input` | A `Tensor`. Rank `k` tensor. |
| `num_lower` | A `Tensor`. Must be one of the following types: `int32`, `int64`. 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle. |
| `num_upper` | A `Tensor`. Must have the same type as `num_lower`. 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.linalg.LinearOperatorLowRankUpdate tf.linalg.LinearOperatorLowRankUpdate
=====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_low_rank_update.py#L36-L501) |
Perturb a `LinearOperator` with a rank `K` update.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorLowRankUpdate`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorLowRankUpdate)
```
tf.linalg.LinearOperatorLowRankUpdate(
base_operator,
u,
diag_update=None,
v=None,
is_diag_update_positive=None,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorLowRankUpdate'
)
```
This operator acts like a [batch] matrix `A` with shape `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is an `M x N` matrix.
`LinearOperatorLowRankUpdate` represents `A = L + U D V^H`, where
```
L, is a LinearOperator representing [batch] M x N matrices
U, is a [batch] M x K matrix. Typically K << M.
D, is a [batch] K x K matrix.
V, is a [batch] N x K matrix. Typically K << N.
V^H is the Hermitian transpose (adjoint) of V.
```
If `M = N`, determinants and solves are done using the matrix determinant lemma and Woodbury identities, and thus require L and D to be non-singular.
Solves and determinants will be attempted unless the "is\_non\_singular" property of L and D is False.
In the event that L and D are positive-definite, and U = V, solves and determinants can be done using a Cholesky factorization.
```
# Create a 3 x 3 diagonal linear operator.
diag_operator = LinearOperatorDiag(
diag=[1., 2., 3.], is_non_singular=True, is_self_adjoint=True,
is_positive_definite=True)
# Perturb with a rank 2 perturbation
operator = LinearOperatorLowRankUpdate(
base_operator=diag_operator,
u=[[1., 2.], [-1., 3.], [0., 0.]],
diag_update=[11., 12.],
v=[[1., 2.], [-1., 3.], [10., 10.]])
operator.shape
==> [3, 3]
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [3, 4] Tensor
operator.matmul(x)
==> Shape [3, 4] Tensor
```
### Shape compatibility
This operator acts on a [batch] matrix with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [M, N], with b >= 0
x.shape = [B1,...,Bb] + [N, R], with R >= 0.
```
### Performance
Suppose `operator` is a `LinearOperatorLowRankUpdate` of shape `[M, N]`, made from a rank `K` update of `base_operator` which performs `.matmul(x)` on `x` having `x.shape = [N, R]` with `O(L_matmul*N*R)` complexity (and similarly for `solve` and `determinant`). Then, if `x.shape = [N, R]`,
* `operator.matmul(x)` is `O(L_matmul*N*R + K*N*R)`
and if `M = N`,
* `operator.solve(x)` is `O(L_matmul*N*R + N*K*R + K^2*R + K^3)`
* `operator.determinant()` is `O(L_determinant + L_solve*N*K + K^2*N + K^3)`
If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
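A minimal end-to-end sketch (the identity base operator and rank-1 update are illustrative choices):
```
import tensorflow as tf

base = tf.linalg.LinearOperatorIdentity(num_rows=3)
u = tf.constant([[1.], [2.], [3.]])
operator = tf.linalg.LinearOperatorLowRankUpdate(base, u)  # A = I + u u^H
x = operator.solve(tf.ones([3, 1]))  # solved via the Woodbury identity
operator.matmul(x)
# ==> approximately [[1.], [1.], [1.]]
```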
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular`, `self_adjoint`, `positive_definite`, `diag_update_positive` and `square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `base_operator` | Shape `[B1,...,Bb, M, N]`. |
| `u` | Shape `[B1,...,Bb, M, K]` `Tensor` of same `dtype` as `base_operator`. This is `U` above. |
| `diag_update` | Optional shape `[B1,...,Bb, K]` `Tensor` with same `dtype` as `base_operator`. This is the diagonal of `D` above. Defaults to `D` being the identity operator. |
| `v` | Optional `Tensor` of same `dtype` as `u` and shape `[B1,...,Bb, N, K]`. Defaults to `v = u`, in which case the perturbation is symmetric. If `M != N`, then `v` must be set since the perturbation is not square. |
| `is_diag_update_positive` | Python `bool`. If `True`, expect `diag_update > 0`. |
| `is_non_singular` | Expect that this operator is non-singular. Default is `None`, unless `is_positive_definite` is auto-set to be `True` (see below). |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. Default is `None`, unless `base_operator` is self-adjoint and `v = None` (meaning `u=v`), in which case this defaults to `True`. |
| `is_positive_definite` | Expect that this operator is positive definite. Default is `None`, unless `base_operator` is positive-definite, `v = None` (meaning `u=v`), and `is_diag_update_positive` is `True`, in which case this defaults to `True`. Note that we say an operator is positive definite when the quadratic form `x^H A x` has positive real part for all nonzero `x`. |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. |
| Raises |
| `ValueError` | If `is_X` flags are set in an inconsistent way. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `base_operator` | If this operator is `A = L + U D V^H`, this is the `L`. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `diag_operator` | If this operator is `A = L + U D V^H`, this is `D`. |
| `diag_update` | If this operator is `A = L + U D V^H`, this is the diagonal of `D`. |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_diag_update_positive` | If this operator is `A = L + U D V^H`, this hints `D > 0` elementwise. |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on whether this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
| `u` | If this operator is `A = L + U D V^H`, this is the `U`. |
| `v` | If this operator is `A = L + U D V^H`, this is the `V`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
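For example (a sketch with an illustrative diagonal operator):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([1., 2.])
operator.add_to_tensor(tf.zeros([2, 2]))
# ==> [[1., 0.], [0., 2.]]
```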
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix, meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector, meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.LinearOperatorCirculant tf.linalg.LinearOperatorCirculant
=================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L523-L775) |
`LinearOperator` acting like a circulant matrix.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorCirculant`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorCirculant)
```
tf.linalg.LinearOperatorCirculant(
spectrum,
input_output_dtype=tf.dtypes.complex64,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=True,
name='LinearOperatorCirculant'
)
```
This operator acts like a circulant matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is an `N x N` matrix. This matrix `A` is not materialized, but for purposes of broadcasting this shape will be relevant.
#### Description in terms of circulant matrices
Circulant means the entries of `A` are generated by a single vector, the convolution kernel `h`: `A_{mn} := h_{m-n mod N}`. With `h = [w, x, y, z]`,
```
A = |w z y x|
|x w z y|
|y x w z|
|z y x w|
```
This means that the result of matrix multiplication `v = Au` has `Lth` column given by the circular convolution of `h` with the `Lth` column of `u`.
#### Description in terms of the frequency spectrum
There is an equivalent description in terms of the [batch] spectrum `H` and Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch dimensions. Define the discrete Fourier transform (DFT) and its inverse by
```
DFT[ h[n] ] = H[k] := sum_{n = 0}^{N - 1} h_n e^{-i 2pi k n / N}
IDFT[ H[k] ] = h[n] = N^{-1} sum_{k = 0}^{N - 1} H_k e^{i 2pi k n / N}
```
From these definitions, we see that
```
H[0] = sum_{n = 0}^{N - 1} h_n
H[1] = "the first positive frequency"
H[N - 1] = "the first negative frequency"
```
Loosely speaking, with `*` element-wise multiplication, matrix multiplication is equal to the action of a Fourier multiplier: `A u = IDFT[ H * DFT[u] ]`. Precisely speaking, given `[N, R]` matrix `u`, let `DFT[u]` be the `[N, R]` matrix with `rth` column equal to the DFT of the `rth` column of `u`. Define the `IDFT` similarly. Matrix multiplication may be expressed columnwise:
`(A u)_r = IDFT[ H * (DFT[u])_r ]`
#### Operator properties deduced from the spectrum.
Letting `u` be the `kth` Euclidean basis vector and `U = IDFT[u]`, the above formulas show that `A U = H_k * U`. We conclude that the elements of `H` are the eigenvalues of this operator. Therefore
* This operator is positive definite if and only if `Real{H} > 0`.
A general property of Fourier transforms is the correspondence between Hermitian functions and real valued transforms.
Suppose `H.shape = [B1,...,Bb, N]`. We say that `H` is a Hermitian spectrum if, with `%` meaning modulus division,
`H[..., n % N] = ComplexConjugate[ H[..., (-n) % N] ]`
* This operator corresponds to a real matrix if and only if `H` is Hermitian.
* This operator is self-adjoint if and only if `H` is real.
See e.g. "Discrete-Time Signal Processing", Oppenheim and Schafer.
#### Example of a self-adjoint positive definite operator
```
# spectrum is real ==> operator is self-adjoint
# spectrum is positive ==> operator is positive definite
spectrum = [6., 4, 2]
operator = LinearOperatorCirculant(spectrum)
# IFFT[spectrum]
operator.convolution_kernel()
==> [4 + 0j, 1 + 0.58j, 1 - 0.58j]
operator.to_dense()
==> [[4 + 0.0j, 1 - 0.6j, 1 + 0.6j],
[1 + 0.6j, 4 + 0.0j, 1 - 0.6j],
[1 - 0.6j, 1 + 0.6j, 4 + 0.0j]]
```
#### Example of defining in terms of a real convolution kernel
```
# convolution_kernel is real ==> spectrum is Hermitian.
convolution_kernel = [1., 2., 1.]
spectrum = tf.signal.fft(tf.cast(convolution_kernel, tf.complex64))
# spectrum is Hermitian ==> operator is real.
# spectrum is shape [3] ==> operator is shape [3, 3]
# We force the input/output type to be real, which allows this to operate
# like a real matrix.
operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32)
operator.to_dense()
==> [[ 1, 1, 2],
[ 2, 1, 1],
[ 1, 2, 1]]
```
#### Example of Hermitian spectrum
```
# spectrum is shape [3] ==> operator is shape [3, 3]
# spectrum is Hermitian ==> operator is real.
spectrum = [1, 1j, -1j]
operator = LinearOperatorCirculant(spectrum)
operator.to_dense()
==> [[ 0.33 + 0j, 0.91 + 0j, -0.24 + 0j],
[-0.24 + 0j, 0.33 + 0j, 0.91 + 0j],
[ 0.91 + 0j, -0.24 + 0j, 0.33 + 0j]
```
#### Example of forcing real `dtype` when spectrum is Hermitian
```
# spectrum is shape [4] ==> operator is shape [4, 4]
# spectrum is real ==> operator is self-adjoint
# spectrum is Hermitian ==> operator is real
# spectrum has positive real part ==> operator is positive-definite.
spectrum = [6., 4, 2, 4]
# Force the input dtype to be float32.
# Cast the output to float32. This is fine because the operator will be
# real due to Hermitian spectrum.
operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32)
operator.shape
==> [4, 4]
operator.to_dense()
==> [[4, 1, 0, 1],
[1, 4, 1, 0],
[0, 1, 4, 1],
[1, 0, 1, 4]]
# convolution_kernel = tf.signal.ifft(spectrum)
operator.convolution_kernel()
==> [4, 1, 0, 1]
```
#### Performance
Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`, and `x.shape = [N, R]`. Then
* `operator.matmul(x)` is `O(R*N*Log[N])`
* `operator.solve(x)` is `O(R*N*Log[N])`
* `operator.determinant()` involves a size `N` `reduce_prod`.
If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
#### References:
Toeplitz and Circulant Matrices - A Review: [Gray, 2006](https://www.nowpublishers.com/article/Details/CIT-006) ([pdf](https://ee.stanford.edu/%7Egray/toeplitz.pdf))
| Args |
| `spectrum` | Shape `[B1,...,Bb, N]` `Tensor`. Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. Its type may differ from `input_output_dtype`. |
| `input_output_dtype` | `dtype` for input/output. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. If `spectrum` is real, this will always be true. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name to prepend to all ops created by this class. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `block_depth` | Depth of recursively defined circulant blocks defining this `Operator`. With `A` the dense representation of this `Operator`, `block_depth = 1` means `A` is symmetric circulant. For example,
```
A = |w z y x|
|x w z y|
|y x w z|
|z y x w|
```
`block_depth = 2` means `A` is block symmetric circulant with symmetric circulant blocks. For example, with `W`, `X`, `Y`, `Z` symmetric circulant,
```
A = |W Z Y X|
|X W Z Y|
|Y X W Z|
|Z Y X W|
```
`block_depth = 3` means `A` is block symmetric circulant with block symmetric circulant blocks. |
| `block_shape` | |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on whether this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `spectrum` | |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_hermitian_spectrum`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L335-L355)
```
assert_hermitian_spectrum(
name='assert_hermitian_spectrum'
)
```
Returns an `Op` that asserts this operator has Hermitian spectrum.
This operator corresponds to a real-valued matrix if and only if its spectrum is Hermitian.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Op` that asserts this operator has Hermitian spectrum. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `block_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L175-L179)
```
block_shape_tensor()
```
Shape of the block dimensions of `self.spectrum`.
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e., the Cholesky decomposition.
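A minimal sketch with an illustrative matrix (the `is_*` hints are required for `cholesky` to be allowed):
```
operator = tf.linalg.LinearOperatorFullMatrix(
    [[4., 2.], [2., 3.]], is_self_adjoint=True, is_positive_definite=True)
chol = operator.cholesky()
chol.to_dense()  # Lower triangular L satisfying A = L L^T.
```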
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self-adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
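A minimal sketch with illustrative values:
```
operator = tf.linalg.LinearOperatorFullMatrix([[4., 0.], [0., 2.]])
operator.cond()  # ==> 2.0, the ratio of largest to smallest singular value.
```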
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `convolution_kernel`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_circulant.py#L290-L304)
```
convolution_kernel(
name='convolution_kernel'
)
```
Convolution kernel corresponding to `self.spectrum`.
The `D` dimensional DFT of this kernel is the frequency domain spectrum of this operator.
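A minimal sketch (using the 1-D `tf.linalg.LinearOperatorCirculant` and an illustrative kernel):
```
kernel = tf.cast([1., 2., 3.], tf.complex64)
operator = tf.linalg.LinearOperatorCirculant(tf.signal.fft(kernel))
operator.convolution_kernel()  # ==> approximately [1., 2., 3.] (complex).
```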
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| `Tensor` with `dtype` `self.dtype`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
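A minimal sketch with an illustrative self-adjoint matrix:
```
operator = tf.linalg.LinearOperatorFullMatrix(
    [[2., 0.], [0., 3.]], is_self_adjoint=True)
operator.eigvals()  # ==> [2., 3.]
```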
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
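A minimal sketch with illustrative values (the `is_non_singular` hint is required):
```
operator = tf.linalg.LinearOperatorFullMatrix(
    [[2., 0.], [0., 4.]], is_non_singular=True)
operator.inverse().to_dense()  # ==> [[0.5, 0.], [0., 0.25]]
```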
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve a single system of equations with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.lu_reconstruct tf.linalg.lu\_reconstruct
=========================
Reconstructs one or more matrices from their LU decomposition(s).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.lu_reconstruct`](https://www.tensorflow.org/api_docs/python/tf/linalg/lu_reconstruct)
```
tf.linalg.lu_reconstruct(
lower_upper, perm, validate_args=False, name=None
)
```
| Args |
| `lower_upper` | `lu` as returned by [`tf.linalg.lu`](lu), i.e., if `matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`. |
| `perm` | `p` as returned by [`tf.linalg.lu`](lu), i.e., if `matmul(P, matmul(L, U)) = X` then `perm = argmax(P)`. |
| `validate_args` | Python `bool` indicating whether arguments should be checked for correctness. Default value: `False` (i.e., don't validate arguments). |
| `name` | Python `str` name given to ops managed by this object. Default value: `None` (i.e., 'lu\_reconstruct'). |
| Returns |
| `x` | The original input to [`tf.linalg.lu`](lu), i.e., `x` as in, `lu_reconstruct(*tf.linalg.lu(x))`. |
#### Examples
```
import tensorflow as tf
x = [[[3., 4], [1, 2]],
[[7., 8], [3, 4]]]
x_reconstructed = tf.linalg.lu_reconstruct(*tf.linalg.lu(x))
tf.debugging.assert_near(x, x_reconstructed)  # Passes: x is recovered.
```
tensorflow tf.linalg.LinearOperatorComposition tf.linalg.LinearOperatorComposition
===================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_composition.py#L32-L290) |
Composes one or more `LinearOperators`.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorComposition`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorComposition)
```
tf.linalg.LinearOperatorComposition(
operators,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name=None
)
```
This operator composes one or more linear operators `[op1,...,opJ]`, building a new `LinearOperator` with action defined by:
```
op_composed(x) := op1(op2(...(opJ(x))...))
```
If `opj` acts like [batch] matrix `Aj`, then `op_composed` acts like the [batch] matrix formed with the multiplication `A1 A2...AJ`.
If `opj` has shape `batch_shape_j + [M_j, N_j]`, then we must have `N_j = M_{j+1}`, in which case the composed operator has shape equal to `broadcast_batch_shape + [M_1, N_J]`, where `broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`, `j = 1,...,J`, assuming the intermediate batch shapes broadcast. Even if the composed shape is well defined, the composed operator's methods may fail due to lack of broadcasting ability in the defining operators' methods.
```
# Create a 2 x 2 linear operator composed of two 2 x 2 operators.
operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
operator_2 = LinearOperatorFullMatrix([[1., 0.], [0., 1.]])
operator = LinearOperatorComposition([operator_1, operator_2])
operator.to_dense()
==> [[1., 2.]
[3., 4.]]
operator.shape
==> [2, 2]
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> Shape [2, 4] Tensor
# Create a [2, 3] batch of 4 x 5 linear operators.
matrix_45 = tf.random.normal(shape=[2, 3, 4, 5])
operator_45 = LinearOperatorFullMatrix(matrix_45)
# Create a [2, 3] batch of 5 x 6 linear operators.
matrix_56 = tf.random.normal(shape=[2, 3, 5, 6])
operator_56 = LinearOperatorFullMatrix(matrix_56)
# Compose to create a [2, 3] batch of 4 x 6 operators.
operator_46 = LinearOperatorComposition([operator_45, operator_56])
# Create a [2, 3] batch of 6 x 2 matrices.
x = tf.random.normal(shape=[2, 3, 6, 2])
operator_46.matmul(x)
==> Shape [2, 3, 4, 2] Tensor
```
#### Performance
The performance of `LinearOperatorComposition` on any operation is equal to the sum of the individual operators' operations.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `operators` | Iterable of `LinearOperator` objects, each with the same `dtype` and composable shape. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. Default is the individual operators' names joined with `_o_`. |
| Raises |
| `TypeError` | If all operators do not have the same `dtype`. |
| `ValueError` | If `operators` is empty. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `operators` | |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e., the Cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self-adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions define a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve a single system of equations with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.solve tf.linalg.solve
===============
Solves systems of linear equations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.solve`](https://www.tensorflow.org/api_docs/python/tf/linalg/solve), [`tf.compat.v1.matrix_solve`](https://www.tensorflow.org/api_docs/python/tf/linalg/solve)
```
tf.linalg.solve(
matrix, rhs, adjoint=False, name=None
)
```
`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. `rhs` is a tensor of shape `[..., M, K]`. The `output` is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output matrix satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. If `adjoint` is `True` then each output matrix satisfies `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.
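#### For example:
A minimal sketch with illustrative values:
```
matrix = tf.constant([[2., 0.], [0., 4.]])
rhs = tf.constant([[2.], [8.]])
x = tf.linalg.solve(matrix, rhs)
# x ==> [[1.], [2.]], since matrix @ x == rhs.
```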
| Args |
| `matrix` | A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`. |
| `rhs` | A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M, K]`. |
| `adjoint` | An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with `matrix` or its (block-wise) adjoint. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `matrix`. |
tensorflow tf.linalg.lu tf.linalg.lu
============
Computes the LU decomposition of one or more square matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.lu`](https://www.tensorflow.org/api_docs/python/tf/linalg/lu)
```
tf.linalg.lu(
input,
output_idx_type=tf.dtypes.int32,
name=None
)
```
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices.
The input has to be invertible.
The output consists of two tensors LU and P containing the LU decomposition of all input submatrices `[..., :, :]`. LU encodes the lower triangular and upper triangular factors.
For each input submatrix of shape `[M, M]`, L is a lower triangular matrix of shape `[M, M]` with unit diagonal whose entries correspond to the strictly lower triangular part of LU. U is an upper triangular matrix of shape `[M, M]` whose entries correspond to the upper triangular part, including the diagonal, of LU.
P represents a permutation matrix encoded as a list of indices each between `0` and `M-1`, inclusive. If `P_mat` denotes the permutation matrix corresponding to P, then L, U, and P satisfy `P_mat * input = L * U`.
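#### For example:
A minimal sketch with illustrative values:
```
x = tf.constant([[4., 3.], [6., 3.]])
lu, p = tf.linalg.lu(x)
# `lu` packs the strictly lower part of L (unit diagonal implied) together
# with U (including the diagonal); `p` encodes the row permutation.
```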
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. A tensor of shape `[..., M, M]` whose inner-most 2 dimensions form matrices of size `[M, M]`. |
| `output_idx_type` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.int32, tf.int64`. Defaults to [`tf.int32`](../../tf#int32). |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (lu, p). |
| `lu` | A `Tensor`. Has the same type as `input`. |
| `p` | A `Tensor` of type `output_idx_type`. |
tensorflow tf.linalg.diag_part tf.linalg.diag\_part
====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L2627-L2767) |
Returns the batched diagonal part of a batched tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.diag_part`](https://www.tensorflow.org/api_docs/python/tf/linalg/diag_part), [`tf.compat.v1.matrix_diag_part`](https://www.tensorflow.org/api_docs/python/tf/linalg/diag_part)
```
tf.linalg.diag_part(
input,
name='diag_part',
k=0,
padding_value=0,
align='RIGHT_LEFT'
)
```
Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched `input`.
Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`. Let `max_diag_len` be the maximum length among all diagonals to be extracted, `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`. Let `num_diags` be the number of diagonals to extract, `num_diags = k[1] - k[0] + 1`.
If `num_diags == 1`, the output tensor is of rank `r - 1` with shape `[I, J, ..., L, max_diag_len]` and values:
```
diagonal[i, j, ..., l, n]
= input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,
padding_value ; otherwise.
```
where `y = max(-k[1], 0)`, `x = max(k[1], 0)`.
Otherwise, the output tensor has rank `r` with dimensions `[I, J, ..., L, num_diags, max_diag_len]` with values:
```
diagonal[i, j, ..., l, m, n]
= input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,
padding_value ; otherwise.
```
where `d = k[1] - m`, `y = max(-d, 0) - offset`, and `x = max(d, 0) - offset`.
`offset` is zero except when the alignment of the diagonal is to the right.
```
offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
and `d >= 0`) or
(`align` in {LEFT_RIGHT, RIGHT_RIGHT}
and `d <= 0`)
0 ; otherwise
```
where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.
The input must be at least a matrix.
#### For example:
```
input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4)
[5, 6, 7, 8],
[9, 8, 7, 6]],
[[5, 4, 3, 2],
[1, 2, 3, 4],
[5, 6, 7, 8]]])
# A main diagonal from each batch.
tf.linalg.diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3)
[5, 2, 7]]
# A superdiagonal from each batch.
tf.linalg.diag_part(input, k = 1)
==> [[2, 7, 6], # Output shape: (2, 3)
[4, 3, 8]]
# A band from each batch.
tf.linalg.diag_part(input, k = (-1, 2))
==> [[[3, 8, 0], # Output shape: (2, 4, 3)
[2, 7, 6],
[1, 6, 7],
[0, 5, 8]],
[[3, 4, 0],
[4, 3, 8],
[5, 2, 7],
[0, 1, 6]]]
# RIGHT_LEFT alignment.
tf.linalg.diag_part(input, k = (-1, 2), align="RIGHT_LEFT")
==> [[[0, 3, 8], # Output shape: (2, 4, 3)
[2, 7, 6],
[1, 6, 7],
[5, 8, 0]],
[[0, 3, 4],
[4, 3, 8],
[5, 2, 7],
[1, 6, 0]]]
# max_diag_len can be shorter than the main diagonal.
tf.linalg.diag_part(input, k = (-2, -1))
==> [[[5, 8],
[0, 9]],
[[1, 6],
[0, 5]]]
# padding_value = 9
tf.linalg.diag_part(input, k = (1, 3), padding_value = 9)
==> [[[4, 9, 9], # Output shape: (2, 3, 3)
[3, 8, 9],
[2, 7, 6]],
[[2, 9, 9],
[3, 4, 9],
[4, 3, 8]]]
```
| Args |
| `input` | A `Tensor` with `rank k >= 2`. |
| `name` | A name for the operation (optional). |
| `k` | Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. `k` can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. `k[0]` must not be larger than `k[1]`. |
| `padding_value` | The value to fill the area outside the specified diagonal band with. Default is 0. |
| `align` | Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. There are four possible alignments: "RIGHT\_LEFT" (default), "LEFT\_RIGHT", "LEFT\_LEFT", and "RIGHT\_RIGHT". "RIGHT\_LEFT" aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row). It is the packing format LAPACK uses. cuSPARSE uses "LEFT\_RIGHT", which is the opposite alignment. |
| Returns |
| A Tensor containing diagonals of `input`. Has the same type as `input`. |
| Raises |
| `InvalidArgumentError` | When `k` is out of bound or when `k[0] > k[1]`. |
tensorflow tf.linalg.eigvals tf.linalg.eigvals
=================
Computes the eigenvalues of one or more matrices.
#### View aliases
**Main aliases**
[`tf.eigvals`](https://www.tensorflow.org/api_docs/python/tf/linalg/eigvals)
```
tf.linalg.eigvals(
tensor, name=None
)
```
>
> **Note:** If your program backpropagates through this function, you should replace it with a call to tf.linalg.eig (possibly ignoring the second output) to avoid computing the eigen decomposition twice. This is because the eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See \_SelfAdjointEigV2Grad in linalg\_grad.py.
>
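#### For example:
A minimal sketch with an illustrative matrix:
```
A = tf.constant([[2., 0.], [0., 3.]])
tf.linalg.eigvals(A)  # ==> approximately [2.+0.j, 3.+0.j]
```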
| Args |
| `tensor` | `Tensor` of shape `[..., N, N]`. |
| `name` | string, optional name of the operation. |
| Returns |
| `e` | Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N` eigenvalues of `tensor[..., :, :]`. |
tensorflow tf.linalg.adjoint tf.linalg.adjoint
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linalg_impl.py#L99-L125) |
Transposes the last two dimensions of and conjugates tensor `matrix`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.adjoint`](https://www.tensorflow.org/api_docs/python/tf/linalg/adjoint)
```
tf.linalg.adjoint(
matrix, name=None
)
```
#### For example:
```
x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],
[4 + 4j, 5 + 5j, 6 + 6j]])
tf.linalg.adjoint(x) # [[1 - 1j, 4 - 4j],
# [2 - 2j, 5 - 5j],
# [3 - 3j, 6 - 6j]]
```
| Args |
| `matrix` | A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`, or `complex128` with shape `[..., M, M]`. |
| `name` | A name to give this `Op` (optional). |
| Returns |
| The adjoint (a.k.a. Hermitian transpose a.k.a. conjugate transpose) of matrix. |
tensorflow tf.linalg.normalize tf.linalg.normalize
===================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L591-L641) |
Normalizes `tensor` along dimension `axis` using specified norm.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.normalize`](https://www.tensorflow.org/api_docs/python/tf/linalg/normalize)
```
tf.linalg.normalize(
tensor, ord='euclidean', axis=None, name=None
)
```
This uses [`tf.linalg.norm`](../norm) to compute the norm along `axis`.
This function can compute several different vector norms (the 1-norm, the Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and matrix norms (Frobenius, 1-norm, 2-norm and inf-norm).
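#### For example:
A minimal sketch with illustrative values:
```
t = tf.constant([[3., 4.]])
normalized, norm = tf.linalg.normalize(t, axis=1)
# normalized ==> [[0.6, 0.8]], norm ==> [[5.]]
```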
| Args |
| `tensor` | `Tensor` of types `float32`, `float64`, `complex64`, `complex128` |
| `ord` | Order of the norm. Supported values are `'fro'`, `'euclidean'`, `1`, `2`, `np.inf` and any positive real number yielding the corresponding p-norm. Default is `'euclidean'` which is equivalent to Frobenius norm if `tensor` is a matrix and equivalent to 2-norm for vectors. Some restrictions apply: a) The Frobenius norm `'fro'` is not defined for vectors, b) If axis is a 2-tuple (matrix norm), only `'euclidean'`, `'fro'`, `1`, `2`, `np.inf` are supported. See the description of `axis` on how to compute norms for a batch of vectors or matrices stored in a tensor. |
| `axis` | If `axis` is `None` (the default), the input is considered a vector and a single vector norm is computed over the entire set of values in the tensor, i.e. `norm(tensor, ord=ord)` is equivalent to `norm(reshape(tensor, [-1]), ord=ord)`. If `axis` is a Python integer, the input is considered a batch of vectors, and `axis` determines the axis in `tensor` over which to compute vector norms. If `axis` is a 2-tuple of Python integers it is considered a batch of matrices and `axis` determines the axes in `tensor` over which to compute a matrix norm. Negative indices are supported. Example: If you are passing a tensor that can be either a matrix or a batch of matrices at runtime, pass `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are computed. |
| `name` | The name of the op. |
| Returns |
| `normalized` | A normalized `Tensor` with the same shape as `tensor`. |
| `norm` | The computed norms, with the same shape and dtype as `tensor` but with the final axis of size 1 instead. Same as running `tf.cast(tf.linalg.norm(tensor, ord, axis, keepdims=True), tensor.dtype)`. |
| Raises |
| `ValueError` | If `ord` or `axis` is invalid. |
tensorflow tf.linalg.tridiagonal_solve tf.linalg.tridiagonal\_solve
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linalg_impl.py#L443-L614) |
Solves tridiagonal systems of equations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.tridiagonal_solve`](https://www.tensorflow.org/api_docs/python/tf/linalg/tridiagonal_solve)
```
tf.linalg.tridiagonal_solve(
diagonals,
rhs,
diagonals_format='compact',
transpose_rhs=False,
conjugate_rhs=False,
name=None,
partial_pivoting=True,
perturb_singular=False
)
```
The input can be supplied in various formats: `matrix`, `sequence` and `compact`, specified by the `diagonals_format` arg.
In `matrix` format, `diagonals` must be a tensor of shape `[..., M, M]`, with two inner-most dimensions representing the square tridiagonal matrices. Elements outside of the three diagonals will be ignored.
In `sequence` format, `diagonals` are supplied as a tuple or list of three tensors of shapes `[..., N]`, `[..., M]`, `[..., N]` representing superdiagonals, diagonals, and subdiagonals, respectively. `N` can be either `M-1` or `M`; in the latter case, the last element of superdiagonal and the first element of subdiagonal will be ignored.
In `compact` format the three diagonals are brought together into one tensor of shape `[..., 3, M]`, with the last two dimensions containing superdiagonals, diagonals, and subdiagonals, in that order. Similarly to the `sequence` format, elements `diagonals[..., 0, M-1]` and `diagonals[..., 2, 0]` are ignored.
The `compact` format is recommended as the one with best performance. In case you need to convert a tensor into the compact format manually, use [`tf.gather_nd`](../gather_nd). An example for a tensor of shape [m, m]:
```
rhs = tf.constant([...])
matrix = tf.constant([[...]])
m = matrix.shape[0]
dummy_idx = [0, 0] # An arbitrary element to use as a dummy
indices = [[[i, i + 1] for i in range(m - 1)] + [dummy_idx], # Superdiagonal
[[i, i] for i in range(m)], # Diagonal
[dummy_idx] + [[i + 1, i] for i in range(m - 1)]] # Subdiagonal
diagonals = tf.gather_nd(matrix, indices)
x = tf.linalg.tridiagonal_solve(diagonals, rhs)
```
Regardless of the `diagonals_format`, `rhs` is a tensor of shape `[..., M]` or `[..., M, K]`. The latter allows solving `K` systems simultaneously with the same left-hand sides and `K` different right-hand sides. If `transpose_rhs` is set to `True` the expected shape is `[..., M]` or `[..., K, M]`.
The batch dimensions, denoted as `...`, must be the same in `diagonals` and `rhs`.
The output is a tensor of the same shape as `rhs`: either `[..., M]` or `[..., M, K]`.
The op isn't guaranteed to raise an error if the input matrix is not invertible. [`tf.debugging.check_numerics`](../debugging/check_numerics) can be applied to the output to detect invertibility problems.
>
> **Note:** with large batch sizes, the computation on the GPU may be slow, if either `partial_pivoting=True` or there are multiple right-hand sides (`K > 1`). If this issue arises, consider if it's possible to disable pivoting and have `K = 1`, or, alternatively, consider using CPU.
>
On CPU, the solution is computed via Gaussian elimination with or without partial pivoting, depending on the `partial_pivoting` parameter. On GPU, Nvidia's cuSPARSE library is used: <https://docs.nvidia.com/cuda/cusparse/index.html#gtsv>
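#### For example:
A minimal sketch in the `compact` format, with illustrative values:
```
# Solve A x = rhs for the tridiagonal matrix
# A = [[2., 1., 0.],
#      [1., 2., 1.],
#      [0., 1., 2.]]
superdiag = [1., 1., 0.]  # Last element is ignored.
maindiag = [2., 2., 2.]
subdiag = [0., 1., 1.]  # First element is ignored.
diagonals = tf.constant([superdiag, maindiag, subdiag])
rhs = tf.constant([3., 4., 3.])
tf.linalg.tridiagonal_solve(diagonals, rhs)  # ==> approximately [1., 1., 1.]
```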
| Args |
| `diagonals` | A `Tensor` or tuple of `Tensor`s describing left-hand sides. The shape depends on `diagonals_format`; see the description above. Must be `float32`, `float64`, `complex64`, or `complex128`. |
| `rhs` | A `Tensor` of shape [..., M] or [..., M, K] and with the same dtype as `diagonals`. Note that if the shape of `rhs` and/or `diags` isn't known statically, `rhs` will be treated as a matrix rather than a vector. |
| `diagonals_format` | one of `matrix`, `sequence`, or `compact`. Default is `compact`. |
| `transpose_rhs` | If `True`, `rhs` is transposed before solving (has no effect if the shape of rhs is [..., M]). |
| `conjugate_rhs` | If `True`, `rhs` is conjugated before solving. |
| `name` | A name to give this `Op` (optional). |
| `partial_pivoting` | whether to perform partial pivoting. `True` by default. Partial pivoting makes the procedure more stable, but slower. Partial pivoting is unnecessary in some cases, including diagonally dominant and symmetric positive definite matrices (see e.g. theorem 9.12 in [1]). |
| `perturb_singular` | whether to perturb singular matrices to return a finite result. `False` by default. If true, solutions to systems involving a singular matrix will be computed by perturbing near-zero pivots in the partially pivoted LU decomposition. Specifically, tiny pivots are perturbed by an amount of order `eps * max_{ij} |U(i,j)|` to avoid overflow. Here `U` is the upper triangular part of the LU decomposition, and `eps` is the machine precision. This is useful for solving numerically singular systems when computing eigenvectors by inverse iteration. If `partial_pivoting` is `False`, `perturb_singular` must be `False` as well. |
| Returns |
| A `Tensor` of shape [..., M] or [..., M, K] containing the solutions. If the input matrix is singular, the result is undefined. |
| Raises |
| `ValueError` | Is raised if any of the following conditions hold: 1. An unsupported type is provided as input,
2. the input tensors have incorrect shapes,
3. `perturb_singular` is `True` but `partial_pivoting` is not.
|
| `UnimplementedError` | Whenever `partial_pivoting` is true and the backend is XLA, or whenever `perturb_singular` is true and the backend is XLA or GPU. |
[1] Nicholas J. Higham (2002). Accuracy and Stability of Numerical Algorithms: Second Edition. SIAM. p. 175. ISBN 978-0-89871-802-7.
tensorflow tf.linalg.LinearOperatorScaledIdentity tf.linalg.LinearOperatorScaledIdentity
======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_identity.py#L493-L784) |
`LinearOperator` acting like a scaled [batch] identity matrix `A = c I`.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorScaledIdentity`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorScaledIdentity)
```
tf.linalg.LinearOperatorScaledIdentity(
num_rows,
multiplier,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=True,
assert_proper_shapes=False,
name='LinearOperatorScaledIdentity'
)
```
This operator acts like a scaled [batch] identity matrix `A` with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is a scaled version of the `N x N` identity matrix.
`LinearOperatorScaledIdentity` is initialized with `num_rows`, and a `multiplier` (a `Tensor`) of shape `[B1,...,Bb]`. `N` is set to `num_rows`, and the `multiplier` determines the scale for each batch member.
```
# Create a 2 x 2 scaled identity matrix.
operator = LinearOperatorScaledIdentity(num_rows=2, multiplier=3.)
operator.to_dense()
==> [[3., 0.]
[0., 3.]]
operator.shape
==> [2, 2]
operator.log_abs_determinant()
==> 2 * log(3)
x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> 3 * x
y = tf.random.normal(shape=[3, 2, 4])
# Note that y.shape is compatible with operator.shape because operator.shape
# is broadcast to [3, 2, 2].
x = operator.solve(y)
==> y / 3
# Create a 2-batch of 2 x 2 scaled identity matrices.
operator = LinearOperatorScaledIdentity(num_rows=2, multiplier=[5., 5.])
operator.to_dense()
==> [[[5., 0.]
[0., 5.]],
[[5., 0.]
[0., 5.]]]
x = ... Shape [2, 2, 3]
operator.matmul(x)
==> 5 * x
# Here the operator and x have different batch_shape, and are broadcast.
x = ... Shape [1, 2, 3]
operator.matmul(x)
==> 5 * x
```
### Shape compatibility
This operator acts on [batch] matrices with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, N], with b >= 0
x.shape = [C1,...,Cc] + [N, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
```
### Performance
* `operator.matmul(x)` is `O(D1*...*Dd*N*R)`
* `operator.solve(x)` is `O(D1*...*Dd*N*R)`
* `operator.determinant()` is `O(D1*...*Dd)`
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `num_rows` | Scalar non-negative integer `Tensor`. Number of rows in the corresponding identity matrix. |
| `multiplier` | `Tensor` of shape `[B1,...,Bb]`, or `[]` (a scalar). |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `assert_proper_shapes` | Python `bool`. If `False`, only perform static checks that initialization and method arguments have proper shape. If `True`, and static checks are inconclusive, add asserts to the graph. |
| `name` | A name for this `LinearOperator` |
| Raises |
| `ValueError` | If `num_rows` is determined statically to be non-scalar, or negative. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `multiplier` | The [batch] scalar `Tensor`, `c` in `cI`. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_identity.py#L737-L760)
```
add_to_tensor(
mat, name='add_to_tensor'
)
```
Add the matrix represented by this operator to `mat`. Equivalent to `c I + mat`, where `c` is this operator's `multiplier`.
| Args |
| `mat` | `Tensor` with same `dtype` and shape broadcastable to `self`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
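Since `A = c I`, adding this operator to a dense matrix simply adds `c` to the diagonal. A short sketch (the concrete values are illustrative):
```
import tensorflow as tf

operator = tf.linalg.LinearOperatorScaledIdentity(num_rows=2, multiplier=3.)
mat = tf.ones([2, 2])
# Adds c = 3 to the diagonal of mat: [[4., 1.], [1., 4.]]
print(operator.add_to_tensor(mat))
```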
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.LinearOperatorPermutation tf.linalg.LinearOperatorPermutation
===================================
`LinearOperator` acting like a [batch] of permutation matrices.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorPermutation`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorPermutation)
```
tf.linalg.LinearOperatorPermutation(
perm,
dtype=tf.dtypes.float32,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name='LinearOperatorPermutation'
)
```
This operator acts like a [batch] of permutations with shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is an `N x N` matrix. This matrix `A` is not materialized, but its shape is still relevant for broadcasting purposes.
`LinearOperatorPermutation` is initialized with a [batch] integer vector `perm`.
A permutation is defined by an integer vector `v` whose values are unique and lie in the range `[0, ..., N - 1]`. Applying the permutation to an input matrix has the following meaning: the value of `v` at index `i` says to move the `v[i]`-th row of the input matrix to the `i`-th row. Because all values are unique, this results in a permutation of the rows of the input matrix. Note that the permutation vector `v` has the same semantics as [`tf.transpose`](../transpose).
```
# Create a 3 x 3 permutation matrix that swaps the last two rows.
vec = [0, 2, 1]
operator = LinearOperatorPermutation(vec)
operator.to_dense()
==> [[1., 0., 0.]
[0., 0., 1.]
[0., 1., 0.]]
operator.shape
==> [3, 3]
# This will be zero.
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [3, 4] Tensor
operator.matmul(x)
==> Shape [3, 4] Tensor
```
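Because row `i` of the matrix selects element `perm[i]` of its input, `matvec` agrees with a plain `tf.gather`. A minimal runnable check, reusing the illustrative permutation above:
```
import tensorflow as tf

perm = [0, 2, 1]
operator = tf.linalg.LinearOperatorPermutation(perm)
x = tf.constant([10., 20., 30.])
# (A x)[i] = x[perm[i]], i.e. a gather along the first axis.
tf.debugging.assert_near(operator.matvec(x), tf.gather(x, perm))
```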
#### Shape compatibility
This operator acts on [batch] matrix with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if
```
operator.shape = [B1,...,Bb] + [N, N], with b >= 0
x.shape = [C1,...,Cc] + [N, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
```
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `perm` | Shape `[B1,...,Bb, N]` integer `Tensor` with `b >= 0`, `N >= 0`. An integer vector that represents the permutation to apply. Note that this argument has the same semantics as the `perm` argument of [`tf.transpose`](../transpose); however, this permutation is applied to the rows, while the permutation in [`tf.transpose`](../transpose) is applied to the dimensions of the `Tensor`. `perm` is required to have unique entries from `{0, 1, ..., N-1}`. |
| `dtype` | The `dtype` of arguments to this operator. Default: `float32`. Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, `complex128`. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> This is autoset to false. |
| `is_square` | Expect that this operator acts like square [batch] matrices. This is autoset to true. |
| `name` | A name for this `LinearOperator`. |
| Raises |
| `ValueError` | If `is_self_adjoint` is not `True`, `is_positive_definite` is not `False`, or `is_square` is not `True`. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `perm` | |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the Inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[...,N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.matmul tf.linalg.matmul
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L3488-L3714) |
Multiplies matrix `a` by matrix `b`, producing `a` \* `b`.
#### View aliases
**Main aliases**
[`tf.matmul`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.matmul`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul), [`tf.compat.v1.matmul`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul)
```
tf.linalg.matmul(
a,
b,
transpose_a=False,
transpose_b=False,
adjoint_a=False,
adjoint_b=False,
a_is_sparse=False,
b_is_sparse=False,
output_type=None,
name=None
)
```
The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.
Both matrices must be of the same type. The supported types are: `bfloat16`, `float16`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flag to `True`. These are `False` by default.
If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes `bfloat16` or `float32`.
A simple 2-D tensor matrix multiplication:
```
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a # 2-D tensor
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6]], dtype=int32)>
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b # 2-D tensor
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 7, 8],
[ 9, 10],
[11, 12]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 58, 64],
[139, 154]], dtype=int32)>
```
A batch matrix multiplication with batch shape [2]:
```
a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a # 3-D tensor
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[ 7, 8, 9],
[10, 11, 12]]], dtype=int32)>
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b # 3-D tensor
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[13, 14],
[15, 16],
[17, 18]],
[[19, 20],
[21, 22],
[23, 24]]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[ 94, 100],
[229, 244]],
[[508, 532],
[697, 730]]], dtype=int32)>
```
Since Python 3.5, the `@` operator is supported (see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, it simply calls the [`tf.matmul()`](matmul) function, so the following lines are equivalent:
```
d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])
```
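The transposition and adjoint flags fold the rearrangement into the multiplication itself rather than materializing a transposed tensor first. A small sketch with illustrative shapes:
```
import tensorflow as tf

a = tf.random.normal(shape=[3, 2])
b = tf.random.normal(shape=[3, 4])
# Folds the transposition of `a` into the kernel; result has shape [2, 4].
c = tf.matmul(a, b, transpose_a=True)
tf.debugging.assert_near(c, tf.matmul(tf.transpose(a), b))
```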
| Args |
| `a` | [`tf.Tensor`](../tensor) of type `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128` and rank > 1. |
| `b` | [`tf.Tensor`](../tensor) with same type and rank as `a`. |
| `transpose_a` | If `True`, `a` is transposed before multiplication. |
| `transpose_b` | If `True`, `b` is transposed before multiplication. |
| `adjoint_a` | If `True`, `a` is conjugated and transposed before multiplication. |
| `adjoint_b` | If `True`, `b` is conjugated and transposed before multiplication. |
| `a_is_sparse` | If `True`, `a` is treated as a sparse matrix. Notice, this **does not support [`tf.sparse.SparseTensor`](../sparse/sparsetensor)**, it just makes optimizations that assume most values in `a` are zero. See [`tf.sparse.sparse_dense_matmul`](../sparse/sparse_dense_matmul) for some support for [`tf.sparse.SparseTensor`](../sparse/sparsetensor) multiplication. |
| `b_is_sparse` | If `True`, `b` is treated as a sparse matrix. Notice, this **does not support [`tf.sparse.SparseTensor`](../sparse/sparsetensor)**, it just makes optimizations that assume most values in `a` are zero. See [`tf.sparse.sparse_dense_matmul`](../sparse/sparse_dense_matmul) for some support for [`tf.sparse.SparseTensor`](../sparse/sparsetensor) multiplication. |
| `output_type` | The output datatype, if needed. Defaults to `None`, in which case the output type is the same as the input type. Currently this only works when the input tensors are of type (u)int8 and `output_type` is int32. |
| `name` | Name for the operation (optional). |
| Returns |
| A [`tf.Tensor`](../tensor) of the same type as `a` and `b` where each inner-most matrix is the product of the corresponding matrices in `a` and `b`, e.g. if all transpose or adjoint attributes are `False`: `output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, for all indices `i`, `j`. |
| `Note` | This is matrix product, not element-wise product. |
| Raises |
| `ValueError` | If `transpose_a` and `adjoint_a`, or `transpose_b` and `adjoint_b` are both set to `True`. |
| `TypeError` | If `output_type` is specified but the types of `a`, `b`, and `output_type` are not (u)int8, (u)int8, and int32. |
tensorflow tf.linalg.banded_triangular_solve tf.linalg.banded\_triangular\_solve
===================================
Solve triangular systems of equations with a banded solver.
```
tf.linalg.banded_triangular_solve(
bands, rhs, lower=True, adjoint=False, name=None
)
```
`bands` is a tensor of shape `[..., K, M]`, where `K` represents the number of bands stored. This corresponds to a batch of `M` by `M` matrices, whose `K` subdiagonals (when `lower` is `True`) are stored.
This operator broadcasts the batch dimensions of `bands` and the batch dimensions of `rhs`.
#### Examples:
Storing 2 bands of a 3x3 matrix. Note that the first element in the second row is ignored due to the 'LEFT_RIGHT' padding.
```
x = [[2., 3., 4.], [1., 2., 3.]]
x2 = [[2., 3., 4.], [10000., 2., 3.]]
y = tf.zeros([3, 3])
z = tf.linalg.set_diag(y, x, align='LEFT_RIGHT', k=(-1, 0))
z
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[2., 0., 0.],
[2., 3., 0.],
[0., 3., 4.]], dtype=float32)>
soln = tf.linalg.banded_triangular_solve(x, tf.ones([3, 1]))
soln
<tf.Tensor: shape=(3, 1), dtype=float32, numpy=
array([[0.5 ],
[0. ],
[0.25]], dtype=float32)>
are_equal = soln == tf.linalg.banded_triangular_solve(x2, tf.ones([3, 1]))
tf.reduce_all(are_equal).numpy()
True
are_equal = soln == tf.linalg.triangular_solve(z, tf.ones([3, 1]))
tf.reduce_all(are_equal).numpy()
True
```
Storing 2 superdiagonals of a 4x4 matrix. Because of the 'LEFT_RIGHT' padding, the last element of the first row is ignored.
```
x = [[2., 3., 4., 5.], [-1., -2., -3., -4.]]
y = tf.zeros([4, 4])
z = tf.linalg.set_diag(y, x, align='LEFT_RIGHT', k=(0, 1))
z
<tf.Tensor: shape=(4, 4), dtype=float32, numpy=
array([[-1., 2., 0., 0.],
[ 0., -2., 3., 0.],
[ 0., 0., -3., 4.],
[ 0., 0., -0., -4.]], dtype=float32)>
soln = tf.linalg.banded_triangular_solve(x, tf.ones([4, 1]), lower=False)
soln
<tf.Tensor: shape=(4, 1), dtype=float32, numpy=
array([[-4. ],
[-1.5 ],
[-0.6666667],
[-0.25 ]], dtype=float32)>
are_equal = (soln == tf.linalg.triangular_solve(
z, tf.ones([4, 1]), lower=False))
tf.reduce_all(are_equal).numpy()
True
```
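Broadcasting the batch dimensions of `bands` against those of `rhs` works as described above; a minimal sketch with hypothetical band values:
```
import tensorflow as tf

# A batch of two lower-triangular banded systems; the first element of each
# second row is ignored under the 'LEFT_RIGHT' alignment.
bands = tf.constant([[[2., 3., 4.], [9., 2., 3.]],
                     [[1., 2., 3.], [9., 1., 1.]]])
rhs = tf.ones([3, 1])  # batch shape [] broadcasts against [2]
x = tf.linalg.banded_triangular_solve(bands, rhs)
print(x.shape)  # (2, 3, 1)
```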
| Args |
| `bands` | A `Tensor` describing the bands of the left-hand side, with shape `[..., K, M]`. When `lower` is `True`, the `K` rows correspond to the main diagonal down to the `K - 1`-th subdiagonal (the main diagonal is the top row); when `lower` is `False`, they correspond to the `K - 1`-th superdiagonal down to the main diagonal (the main diagonal is the bottom row). The bands are stored with 'LEFT_RIGHT' alignment, where the superdiagonals are padded on the right and subdiagonals are padded on the left. This is the alignment cuSPARSE uses. See [`tf.linalg.set_diag`](set_diag) for more details. |
| `rhs` | A `Tensor` of shape [..., M] or [..., M, N] and with the same dtype as `bands`. Note that if the shape of `rhs` and/or `bands` isn't known statically, `rhs` will be treated as a matrix rather than a vector. |
| `lower` | An optional `bool`. Defaults to `True`. Boolean indicating whether `bands` represents a lower or upper triangular matrix. |
| `adjoint` | An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with the matrix's block-wise adjoint. |
| `name` | A name to give this `Op` (optional). |
| Returns |
| A `Tensor` of shape [..., M] or [..., M, N] containing the solutions. |
tensorflow tf.linalg.LinearOperatorKronecker tf.linalg.LinearOperatorKronecker
=================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator_kronecker.py#L62-L506) |
Kronecker product between two `LinearOperators`.
Inherits From: [`LinearOperator`](linearoperator), [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperatorKronecker`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperatorKronecker)
```
tf.linalg.LinearOperatorKronecker(
operators,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name=None
)
```
This operator composes one or more linear operators `[op1,...,opJ]`, building a new `LinearOperator` representing the Kronecker product: `op1 x op2 x ... x opJ` (we omit parentheses as the Kronecker product is associative).
If `opj` has shape `batch_shape_j + [M_j, N_j]`, then the composed operator will have shape equal to `broadcast_batch_shape + [prod M_j, prod N_j]`, where the product is over all operators.
```
# Create a 4 x 4 linear operator composed of two 2 x 2 operators.
operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
operator_2 = LinearOperatorFullMatrix([[1., 0.], [2., 1.]])
operator = LinearOperatorKronecker([operator_1, operator_2])
operator.to_dense()
==> [[1., 0., 2., 0.],
[2., 1., 4., 2.],
[3., 0., 4., 0.],
[6., 3., 8., 4.]]
operator.shape
==> [4, 4]
operator.log_abs_determinant()
==> scalar Tensor
x = ... Shape [4, 2] Tensor
operator.matmul(x)
==> Shape [4, 2] Tensor
# Create a [2, 3] batch of 4 x 5 linear operators.
matrix_45 = tf.random.normal(shape=[2, 3, 4, 5])
operator_45 = LinearOperatorFullMatrix(matrix_45)
# Create a [2, 3] batch of 5 x 6 linear operators.
matrix_56 = tf.random.normal(shape=[2, 3, 5, 6])
operator_56 = LinearOperatorFullMatrix(matrix_56)
# Compose to create a [2, 3] batch of 20 x 30 operators.
operator_large = LinearOperatorKronecker([operator_45, operator_56])
# Create a shape [2, 3, 30, 2] batch matrix.
x = tf.random.normal(shape=[2, 3, 30, 2])
operator_large.matmul(x)
==> Shape [2, 3, 20, 2] Tensor
```
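The 4 x 4 result above can be checked against a dense Kronecker product assembled by hand with `tf.einsum`; a minimal sketch:
```
import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[1., 0.], [2., 1.]])
operator = tf.linalg.LinearOperatorKronecker(
    [tf.linalg.LinearOperatorFullMatrix(a),
     tf.linalg.LinearOperatorFullMatrix(b)])
# kron(a, b)[2*i + k, 2*j + l] = a[i, j] * b[k, l]
kron = tf.reshape(tf.einsum('ij,kl->ikjl', a, b), [4, 4])
tf.debugging.assert_near(operator.to_dense(), kron)
```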
#### Performance
The cost of any operation on `LinearOperatorKronecker` is the sum of the costs of that operation on each of the individual operators.
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
| Args |
| `operators` | Iterable of `LinearOperator` objects, each with the same `dtype` and composable shape, representing the Kronecker factors. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. Default is the individual operators names joined with `_x_`. |
| Raises |
| `TypeError` | If all operators do not have the same `dtype`. |
| `ValueError` | If `operators` is empty. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated)
|
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `operators` | |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non singular.
This operator is considered non-singular if
```
ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.trace tf.linalg.trace
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L3444-L3485) |
Compute the trace of a tensor `x`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.trace`](https://www.tensorflow.org/api_docs/python/tf/linalg/trace), [`tf.compat.v1.trace`](https://www.tensorflow.org/api_docs/python/tf/linalg/trace)
```
tf.linalg.trace(
x, name=None
)
```
`trace(x)` returns the sum along the main diagonal of each inner-most matrix in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then output is a tensor of rank `k-2` with dimensions `[I, J, K, ..., L]` where
`output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])`
#### For example:
```
x = tf.constant([[1, 2], [3, 4]])
tf.linalg.trace(x) # 5
x = tf.constant([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
tf.linalg.trace(x) # 15
x = tf.constant([[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]],
[[-1, -2, -3],
[-4, -5, -6],
[-7, -8, -9]]])
tf.linalg.trace(x) # [15, -15]
```
| Args |
| `x` | tensor. |
| `name` | A name for the operation (optional). |
| Returns |
| The trace of input tensor. |
tensorflow tf.linalg.expm tf.linalg.expm
==============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linalg_impl.py#L229-L344) |
Computes the matrix exponential of one or more square matrices.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.expm`](https://www.tensorflow.org/api_docs/python/tf/linalg/expm)
```
tf.linalg.expm(
input, name=None
)
```
\[\exp(A) = \sum_{n=0}^\infty A^n / n!\]
The exponential is computed using a combination of the scaling and squaring method and the Padé approximation. Details can be found in: Nicholas J. Higham, "The scaling and squaring method for the matrix exponential revisited," SIAM J. Matrix Anal. Applic., 26:1179-1193, 2005.
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the exponential for all input submatrices `[..., :, :]`.
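For illustration, a minimal sketch (not part of the original page; the expected values follow from the identity `exp(A) = cos(1) I + sin(1) A` for this skew-symmetric `A`):
```
import tensorflow as tf

# exp of a skew-symmetric generator gives a rotation matrix.
a = tf.constant([[0.0, 1.0],
                 [-1.0, 0.0]])
tf.linalg.expm(a)
# ==> approximately [[cos(1), sin(1)], [-sin(1), cos(1)]]
#     i.e. [[0.5403, 0.8415], [-0.8415, 0.5403]]
```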
| Args |
| `input` | A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`, or `complex128` with shape `[..., M, M]`. |
| `name` | A name to give this `Op` (optional). |
| Returns |
| the matrix exponential of the input. |
| Raises |
| `ValueError` | An unsupported type is provided as input. |
scipy compatibility
-------------------
Equivalent to scipy.linalg.expm
tensorflow tf.linalg.LinearOperator tf.linalg.LinearOperator
========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L51-L1192) |
Base class defining a [batch of] linear operator[s].
Inherits From: [`Module`](../module)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.LinearOperator`](https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperator)
```
tf.linalg.LinearOperator(
dtype,
graph_parents=None,
is_non_singular=None,
is_self_adjoint=None,
is_positive_definite=None,
is_square=None,
name=None,
parameters=None
)
```
Subclasses of `LinearOperator` provide access to common methods on a (batch) matrix, without the need to materialize the matrix. This allows:
* Matrix free computations
* Operators that take advantage of special structure, while providing a consistent API to users.
#### Subclassing
To enable a public method, subclasses should implement the leading-underscore version of the method. The argument signature should be identical except for the omission of `name="..."`. For example, to enable `matmul(x, adjoint=False, name="matmul")` a subclass should implement `_matmul(x, adjoint=False)`.
#### Performance contract
Subclasses should only implement the assert methods (e.g. `assert_non_singular`) if they can be done in less than `O(N^3)` time.
Class docstrings should contain an explanation of computational complexity. Since this is a high-performance library, attention should be paid to detail, and explanations can include constants as well as Big-O notation.
#### Shape compatibility
`LinearOperator` subclasses should operate on a [batch] matrix with compatible shape. Class docstrings should define what is meant by compatible shape. Some subclasses may not support batching.
#### Examples:
`x` is a batch matrix with compatible shape for `matmul` if
```
operator.shape = [B1,...,Bb] + [M, N], b >= 0,
x.shape = [B1,...,Bb] + [N, R]
```
`rhs` is a batch matrix with compatible shape for `solve` if
```
operator.shape = [B1,...,Bb] + [M, N], b >= 0,
rhs.shape = [B1,...,Bb] + [M, R]
```
#### Example docstring for subclasses.
This operator acts like a (batch) matrix `A` with shape `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is an `m x n` matrix. Again, this matrix `A` may not be materialized, but for purposes of identifying and working with compatible arguments the shape is relevant.
#### Examples:
```
some_tensor = ... shape = ????
operator = MyLinOp(some_tensor)
operator.shape()
==> [2, 4, 4]
operator.log_abs_determinant()
==> Shape [2] Tensor
x = ... Shape [2, 4, 5] Tensor
operator.matmul(x)
==> Shape [2, 4, 5] Tensor
```
#### Shape compatibility
This operator acts on batch matrices with compatible shape. FILL IN WHAT IS MEANT BY COMPATIBLE SHAPE
#### Performance
FILL THIS IN
#### Matrix property hints
This `LinearOperator` is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`. These have the following meaning:
* If `is_X == True`, callers should expect the operator to have the property `X`. This is a promise that should be fulfilled, but is *not* a runtime assert. For example, finite floating point precision may result in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either way.
#### Initialization parameters
All subclasses of `LinearOperator` are expected to pass a `parameters` argument to `super().__init__()`. This should be a `dict` containing the unadulterated arguments passed to the subclass `__init__`. For example, `MyLinearOperator` with an initializer should look like:
```
def __init__(self, operator, is_square=False, name=None):
parameters = dict(
operator=operator,
is_square=is_square,
name=name
)
...
super().__init__(..., parameters=parameters)
```
Users can then access `my_linear_operator.parameters` to see all arguments passed to its initializer.
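For illustration, a minimal usage sketch (not part of the original reference; it assumes the concrete subclass `tf.linalg.LinearOperatorFullMatrix`):
```
import tensorflow as tf

# Wrap a dense matrix in a LinearOperator and use the common methods.
matrix = tf.constant([[2.0, 0.0],
                      [0.0, 4.0]])
operator = tf.linalg.LinearOperatorFullMatrix(
    matrix, is_self_adjoint=True, is_positive_definite=True)
operator.shape                                # ==> TensorShape([2, 2])
operator.determinant()                        # ==> 8.0
operator.matvec(tf.constant([1.0, 1.0]))      # ==> [2., 4.]
operator.solvevec(tf.constant([2.0, 4.0]))    # ==> [1., 1.]
```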
| Args |
| `dtype` | The type of this `LinearOperator`. Arguments to `matmul` and `solve` will have to be this type. |
| `graph_parents` | (Deprecated) Python list of graph prerequisites of this `LinearOperator`. Typically tensors that are passed during initialization. |
| `is_non_singular` | Expect that this operator is non-singular. |
| `is_self_adjoint` | Expect that this operator is equal to its hermitian transpose. If `dtype` is real, this is equivalent to being symmetric. |
| `is_positive_definite` | Expect that this operator is positive definite, meaning the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive-definite. See: <https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices> |
| `is_square` | Expect that this operator acts like square [batch] matrices. |
| `name` | A name for this `LinearOperator`. |
| `parameters` | Python `dict` of parameters used to instantiate this `LinearOperator`. |
| Raises |
| `ValueError` | If any member of graph\_parents is `None` or not a `Tensor`. |
| `ValueError` | If hints are set incorrectly. |
| Attributes |
| `H` | Returns the adjoint of the current `LinearOperator`. Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent. |
| `batch_shape` | `TensorShape` of batch dimensions of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` |
| `domain_dimension` | Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. |
| `dtype` | The `DType` of `Tensor`s handled by this `LinearOperator`. |
| `graph_parents` | List of graph dependencies of this `LinearOperator`. (deprecated) |
| `is_non_singular` | |
| `is_positive_definite` | |
| `is_self_adjoint` | |
| `is_square` | Return `True/False` depending on if this operator is square. |
| `parameters` | Dictionary of parameters used to instantiate this `LinearOperator`. |
| `range_dimension` | Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. |
| `shape` | `TensorShape` of this `LinearOperator`. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. |
| `tensor_rank` | Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. |
Methods
-------
### `add_to_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1079-L1092)
```
add_to_tensor(
x, name='add_to_tensor'
)
```
Add matrix represented by this operator to `x`. Equivalent to `A + x`.
| Args |
| `x` | `Tensor` with same `dtype` and shape broadcastable to `self.shape`. |
| `name` | A name to give this `Op`. |
| Returns |
| A `Tensor` with broadcast shape and same `dtype` as `self`. |
### `adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L935-L950)
```
adjoint(
name='adjoint'
)
```
Returns the adjoint of the current `LinearOperator`.
Given `A` representing this `LinearOperator`, return `A*`. Note that calling `self.adjoint()` and `self.H` are equivalent.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the adjoint of this `LinearOperator`. |
### `assert_non_singular`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L543-L561)
```
assert_non_singular(
name='assert_non_singular'
)
```
Returns an `Op` that asserts this operator is non-singular.
This operator is considered non-singular if
```
ConditionNumber < 1 / (max{100, range_dimension, domain_dimension} * eps),
eps := np.finfo(self.dtype.as_numpy_dtype).eps
```
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular. |
### `assert_positive_definite`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L579-L594)
```
assert_positive_definite(
name='assert_positive_definite'
)
```
Returns an `Op` that asserts this operator is positive definite.
Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite.
| Args |
| `name` | A name to give this `Op`. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite. |
### `assert_self_adjoint`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L606-L620)
```
assert_self_adjoint(
name='assert_self_adjoint'
)
```
Returns an `Op` that asserts this operator is self-adjoint.
Here we check that this operator is *exactly* equal to its hermitian transpose.
| Args |
| `name` | A string name to prepend to created ops. |
| Returns |
| An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not self-adjoint. |
### `batch_shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L357-L372)
```
batch_shape_tensor(
name='batch_shape_tensor'
)
```
Shape of batch dimensions of this operator, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb]`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `cholesky`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L980-L1003)
```
cholesky(
name='cholesky'
)
```
Returns a Cholesky factor as a `LinearOperator`.
Given `A` representing this `LinearOperator`, if `A` is positive definite and self-adjoint, return `L`, where `A = L L^T`, i.e., the Cholesky decomposition.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `LinearOperator` which represents the lower triangular matrix in the Cholesky decomposition. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be positive definite and self-adjoint. |
### `cond`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1129-L1139)
```
cond(
name='cond'
)
```
Returns the condition number of this linear operator.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L739-L756)
```
determinant(
name='det'
)
```
Determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `diag_part`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1030-L1056)
```
diag_part(
name='diag_part'
)
```
Efficiently get the [batch] diagonal part of this operator.
If this operator has shape `[B1,...,Bb, M, N]`, this returns a `Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where `diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
```
my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
```
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `diag_part` | A `Tensor` of same `dtype` as self. |
### `domain_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L442-L458)
```
domain_dimension_tensor(
name='domain_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the domain of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `eigvals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1097-L1114)
```
eigvals(
name='eigvals'
)
```
Returns the eigenvalues of this linear operator.
If the operator is marked as self-adjoint (via `is_self_adjoint`) this computation can be more efficient.
>
> **Note:** This currently only supports self-adjoint operators.
>
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. |
### `inverse`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L955-L978)
```
inverse(
name='inverse'
)
```
Returns the inverse of this `LinearOperator`.
Given `A` representing this `LinearOperator`, return a `LinearOperator` representing `A^-1`.
| Args |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `LinearOperator` representing inverse of this matrix. |
| Raises |
| `ValueError` | When the `LinearOperator` is not hinted to be `non_singular`. |
### `log_abs_determinant`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L768-L785)
```
log_abs_determinant(
name='log_abs_det'
)
```
Log absolute value of determinant for every batch member.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. |
| Raises |
| `NotImplementedError` | If `self.is_square` is `False`. |
### `matmul`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L633-L686)
```
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
```
Transform [batch] matrix `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[..., j, r]
```
| Args |
| `x` | `LinearOperator` or `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `adjoint_arg` | Python `bool`. If `True`, compute `A x^H` where `x^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name for this `Op`. |
| Returns |
| A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` as `self`. |
### `matvec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L696-L729)
```
matvec(
x, adjoint=False, name='matvec'
)
```
Transform [batch] vector `x` with left multiplication: `x --> Ax`.
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
```
| Args |
| `x` | `Tensor` with compatible shape and same `dtype` as `self`. `x` is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. |
| `name` | A name for this `Op`. |
| Returns |
| A `Tensor` with shape `[..., M]` and same `dtype` as `self`. |
### `range_dimension_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L486-L502)
```
range_dimension_tensor(
name='range_dimension_tensor'
)
```
Dimension (in the sense of vector spaces) of the range of this operator.
Determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `shape_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L323-L341)
```
shape_tensor(
name='shape_tensor'
)
```
Shape of this `LinearOperator`, determined at runtime.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding `[B1,...,Bb, M, N]`, equivalent to [`tf.shape(A)`](../shape).
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor` |
### `solve`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L806-L879)
```
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
```
Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator and compatible shape. `rhs` is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `adjoint_arg` | Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` is the hermitian transpose (transposition and complex conjugation). |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `solvevec`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L887-L933)
```
solvevec(
rhs, adjoint=False, name='solve'
)
```
Solve single equation with best effort: `A X = rhs`.
The returned `Tensor` will be close to an exact solution if `A` is well conditioned. Otherwise closeness will vary. See class docstring for details.
#### Examples:
```
# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
```
| Args |
| `rhs` | `Tensor` with same `dtype` as this operator. `rhs` is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions. |
| `adjoint` | Python `bool`. If `True`, solve the system involving the adjoint of this `LinearOperator`: `A^H X = rhs`. |
| `name` | A name scope to use for ops added by this method. |
| Returns |
| `Tensor` with shape `[..., N]` and same `dtype` as `rhs`. |
| Raises |
| `NotImplementedError` | If `self.is_non_singular` or `is_square` is False. |
### `tensor_rank_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L401-L415)
```
tensor_rank_tensor(
name='tensor_rank_tensor'
)
```
Rank (in the sense of tensors) of matrix corresponding to this operator.
If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| `int32` `Tensor`, determined at runtime. |
### `to_dense`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1021-L1024)
```
to_dense(
name='to_dense'
)
```
Return a dense (batch) matrix representing this operator.
### `trace`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L1061-L1073)
```
trace(
name='trace'
)
```
Trace of the linear operator, equal to sum of `self.diag_part()`.
If the operator is square, this is also the sum of the eigenvalues.
| Args |
| `name` | A name for this `Op`. |
| Returns |
| Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. |
### `__matmul__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/linalg/linear_operator.py#L688-L689)
```
__matmul__(
other
)
```
tensorflow tf.linalg.experimental.conjugate_gradient tf.linalg.experimental.conjugate\_gradient
==========================================
Conjugate gradient solver.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.linalg.experimental.conjugate_gradient`](https://www.tensorflow.org/api_docs/python/tf/linalg/experimental/conjugate_gradient)
```
tf.linalg.experimental.conjugate_gradient(
operator,
rhs,
preconditioner=None,
x=None,
tol=1e-05,
max_iter=20,
name='conjugate_gradient'
)
```
Solves a linear system of equations `A*x = rhs` for a self-adjoint, positive-definite matrix `A` and right-hand side vector `rhs`, using an iterative, matrix-free algorithm where the action of the matrix `A` is represented by `operator`. The iteration terminates when either the number of iterations exceeds `max_iter` or when the residual norm has been reduced to `tol` times its initial value, i.e. \(\|rhs - A x_k\| \le tol \cdot \|rhs\|\).
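For illustration, a minimal sketch on a small SPD system (assumed usage; `tf.linalg.LinearOperatorFullMatrix` is used here purely to wrap a dense matrix):
```
import tensorflow as tf

a = tf.constant([[4.0, 1.0],
                 [1.0, 3.0]])
operator = tf.linalg.LinearOperatorFullMatrix(
    a, is_self_adjoint=True, is_positive_definite=True)
rhs = tf.constant([1.0, 2.0])
result = tf.linalg.experimental.conjugate_gradient(operator, rhs)
result.x  # ==> approximately [0.0909, 0.6364]; exact solution is [1/11, 7/11]
```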
| Args |
| `operator` | A `LinearOperator` that is self-adjoint and positive definite. |
| `rhs` | A possibly batched vector of shape `[..., N]` containing the right-hand side vector. |
| `preconditioner` | A `LinearOperator` that approximates the inverse of `A`. An efficient preconditioner could dramatically improve the rate of convergence. If `preconditioner` represents matrix `M` (`M` approximates `A^{-1}`), the algorithm uses `preconditioner.apply(x)` to estimate `A^{-1}x`. For this to be useful, the cost of applying `M` should be much lower than computing `A^{-1}` directly. |
| `x` | A possibly batched vector of shape `[..., N]` containing the initial guess for the solution. |
| `tol` | A float scalar convergence tolerance. |
| `max_iter` | An integer giving the maximum number of iterations. |
| `name` | A name scope for the operation. |
| Returns |
| `output` | A namedtuple representing the final state with fields: * `i`: A scalar `int32` `Tensor`. Number of iterations executed.
* `x`: A rank-1 `Tensor` of shape `[..., N]` containing the computed solution.
* `r`: A rank-1 `Tensor` of shape `[..., M]` containing the residual vector.
* `p`: A rank-1 `Tensor` of shape `[..., N]`. `A`-conjugate basis vector.
* `gamma`: \(r \cdot M \cdot r\), equivalent to \(\|r\|_2^2\) when `preconditioner=None`.
|
tensorflow tf.nn.ctc_unique_labels tf.nn.ctc\_unique\_labels
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ctc_ops.py#L1220-L1253) |
Get unique labels and indices for batched labels for [`tf.nn.ctc_loss`](ctc_loss).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.ctc_unique_labels`](https://www.tensorflow.org/api_docs/python/tf/nn/ctc_unique_labels)
```
tf.nn.ctc_unique_labels(
labels, name=None
)
```
For use with the [`tf.nn.ctc_loss`](ctc_loss) optional argument `unique`: this op can be used to preprocess labels in the input pipeline for better speed/memory use when computing the CTC loss on TPU.
#### Example:
```
ctc_unique_labels([[3, 4, 4, 3]]) ->
  unique labels padded with 0: [[3, 4, 0, 0]]
  indices of original labels in unique: [0, 1, 1, 0]
```
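A minimal runnable sketch of the same call (assumed usage, not from the original page):
```
import tensorflow as tf

labels = tf.constant([[3, 4, 4, 3]], dtype=tf.int32)
unique, idx = tf.nn.ctc_unique_labels(labels)
# unique: the labels deduplicated per batch row, padded with 0
# idx: for each original position, its index into the unique row
```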
| Args |
| `labels` | tensor of shape [batch\_size, max\_label\_length] padded with 0. |
| `name` | A name for this `Op`. Defaults to "ctc\_unique\_labels". |
| Returns |
| tuple of * unique labels, tensor of shape `[batch_size, max_label_length]`
* indices into unique labels, shape `[batch_size, max_label_length]`
|
tensorflow tf.nn.ctc_greedy_decoder tf.nn.ctc\_greedy\_decoder
==========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ctc_ops.py#L295-L375) |
Performs greedy decoding on the logits given in input (best path).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.ctc_greedy_decoder`](https://www.tensorflow.org/api_docs/python/tf/nn/ctc_greedy_decoder)
```
tf.nn.ctc_greedy_decoder(
inputs, sequence_length, merge_repeated=True, blank_index=None
)
```
Given a tensor as `inputs`, the `blank_index` parameter defines the class index of the blank symbol.
#### For example:
If `blank_index` is equal to 1:
```
inf = float("inf")
logits = tf.constant([[[ 0., -inf, -inf],
[ -2.3, -inf, -0.1]],
[[ -inf, -0.5, -inf],
[ -inf, -inf, -0.1]],
[[ -inf, -inf, -inf],
[ -0.1, -inf, -2.3]]])
seq_lens = tf.constant([2, 3])
outputs = tf.nn.ctc_greedy_decoder(
logits,
seq_lens,
blank_index=1)
```
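The decoded result is returned as a `SparseTensor`; a sketch of unpacking it (assumed usage, not part of the original page):
```
decoded, neg_sum_logits = outputs
# decoded[0] is a tf.sparse.SparseTensor; densify it to inspect the paths.
dense_decoded = tf.sparse.to_dense(decoded[0])
```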
#### Notes:
* Unlike `ctc_beam_search_decoder`, `ctc_greedy_decoder` considers blanks as regular elements when computing the probability of a sequence.
* Default `blank_index` is `(num_classes - 1)`, unless overridden.
If `merge_repeated` is `True`, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence `A B B * B * B` (where '\*' is the blank label) becomes
* `A B B B` if `merge_repeated=True`.
* `A B B B B` if `merge_repeated=False`.
| Args |
| `inputs` | 3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. The logits. |
| `sequence_length` | 1-D `int32` vector containing sequence lengths, having size `[batch_size]`. |
| `merge_repeated` | Boolean. Default: True. |
| `blank_index` | (Optional). Default: `num_classes - 1`. Define the class index to use for the blank label. Negative values will start from num\_classes, i.e., -1 will reproduce the ctc\_greedy\_decoder behavior of using num\_classes - 1 for the blank symbol, which corresponds to the default. |
| Returns |
| A tuple `(decoded, neg_sum_logits)` where |
| `decoded` | A single-element list. `decoded[0]` is a `SparseTensor` containing the decoded outputs s.t.: `decoded.indices`: Indices matrix `(total_decoded_outputs, 2)`. The rows store: `[batch, time]`. `decoded.values`: Values vector, size `(total_decoded_outputs)`. The vector stores the decoded classes. `decoded.dense_shape`: Shape vector, size `(2)`. The shape values are: `[batch_size, max_decoded_length]` |
| `neg_sum_logits` | A `float` matrix `(batch_size x 1)` containing, for the sequence found, the negative of the sum of the greatest logit at each timeframe. |
tensorflow tf.nn.batch_normalization tf.nn.batch\_normalization
==========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L1523-L1591) |
Batch normalization.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization)
```
tf.nn.batch_normalization(
x, mean, variance, offset, scale, variance_epsilon, name=None
)
```
Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):
\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)
`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:
* In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of [`tf.nn.moments(..., keepdims=True)`](moments) during training, or running averages thereof during inference.
* In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of [`tf.nn.moments(..., keepdims=False)`](moments) during training, or running averages thereof during inference.
See equation 11 in Algorithm 2 of source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).
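For illustration, a minimal sketch of the common `[batch, depth]` case (assumed values; not part of the original page):
```
import tensorflow as tf

# Normalize a [batch, depth] tensor over the batch dimension.
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
mean, variance = tf.nn.moments(x, axes=[0])
tf.nn.batch_normalization(x, mean, variance,
                          offset=None, scale=None,
                          variance_epsilon=1e-5)
# ==> approximately [[-1., -1.], [1., 1.]]
```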
| Args |
| `x` | Input `Tensor` of arbitrary dimensionality. |
| `mean` | A mean `Tensor`. |
| `variance` | A variance `Tensor`. |
| `offset` | An offset `Tensor`, often denoted \(\beta\) in equations, or None. If present, will be added to the normalized tensor. |
| `scale` | A scale `Tensor`, often denoted \(\gamma\) in equations, or `None`. If present, the scale is applied to the normalized tensor. |
| `variance_epsilon` | A small float number to avoid dividing by 0. |
| `name` | A name for this operation (optional). |
| Returns |
| the normalized, scaled, offset tensor. |
#### References:
Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: [Ioffe et al., 2015](http://arxiv.org/abs/1502.03167) ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))
tensorflow tf.nn.max_pool tf.nn.max\_pool
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L4676-L4811) |
Performs max pooling on the input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.max_pool_v2`](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool)
```
tf.nn.max_pool(
input, ksize, strides, padding, data_format=None, name=None
)
```
For a given window of `ksize`, takes the maximum value within that window. Used for reducing computation and preventing overfitting.
Consider an example of pooling with 2x2, non-overlapping windows:
```
matrix = tf.constant([
[0, 0, 1, 7],
[0, 2, 0, 0],
[5, 2, 0, 0],
[0, 0, 9, 8],
])
reshaped = tf.reshape(matrix, (1, 4, 4, 1))
tf.nn.max_pool(reshaped, ksize=2, strides=2, padding="SAME")
<tf.Tensor: shape=(1, 2, 2, 1), dtype=int32, numpy=
array([[[[2],
[7]],
[[5],
[9]]]], dtype=int32)>
```
We can adjust the window size using the `ksize` parameter. For example, if we were to expand the window to 3:
```
tf.nn.max_pool(reshaped, ksize=3, strides=2, padding="SAME")
<tf.Tensor: shape=(1, 2, 2, 1), dtype=int32, numpy=
array([[[[5],
[7]],
[[9],
[9]]]], dtype=int32)>
```
We've now picked up two additional large numbers (5 and 9) in two of the pooled spots.
Note that our windows are now overlapping, since we're still moving by 2 units on each iteration. This is causing us to see the same 9 repeated twice, since it is part of two overlapping windows.
We can adjust how far we move our window with each iteration using the `strides` parameter. Updating this to the same value as our window size eliminates the overlap:
```
tf.nn.max_pool(reshaped, ksize=3, strides=3, padding="SAME")
<tf.Tensor: shape=(1, 2, 2, 1), dtype=int32, numpy=
array([[[[2],
[7]],
[[5],
[9]]]], dtype=int32)>
```
Because the window does not neatly fit into our input, padding is added around the edges, giving us the same result as when we used a 2x2 window. We can skip padding altogether and simply drop the windows that do not fully fit into our input by instead passing `"VALID"` to the `padding` argument:
```
tf.nn.max_pool(reshaped, ksize=3, strides=3, padding="VALID")
<tf.Tensor: shape=(1, 1, 1, 1), dtype=int32, numpy=array([[[[5]]]],
dtype=int32)>
```
Now we've grabbed the largest value in the 3x3 window starting from the upper- left corner. Since no other windows fit in our input, they are dropped.
| Args |
| `input` | Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if `data_format` does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data\_format starts with "NC". Pooling happens over the spatial dimensions only. |
| `ksize` | An int or list of `ints` that has length `1`, `N` or `N+2`. The size of the window for each dimension of the input tensor. |
| `strides` | An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of the input tensor. |
| `padding` | Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. When explicit padding is used and data\_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`. When using explicit padding, the size of the paddings cannot be greater than the sliding window size. |
| `data_format` | A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW". |
| `name` | Optional name for the operation. |
| Returns |
| A `Tensor` of format specified by `data_format`. The max pooled output tensor. |
tensorflow tf.nn.crelu tf.nn.crelu
===========
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3586-L3589) |
Computes Concatenated ReLU.
```
tf.nn.crelu(
features, axis=-1, name=None
)
```
Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the *negative* part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: [Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al.](https://arxiv.org/abs/1603.05201)
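For illustration, a minimal sketch (assumed values, not from the original page):
```
import tensorflow as tf

x = tf.constant([[-1.0, 2.0]])
tf.nn.crelu(x)
# ==> [[0., 2., 1., 0.]]
# relu(x) concatenated with relu(-x) along the last axis; depth doubles.
```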
| Args |
| `features` | A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, `int16`, or `int8`. |
| `name` | A name for the operation (optional). |
| `axis` | The axis that the output values are concatenated along. Default is -1. |
| Returns |
| A `Tensor` with the same type as `features`. |
#### References:
Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units: [Shang et al., 2016](http://proceedings.mlr.press/v48/shang16) ([pdf](http://proceedings.mlr.press/v48/shang16.pdf))
tensorflow tf.nn.silu tf.nn.silu
==========
Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.
#### View aliases
**Main aliases**
[`tf.nn.swish`](https://www.tensorflow.org/api_docs/python/tf/nn/silu)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.silu`](https://www.tensorflow.org/api_docs/python/tf/nn/silu), [`tf.compat.v1.nn.swish`](https://www.tensorflow.org/api_docs/python/tf/nn/silu)
```
tf.nn.silu(
features, beta=1.0
)
```
`beta`: hyperparameter for the Swish activation function; defaults to 1.0.
The SiLU activation function was introduced in "Gaussian Error Linear Units (GELUs)" [Hendrycks et al. 2016](https://arxiv.org/abs/1606.08415) and "Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning" [Elfwing et al. 2017](https://arxiv.org/abs/1702.03118) and was independently discovered (and called swish) in "Searching for Activation Functions" [Ramachandran et al. 2017](https://arxiv.org/abs/1710.05941)
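For illustration, a minimal sketch (assumed values, not from the original page):
```
import tensorflow as tf

tf.nn.silu(tf.constant([-1.0, 0.0, 1.0]))
# ==> approximately [-0.2689, 0., 0.7311], since silu(x) = x * sigmoid(x)
```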
| Args |
| `features` | A `Tensor` representing preactivation values. |
| `beta` | A `Tensor` representing the value of the `beta` hyperparameter. |
| Returns |
| The activation value. |
tensorflow tf.nn.selu tf.nn.selu
==========
Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.selu`](https://www.tensorflow.org/api_docs/python/tf/nn/selu)
```
tf.nn.selu(
features, name=None
)
```
That is, the output is `scale * alpha * (exp(features) - 1)` if `features < 0`, and `scale * features` otherwise.
To be used together with `initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`. For correct dropout, use `tf.contrib.nn.alpha_dropout`.
See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
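For illustration, a minimal sketch (assumed values, computed from SELU's fixed constants `scale ~= 1.0507` and `alpha ~= 1.6733`; not from the original page):
```
import tensorflow as tf

tf.nn.selu(tf.constant([-1.0, 0.0, 1.0]))
# ==> approximately [-1.1113, 0., 1.0507]
```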
| Args |
| `features` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `features`. |
tensorflow tf.nn.avg_pool2d tf.nn.avg\_pool2d
=================
Performs the average pooling on the input.
```
tf.nn.avg_pool2d(
input, ksize, strides, padding, data_format='NHWC', name=None
)
```
Each entry in `output` is the mean of the corresponding size-`ksize` window in `input`.
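For illustration, a minimal sketch (assumed values, not from the original page):
```
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), (1, 4, 4, 1))
tf.nn.avg_pool2d(x, ksize=2, strides=2, padding='VALID')
# ==> shape (1, 2, 2, 1), values [[2.5, 4.5], [10.5, 12.5]]
```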
| Args |
| `input` | A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`. |
| `ksize` | An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor. |
| `strides` | An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A string. 'NHWC' and 'NCHW' are supported. |
| `name` | Optional name for the operation. |
| Returns |
| A `Tensor` with the same type as `input`. The average pooled output tensor. |
tensorflow tf.nn.atrous_conv2d_transpose tf.nn.atrous\_conv2d\_transpose
===============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L2777-L2939) |
The transpose of `atrous_conv2d`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.atrous_conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/atrous_conv2d_transpose)
```
tf.nn.atrous_conv2d_transpose(
value, filters, output_shape, rate, padding, name=None
)
```
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
| Args |
| `value` | A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` format. Its shape is `[batch, in_height, in_width, in_channels]`. |
| `filters` | A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, out_channels, in_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions. |
| `output_shape` | A 1-D `Tensor` of shape representing the output shape of the deconvolution op, of form `[batch, out_height, out_width, out_channels]`. |
| `rate` | A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `name` | Optional name for the returned tensor. |
| Returns |
| A `Tensor` with the same type as `value`. |
| Raises |
| `ValueError` | If input/output depth does not match `filters`' shape, or if padding is other than `'VALID'` or `'SAME'`, or if the `rate` is less than one, or if the output\_shape is not a tensor with 4 elements. |
#### References:
Deconvolutional Networks: [Zeiler et al., 2010](https://ieeexplore.ieee.org/abstract/document/5539957) ([pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))
tensorflow tf.nn.conv2d tf.nn.conv2d
============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L2223-L2327) |
Computes a 2-D convolution given `input` and 4-D `filters` tensors.
```
tf.nn.conv2d(
input,
filters,
strides,
padding,
data_format='NHWC',
dilations=None,
name=None
)
```
The `input` tensor may have rank `4` or higher, where shape dimensions `[:-3]` are considered batch dimensions (`batch_shape`).
Given an input tensor of shape `batch_shape + [in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:
1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.
In detail, with the default NHWC format,
```
output[b, i, j, k] =
sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
filter[di, dj, q, k]
```
Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
#### Usage Example:
```
x_in = np.array([[
[[2], [1], [2], [0], [1]],
[[1], [3], [2], [2], [3]],
[[1], [1], [3], [3], [0]],
[[2], [2], [0], [1], [1]],
[[0], [0], [3], [1], [2]], ]])
kernel_in = np.array([
[ [[2, 0.1]], [[3, 0.2]] ],
[ [[0, 0.3]],[[1, 0.4]] ], ])
x = tf.constant(x_in, dtype=tf.float32)
kernel = tf.constant(kernel_in, dtype=tf.float32)
tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='VALID')
<tf.Tensor: shape=(1, 4, 4, 2), dtype=float32, numpy=...>
```
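The batch-dimension behavior described above can also be sketched directly (shapes here are illustrative): leading dimensions beyond the inner three pass through unchanged.
```
import tensorflow as tf

# batch_shape = [2, 5]; the inner 3 dims are [28, 28, 3].
x = tf.random.normal([2, 5, 28, 28, 3])
k = tf.random.normal([3, 3, 3, 8])  # [filter_height, filter_width, in_channels, out_channels]
y = tf.nn.conv2d(x, k, strides=1, padding='SAME')
print(y.shape)  # (2, 5, 28, 28, 8) -- batch_shape is preserved
```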
| Args |
| `input` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. A Tensor of rank at least 4. The dimension order is interpreted according to the value of `data_format`; with the all-but-inner-3 dimensions acting as batch dimensions. See below for details. |
| `filters` | A `Tensor`. Must have the same type as `input`. A 4-D tensor of shape `[filter_height, filter_width, in_channels, out_channels]` |
| `strides` | An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details. |
| `padding` | Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. When explicit padding is used and data\_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`. |
| `data_format` | An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: `batch_shape + [height, width, channels]`. Alternatively, the format could be "NCHW", the data storage order of: `batch_shape + [channels, height, width]`. |
| `dilations` | An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If `dilations` is specified as a 4-element value, the dilations in the batch and depth dimensions must be 1. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input` and the same outer batch shape. |
tensorflow tf.nn.embedding_lookup tf.nn.embedding\_lookup
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/embedding_ops.py#L339-L402) |
Looks up embeddings for the given `ids` from a list of tensors.
```
tf.nn.embedding_lookup(
params, ids, max_norm=None, name=None
)
```
This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of [`tf.gather`](../gather), where `params` is interpreted as a partitioning of a large embedding tensor.
If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the "div" partition strategy, which means we assign ids to partitions in a contiguous manner. For instance, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`.
If the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.
The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
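For example, a runnable sketch of the partitioned lookup (the shard contents mirror the example in the Returns section below):
```
import tensorflow as tf

# Three shards of a 5x2 embedding table under the "div" strategy.
params = [tf.constant([[1., 2.], [3., 4.]]),   # ids 0-1
          tf.constant([[5., 6.], [7., 8.]]),   # ids 2-3
          tf.constant([[9., 10.]])]            # id 4
ids = tf.constant([0, 3, 4])
print(tf.nn.embedding_lookup(params, ids).numpy())
# [[ 1.  2.]
#  [ 7.  8.]
#  [ 9. 10.]]
```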
| Args |
| `params` | A single tensor representing the complete embedding tensor, or a list of tensors all of same shape except for the first dimension, representing sharded embedding tensors following "div" partition strategy. |
| `ids` | A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`. |
| `max_norm` | If not `None`, each embedding is clipped if its l2-norm is larger than this value. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` with the same type as the tensors in `params`. For instance, if `params` is a 5x2 matrix:
```
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
```
or a list of matrices:
```
params[0]: [[1, 2], [3, 4]]
params[1]: [[5, 6], [7, 8]]
params[2]: [[9, 10]]
```
and `ids` is:
```
[0, 3, 4]
```
The output will be a 3x2 matrix:
```
[[1, 2], [7, 8], [9, 10]]
```
|
| Raises |
| `ValueError` | If `params` is empty. |
tensorflow tf.nn.conv_transpose tf.nn.conv\_transpose
=====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3391-L3476) |
The transpose of `convolution`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.conv_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv_transpose)
```
tf.nn.conv_transpose(
input,
filters,
output_shape,
strides,
padding='SAME',
data_format=None,
dilations=None,
name=None
)
```
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of `convolution` rather than an actual deconvolution.
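A minimal sketch (shapes are illustrative assumptions) that doubles the spatial size of a 2-D feature map; equal input and output channel counts are used here to sidestep filter-layout details:
```
import tensorflow as tf

x = tf.random.normal([1, 4, 4, 8])   # NHWC input
f = tf.random.normal([3, 3, 8, 8])   # 3x3 spatial filter, 8 in/out channels
y = tf.nn.conv_transpose(x, f, output_shape=[1, 8, 8, 8],
                         strides=2, padding='SAME')
print(y.shape)  # (1, 8, 8, 8)
```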
| Args |
| `input` | An N+2 dimensional `Tensor` of shape `[batch_size] + input_spatial_shape + [in_channels]` if data\_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data\_format starts with "NC". It must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. |
| `filters` | An N+2 dimensional `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`. |
| `output_shape` | A 1-D `Tensor` representing the output shape of the deconvolution op. |
| `strides` | An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW". |
| `dilations` | An int or list of `ints` that has length `1`, `N` or `N+2`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the spatial dimensions. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. |
| `name` | A name for the operation (optional). If not specified "conv\_transpose" is used. |
| Returns |
| A `Tensor` with the same type as `input`. |
#### References:
Deconvolutional Networks: [Zeiler et al., 2010](https://ieeexplore.ieee.org/abstract/document/5539957) ([pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))
tensorflow tf.nn.conv3d_transpose tf.nn.conv3d\_transpose
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3310-L3381) |
The transpose of `conv3d`.
```
tf.nn.conv3d_transpose(
input,
filters,
output_shape,
strides,
padding='SAME',
data_format='NDHWC',
dilations=None,
name=None
)
```
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of `conv3d` rather than an actual deconvolution.
| Args |
| `input` | A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width, in_channels]` for `NDHWC` data format or `[batch, in_channels, depth, height, width]` for `NCDHW` data format. |
| `filters` | A 5-D `Tensor` with the same type as `input` and shape `[depth, height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `input`. |
| `output_shape` | A 1-D `Tensor` representing the output shape of the deconvolution op. |
| `strides` | An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `D`, `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A string. 'NDHWC' and 'NCDHW' are supported. |
| `dilations` | An int or list of `ints` that has length `1`, `3` or `5`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `D`, `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If `dilations` is specified as a 5-element value, the dilations in the batch and depth dimensions must be 1. |
| `name` | Optional name for the returned tensor. |
| Returns |
| A `Tensor` with the same type as `input`. |
#### References:
Deconvolutional Networks: [Zeiler et al., 2010](https://ieeexplore.ieee.org/abstract/document/5539957) ([pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))
tensorflow tf.nn.RNNCellDropoutWrapper tf.nn.RNNCellDropoutWrapper
===========================
Operator adding dropout to inputs and outputs of the given cell.
Inherits From: [`Module`](../module)
```
tf.nn.RNNCellDropoutWrapper(
*args, **kwargs
)
```
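For example, a minimal sketch (cell and batch sizes are illustrative) wrapping a Keras `LSTMCell` with input and output dropout for a single time step:
```
import tensorflow as tf

cell = tf.keras.layers.LSTMCell(4)
wrapped = tf.nn.RNNCellDropoutWrapper(
    cell, input_keep_prob=0.9, output_keep_prob=0.9)
x = tf.random.normal([2, 3])  # [batch, features] for one time step
state = wrapped.get_initial_state(batch_size=2, dtype=tf.float32)
out, new_state = wrapped(x, state)
print(out.shape)  # (2, 4)
```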
| Args |
| `cell` | an `RNNCell` to which input, output, and/or state dropout will be added. |
| `input_keep_prob` | unit Tensor or float between 0 and 1, input keep probability; if it is constant and 1, no input dropout will be added. |
| `output_keep_prob` | unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added. |
| `state_keep_prob` | unit Tensor or float between 0 and 1, state keep probability; if it is constant and 1, no state dropout will be added. State dropout is performed on the outgoing states of the cell. **Note** the state components to which dropout is applied when `state_keep_prob` is in `(0, 1)` are also determined by the argument `dropout_state_filter_visitor` (e.g. by default dropout is never applied to the `c` component of an `LSTMStateTuple`). |
| `variational_recurrent` | Python bool. If `True`, then the same dropout pattern is applied across all time steps per run call. If this parameter is set, `input_size` **must** be provided. |
| `input_size` | (optional) (possibly nested tuple of) `TensorShape` objects containing the depth(s) of the input tensors expected to be passed in to the `DropoutWrapper`. Required and used **iff** `variational_recurrent = True` and `input_keep_prob < 1`. |
| `dtype` | (optional) The `dtype` of the input, state, and output tensors. Required and used **iff** `variational_recurrent = True`. |
| `seed` | (optional) integer, the randomness seed. |
| `dropout_state_filter_visitor` | (optional), default: (see below). Function that takes any hierarchical level of the state and returns a scalar or depth=1 structure of Python booleans describing which terms in the state should be dropped out. In addition, if the function returns `True`, dropout is applied across this sublevel. If the function returns `False`, dropout is not applied across this entire sublevel. Default behavior: perform dropout on all terms except the memory (`c`) state of `LSTMCellState` objects, and don't try to apply dropout to `TensorArray` objects: `def dropout_state_filter_visitor(s): if isinstance(s, LSTMCellState): # Never perform dropout on the c state. return LSTMCellState(c=False, h=True) elif isinstance(s, TensorArray): return False return True` |
| `**kwargs` | dict of keyword arguments for base layer. |
| Raises |
| `TypeError` | if `cell` is not an `RNNCell`, or `keep_state_fn` is provided but not `callable`. |
| `ValueError` | if any of the keep\_probs are not between 0 and 1. |
| Attributes |
| `activity_regularizer` | Optional regularizer function for the output of this layer. |
| `compute_dtype` | The dtype of the layer's computations. This is equivalent to `Layer.dtype_policy.compute_dtype`. Unless mixed precision is used, this is the same as [`Layer.dtype`](../keras/layers/layer#dtype), the dtype of the weights. Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in [`Layer.**call**`](../keras/layers/layer#__call__), so you do not have to insert these casts if implementing your own layer. Layers often perform certain internal computations in higher precision when `compute_dtype` is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases. |
| `dtype` | The dtype of the layer weights. This is equivalent to `Layer.dtype_policy.variable_dtype`. Unless mixed precision is used, this is the same as [`Layer.compute_dtype`](../keras/layers/layer#compute_dtype), the dtype of the layer's computations. |
| `dtype_policy` | The dtype policy associated with this layer. This is an instance of a [`tf.keras.mixed_precision.Policy`](../keras/mixed_precision/policy). |
| `dynamic` | Whether the layer is dynamic (eager-only); set in the constructor. |
| `input` | Retrieves the input tensor(s) of a layer. Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer. |
| `input_spec` | `InputSpec` instance(s) describing the input format for this layer. When you create a layer subclass, you can set `self.input_spec` to enable the layer to run input compatibility checks when it is called. Consider a `Conv2D` layer: it can only be called on a single input tensor of rank 4. As such, you can set, in `__init__()`:
```
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```
Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape `(2,)`), it will raise a nicely-formatted error:
```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```
Input checks that can be specified via `input_spec` include:
* Structure (e.g. a single input, a list of 2 inputs, etc)
* Shape
* Rank (ndim)
* Dtype
For more information, see [`tf.keras.layers.InputSpec`](../keras/layers/inputspec). |
| `losses` | List of losses added using the `add_loss()` API. Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a [`tf.GradientTape`](../gradienttape) will propagate gradients back to the corresponding variables.
```
class MyLayer(tf.keras.layers.Layer):
def call(self, inputs):
self.add_loss(tf.abs(tf.reduce_mean(inputs)))
return inputs
l = MyLayer()
l(np.ones((10, 1)))
l.losses
[1.0]
```
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
len(model.losses)
0
model.add_loss(tf.abs(tf.reduce_mean(x)))
len(model.losses)
1
```
```
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10, kernel_initializer='ones')
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
```
|
| `metrics` | List of metrics added using the `add_metric()` API.
```
input = tf.keras.layers.Input(shape=(3,))
d = tf.keras.layers.Dense(2)
output = d(input)
d.add_metric(tf.reduce_max(output), name='max')
d.add_metric(tf.reduce_min(output), name='min')
[m.name for m in d.metrics]
['max', 'min']
```
|
| `non_trainable_weights` | List of all non-trainable weights tracked by this layer. Non-trainable weights are *not* updated during training. They are expected to be updated manually in `call()`. |
| `output` | Retrieves the output tensor(s) of a layer. Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer. |
| `output_size` | |
| `state_size` | |
| `supports_masking` | Whether this layer supports computing a mask using `compute_mask`. |
| `trainable` | |
| `trainable_weights` | List of all trainable weights tracked by this layer. Trainable weights are updated via gradient descent during training. |
| `variable_dtype` | Alias of [`Layer.dtype`](../keras/layers/layer#dtype), the dtype of the weights. |
| `weights` | Returns the list of all layer variables/weights. |
| `wrapped_cell` | |
Methods
-------
### `add_loss`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1412-L1530)
```
add_loss(
losses, **kwargs
)
```
Add loss tensor(s), potentially dependent on layer inputs.
Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs `a` and `b`, some entries in `layer.losses` may be dependent on `a` and some on `b`. This method automatically keeps track of dependencies.
This method can be used inside a subclassed layer or model's `call` function, in which case `losses` should be a Tensor or list of Tensors.
#### Example:
```
class MyLayer(tf.keras.layers.Layer):
def call(self, inputs):
self.add_loss(tf.abs(tf.reduce_mean(inputs)))
return inputs
```
This method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These losses become part of the model's topology and are tracked in `get_config`.
#### Example:
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```
If this is not the case for your loss (if, for example, your loss references a `Variable` of one of the model's layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model's topology since they can't be serialized.
#### Example:
```
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```
| Args |
| `losses` | Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor. |
| `**kwargs` | Additional keyword arguments for backward compatibility. Accepted values: inputs - Deprecated, will be automatically inferred. |
### `add_metric`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1565-L1684)
```
add_metric(
value, name=None, **kwargs
)
```
Adds metric tensor to the layer.
This method can be used inside the `call()` method of a subclassed layer or model.
```
class MyMetricLayer(tf.keras.layers.Layer):
def __init__(self):
super(MyMetricLayer, self).__init__(name='my_metric_layer')
self.mean = tf.keras.metrics.Mean(name='metric_1')
def call(self, inputs):
self.add_metric(self.mean(inputs))
self.add_metric(tf.reduce_sum(inputs), name='metric_2')
return inputs
```
This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These metrics become part of the model's topology and are tracked when you save the model via `save()`.
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```
>
> **Note:** Calling `add_metric()` with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model's inputs.
>
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```
| Args |
| `value` | Metric tensor. |
| `name` | String metric name. |
| `**kwargs` | Additional keyword arguments for backward compatibility. Accepted values: `aggregation` - When the `value` tensor provided is not the result of calling a `keras.Metric` instance, it will be aggregated by default using a `keras.Metric.Mean`. |
### `build`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L193-L195)
```
build(
inputs_shape
)
```
### `compute_mask`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L910-L930)
```
compute_mask(
inputs, mask=None
)
```
Computes an output mask tensor.
| Args |
| `inputs` | Tensor or list of tensors. |
| `mask` | Tensor or list of tensors. |
| Returns |
| None or a tensor (or list of tensors, one per output tensor of the layer). |
### `compute_output_shape`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L756-L799)
```
compute_output_shape(
input_shape
)
```
Computes the output shape of the layer.
If the layer has not been built, this method will call `build` on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.
| Args |
| `input_shape` | Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer. |
| Returns |
| An input shape tuple. |
### `count_params`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L2129-L2148)
```
count_params()
```
Count the total number of scalars composing the weights.
| Returns |
| An integer count. |
| Raises |
| `ValueError` | if the layer isn't yet built (in which case its weights aren't yet defined). |
### `from_config`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L306-L316)
```
@classmethod
from_config(
config, custom_objects=None
)
```
Creates a layer from its config.
This method is the reverse of `get_config`, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by `set_weights`).
| Args |
| `config` | A Python dictionary, typically the output of get\_config. |
| Returns |
| A layer instance. |
### `get_config`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L287-L304)
```
get_config()
```
Returns the config of the dropout wrapper.
### `get_initial_state`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/recurrent.py#L1092-L1093)
```
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
```
### `get_weights`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1808-L1850)
```
get_weights()
```
Returns the current weights of the layer, as NumPy arrays.
The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.
For example, a `Dense` layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another `Dense` layer:
```
layer_a = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
[2.],
[2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
```
| Returns |
| Weights values as a list of NumPy arrays. |
### `set_weights`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1723-L1806)
```
set_weights(
weights
)
```
Sets the weights of the layer, from NumPy arrays.
The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, by calling the layer.
For example, a `Dense` layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another `Dense` layer:
```
layer_a = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
[2.],
[2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
```
| Args |
| `weights` | a list of NumPy arrays. The number of arrays and their shape must match number of the dimensions of the weights of the layer (i.e. it should match the output of `get_weights`). |
| Raises |
| `ValueError` | If the provided weights list does not match the layer's specifications. |
### `zero_state`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L197-L199)
```
zero_state(
batch_size, dtype
)
```
### `__call__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L932-L1053)
```
__call__(
*args, **kwargs
)
```
Wraps `call`, applying pre- and post-processing steps.
| Args |
| `*args` | Positional arguments to be passed to `self.call`. |
| `**kwargs` | Keyword arguments to be passed to `self.call`. |
| Returns |
| Output tensor(s). |
#### Note:
* The following optional keyword arguments are reserved for specific uses:
+ `training`: Boolean scalar tensor of Python boolean indicating whether the `call` is meant for training or inference.
+ `mask`: Boolean input mask.
* If the layer's `call` method takes a `mask` argument (as some Keras layers do), its default value will be set to the mask generated for `inputs` by the previous layer (if `input` did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
* If the layer is not built, the method will call `build`.
| Raises |
| `ValueError` | if the layer's `call` method returns None (an invalid value). |
| `RuntimeError` | if `super().__init__()` was not called in the constructor. |
tensorflow tf.nn.batch_norm_with_global_normalization tf.nn.batch\_norm\_with\_global\_normalization
==============================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L1757-L1804) |
Batch normalization.
```
tf.nn.batch_norm_with_global_normalization(
input,
mean,
variance,
beta,
gamma,
variance_epsilon,
scale_after_normalization,
name=None
)
```
This op is deprecated. See [`tf.nn.batch_normalization`](batch_normalization).
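As the deprecation note suggests, new code should use [`tf.nn.batch_normalization`](batch_normalization) instead; a minimal sketch of the replacement (shapes are illustrative):
```
import tensorflow as tf

x = tf.random.normal([8, 4, 4, 3])
mean, variance = tf.nn.moments(x, axes=[0, 1, 2])  # per-channel statistics
beta, gamma = tf.zeros([3]), tf.ones([3])          # offset and scale
y = tf.nn.batch_normalization(x, mean, variance, beta, gamma,
                              variance_epsilon=1e-5)
```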
| Args |
| `input` | A 4D input Tensor. |
| `mean` | A 1D mean Tensor with size matching the last dimension of `input`. This is the first output from tf.nn.moments, or a saved moving average thereof. |
| `variance` | A 1D variance Tensor with size matching the last dimension of `input`. This is the second output from tf.nn.moments, or a saved moving average thereof. |
| `beta` | A 1D beta Tensor with size matching the last dimension of `input`. An offset to be added to the normalized tensor. |
| `gamma` | A 1D gamma Tensor with size matching the last dimension of `input`. If "scale\_after\_normalization" is true, this tensor will be multiplied with the normalized tensor. |
| `variance_epsilon` | A small float number to avoid dividing by 0. |
| `scale_after_normalization` | A bool indicating whether the resulted tensor needs to be multiplied with gamma. |
| `name` | A name for this operation (optional). |
| Returns |
| A batch-normalized `Tensor`. |
#### References:
Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html) ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))
tensorflow tf.nn.sigmoid_cross_entropy_with_logits tf.nn.sigmoid\_cross\_entropy\_with\_logits
===========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L153-L245) |
Computes sigmoid cross entropy given `logits`.
```
tf.nn.sigmoid_cross_entropy_with_logits(
labels=None, logits=None, name=None
)
```
Measures the probability error in tasks with two outcomes in which each outcome is independent and need not have a fully certain label. For instance, one could perform a regression where the probability of an event happening is known and used as a label. This loss may also be used for binary classification, where labels are either zero or one.
For brevity, let `x = logits`, `z = labels`. The logistic loss is
```
z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))
```
For x < 0, to avoid overflow in exp(-x), we reformulate the above
```
x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))
```
Hence, to ensure stability and avoid overflow, the implementation uses this equivalent formulation
```
max(x, 0) - x * z + log(1 + exp(-abs(x)))
```
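A quick numeric check (values chosen only for illustration) that the op matches this stable formulation:
```
import tensorflow as tf

x = tf.constant([2.0, -3.0, 0.5])   # logits
z = tf.constant([1.0, 0.0, 0.5])    # labels
manual = tf.maximum(x, 0.) - x * z + tf.math.log(1. + tf.exp(-tf.abs(x)))
op = tf.nn.sigmoid_cross_entropy_with_logits(labels=z, logits=x)
print(tf.reduce_max(tf.abs(manual - op)).numpy())  # ~0.0
```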
`logits` and `labels` must have the same type and shape.
```
logits = tf.constant([1., -1., 0., 1., -1., 0., 0.])
labels = tf.constant([0., 0., 0., 1., 1., 1., 0.5])
tf.nn.sigmoid_cross_entropy_with_logits(
labels=labels, logits=logits).numpy()
array([1.3132617, 0.3132617, 0.6931472, 0.3132617, 1.3132617, 0.6931472,
0.6931472], dtype=float32)
```
Compared to the losses which handle multiple outcomes, [`tf.nn.softmax_cross_entropy_with_logits`](softmax_cross_entropy_with_logits) for general multi-class classification and [`tf.nn.sparse_softmax_cross_entropy_with_logits`](sparse_softmax_cross_entropy_with_logits) for more efficient multi-class classification with hard labels, `sigmoid_cross_entropy_with_logits` is a slight simplification for binary classification:
```
sigmoid(x) = softmax([x, 0])[0]
```
\[\frac{1}{1 + e^{-x} } = \frac{e^x}{e^x + e^0}\]
While `sigmoid_cross_entropy_with_logits` works for soft binary labels (probabilities between 0 and 1), it can also be used for binary classification where the labels are hard. In that case all three losses are equivalent, with a probability of 0 indicating the second class and 1 indicating the first class:
```
sigmoid_logits = tf.constant([1., -1., 0.])
softmax_logits = tf.stack([sigmoid_logits, tf.zeros_like(sigmoid_logits)],
axis=-1)
soft_binary_labels = tf.constant([1., 1., 0.])
soft_multiclass_labels = tf.stack(
[soft_binary_labels, 1. - soft_binary_labels], axis=-1)
hard_labels = tf.constant([0, 0, 1])
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=hard_labels, logits=softmax_logits).numpy()
array([0.31326166, 1.3132616 , 0.6931472 ], dtype=float32)
tf.nn.softmax_cross_entropy_with_logits(
labels=soft_multiclass_labels, logits=softmax_logits).numpy()
array([0.31326166, 1.3132616, 0.6931472], dtype=float32)
tf.nn.sigmoid_cross_entropy_with_logits(
labels=soft_binary_labels, logits=sigmoid_logits).numpy()
array([0.31326166, 1.3132616, 0.6931472], dtype=float32)
```
| Args |
| `labels` | A `Tensor` of the same type and shape as `logits`. Between 0 and 1, inclusive. |
| `logits` | A `Tensor` of type `float32` or `float64`. Any real number. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of the same shape as `logits` with the componentwise logistic losses. |
| Raises |
| `ValueError` | If `logits` and `labels` do not have the same shape. |
tensorflow tf.nn.softsign tf.nn.softsign
==============
Computes softsign: `features / (abs(features) + 1)`.
#### View aliases
**Main aliases**
[`tf.math.softsign`](https://www.tensorflow.org/api_docs/python/tf/nn/softsign)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.math.softsign`](https://www.tensorflow.org/api_docs/python/tf/nn/softsign), [`tf.compat.v1.nn.softsign`](https://www.tensorflow.org/api_docs/python/tf/nn/softsign)
```
tf.nn.softsign(
features, name=None
)
```
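For example, a quick sketch of a few values:
```
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0, 3.0])
print(tf.nn.softsign(x).numpy())
# [-0.5   0.    0.5   0.75]
```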
| Args |
| `features` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `features`. |
tensorflow tf.nn.softmax_cross_entropy_with_logits tf.nn.softmax\_cross\_entropy\_with\_logits
===========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3954-L4014) |
Computes softmax cross entropy between `logits` and `labels`.
```
tf.nn.softmax_cross_entropy_with_logits(
labels, logits, axis=-1, name=None
)
```
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
>
> **Note:** While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.
>
If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.
#### Usage:
```
logits = [[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]]
labels = [[1.0, 0.0, 0.0], [0.0, 0.8, 0.2]]
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
<tf.Tensor: shape=(2,), dtype=float32,
numpy=array([0.16984604, 0.82474494], dtype=float32)>
```
A common use case is to have logits and labels of shape `[batch_size, num_classes]`, but higher dimensions are supported, with the `axis` argument specifying the class dimension.
`logits` and `labels` must have the same dtype (either `float16`, `float32`, or `float64`).
Backpropagation will happen into both `logits` and `labels`. To disallow backpropagation into `labels`, pass label tensors through [`tf.stop_gradient`](../stop_gradient) before feeding it to this function.
**Note that to avoid confusion, it is required to pass only named arguments to this function.**
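A minimal sketch of blocking gradients into the labels (values are illustrative):
```
import tensorflow as tf

logits = tf.Variable([[4.0, 2.0, 1.0]])
labels = tf.constant([[0.7, 0.2, 0.1]])  # a soft distribution
with tf.GradientTape() as tape:
    loss = tf.nn.softmax_cross_entropy_with_logits(
        labels=tf.stop_gradient(labels), logits=logits)
grads = tape.gradient(loss, logits)  # gradients flow into logits only
```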
| Args |
| `labels` | Each vector along the class dimension should hold a valid probability distribution e.g. for the case in which labels are of shape `[batch_size, num_classes]`, each row of `labels[i]` must be a valid probability distribution. |
| `logits` | Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities. |
| `axis` | The class dimension. Defaulted to -1 which is the last dimension. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` that contains the softmax cross entropy loss. Its type is the same as `logits` and its shape is the same as `labels` except that it does not have the last dimension of `labels`. |
tensorflow tf.nn.ctc_beam_search_decoder tf.nn.ctc\_beam\_search\_decoder
================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ctc_ops.py#L444-L493) |
Performs beam search decoding on the logits given in input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.ctc_beam_search_decoder_v2`](https://www.tensorflow.org/api_docs/python/tf/nn/ctc_beam_search_decoder)
```
tf.nn.ctc_beam_search_decoder(
inputs, sequence_length, beam_width=100, top_paths=1
)
```
>
> **Note:** Although in general greedy search is a special case of beam-search with `top_paths=1` and `beam_width=1`, `ctc_beam_search_decoder` differs from `ctc_greedy_decoder` in the treatment of blanks when computing the probability of a sequence:
>
* `ctc_beam_search_decoder` treats blanks as sequence termination
* `ctc_greedy_decoder` treats blanks as regular elements
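A minimal usage sketch with random logits (all sizes are illustrative):
```
import tensorflow as tf

logits = tf.random.normal([50, 2, 10])  # [max_time, batch_size, num_classes]
seq_len = tf.constant([50, 40], dtype=tf.int32)
decoded, log_probs = tf.nn.ctc_beam_search_decoder(
    logits, seq_len, beam_width=10, top_paths=2)
print(len(decoded), log_probs.shape)  # 2 (2, 2)
```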
| Args |
| `inputs` | 3-D `float` `Tensor`, size `[max_time, batch_size, num_classes]`. The logits. |
| `sequence_length` | 1-D `int32` vector containing sequence lengths, having size `[batch_size]`. |
| `beam_width` | An int scalar >= 0 (beam search beam width). |
| `top_paths` | An int scalar >= 0, <= beam\_width (controls output size). |
| Returns |
| A tuple `(decoded, log_probabilities)` where |
| `decoded` | A list of length top\_paths, where `decoded[j]` is a `SparseTensor` containing the decoded outputs: `decoded[j].indices`: Indices matrix `[total_decoded_outputs[j], 2]`; The rows store: `[batch, time]`. `decoded[j].values`: Values vector, size `[total_decoded_outputs[j]]`. The vector stores the decoded classes for beam `j`. `decoded[j].dense_shape`: Shape vector, size `(2)`. The shape values are: `[batch_size, max_decoded_length[j]]`. |
| `log_probabilities` | A `float` matrix `[batch_size, top_paths]` containing sequence log-probabilities. |
tensorflow tf.nn.space_to_depth tf.nn.space\_to\_depth
======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L4106-L4109) |
SpaceToDepth for tensors of type T.
```
tf.nn.space_to_depth(
input, block_size, data_format='NHWC', name=None
)
```
Rearranges blocks of spatial data, into depth. More specifically, this op outputs a copy of the input tensor where values from the `height` and `width` dimensions are moved to the `depth` dimension. The attr `block_size` indicates the input block size.
* Non-overlapping blocks of size `block_size x block_size` are rearranged into depth at each location.
* The depth of the output tensor is `block_size * block_size * input_depth`.
* The Y, X coordinates within each block of the input become the high order component of the output channel index.
* The input tensor's height and width must be divisible by block\_size.
The `data_format` attr specifies the layout of the input and output tensors with the following options:
* "NHWC": `[ batch, height, width, channels ]`
* "NCHW": `[ batch, channels, height, width ]`
* "NCHW\_VECT\_C": `qint8 [ batch, channels / 4, height, width, 4 ]`
It is useful to consider the operation as transforming a 6-D Tensor. For example, with data\_format = "NHWC", each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates within the output image, bX, bY means coordinates within the input block, iC means input channels). The output would be a transpose to the following layout: n,oY,oX,bY,bX,iC
This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.
For example, given an input of shape `[1, 2, 2, 1]`, data\_format = "NHWC" and block\_size = 2:
```
x = [[[[1], [2]],
[[3], [4]]]]
```
This operation will output a tensor of shape `[1, 1, 1, 4]`:
```
[[[[1, 2, 3, 4]]]]
```
Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`, the corresponding output will have a single element (i.e. width and height are both 1) and will have a depth of 4 channels (1 \* block\_size \* block\_size). The output element shape is `[1, 1, 4]`.
For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.
```
x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
```
This operation, for block\_size of 2, will return the following tensor of shape `[1, 1, 1, 12]`
```
[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```
Similarly, for the following input of shape `[1, 4, 4, 1]`, and a block size of 2:
```
x = [[[[1], [2], [5], [6]],
[[3], [4], [7], [8]],
[[9], [10], [13], [14]],
[[11], [12], [15], [16]]]]
```
the operator will return the following tensor of shape `[1, 2, 2, 4]`:
```
x = [[[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[9, 10, 11, 12],
[13, 14, 15, 16]]]]
```
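The first example above, as runnable code:
```
import tensorflow as tf

x = tf.constant([[[[1], [2]],
                  [[3], [4]]]])
print(tf.nn.space_to_depth(x, block_size=2).numpy())
# [[[[1 2 3 4]]]]
```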
| Args |
| `input` | A `Tensor`. |
| `block_size` | An `int` that is `>= 2`. The size of the spatial block. |
| `data_format` | An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.nn.embedding_lookup_sparse tf.nn.embedding\_lookup\_sparse
===============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/embedding_ops.py#L591-L679) |
Looks up embeddings for the given ids and weights from a list of tensors.
```
tf.nn.embedding_lookup_sparse(
params, sp_ids, sp_weights, combiner=None, max_norm=None, name=None
)
```
This op assumes that there is at least one id for each row in the dense tensor represented by sp\_ids (i.e. there are no rows with empty features), and that all the indices of sp\_ids are in canonical row-major order.
`sp_ids` and `sp_weights` (if not None) are `SparseTensor`s with rank of 2. Embeddings are always aggregated along the last dimension.
It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
If `len(params) > 1`, each element of `sp_ids` is partitioned between the elements of `params` according to the "div" partition strategy, which means we assign ids to partitions in a contiguous manner. For instance, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`.
If the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.
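A minimal sketch (table values and ids are illustrative):
```
import tensorflow as tf

params = tf.random.normal([10, 20])  # a single 10x20 embedding table
sp_ids = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0]],
                         values=tf.constant([1, 3, 0], dtype=tf.int64),
                         dense_shape=[2, 2])
out = tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights=None,
                                    combiner='mean')
print(out.shape)  # (2, 20) -- one combined embedding per row of sp_ids
```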
| Args |
| `params` | A single tensor representing the complete embedding tensor, or a list of tensors all of same shape except for the first dimension, representing sharded embedding tensors following "div" partition strategy. |
| `sp_ids` | N x M `SparseTensor` of int64 ids where N is typically batch size and M is arbitrary. |
| `sp_weights` | either a `SparseTensor` of float / double weights, or `None` to indicate all weights should be taken to be 1. If specified, `sp_weights` must have exactly the same shape and indices as `sp_ids`. |
| `combiner` | A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights. Defaults to `mean`. |
| `max_norm` | If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining. |
| `name` | Optional name for the op. |
| Returns |
| A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified. In other words, if `shape(combined params) = [p0, p1, ..., pm]` and `shape(sp_ids) = shape(sp_weights) = [d0, d1]` then `shape(output) = [d0, p1, ..., pm]`. For instance, if params is a 10x20 matrix, and sp\_ids / sp\_weights are
```
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
```
with `combiner`="mean", then the output will be a 3x20 matrix where
```
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```
|
| Raises |
| `TypeError` | If `sp_ids` is not a `SparseTensor`, or if `sp_weights` is neither `None` nor `SparseTensor`. |
| `ValueError` | If `combiner` is not one of {"mean", "sqrtn", "sum"}. |
tensorflow tf.nn.collapse_repeated tf.nn.collapse\_repeated
========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ctc_ops.py#L1122-L1183) |
Merge repeated labels into single labels.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.collapse_repeated`](https://www.tensorflow.org/api_docs/python/tf/nn/collapse_repeated)
```
tf.nn.collapse_repeated(
labels, seq_length, name=None
)
```
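A runnable sketch of the collapsing behavior (the integer labels are illustrative):
```
import tensorflow as tf

labels = tf.constant([[1, 1, 2, 2, 1],
                      [1, 2, 3, 4, 5]])
seq_length = tf.constant([5, 5])
collapsed, new_len = tf.nn.collapse_repeated(labels, seq_length)
print(collapsed.numpy())  # [[1 2 1 0 0] [1 2 3 4 5]]
print(new_len.numpy())    # [3 5]
```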
| Args |
| `labels` | Tensor of shape [batch, max value in seq\_length] |
| `seq_length` | Tensor of shape [batch], sequence length of each batch element. |
| `name` | A name for this `Op`. Defaults to "collapse\_repeated\_labels". |
| Returns |
| A tuple `(collapsed_labels, new_seq_length)` where |
| `collapsed_labels` | Tensor of shape [batch, max\_seq\_length] with repeated labels collapsed and padded to max\_seq\_length, eg: `[[A, A, B, B, A], [A, B, C, D, E]] => [[A, B, A, 0, 0], [A, B, C, D, E]]` |
| `new_seq_length` | int tensor of shape [batch] with new sequence lengths. |
tensorflow tf.nn.gelu tf.nn.gelu
==========
Compute the Gaussian Error Linear Unit (GELU) activation function.
```
tf.nn.gelu(
features, approximate=False, name=None
)
```
Gaussian error linear unit (GELU) computes `x * P(X <= x)`, where `P(X) ~ N(0, 1)`. The GELU nonlinearity weights inputs by their value, rather than gating them by their sign as in ReLU.
#### For example:
```
x = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
y = tf.nn.gelu(x)
y.numpy()
array([-0.00404951, -0.15865529, 0. , 0.8413447 , 2.9959507 ],
dtype=float32)
y = tf.nn.gelu(x, approximate=True)
y.numpy()
array([-0.00363752, -0.15880796, 0. , 0.841192 , 2.9963627 ],
dtype=float32)
```
| Args |
| `features` | A `Tensor` representing preactivation values. |
| `approximate` | An optional `bool`. Defaults to `False`. Whether to enable approximation. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` with the same type as `features`. |
#### References:
[Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415).
| programming_docs |
tensorflow tf.nn.max_pool2d tf.nn.max\_pool2d
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L4937-L5042) |
Performs max pooling on 2D spatial data such as images.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.max_pool2d`](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool2d)
```
tf.nn.max_pool2d(
input, ksize, strides, padding, data_format='NHWC', name=None
)
```
This is a more specific version of [`tf.nn.max_pool`](max_pool) where the input tensor is 4D, representing 2D spatial data such as images. For 4D inputs the two APIs are equivalent.
Downsamples the input images along their spatial dimensions (height and width) by taking the maximum over an input window defined by `ksize`. The window is shifted by `strides` along each dimension.
For example, for `strides=(2, 2)` and `padding=VALID` windows that extend outside of the input are not included in the output:
```
x = tf.constant([[1., 2., 3., 4.],
[5., 6., 7., 8.],
[9., 10., 11., 12.]])
# Add the `batch` and `channels` dimensions.
x = x[tf.newaxis, :, :, tf.newaxis]
result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),
padding="VALID")
result[0, :, :, 0]
<tf.Tensor: shape=(1, 2), dtype=float32, numpy=
array([[6., 8.]], dtype=float32)>
```
With `padding=SAME`, we get:
```
x = tf.constant([[1., 2., 3., 4.],
[5., 6., 7., 8.],
[9., 10., 11., 12.]])
x = x[tf.newaxis, :, :, tf.newaxis]
result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),
padding='SAME')
result[0, :, :, 0]
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 6., 8.],
[10., 12.]], dtype=float32)>
```
We can also specify padding explicitly. The following example adds width-1 padding on all sides (top, bottom, left, right):
```
x = tf.constant([[1., 2., 3., 4.],
[5., 6., 7., 8.],
[9., 10., 11., 12.]])
x = x[tf.newaxis, :, :, tf.newaxis]
result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),
padding=[[0, 0], [1, 1], [1, 1], [0, 0]])
result[0, :, :, 0]
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 1., 3., 4.],
[ 9., 11., 12.]], dtype=float32)>
```
For more examples and detail, see [`tf.nn.max_pool`](max_pool).
| Args |
| `input` | A 4-D `Tensor` of the format specified by `data_format`. |
| `ksize` | An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor. If only one integer is specified, then we apply the same window for all 4 dims. If two are provided then we use those for H, W dimensions and keep N, C dimension window size = 1. |
| `strides` | An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor. If only one integer is specified, we apply the same stride to all 4 dims. If two are provided we use those for the H, W dimensions and keep N, C of stride = 1. |
| `padding` | Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. When explicit padding is used and data\_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`. When using explicit padding, the size of the paddings cannot be greater than the sliding window size. |
| `data_format` | A string. 'NHWC', 'NCHW' and 'NCHW\_VECT\_C' are supported. |
| `name` | Optional name for the operation. |
| Returns |
| A `Tensor` of format specified by `data_format`. The max pooled output tensor. |
tensorflow tf.nn.relu tf.nn.relu
==========
Computes rectified linear: `max(features, 0)`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.relu`](https://www.tensorflow.org/api_docs/python/tf/nn/relu)
```
tf.nn.relu(
features, name=None
)
```
See: <https://en.wikipedia.org/wiki/Rectifier_(neural_networks)>
#### Example usage:
```
tf.nn.relu([-2., 0., 3.]).numpy()
array([0., 0., 3.], dtype=float32)
```
| Args |
| `features` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `features`. |
tensorflow tf.nn.avg_pool3d tf.nn.avg\_pool3d
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L4634-L4672) |
Performs the average pooling on the input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.avg_pool3d`](https://www.tensorflow.org/api_docs/python/tf/nn/avg_pool3d)
```
tf.nn.avg_pool3d(
input, ksize, strides, padding, data_format='NDHWC', name=None
)
```
Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
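For example (shapes are illustrative):
```
import tensorflow as tf

x = tf.random.normal([1, 4, 4, 4, 1])  # NDHWC
y = tf.nn.avg_pool3d(x, ksize=2, strides=2, padding='VALID')
print(y.shape)  # (1, 2, 2, 2, 1)
```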
| Args |
| `input` | A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`. |
| `ksize` | An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor. |
| `strides` | An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A string. 'NDHWC' and 'NCDHW' are supported. |
| `name` | Optional name for the operation. |
| Returns |
| A `Tensor` with the same type as `input`. The average pooled output tensor. |
tensorflow tf.nn.conv3d tf.nn.conv3d
============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3202-L3214) |
Computes a 3-D convolution given 5-D `input` and `filters` tensors.
```
tf.nn.conv3d(
input,
filters,
strides,
padding,
data_format='NDHWC',
dilations=None,
name=None
)
```
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.
Our Conv3D implements a form of cross-correlation.
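A minimal sketch (shapes are illustrative):
```
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 8, 2])  # NDHWC
f = tf.random.normal([3, 3, 3, 2, 4])  # [depth, height, width, in_channels, out_channels]
y = tf.nn.conv3d(x, f, strides=[1, 1, 1, 1, 1], padding='SAME')
print(y.shape)  # (1, 8, 8, 8, 4)
```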
| Args |
| `input` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. Shape `[batch, in_depth, in_height, in_width, in_channels]`. |
| `filters` | A `Tensor`. Must have the same type as `input`. Shape `[filter_depth, filter_height, filter_width, in_channels, out_channels]`. `in_channels` must match between `input` and `filters`. |
| `strides` | A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`. |
| `padding` | A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use. |
| `data_format` | An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in\_depth, in\_height, in\_width, in\_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in\_channels, in\_depth, in\_height, in\_width]. |
| `dilations` | An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. 1-D tensor of length 5. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
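A minimal usage sketch (shapes chosen for illustration):
```
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 8, 3])  # [batch, in_depth, in_height, in_width, in_channels]
w = tf.random.normal([2, 2, 2, 3, 4])  # [fd, fh, fw, in_channels, out_channels]
y = tf.nn.conv3d(x, w, strides=[1, 1, 1, 1, 1], padding='SAME')
print(y.shape)  # (1, 8, 8, 8, 4)
```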
tensorflow tf.nn.pool tf.nn.pool
==========
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L1652-L1750) |
Performs an N-D pooling operation.
```
tf.nn.pool(
input,
window_shape,
pooling_type,
strides=None,
padding='VALID',
data_format=None,
dilations=None,
name=None
)
```
In the case that `data_format` does not start with "NC", computes for 0 <= b < batch\_size, 0 <= x[i] < output\_spatial\_shape[i], 0 <= c < num\_channels:
```
output[b, x[0], ..., x[N-1], c] =
REDUCE_{z[0], ..., z[N-1]}
input[b,
x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0],
...
x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1],
c],
```
where the reduction function REDUCE depends on the value of `pooling_type`, and pad\_before is defined based on the value of `padding` as described in the "returns" section of [`tf.nn.convolution`](convolution) for details. The reduction never includes out-of-bounds positions.
In the case that `data_format` starts with `"NC"`, the `input` and output are simply transposed as follows:
```
pool(input, data_format, **kwargs) =
tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]),
**kwargs),
[0, N+1] + range(1, N+1))
```
| Args |
| `input` | Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if data\_format does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data\_format starts with "NC". Pooling happens over the spatial dimensions only. |
| `window_shape` | Sequence of N ints >= 1. |
| `pooling_type` | Specifies pooling operation, must be "AVG" or "MAX". |
| `strides` | Optional. Sequence of N ints >= 1. Defaults to `[1]*N`. If any value of strides is > 1, then all values of dilation\_rate must be 1. |
| `padding` | The padding algorithm, must be "SAME" or "VALID". Defaults to "VALID". See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW". |
| `dilations` | Optional. Dilation rate. List of N ints >= 1. Defaults to `[1]*N`. If any value of dilation\_rate is > 1, then all values of strides must be 1. |
| `name` | Optional. Name of the op. |
| Returns |
| Tensor of rank N+2, of shape [batch\_size] + output\_spatial\_shape + [num\_channels] if data\_format is None or does not start with "NC", or [batch\_size, num\_channels] + output\_spatial\_shape if data\_format starts with "NC", where `output_spatial_shape` depends on the value of padding: If padding = "SAME": output\_spatial\_shape[i] = ceil(input\_spatial\_shape[i] / strides[i]) If padding = "VALID": output\_spatial\_shape[i] = ceil((input\_spatial\_shape[i] - (window\_shape[i] - 1) \* dilation\_rate[i]) / strides[i]). |
| Raises |
| `ValueError` | if arguments are invalid. |
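An illustration of the generic N-D interface (example values are assumptions, not from the reference):
```
import tensorflow as tf

x = tf.random.normal([1, 6, 6, 3])  # NHWC, so N=2 spatial dimensions
y = tf.nn.pool(x, window_shape=[2, 2], pooling_type='MAX',
               strides=[2, 2], padding='VALID')
print(y.shape)  # (1, 3, 3, 3): ceil((6 - (2-1)*1) / 2) = 3 per spatial dim
```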
tensorflow tf.nn.conv1d tf.nn.conv1d
============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L2062-L2129) |
Computes a 1-D convolution given 3-D input and filter tensors.
```
tf.nn.conv1d(
input,
filters,
stride,
padding,
data_format='NWC',
dilations=None,
name=None
)
```
Given an input tensor of shape `batch_shape + [in_width, in_channels]` if `data_format` is `"NWC"`, or `batch_shape + [in_channels, in_width]` if `data_format` is `"NCW"`, and a filter / kernel tensor of shape `[filter_width, in_channels, out_channels]`, this op reshapes the arguments to pass them to `conv2d` to perform the equivalent convolution operation.
Internally, this op reshapes the input tensors and invokes [`tf.nn.conv2d`](conv2d). For example, if `data_format` does not start with `"NC"`, a tensor of shape `batch_shape + [in_width, in_channels]` is reshaped to `batch_shape + [1, in_width, in_channels]`, and the filter is reshaped to `[1, filter_width, in_channels, out_channels]`. The result is then reshaped back to `batch_shape + [out_width, out_channels]` (where out\_width is a function of the stride and padding as in conv2d) and returned to the caller.
| Args |
| `input` | A Tensor of rank at least 3. Must be of type `float16`, `float32`, or `float64`. |
| `filters` | A Tensor of rank at least 3. Must have the same type as `input`. |
| `stride` | An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step. |
| `padding` | 'SAME' or 'VALID'. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | An optional `string` from `"NWC", "NCW"`. Defaults to `"NWC"`, the data is stored in the order of `batch_shape + [in_width, in_channels]`. The `"NCW"` format stores data as `batch_shape + [in_channels, in_width]`. |
| `dilations` | An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as input. |
| Raises |
| `ValueError` | if `data_format` is invalid. |
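A minimal sketch with assumed shapes:
```
import tensorflow as tf

x = tf.random.normal([2, 10, 3])  # batch_shape + [in_width, in_channels]
w = tf.random.normal([3, 3, 8])   # [filter_width, in_channels, out_channels]
y = tf.nn.conv1d(x, w, stride=1, padding='SAME')
print(y.shape)  # (2, 10, 8)
```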
tensorflow tf.nn.compute_average_loss tf.nn.compute\_average\_loss
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L409-L472) |
Scales per-example losses with sample\_weights and computes their average.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.compute_average_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/compute_average_loss)
```
tf.nn.compute_average_loss(
per_example_loss, sample_weight=None, global_batch_size=None
)
```
Usage with distribution strategy and custom training loop:
```
with strategy.scope():
def compute_loss(labels, predictions, sample_weight=None):
# If you are using a `Loss` class instead, set reduction to `NONE` so that
# we can do the reduction afterwards and divide by global batch size.
per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, predictions)
# Compute loss that is scaled by sample_weight and by global batch size.
return tf.nn.compute_average_loss(
per_example_loss,
sample_weight=sample_weight,
global_batch_size=GLOBAL_BATCH_SIZE)
```
| Args |
| `per_example_loss` | Per-example loss. |
| `sample_weight` | Optional weighting for each example. |
| `global_batch_size` | Optional global batch size value. Defaults to (size of first dimension of `per_example_loss`) \* (number of replicas). |
| Returns |
| Scalar loss value. |
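A self-contained numeric sketch outside any strategy scope (a single replica, so the global batch size equals the local one; values are illustrative):
```
import tensorflow as tf

per_example_loss = tf.constant([1.0, 2.0, 3.0, 4.0])
# Sums the per-example losses and divides by the global batch size.
loss = tf.nn.compute_average_loss(per_example_loss, global_batch_size=4)
print(loss.numpy())  # 2.5 == (1 + 2 + 3 + 4) / 4
```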
tensorflow tf.nn.convolution tf.nn.convolution
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L1140-L1157) |
Computes sums of N-D convolutions (actually cross-correlation).
```
tf.nn.convolution(
input,
filters,
strides=None,
padding='VALID',
data_format=None,
dilations=None,
name=None
)
```
This also supports either output striding via the optional `strides` parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional `dilations` parameter. Currently, however, output striding is not supported for atrous convolutions.
Specifically, in the case that `data_format` does not start with "NC", given a rank (N+2) `input` Tensor of shape
[num\_batches, input\_spatial\_shape[0], ..., input\_spatial\_shape[N-1], num\_input\_channels],
a rank (N+2) `filters` Tensor of shape
[spatial\_filter\_shape[0], ..., spatial\_filter\_shape[N-1], num\_input\_channels, num\_output\_channels],
an optional `dilations` tensor of shape N (defaults to `[1]*N`) specifying the filter upsampling/input downsampling rate, and an optional list of N `strides` (defaults to `[1]*N`), this computes for each N-D spatial output position `(x[0], ..., x[N-1])`:
```
output[b, x[0], ..., x[N-1], k] =
sum_{z[0], ..., z[N-1], q}
filter[z[0], ..., z[N-1], q, k] *
padded_input[b,
x[0]*strides[0] + dilation_rate[0]*z[0],
...,
x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1],
q]
```
where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, `padded_input` is obtained by zero padding the input using an effective spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and output striding `strides`.
In the case that `data_format` does start with `"NC"`, the `input` and output (but not the `filters`) are simply transposed as follows:
```
convolution(input, data_format, **kwargs) =
tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]),
**kwargs),
[0, N+1] + range(1, N+1))
```
It is required that 1 <= N <= 3.
| Args |
| `input` | An (N+2)-D `Tensor` of type `T`, of shape `[batch_size] + input_spatial_shape + [in_channels]` if data\_format does not start with "NC" (default), or `[batch_size, in_channels] + input_spatial_shape` if data\_format starts with "NC". |
| `filters` | An (N+2)-D `Tensor` with the same type as `input` and shape `spatial_filter_shape + [in_channels, out_channels]`. |
| `padding` | A string, either `"VALID"` or `"SAME"`. The padding algorithm. `"valid"` means no padding. `"same"` results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input when the strides are 1. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `strides` | Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to `[1]*N`. If any value of strides is > 1, then all values of dilation\_rate must be 1. |
| `dilations` | Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called `input stride` or `dilation`. The effective filter size used for the convolution will be `spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting (dilation\_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation\_rate is > 1, then all values of strides must be 1. |
| `name` | Optional name for the returned tensor. |
| `data_format` | A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW". |
| Returns |
| A `Tensor` with the same type as `input` of shape `[batch_size] + output_spatial_shape + [out_channels]` if data\_format is None or does not start with "NC", or `[batch_size, out_channels] + output_spatial_shape` if data\_format starts with "NC", where `output_spatial_shape` depends on the value of `padding`: If padding == "SAME": output\_spatial\_shape[i] = ceil(input\_spatial\_shape[i] / strides[i]). If padding == "VALID": output\_spatial\_shape[i] = ceil((input\_spatial\_shape[i] - (spatial\_filter\_shape[i]-1) \* dilation\_rate[i]) / strides[i]). |
| Raises |
| `ValueError` | If input/output depth does not match `filters` shape, if padding is other than `"VALID"` or `"SAME"`, or if data\_format is invalid. |
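A minimal 2-D sketch with dilation (illustrative shapes):
```
import tensorflow as tf

x = tf.random.normal([1, 10, 10, 3])  # NHWC
w = tf.random.normal([3, 3, 3, 8])    # spatial_filter_shape + [in_channels, out_channels]
# Atrous convolution: dilations > 1 requires strides of 1.
y = tf.nn.convolution(x, w, padding='SAME', dilations=[2, 2])
print(y.shape)  # (1, 10, 10, 8)
```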
tensorflow tf.nn.depth_to_space tf.nn.depth\_to\_space
======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L4125-L4128) |
DepthToSpace for tensors of type T.
```
tf.nn.depth_to_space(
input, block_size, data_format='NHWC', name=None
)
```
Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the `depth` dimension are moved in spatial blocks to the `height` and `width` dimensions. The attr `block_size` indicates the input block size and how the data is moved.
* Chunks of data of size `block_size * block_size` from depth are rearranged into non-overlapping blocks of size `block_size x block_size`
* The width of the output tensor is `input_width * block_size`, and the height is `input_height * block_size`.
* The Y, X coordinates within each block of the output image are determined by the high order component of the input channel index.
* The depth of the input tensor must be divisible by `block_size * block_size`.
The `data_format` attr specifies the layout of the input and output tensors with the following options: "NHWC": `[ batch, height, width, channels ]` "NCHW": `[ batch, channels, height, width ]` "NCHW\_VECT\_C": `qint8 [ batch, channels / 4, height, width, 4 ]`
It is useful to consider the operation as transforming a 6-D Tensor. e.g. for data\_format = NHWC, Each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates within the input image, bX, bY means coordinates within the output block, oC means output channels). The output would be the input transposed to the following layout: n,iY,bY,iX,bX,oC
This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.
For example, given an input of shape `[1, 1, 1, 4]`, data\_format = "NHWC" and block\_size = 2:
```
x = [[[[1, 2, 3, 4]]]]
```
This operation will output a tensor of shape `[1, 2, 2, 1]`:
```
[[[[1], [2]],
[[3], [4]]]]
```
Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, the corresponding output will have 2x2 elements and will have a depth of 1 channel (1 = `4 / (block_size * block_size)`). The output element shape is `[2, 2, 1]`.
For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.
```
x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```
This operation, for block size of 2, will return the following tensor of shape `[1, 2, 2, 3]`
```
[[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
```
Similarly, for the following input of shape `[1, 2, 2, 4]`, and a block size of 2:
```
x = [[[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[9, 10, 11, 12],
[13, 14, 15, 16]]]]
```
the operator will return the following tensor of shape `[1, 4, 4, 1]`:
```
x = [[[ [1], [2], [5], [6]],
[ [3], [4], [7], [8]],
[ [9], [10], [13], [14]],
[ [11], [12], [15], [16]]]]
```
| Args |
| `input` | A `Tensor`. |
| `block_size` | An `int` that is `>= 2`. The size of the spatial block, same as in Space2Depth. |
| `data_format` | An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
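The first example above, written out as runnable code:
```
import tensorflow as tf

x = tf.constant([[[[1, 2, 3, 4]]]])        # shape [1, 1, 1, 4]
y = tf.nn.depth_to_space(x, block_size=2)  # shape [1, 2, 2, 1]
print(y.numpy().reshape(2, 2))
# [[1 2]
#  [3 4]]
```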
tensorflow tf.nn.relu6 tf.nn.relu6
===========
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3593-L3627) |
Computes Rectified Linear 6: `min(max(features, 0), 6)`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.relu6`](https://www.tensorflow.org/api_docs/python/tf/nn/relu6)
```
tf.nn.relu6(
features, name=None
)
```
In comparison with [`tf.nn.relu`](relu), the relu6 activation has been shown empirically to perform better under low-precision conditions (e.g. fixed-point inference) by encouraging the model to learn sparse features earlier. Source: [Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010](http://www.cs.utoronto.ca/%7Ekriz/conv-cifar10-aug2010.pdf).
#### For example:
```
x = tf.constant([-3.0, -1.0, 0.0, 6.0, 10.0], dtype=tf.float32)
y = tf.nn.relu6(x)
y.numpy()
array([0., 0., 0., 6., 6.], dtype=float32)
```
| Args |
| `features` | A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, `int16`, or `int8`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` with the same type as `features`. |
#### References:
Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010 ([pdf](http://www.cs.utoronto.ca/%7Ekriz/conv-cifar10-aug2010.pdf))
tensorflow tf.nn.log_poisson_loss tf.nn.log\_poisson\_loss
========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L44-L107) |
Computes log Poisson loss given `log_input`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.log_poisson_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/log_poisson_loss)
```
tf.nn.log_poisson_loss(
targets, log_input, compute_full_loss=False, name=None
)
```
Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute\_full\_loss=True to enable Stirling's Approximation.
For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson loss is
```
-log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
[ Note the second term is the Stirling's Approximation for log(z!).
It is invariant to x and does not affect optimization, though
important for correct relative loss comparisons. It is only
computed when compute_full_loss == True. ]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
```
| Args |
| `targets` | A `Tensor` of the same type and shape as `log_input`. |
| `log_input` | A `Tensor` of type `float32` or `float64`. |
| `compute_full_loss` | whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of the same shape as `log_input` with the componentwise log Poisson losses. |
| Raises |
| `ValueError` | If `log_input` and `targets` do not have the same shape. |
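A small numeric sketch (values are illustrative); note that `log_input` is the prediction in log space:
```
import tensorflow as tf

targets = tf.constant([1.0, 2.0, 3.0])
log_input = tf.math.log(tf.constant([1.0, 2.0, 3.0]))  # c = log(x)
# Default loss is exp(c) - z * c (the constant log(z!) term is dropped).
loss = tf.nn.log_poisson_loss(targets, log_input)
print(loss.numpy())  # [1.0, 0.6137..., -0.2958...]
```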
tensorflow tf.nn.avg_pool1d tf.nn.avg\_pool1d
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L4587-L4631) |
Performs the average pooling on the input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.avg_pool1d`](https://www.tensorflow.org/api_docs/python/tf/nn/avg_pool1d)
```
tf.nn.avg_pool1d(
input, ksize, strides, padding, data_format='NWC', name=None
)
```
Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
Note internally this op reshapes and uses the underlying 2d operation.
| Args |
| `input` | A 3-D `Tensor` of the format specified by `data_format`. |
| `ksize` | An int or list of `ints` that has length `1` or `3`. The size of the window for each dimension of the input tensor. |
| `strides` | An int or list of `ints` that has length `1` or `3`. The stride of the sliding window for each dimension of the input tensor. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | An optional string from: "NWC", "NCW". Defaults to "NWC". |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of format specified by `data_format`. The average pooled output tensor. |
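A quick numeric sketch (illustrative values):
```
import tensorflow as tf

x = tf.constant([[[1.], [2.], [3.], [4.]]])  # shape [1, 4, 1], NWC
y = tf.nn.avg_pool1d(x, ksize=2, strides=2, padding='VALID')
print(tf.squeeze(y).numpy())  # [1.5 3.5]
```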
tensorflow tf.nn.erosion2d tf.nn.erosion2d
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L6209-L6279) |
Computes the grayscale erosion of 4-D `value` and 3-D `filters` tensors.
```
tf.nn.erosion2d(
value, filters, strides, padding, data_format, dilations, name=None
)
```
The `value` tensor has shape `[batch, in_height, in_width, depth]` and the `filters` tensor has shape `[filters_height, filters_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default "NHWC" `data_format`.
In detail, the grayscale morphological 2-D erosion is given by:
```
output[b, y, x, c] =
min_{dy, dx} value[b,
strides[1] * y - dilations[1] * dy,
strides[2] * x - dilations[2] * dx,
c] -
filters[dy, dx, c]
```
Duality: The erosion of `value` by the `filters` is equal to the negation of the dilation of `-value` by the reflected `filters`.
| Args |
| `value` | A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`. |
| `filters` | A `Tensor`. Must have the same type as `value`. 3-D with shape `[filters_height, filters_width, depth]`. |
| `strides` | A list of `ints` that has length `>= 4`. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: `[1, stride_height, stride_width, 1]`. |
| `padding` | A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A `string`, only `"NHWC"` is currently supported. |
| `dilations` | A list of `ints` that has length `>= 4`. 1-D of length 4. The input stride for atrous morphological dilation. Must be: `[1, rate_height, rate_width, 1]`. |
| `name` | A name for the operation (optional). If not specified "erosion2d" is used. |
| Returns |
| A `Tensor`. Has the same type as `value`. 4-D with shape `[batch, out_height, out_width, depth]`. |
| Raises |
| `ValueError` | If the `value` depth does not match `filters`' shape, or if padding is other than `'VALID'` or `'SAME'`. |
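A minimal sketch (assumed values; with an all-zero structuring function, erosion reduces to a sliding-window minimum):
```
import tensorflow as tf

value = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
filters = tf.zeros([2, 2, 1])  # flat structuring function
y = tf.nn.erosion2d(value, filters, strides=[1, 1, 1, 1], padding='VALID',
                    data_format='NHWC', dilations=[1, 1, 1, 1])
print(y.shape)  # (1, 3, 3, 1)
```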
tensorflow tf.nn.softmax tf.nn.softmax
=============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3829-L3867) |
Computes softmax activations.
#### View aliases
**Main aliases**
[`tf.math.softmax`](https://www.tensorflow.org/api_docs/python/tf/nn/softmax)
```
tf.nn.softmax(
logits, axis=None, name=None
)
```
Used for multi-class predictions. The sum of all outputs generated by softmax is 1.
This function performs the equivalent of
```
softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis, keepdims=True)
```
Example usage:
```
softmax = tf.nn.softmax([-1, 0., 1.])
softmax
<tf.Tensor: shape=(3,), dtype=float32,
numpy=array([0.09003057, 0.24472848, 0.66524094], dtype=float32)>
sum(softmax)
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
```
| Args |
| `logits` | A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. |
| `axis` | The dimension softmax would be performed on. The default is -1 which indicates the last dimension. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type and shape as `logits`. |
| Raises |
| `InvalidArgumentError` | if `logits` is empty or `axis` is beyond the last dimension of `logits`. |
tensorflow tf.nn.with_space_to_batch tf.nn.with\_space\_to\_batch
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L536-L693) |
Performs `op` on the space-to-batch representation of `input`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.with_space_to_batch`](https://www.tensorflow.org/api_docs/python/tf/nn/with_space_to_batch)
```
tf.nn.with_space_to_batch(
input,
dilation_rate,
padding,
op,
filter_shape=None,
spatial_dims=None,
data_format=None
)
```
This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified `dilation_rate`.
In the special case that `dilation_rate` is uniformly 1, this simply returns:
```
op(input, num_spatial_dims, padding)
```
Otherwise, it returns:
```
batch_to_space_nd(
  op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
     num_spatial_dims,
     "VALID"),
  adjusted_dilation_rate,
  adjusted_crops)
```
where:
`adjusted_dilation_rate` is an int64 tensor of shape `[max(spatial_dims)]`, and `adjusted_paddings` and `adjusted_crops` are int64 tensors of shape `[max(spatial_dims), 2]`,
defined as follows:
We first define two int64 tensors `paddings` and `crops` of shape `[num_spatial_dims, 2]` based on the value of `padding` and the spatial dimensions of the `input`:
If `padding = "VALID"`, then:
```
paddings, crops = required_space_to_batch_paddings(
    input_shape[spatial_dims],
    dilation_rate)
```
If `padding = "SAME"`, then:
```
dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1)

paddings, crops = required_space_to_batch_paddings(
    input_shape[spatial_dims],
    dilation_rate,
    [(dilated_filter_shape - 1) // 2,
     dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])
```
Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial dimensions are contiguous starting at the second dimension, but the specified `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and `crops` in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space\_to\_batch\_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of `spatial_dims`. Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case efficiently for any number of leading and trailing dimensions.
For 0 <= i < len(spatial\_dims), we assign:
```
adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]
```
All unassigned values of `adjusted_dilation_rate` default to 1, while all unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.
Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" padding is equivalent to specifying `padding = "SAME"` with a filter\_shape of `[1]*N`.
Advanced usage. Note the following optimization: A sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters and "VALID" padding
net = with\_space\_to\_batch(net, dilation\_rate, "VALID", op\_1) ... net = with\_space\_to\_batch(net, dilation\_rate, "VALID", op\_k)
can be combined into a single `with_space_to_batch` operation as follows:
```
def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "VALID")
  ...
  result = op_k(result, num_spatial_dims, "VALID")

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
```
This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and `batch_to_space_nd`.
Similarly, a sequence of `with_space_to_batch` operations with identical (not uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter dimensions
net = with\_space\_to\_batch(net, dilation\_rate, "SAME", op\_1, filter\_shape\_1) ... net = with\_space\_to\_batch(net, dilation\_rate, "SAME", op\_k, filter\_shape\_k)
can be combined into a single `with_space_to_batch` operation as follows:
```
def combined_op(converted_input, num_spatial_dims, _):
  result = op_1(converted_input, num_spatial_dims, "SAME")
  ...
  result = op_k(result, num_spatial_dims, "SAME")

net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
```
| Args |
| `input` | Tensor of rank > max(spatial\_dims). |
| `dilation_rate` | int32 Tensor of *known* shape [num\_spatial\_dims]. |
| `padding` | str constant equal to "VALID" or "SAME" |
| `op` | Function that maps (input, num\_spatial\_dims, padding) -> output |
| `filter_shape` | If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num\_spatial\_dims]. If padding = "VALID", filter\_shape is ignored and need not be specified. |
| `spatial_dims` | Monotonically increasing sequence of `num_spatial_dims` integers (which are >= 1) specifying the spatial dimensions of `input` and output. Defaults to: `range(1, num_spatial_dims+1)`. |
| `data_format` | A string or None. Specifies whether the channel dimension of the `input` and output is the last dimension (default, or if `data_format` does not start with "NC"), or the second dimension (if `data_format` starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW". |
| Returns |
| The output Tensor as described above, dimensions will vary based on the op provided. |
| Raises |
| `ValueError` | if `padding` is invalid or the arguments are incompatible. |
| `ValueError` | if `spatial_dims` are invalid. |
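A sketch of the dilated-convolution use case (shapes and the wrapped op are assumptions for illustration):
```
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])
w = tf.random.normal([3, 3, 3, 4])

def conv_op(converted_input, num_spatial_dims, padding):
  # When dilation_rate != 1, `op` receives the space-to-batch
  # representation of the input and "VALID" padding.
  return tf.nn.conv2d(converted_input, w, strides=[1, 1, 1, 1], padding=padding)

y = tf.nn.with_space_to_batch(x, dilation_rate=[2, 2], padding='SAME',
                              op=conv_op, filter_shape=[3, 3])
print(y.shape)  # (1, 8, 8, 4)
```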
tensorflow tf.nn.scale_regularization_loss tf.nn.scale\_regularization\_loss
=================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L475-L511) |
Scales the sum of the given regularization losses by number of replicas.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.scale_regularization_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/scale_regularization_loss)
```
tf.nn.scale_regularization_loss(
regularization_loss
)
```
Usage with distribution strategy and custom training loop:
```
with strategy.scope():
  def compute_loss(labels, predictions, sample_weight=None):
per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, predictions)
# Compute loss that is scaled by sample_weight and by global batch size.
loss = tf.nn.compute_average_loss(
per_example_loss,
sample_weight=sample_weight,
global_batch_size=GLOBAL_BATCH_SIZE)
# Add scaled regularization losses.
loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights))
return loss
```
| Args |
| `regularization_loss` | Regularization loss. |
| Returns |
| Scalar loss value. |
tensorflow tf.nn.fractional_avg_pool tf.nn.fractional\_avg\_pool
===========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L6071-L6131) |
Performs fractional average pooling on the input.
```
tf.nn.fractional_avg_pool(
value,
pooling_ratio,
pseudo_random=False,
overlapping=False,
seed=0,
name=None
)
```
Fractional average pooling is similar to Fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.
| Args |
| `value` | A `Tensor`. 4-D with shape `[batch, height, width, channels]`. |
| `pooling_ratio` | A list of `floats` that has length >= 4. Pooling ratio for each dimension of `value`, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions respectively. |
| `pseudo_random` | An optional `bool`. Defaults to `False`. When set to `True`, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper (Graham, 2015) for difference between pseudorandom and random. |
| `overlapping` | An optional `bool`. Defaults to `False`. When set to `True`, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example: `index 0 1 2 3 4` `value 20 5 16 3 7` If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. The result would be [20, 16] for fractional avg pooling. |
| `seed` | An optional `int`. Defaults to `0`. If set to be non-zero, the random number generator is seeded by the given seed. Otherwise it is seeded by a random seed. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (`output`, `row_pooling_sequence`, `col_pooling_sequence`): * `output`: Output `Tensor` after fractional avg pooling. Has the same type as `value`.
* `row_pooling_sequence`: A `Tensor` of type `int64`.
* `col_pooling_sequence`: A `Tensor` of type `int64`.
|
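A brief sketch (illustrative shapes; the exact output size depends on the generated pooling sequence):
```
import tensorflow as tf

x = tf.random.normal([1, 10, 10, 3])
output, rows, cols = tf.nn.fractional_avg_pool(
    x, pooling_ratio=[1.0, 1.44, 1.73, 1.0], pseudo_random=True, seed=1)
print(output.shape)  # spatial dims shrink by roughly the pooling ratios
```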
#### References:
Fractional Max-Pooling: [Graham, 2015](https://arxiv.org/abs/1412.6071) ([pdf](https://arxiv.org/pdf/1412.6071.pdf))
tensorflow tf.nn.max_pool3d tf.nn.max\_pool3d
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L5047-L5087) |
Performs the max pooling on the input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.max_pool3d`](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool3d)
```
tf.nn.max_pool3d(
input, ksize, strides, padding, data_format='NDHWC', name=None
)
```
| Args |
| `input` | A 5-D `Tensor` of the format specified by `data_format`. |
| `ksize` | An int or list of `ints` that has length `1`, `3` or `5`. The size of the window for each dimension of the input tensor. |
| `strides` | An int or list of `ints` that has length `1`, `3` or `5`. The stride of the sliding window for each dimension of the input tensor. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in\_depth, in\_height, in\_width, in\_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in\_channels, in\_depth, in\_height, in\_width]. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of format specified by `data_format`. The max pooled output tensor. |
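A minimal sketch with assumed shapes:
```
import tensorflow as tf

x = tf.random.normal([1, 4, 4, 4, 1])  # NDHWC
y = tf.nn.max_pool3d(x, ksize=2, strides=2, padding='VALID')
print(y.shape)  # (1, 2, 2, 2, 1)
```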
tensorflow tf.nn.bias_add tf.nn.bias\_add
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3479-L3523) |
Adds `bias` to `value`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.bias_add`](https://www.tensorflow.org/api_docs/python/tf/nn/bias_add)
```
tf.nn.bias_add(
value, bias, data_format=None, name=None
)
```
This is (mostly) a special case of [`tf.add`](../math/add) where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike [`tf.add`](../math/add), the type of `bias` is allowed to differ from `value` in the case where both types are quantized.
| Args |
| `value` | A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`. |
| `bias` | A 1-D `Tensor` with size matching the channel dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used. |
| `data_format` | A string. 'N...C' and 'NC...' are supported. If `None` (the default) is specified then 'N..C' is assumed. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` with the same type as `value`. |
| Raises |
| `ValueError` | If data format is unrecognized, if `value` has fewer than two dimensions when `data_format` is 'N..C'/`None` or fewer than three dimensions when `data_format` is 'NC..', if `bias` does not have exactly one dimension (i.e. is not a vector), or if the size of `bias` does not match the size of the channel dimension of `value`. |
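A quick sketch (illustrative values):
```
import tensorflow as tf

x = tf.zeros([2, 3, 4])               # channels-last: bias matches the last dim
bias = tf.constant([1., 2., 3., 4.])
y = tf.nn.bias_add(x, bias)
print(y[0, 0].numpy())  # [1. 2. 3. 4.]
```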
tensorflow tf.nn.sampled_softmax_loss tf.nn.sampled\_softmax\_loss
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L2224-L2313) |
Computes and returns the sampled softmax training loss.
```
tf.nn.sampled_softmax_loss(
weights,
biases,
labels,
inputs,
num_sampled,
num_classes,
num_true=1,
sampled_values=None,
remove_accidental_hits=True,
seed=None,
name='sampled_softmax_loss'
)
```
This is a faster way to train a softmax classifier over a huge number of classes.
This operation is for training only. It is generally an underestimate of the full softmax loss.
A common use case is to use this method for training, and calculate the full softmax loss for evaluation or inference as in the following example:
```
if mode == "train":
loss = tf.nn.sampled_softmax_loss(
weights=weights,
biases=biases,
labels=labels,
inputs=inputs,
...)
elif mode == "eval":
logits = tf.matmul(inputs, tf.transpose(weights))
logits = tf.nn.bias_add(logits, biases)
labels_one_hot = tf.one_hot(labels, n_classes)
loss = tf.nn.softmax_cross_entropy_with_logits(
labels=labels_one_hot,
logits=logits)
```
See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf)
Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) ([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
>
> **Note:** when doing embedding lookup on `weights` and `bias`, the "div" partition strategy will be used. Support for other partition strategies will be added later.
>
| Args |
| `weights` | A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num\_classes, dim]. The (possibly-sharded) class embeddings. |
| `biases` | A `Tensor` of shape `[num_classes]`. The class biases. |
| `labels` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. Note that this format differs from the `labels` argument of [`nn.softmax_cross_entropy_with_logits`](softmax_cross_entropy_with_logits). |
| `inputs` | A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network. |
| `num_sampled` | An `int`. The number of classes to randomly sample per batch. |
| `num_classes` | An `int`. The number of possible classes. |
| `num_true` | An `int`. The number of target classes per training example. |
| `sampled_values` | a tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. (if None, we default to `log_uniform_candidate_sampler`) |
| `remove_accidental_hits` | A `bool`. whether to remove "accidental hits" where a sampled class equals one of the target classes. Default is True. |
| `seed` | random seed for candidate sampling. Default to None, which doesn't set the op-level random seed for candidate sampling. |
| `name` | A name for the operation (optional). |
| Returns |
| A `batch_size` 1-D tensor of per-example sampled softmax losses. |
tensorflow tf.nn.local_response_normalization tf.nn.local\_response\_normalization
====================================
Local Response Normalization.
#### View aliases
**Main aliases**
[`tf.nn.lrn`](https://www.tensorflow.org/api_docs/python/tf/nn/local_response_normalization)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.local_response_normalization`](https://www.tensorflow.org/api_docs/python/tf/nn/local_response_normalization), [`tf.compat.v1.nn.lrn`](https://www.tensorflow.org/api_docs/python/tf/nn/local_response_normalization)
```
tf.nn.local_response_normalization(
input, depth_radius=5, bias=1, alpha=1, beta=0.5, name=None
)
```
The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within `depth_radius`. In detail,
```
sqr_sum[a, b, c, d] =
sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta
```
For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
| Args |
| `input` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. 4-D. |
| `depth_radius` | An optional `int`. Defaults to `5`. 0-D. Half-width of the 1-D normalization window. |
| `bias` | An optional `float`. Defaults to `1`. An offset (usually positive to avoid dividing by 0). |
| `alpha` | An optional `float`. Defaults to `1`. A scale factor, usually positive. |
| `beta` | An optional `float`. Defaults to `0.5`. An exponent. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
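A minimal sketch (the parameter values below follow a common choice from the AlexNet paper and are an assumption here):
```
import tensorflow as tf

x = tf.random.normal([1, 2, 2, 8])  # 4-D NHWC input
y = tf.nn.local_response_normalization(x, depth_radius=2, bias=1.0,
                                       alpha=1e-4, beta=0.75)
print(y.shape)  # (1, 2, 2, 8); each vector along channels is normalized
```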
tensorflow tf.nn.max_pool1d tf.nn.max\_pool1d
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L4877-L4932) |
Performs the max pooling on the input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.max_pool1d`](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool1d)
```
tf.nn.max_pool1d(
input, ksize, strides, padding, data_format='NWC', name=None
)
```
Note internally this op reshapes and uses the underlying 2d operation.
| Args |
| `input` | A 3-D `Tensor` of the format specified by `data_format`. |
| `ksize` | An int or list of `ints` that has length `1` or `3`. The size of the window for each dimension of the input tensor. |
| `strides` | An int or list of `ints` that has length `1` or `3`. The stride of the sliding window for each dimension of the input tensor. |
| `padding` | Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. When explicit padding is used and data\_format is `"NWC"`, this should be in the form `[[0, 0], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCW"`, this should be in the form `[[0, 0], [0, 0], [pad_left, pad_right]]`. When using explicit padding, the size of the paddings cannot be greater than the sliding window size. |
| `data_format` | An optional string from: "NWC", "NCW". Defaults to "NWC". |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of format specified by `data_format`. The max pooled output tensor. |
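A quick numeric sketch (illustrative values):
```
import tensorflow as tf

x = tf.constant([[[1.], [3.], [2.], [4.]]])  # shape [1, 4, 1], NWC
y = tf.nn.max_pool1d(x, ksize=2, strides=2, padding='VALID')
print(tf.squeeze(y).numpy())  # [3. 4.]
```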
tensorflow tf.nn.sufficient_statistics tf.nn.sufficient\_statistics
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L1253-L1280) |
Calculate the sufficient statistics for the mean and variance of `x`.
```
tf.nn.sufficient_statistics(
x, axes, shift=None, keepdims=False, name=None
)
```
These sufficient statistics are computed using the one pass algorithm on an input that's optionally shifted. See: <https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data>
| Args |
| `x` | A `Tensor`. |
| `axes` | Array of ints. Axes along which to compute mean and variance. |
| `shift` | A `Tensor` containing the value by which to shift the data for numerical stability, or `None` if no shift is to be performed. A shift close to the true mean provides the most numerically stable results. |
| `keepdims` | produce statistics with the same dimensionality as the input. |
| `name` | Name used to scope the operations that compute the sufficient stats. |
| Returns |
| Four `Tensor` objects of the same type as `x`: * the count (number of elements to average over).
* the (possibly shifted) sum of the elements in the array.
* the (possibly shifted) sum of squares of the elements in the array.
* the shift by which the mean must be corrected or None if `shift` is None.
|
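A numeric sketch (illustrative input, no shift):
```
import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.]])
count, mean_ss, var_ss, shift = tf.nn.sufficient_statistics(x, axes=[0])
print(count.numpy())    # 2.0 (elements averaged per column)
print(mean_ss.numpy())  # [4. 6.]   (column sums)
print(var_ss.numpy())   # [10. 20.] (column sums of squares)
# shift is None because no shift was supplied.
```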
tensorflow tf.nn.elu tf.nn.elu
=========
Computes the exponential linear function.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.elu`](https://www.tensorflow.org/api_docs/python/tf/nn/elu)
```
tf.nn.elu(
features, name=None
)
```
The ELU function is defined as:
* \( e ^ x - 1 \) if \( x < 0 \)
* \( x \) if \( x >= 0 \)
#### Examples:
```
tf.nn.elu(1.0)
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
tf.nn.elu(0.0)
<tf.Tensor: shape=(), dtype=float32, numpy=0.0>
tf.nn.elu(-1000.0)
<tf.Tensor: shape=(), dtype=float32, numpy=-1.0>
```
See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)](http://arxiv.org/abs/1511.07289)
| Args |
| `features` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `features`. |
tensorflow tf.nn.weighted_moments tf.nn.weighted\_moments
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L1498-L1520) |
Returns the frequency-weighted mean and variance of `x`.
```
tf.nn.weighted_moments(
x, axes, frequency_weights, keepdims=False, name=None
)
```
| Args |
| `x` | A tensor. |
| `axes` | 1-d tensor of int32 values; these are the axes along which to compute mean and variance. |
| `frequency_weights` | A tensor of positive weights which can be broadcast with x. |
| `keepdims` | Produce moments with the same dimensionality as the input. |
| `name` | Name used to scope the operation. |
| Returns |
| Two tensors: `weighted_mean` and `weighted_variance`. |
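A numeric sketch (weights are illustrative):
```
import tensorflow as tf

x = tf.constant([1., 2., 3., 4.])
w = tf.constant([1., 1., 1., 3.])  # count the last sample three times
mean, variance = tf.nn.weighted_moments(x, axes=[0], frequency_weights=w)
print(mean.numpy())      # 3.0      == (1 + 2 + 3 + 3*4) / 6
print(variance.numpy())  # 1.3333.. == (4 + 1 + 0 + 3*1) / 6
```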
tensorflow tf.nn.sparse_softmax_cross_entropy_with_logits tf.nn.sparse\_softmax\_cross\_entropy\_with\_logits
===================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L4373-L4431) |
Computes sparse softmax cross entropy between `logits` and `labels`.
```
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels, logits, name=None
)
```
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
>
> **Note:** For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits_v2`.
>
A common use case is to have logits of shape `[batch_size, num_classes]` and have labels of shape `[batch_size]`, but higher dimensions are supported, in which case the `dim`-th dimension is assumed to be of size `num_classes`. `logits` must have the dtype of `float16`, `float32`, or `float64`, and `labels` must have the dtype of `int32` or `int64`.
```
logits = tf.constant([[2., -5., .5, -.1],
[0., 0., 1.9, 1.4],
[-100., 100., -100., -100.]])
labels = tf.constant([0, 3, 1])
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits).numpy()
array([0.29750752, 1.1448325 , 0. ], dtype=float32)
```
To avoid confusion, passing only named arguments to this function is recommended.
| Args |
| `labels` | `Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of `labels` and result) and dtype `int32` or `int64`. Each entry in `labels` must be an index in `[0, num_classes)`. Other values will raise an exception when this op is run on CPU, and return `NaN` for corresponding loss and gradient rows on GPU. |
| `logits` | Unscaled log probabilities of shape `[d_0, d_1, ..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of the same shape as `labels` and of the same type as `logits` with the softmax cross entropy loss. |
| Raises |
| `ValueError` | If logits are scalars (need to have rank >= 1) or if the rank of the labels is not equal to the rank of the logits minus one. |
tensorflow tf.nn.leaky_relu tf.nn.leaky\_relu
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3630-L3662) |
Compute the Leaky ReLU activation function.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.leaky_relu`](https://www.tensorflow.org/api_docs/python/tf/nn/leaky_relu)
```
tf.nn.leaky_relu(
features, alpha=0.2, name=None
)
```
Source: [Rectifier Nonlinearities Improve Neural Network Acoustic Models. AL Maas, AY Hannun, AY Ng - Proc. ICML, 2013](https://ai.stanford.edu/%7Eamaas/papers/relu_hybrid_icml2013_final.pdf).
| Args |
| `features` | A `Tensor` representing preactivation values. Must be one of the following types: `float16`, `float32`, `float64`, `int32`, `int64`. |
| `alpha` | Slope of the activation function at x < 0. |
| `name` | A name for the operation (optional). |
| Returns |
| The activation value. |
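A quick numeric sketch (values are illustrative):
```
import tensorflow as tf

x = tf.constant([-2.0, -1.0, 0.0, 2.0])
# Negative inputs are scaled by alpha; non-negative inputs pass through.
print(tf.nn.leaky_relu(x, alpha=0.2).numpy())  # [-0.4 -0.2  0.   2. ]
```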
#### References:
Rectifier Nonlinearities Improve Neural Network Acoustic Models: [Maas et al., 2013](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.693.1422) ([pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.1422&rep=rep1&type=pdf))
tensorflow tf.nn.atrous_conv2d tf.nn.atrous\_conv2d
====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L1753-L1901) |
Atrous convolution (a.k.a. convolution with holes or dilated convolution).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.atrous_conv2d`](https://www.tensorflow.org/api_docs/python/tf/nn/atrous_conv2d)
```
tf.nn.atrous_conv2d(
value, filters, rate, padding, name=None
)
```
This function is a simpler wrapper around the more general [`tf.nn.convolution`](convolution), and exists only for backwards compatibility. You can use [`tf.nn.convolution`](convolution) to perform 1-D, 2-D, or 3-D atrous convolution.
Computes a 2-D atrous convolution, also known as convolution with holes or dilated convolution, given 4-D `value` and `filters` tensors. If the `rate` parameter is equal to one, it performs regular 2-D convolution. If the `rate` parameter is greater than one, it performs convolution with holes, sampling the input values every `rate` pixels in the `height` and `width` dimensions. This is equivalent to convolving the input with a set of upsampled filters, produced by inserting `rate - 1` zeros between two consecutive values of the filters along the `height` and `width` dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English).
#### More specifically:
```
output[batch, height, width, out_channel] =
sum_{dheight, dwidth, in_channel} (
filters[dheight, dwidth, in_channel, out_channel] *
value[batch, height + rate*dheight, width + rate*dwidth, in_channel]
)
```
Atrous convolution allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Used in conjunction with bilinear interpolation, it offers an alternative to `conv2d_transpose` in dense prediction tasks such as semantic image segmentation, optical flow computation, or depth estimation. It also allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.
For a description of atrous convolution and how it can be used for dense feature extraction, please see: (Chen et al., 2015). The same operation is investigated further in (Yu et al., 2016). Previous works that effectively use atrous convolution in different ways are, among others, (Sermanet et al., 2014) and (Giusti et al., 2013). Atrous convolution is also closely related to the so-called noble identities in multi-rate signal processing.
There are many different ways to implement atrous convolution (see the refs above). The implementation here reduces
```
atrous_conv2d(value, filters, rate, padding=padding)
```
to the following three operations:
```
paddings = ...
net = space_to_batch(value, paddings, block_size=rate)
net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID")
crops = ...
net = batch_to_space(net, crops, block_size=rate)
```
Advanced usage. Note the following optimization: A sequence of `atrous_conv2d` operations with identical `rate` parameters, 'SAME' `padding`, and filters with odd heights/ widths:
```
net = atrous_conv2d(net, filters1, rate, padding="SAME")
net = atrous_conv2d(net, filters2, rate, padding="SAME")
...
net = atrous_conv2d(net, filtersK, rate, padding="SAME")
```
can be equivalently performed cheaper in terms of computation and memory as:
```
pad = ... # padding so that the input dims are multiples of rate
net = space_to_batch(net, paddings=pad, block_size=rate)
net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME")
net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME")
...
net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME")
net = batch_to_space(net, crops=pad, block_size=rate)
```
because a pair of consecutive `space_to_batch` and `batch_to_space` ops with the same `block_size` cancel out when their respective `paddings` and `crops` inputs are identical.
| Args |
| `value` | A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC" format. Its shape is `[batch, in_height, in_width, in_channels]`. |
| `filters` | A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, in_channels, out_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions. |
| `rate` | A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called `input stride` or `dilation`. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `name` | Optional name for the returned tensor. |
| Returns |
| A `Tensor` with the same type as `value`. Output shape with `'VALID'` padding is:
```
[batch, height - rate * (filter_height - 1),
 width - rate * (filter_width - 1), out_channels].
```
Output shape with `'SAME'` padding is:
```
[batch, height, width, out_channels].
```
|
| Raises |
| `ValueError` | If input/output depth does not match `filters`' shape, or if padding is other than `'VALID'` or `'SAME'`. |
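A minimal sketch with assumed shapes:
```
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])  # NHWC
w = tf.random.normal([3, 3, 3, 4])
# rate=2 samples the input every 2 pixels in height and width.
y = tf.nn.atrous_conv2d(x, w, rate=2, padding='SAME')
print(y.shape)  # (1, 8, 8, 4)
```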
#### References:
Multi-Scale Context Aggregation by Dilated Convolutions: [Yu et al., 2016](https://arxiv.org/abs/1511.07122) ([pdf](https://arxiv.org/pdf/1511.07122.pdf)) Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs: [Chen et al., 2015](http://arxiv.org/abs/1412.7062) ([pdf](https://arxiv.org/pdf/1412.7062)) OverFeat - Integrated Recognition, Localization and Detection using Convolutional Networks: [Sermanet et al., 2014](https://arxiv.org/abs/1312.6229) ([pdf](https://arxiv.org/pdf/1312.6229.pdf)) Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks: [Giusti et al., 2013](https://ieeexplore.ieee.org/abstract/document/6738831) ([pdf](https://arxiv.org/pdf/1302.1700.pdf))
tensorflow tf.nn.conv2d_transpose tf.nn.conv2d\_transpose
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L2655-L2739) |
The transpose of `conv2d`.
```
tf.nn.conv2d_transpose(
input,
filters,
output_shape,
strides,
padding='SAME',
data_format='NHWC',
dilations=None,
name=None
)
```
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of `atrous_conv2d` rather than an actual deconvolution.
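A minimal usage sketch (shapes are illustrative): with the default `'SAME'` padding and stride 2, the spatial dimensions double.
```
import tensorflow as tf

x = tf.random.normal([1, 4, 4, 3])  # [batch, height, width, in_channels]
f = tf.random.normal([3, 3, 8, 3])  # [height, width, output_channels, in_channels]
y = tf.nn.conv2d_transpose(x, f, output_shape=[1, 8, 8, 8], strides=2)
print(y.shape)  # (1, 8, 8, 8)
```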
| Args |
| `input` | A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]` for `NHWC` data format or `[batch, in_channels, height, width]` for `NCHW` data format. |
| `filters` | A 4-D `Tensor` with the same type as `input` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `input`. |
| `output_shape` | A 1-D `Tensor` representing the output shape of the deconvolution op. |
| `strides` | An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. The dimension order is determined by the value of `data_format`, see below for details. |
| `padding` | Either the `string` `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. When explicit padding is used and data\_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`. |
| `data_format` | A string. 'NHWC' and 'NCHW' are supported. |
| `dilations` | An int or list of `ints` that has length `1`, `2` or `4`, defaults to 1. The dilation factor for each dimension of `input`. If a single value is given it is replicated in the `H` and `W` dimension. By default the `N` and `C` dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. If a 4-d list is given, the dilations in the batch and depth dimensions must be 1. |
| `name` | Optional name for the returned tensor. |
| Returns |
| A `Tensor` with the same type as `input`. |
| Raises |
| `ValueError` | If input/output depth does not match `filter`'s shape, or if padding is other than `'VALID'` or `'SAME'`. |
#### References:
Deconvolutional Networks: [Zeiler et al., 2010](https://ieeexplore.ieee.org/abstract/document/5539957) ([pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))
tensorflow tf.nn.moments tf.nn.moments
=============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L1382-L1416) |
Calculates the mean and variance of `x`.
```
tf.nn.moments(
x, axes, shift=None, keepdims=False, name=None
)
```
The mean and variance are calculated by aggregating the contents of `x` across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean and variance of a vector.
>
> **Note:** shift is currently not used; the true mean is computed and used.
>
When using these moments for batch normalization (see [`tf.nn.batch_normalization`](batch_normalization)):
* for so-called "global normalization", used with convolutional filters with shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`.
* for simple batch normalization pass `axes=[0]` (batch only).
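For example, computing per-column moments of a small matrix:
```
import tensorflow as tf

x = tf.constant([[1., 2.],
                 [3., 4.]])
mean, variance = tf.nn.moments(x, axes=[0])
print(mean.numpy())      # [2. 3.]
print(variance.numpy())  # [1. 1.]
```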
| Args |
| `x` | A `Tensor`. |
| `axes` | Array of ints. Axes along which to compute mean and variance. |
| `shift` | Not used in the current implementation. |
| `keepdims` | Produce moments with the same dimensionality as the input. |
| `name` | Name used to scope the operations that compute the moments. |
| Returns |
| Two `Tensor` objects: `mean` and `variance`. |
tensorflow tf.nn.normalize_moments tf.nn.normalize\_moments
========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L1283-L1313) |
Calculates the mean and variance based on the sufficient statistics.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.normalize_moments`](https://www.tensorflow.org/api_docs/python/tf/nn/normalize_moments)
```
tf.nn.normalize_moments(
counts, mean_ss, variance_ss, shift, name=None
)
```
| Args |
| `counts` | A `Tensor` containing the total count of the data (one value). |
| `mean_ss` | A `Tensor` containing the mean sufficient statistics: the (possibly shifted) sum of the elements to average over. |
| `variance_ss` | A `Tensor` containing the variance sufficient statistics: the (possibly shifted) squared sum of the data to compute the variance over. |
| `shift` | A `Tensor` containing the value by which the data is shifted for numerical stability, or `None` if no shift was performed. |
| `name` | Name used to scope the operations that compute the moments. |
| Returns |
| Two `Tensor` objects: `mean` and `variance`. |
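For example, with unshifted sufficient statistics for the data `[1., 2., 3.]` (a minimal sketch; the results are the population mean and variance):
```
import tensorflow as tf

counts = tf.constant(3.)        # number of elements
mean_ss = tf.constant(6.)       # sum of the elements
variance_ss = tf.constant(14.)  # sum of the squared elements
mean, variance = tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift=None)
print(mean.numpy(), variance.numpy())  # 2.0 ~0.667
```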
tensorflow tf.nn.depthwise_conv2d tf.nn.depthwise\_conv2d
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L896-L988) |
Depthwise 2-D convolution.
```
tf.nn.depthwise_conv2d(
input,
filter,
strides,
padding,
data_format=None,
dilations=None,
name=None
)
```
Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]` containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. The output has `in_channels * channel_multiplier` channels.
In detail, with the default NHWC format,
```
output[b, i, j, k * channel_multiplier + q] = sum_{di, dj}
filter[di, dj, k, q] * input[b, strides[1] * i + dilations[0] * di,
                             strides[2] * j + dilations[1] * dj, k]
```
Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `dilations` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
#### Usage Example:
```
x = np.array([
[1., 2.],
[3., 4.],
[5., 6.]
], dtype=np.float32).reshape((1, 3, 2, 1))
kernel = np.array([
[1., 2.],
[3., 4]
], dtype=np.float32).reshape((2, 1, 1, 2))
tf.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],
padding='VALID').numpy()
array([[[[10., 14.],
[14., 20.]],
[[18., 26.],
[22., 32.]]]], dtype=float32)
```
```
tf.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],
padding=[[0, 0], [1, 0], [1, 0], [0, 0]]).numpy()
array([[[[ 0., 0.],
[ 3., 4.],
[ 6., 8.]],
[[ 0., 0.],
[10., 14.],
[14., 20.]],
[[ 0., 0.],
[18., 26.],
[22., 32.]]]], dtype=float32)
```
| Args |
| `input` | 4-D with shape according to `data_format`. |
| `filter` | 4-D with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. |
| `strides` | 1-D of size 4. The stride of the sliding window for each dimension of `input`. |
| `padding` | Controls how to pad the image before applying the convolution. Can be the string `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data\_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`. |
| `data_format` | The data format for input. Either "NHWC" (default) or "NCHW". |
| `dilations` | 1-D of size 2. The dilation rate with which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of `strides` must be 1. |
| `name` | A name for this operation (optional). |
| Returns |
| A 4-D `Tensor` with shape according to `data_format`. E.g., for "NHWC" format, shape is `[batch, out_height, out_width, in_channels * channel_multiplier].` |
tensorflow tf.nn.RNNCellDeviceWrapper tf.nn.RNNCellDeviceWrapper
==========================
Operator that ensures an RNNCell runs on a particular device.
Inherits From: [`Module`](../module)
```
tf.nn.RNNCellDeviceWrapper(
*args, **kwargs
)
```
| Args |
| `cell` | An instance of `RNNCell`. |
| `device` | A device string or function, for passing to [`tf.device`](../device). |
| `**kwargs` | dict of keyword arguments for base layer. |
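A minimal usage sketch (assuming a standard Keras `LSTMCell` as the wrapped cell, driven by a `tf.keras.layers.RNN` layer):
```
import tensorflow as tf

cell = tf.keras.layers.LSTMCell(8)
wrapped = tf.nn.RNNCellDeviceWrapper(cell, device="/cpu:0")
layer = tf.keras.layers.RNN(wrapped)
y = layer(tf.random.normal([4, 10, 3]))  # [batch, time, features]
print(y.shape)  # (4, 8) -- the cell's computation was pinned to /cpu:0
```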
| Attributes |
| `activity_regularizer` | Optional regularizer function for the output of this layer. |
| `compute_dtype` | The dtype of the layer's computations. This is equivalent to `Layer.dtype_policy.compute_dtype`. Unless mixed precision is used, this is the same as [`Layer.dtype`](../keras/layers/layer#dtype), the dtype of the weights. Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in [`Layer.__call__`](../keras/layers/layer#__call__), so you do not have to insert these casts if implementing your own layer. Layers often perform certain internal computations in higher precision when `compute_dtype` is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases. |
| `dtype` | The dtype of the layer weights. This is equivalent to `Layer.dtype_policy.variable_dtype`. Unless mixed precision is used, this is the same as [`Layer.compute_dtype`](../keras/layers/layer#compute_dtype), the dtype of the layer's computations. |
| `dtype_policy` | The dtype policy associated with this layer. This is an instance of a [`tf.keras.mixed_precision.Policy`](../keras/mixed_precision/policy). |
| `dynamic` | Whether the layer is dynamic (eager-only); set in the constructor. |
| `input` | Retrieves the input tensor(s) of a layer. Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer. |
| `input_spec` | `InputSpec` instance(s) describing the input format for this layer. When you create a layer subclass, you can set `self.input_spec` to enable the layer to run input compatibility checks when it is called. Consider a `Conv2D` layer: it can only be called on a single input tensor of rank 4. As such, you can set, in `__init__()`:
```
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```
Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape `(2,)`), it will raise a nicely-formatted error:
```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```
Input checks that can be specified via `input_spec` include:
* Structure (e.g. a single input, a list of 2 inputs, etc)
* Shape
* Rank (ndim)
* Dtype
For more information, see [`tf.keras.layers.InputSpec`](../keras/layers/inputspec). |
| `losses` | List of losses added using the `add_loss()` API. Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a [`tf.GradientTape`](../gradienttape) will propagate gradients back to the corresponding variables.
```
class MyLayer(tf.keras.layers.Layer):
def call(self, inputs):
self.add_loss(tf.abs(tf.reduce_mean(inputs)))
return inputs
l = MyLayer()
l(np.ones((10, 1)))
l.losses
[1.0]
```
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
len(model.losses)
0
model.add_loss(tf.abs(tf.reduce_mean(x)))
len(model.losses)
1
```
```
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10, kernel_initializer='ones')
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
```
|
| `metrics` | List of metrics added using the `add_metric()` API.
```
input = tf.keras.layers.Input(shape=(3,))
d = tf.keras.layers.Dense(2)
output = d(input)
d.add_metric(tf.reduce_max(output), name='max')
d.add_metric(tf.reduce_min(output), name='min')
[m.name for m in d.metrics]
['max', 'min']
```
|
| `non_trainable_weights` | List of all non-trainable weights tracked by this layer. Non-trainable weights are *not* updated during training. They are expected to be updated manually in `call()`. |
| `output` | Retrieves the output tensor(s) of a layer. Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer. |
| `output_size` | |
| `state_size` | |
| `supports_masking` | Whether this layer supports computing a mask using `compute_mask`. |
| `trainable` | |
| `trainable_weights` | List of all trainable weights tracked by this layer. Trainable weights are updated via gradient descent during training. |
| `variable_dtype` | Alias of [`Layer.dtype`](../keras/layers/layer#dtype), the dtype of the weights. |
| `weights` | Returns the list of all layer variables/weights. |
Methods
-------
### `add_loss`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1412-L1530)
```
add_loss(
losses, **kwargs
)
```
Add loss tensor(s), potentially dependent on layer inputs.
Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs `a` and `b`, some entries in `layer.losses` may be dependent on `a` and some on `b`. This method automatically keeps track of dependencies.
This method can be used inside a subclassed layer or model's `call` function, in which case `losses` should be a Tensor or list of Tensors.
#### Example:
```
class MyLayer(tf.keras.layers.Layer):
def call(self, inputs):
self.add_loss(tf.abs(tf.reduce_mean(inputs)))
return inputs
```
This method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These losses become part of the model's topology and are tracked in `get_config`.
#### Example:
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```
If this is not the case for your loss (if, for example, your loss references a `Variable` of one of the model's layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model's topology since they can't be serialized.
#### Example:
```
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```
| Args |
| `losses` | Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor. |
| `**kwargs` | Additional keyword arguments for backward compatibility. Accepted values: inputs - Deprecated, will be automatically inferred. |
### `add_metric`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1565-L1684)
```
add_metric(
value, name=None, **kwargs
)
```
Adds metric tensor to the layer.
This method can be used inside the `call()` method of a subclassed layer or model.
```
class MyMetricLayer(tf.keras.layers.Layer):
def __init__(self):
super(MyMetricLayer, self).__init__(name='my_metric_layer')
self.mean = tf.keras.metrics.Mean(name='metric_1')
def call(self, inputs):
self.add_metric(self.mean(inputs))
self.add_metric(tf.reduce_sum(inputs), name='metric_2')
return inputs
```
This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These metrics become part of the model's topology and are tracked when you save the model via `save()`.
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.reduce_sum(x), name='metric_1')
```
>
> **Note:** Calling `add_metric()` with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model's inputs.
>
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```
| Args |
| `value` | Metric tensor. |
| `name` | String metric name. |
| `**kwargs` | Additional keyword arguments for backward compatibility. Accepted values: `aggregation` - When the `value` tensor provided is not the result of calling a `keras.Metric` instance, it will be aggregated by default using a `keras.Metric.Mean`. |
### `build`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/rnn_cell_wrapper_v2.py#L70-L73)
```
build(
inputs_shape
)
```
Builds the wrapped cell.
### `compute_mask`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L910-L930)
```
compute_mask(
inputs, mask=None
)
```
Computes an output mask tensor.
| Args |
| `inputs` | Tensor or list of tensors. |
| `mask` | Tensor or list of tensors. |
| Returns |
| None or a tensor (or list of tensors, one per output tensor of the layer). |
### `compute_output_shape`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L756-L799)
```
compute_output_shape(
input_shape
)
```
Computes the output shape of the layer.
If the layer has not been built, this method will call `build` on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.
| Args |
| `input_shape` | Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer. |
| Returns |
| An input shape tuple. |
### `count_params`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L2129-L2148)
```
count_params()
```
Count the total number of scalars composing the weights.
| Returns |
| An integer count. |
| Raises |
| `ValueError` | if the layer isn't yet built (in which case its weights aren't yet defined). |
### `from_config`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/rnn_cell_wrapper_v2.py#L85-L90)
```
@classmethod
from_config(
config, custom_objects=None
)
```
Creates a layer from its config.
This method is the reverse of `get_config`, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by `set_weights`).
| Args |
| `config` | A Python dictionary, typically the output of get\_config. |
| Returns |
| A layer instance. |
### `get_config`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L441-L444)
```
get_config()
```
### `get_initial_state`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/recurrent.py#L1092-L1093)
```
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
```
### `get_weights`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1808-L1850)
```
get_weights()
```
Returns the current weights of the layer, as NumPy arrays.
The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.
For example, a `Dense` layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another `Dense` layer:
```
layer_a = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
[2.],
[2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
```
| Returns |
| Weights values as a list of NumPy arrays. |
### `set_weights`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1723-L1806)
```
set_weights(
weights
)
```
Sets the weights of the layer, from NumPy arrays.
The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, by calling the layer.
For example, a `Dense` layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another `Dense` layer:
```
layer_a = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
[2.],
[2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
```
| Args |
| `weights` | a list of NumPy arrays. The number of arrays and their shape must match number of the dimensions of the weights of the layer (i.e. it should match the output of `get_weights`). |
| Raises |
| `ValueError` | If the provided weights list does not match the layer's specifications. |
### `zero_state`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L431-L434)
```
zero_state(
batch_size, dtype
)
```
### `__call__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L932-L1053)
```
__call__(
*args, **kwargs
)
```
Wraps `call`, applying pre- and post-processing steps.
| Args |
| `*args` | Positional arguments to be passed to `self.call`. |
| `**kwargs` | Keyword arguments to be passed to `self.call`. |
| Returns |
| Output tensor(s). |
#### Note:
* The following optional keyword arguments are reserved for specific uses:
+ `training`: Boolean scalar tensor of Python boolean indicating whether the `call` is meant for training or inference.
+ `mask`: Boolean input mask.
* If the layer's `call` method takes a `mask` argument (as some Keras layers do), its default value will be set to the mask generated for `inputs` by the previous layer (if `input` did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
* If the layer is not built, the method will call `build`.
| Raises |
| `ValueError` | if the layer's `call` method returns None (an invalid value). |
| `RuntimeError` | if `super().__init__()` was not called in the constructor. |
tensorflow tf.nn.depthwise_conv2d_backprop_input tf.nn.depthwise\_conv2d\_backprop\_input
========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3017-L3086) |
Computes the gradients of depthwise convolution with respect to the input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.depthwise_conv2d_backprop_input`](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d_backprop_input), [`tf.compat.v1.nn.depthwise_conv2d_native_backprop_input`](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d_backprop_input)
```
tf.nn.depthwise_conv2d_backprop_input(
input_sizes,
filter,
out_backprop,
strides,
padding,
data_format='NHWC',
dilations=[1, 1, 1, 1],
name=None
)
```
| Args |
| `input_sizes` | A `Tensor` of type `int32`. An integer vector representing the shape of `input`, based on `data_format`. For example, if `data_format` is 'NHWC' then `input` is a 4-D `[batch, height, width, channels]` tensor. |
| `filter` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, depthwise_multiplier]`. |
| `out_backprop` | A `Tensor`. Must have the same type as `filter`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then out\_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution. |
| `strides` | A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. |
| `padding` | Controls how to pad the image before applying the convolution. Can be the string `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. When explicit padding is used and data\_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`. |
| `data_format` | An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width]. |
| `dilations` | An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `filter`. |
tensorflow tf.nn.l2_loss tf.nn.l2\_loss
==============
L2 Loss.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.l2_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/l2_loss)
```
tf.nn.l2_loss(
t, name=None
)
```
Computes half the L2 norm of a tensor without the `sqrt`:
```
output = sum(t ** 2) / 2
```
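For example:
```
import tensorflow as tf

t = tf.constant([1., 2., 3.])
print(tf.nn.l2_loss(t).numpy())  # (1 + 4 + 9) / 2 = 7.0
```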
| Args |
| `t` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. Typically 2-D, but may have any dimensions. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `t`. |
tensorflow tf.nn.dilation2d tf.nn.dilation2d
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L446-L514) |
Computes the grayscale dilation of 4-D `input` and 3-D `filters` tensors.
```
tf.nn.dilation2d(
input, filters, strides, padding, data_format, dilations, name=None
)
```
The `input` tensor has shape `[batch, in_height, in_width, depth]` and the `filters` tensor has shape `[filter_height, filter_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default "NHWC" `data_format`.
In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with `conv2d`, we use unmirrored filters):
```
output[b, y, x, c] =
max_{dy, dx} input[b,
strides[1] * y + rates[1] * dy,
strides[2] * x + rates[2] * dx,
c] +
filters[dy, dx, c]
```
Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.
Note on duality: The dilation of `input` by the `filters` is equal to the negation of the erosion of `-input` by the reflected `filters`.
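To illustrate the max-pooling special case (a minimal sketch with an all-zero structuring element over a single 2x2 window):
```
import tensorflow as tf

x = tf.reshape(tf.constant([1., 2., 3., 4.]), [1, 2, 2, 1])
k = tf.zeros([2, 2, 1])  # all-zero filter: dilation reduces to max pooling
y = tf.nn.dilation2d(x, k, strides=[1, 1, 1, 1], padding="VALID",
                     data_format="NHWC", dilations=[1, 1, 1, 1])
print(y.numpy().squeeze())  # 4.0, the max over the window
```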
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. 4-D with shape `[batch, in_height, in_width, depth]`. |
| `filters` | A `Tensor`. Must have the same type as `input`. 3-D with shape `[filter_height, filter_width, depth]`. |
| `strides` | A list of `ints` that has length `>= 4`. The stride of the sliding window for each dimension of the input tensor. Must be: `[1, stride_height, stride_width, 1]`. |
| `padding` | A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A `string`, only `"NHWC"` is currently supported. |
| `dilations` | A list of `ints` that has length `>= 4`. The input stride for atrous morphological dilation. Must be: `[1, rate_height, rate_width, 1]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow Module: tf.nn.experimental Module: tf.nn.experimental
==========================
Public API for tf.nn.experimental namespace.
Functions
---------
[`stateless_dropout(...)`](experimental/stateless_dropout): Computes dropout: randomly sets elements to zero to prevent overfitting.
tensorflow tf.nn.isotonic_regression tf.nn.isotonic\_regression
==========================
Solves isotonic regression problems along the given axis.
```
tf.nn.isotonic_regression(
inputs, decreasing=True, axis=-1
)
```
For each vector x, the problem solved is
\[\operatorname{argmin}_{y_1 \geq y_2 \geq \cdots \geq y_n} \sum_i (x_i - y_i)^2.\]
As the solution is component-wise constant, a second tensor is returned that encodes the segments. The problems are solved over the given axis.
Consider the following example, where we solve a batch of two problems. The first input is [3, 1, 2], while the second is [1, 3, 4] (as the axis is 1).
```
>>> x = tf.constant([[3, 1, 2], [1, 3, 4]], dtype=tf.float32)
>>> y, segments = tf.nn.isotonic_regression(x, axis=1)
>>> y # The solution.
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[3. , 1.5 , 1.5 ],
[2.6666667, 2.6666667, 2.6666667]], dtype=float32)>
```
Note that the first solution has two blocks [3] and [1.5, 1.5]. The second solution is constant, and thus has a single segment. These segments are exactly what the second returned tensor encodes:
```
>>> segments
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[0, 1, 1],
[0, 0, 0]], dtype=int32)>
```
| Args |
| `inputs` | A tensor holding the inputs. |
| `decreasing` | If set to False, the inequalities in the optimization constraints are flipped. |
| `axis` | The axis along which the problems should be solved. |
| Returns |
| `output` | The solutions, same shape and type as the input. |
| `segments` | An int32 tensor, same shape as the input, indicating the segments that have the same value. Specifically, those positions that have the same value correspond to the same segment. These values start at zero and are monotonically increasing for each solution. |
tensorflow tf.nn.fractional_max_pool tf.nn.fractional\_max\_pool
===========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L5891-L6002) |
Performs fractional max pooling on the input.
```
tf.nn.fractional_max_pool(
value,
pooling_ratio,
pseudo_random=False,
overlapping=False,
seed=0,
name=None
)
```
Fractional max pooling is slightly different than regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer.
The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries.
First we define the following:
1. input\_row\_length : the number of rows from the input set
2. output\_row\_length : which will be smaller than the input
3. alpha = input\_row\_length / output\_row\_length : our reduction ratio
4. K = floor(alpha)
5. row\_pooling\_sequence : this is the result list of pool boundary rows
Then, row\_pooling\_sequence should satisfy:
1. a[0] = 0 : the first value of the sequence is 0
2. a[end] = input\_row\_length : the last value of the sequence is the size
3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
4. length(row\_pooling\_sequence) = output\_row\_length+1
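For example (a sketch; with `alpha = 1.5`, a 9-row input should pool down to roughly 9 / 1.5 = 6 rows):
```
import tensorflow as tf

x = tf.random.normal([1, 9, 9, 1])
out, rows, cols = tf.nn.fractional_max_pool(
    x, pooling_ratio=[1.0, 1.5, 1.5, 1.0], seed=1)
print(out.shape)  # (1, 6, 6, 1)
```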
| Args |
| `value` | A `Tensor`. 4-D with shape `[batch, height, width, channels]`. |
| `pooling_ratio` | An int or list of `ints` that has length `1`, `2` or `4`. Pooling ratio for each dimension of `value`; currently only the row and col dimensions are supported, and the ratios should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on the batch and channels dimensions. 1.44 and 1.73 are the pooling ratios on the height and width dimensions respectively. |
| `pseudo_random` | An optional `bool`. Defaults to `False`. When set to `True`, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper (Graham, 2015) for difference between pseudorandom and random. |
| `overlapping` | An optional `bool`. Defaults to `False`. When set to `True`, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example: `index 0 1 2 3 4` `value 20 5 16 3 7` If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. The result would be [20, 16] for fractional max pooling. |
| `seed` | An optional `int`. Defaults to `0`. If set to be non-zero, the random number generator is seeded by the given seed. Otherwise it is seeded by a random seed. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (`output`, `row_pooling_sequence`, `col_pooling_sequence`). |
| `output` | Output `Tensor` after fractional max pooling. Has the same type as `value`. |
| `row_pooling_sequence` | A `Tensor` of type `int64`. |
| `col_pooling_sequence` | A `Tensor` of type `int64`. |
| Raises |
| `ValueError` | If no seed is specified and op determinism is enabled. |
#### References:
Fractional Max-Pooling: [Graham, 2015](https://arxiv.org/abs/1412.6071) ([pdf](https://arxiv.org/pdf/1412.6071.pdf))
tensorflow tf.nn.depthwise_conv2d_backprop_filter tf.nn.depthwise\_conv2d\_backprop\_filter
=========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3089-L3159) |
Computes the gradients of depthwise convolution with respect to the filter.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.depthwise_conv2d_backprop_filter`](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d_backprop_filter), [`tf.compat.v1.nn.depthwise_conv2d_native_backprop_filter`](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d_backprop_filter)
```
tf.nn.depthwise_conv2d_backprop_filter(
input,
filter_sizes,
out_backprop,
strides,
padding,
data_format='NHWC',
dilations=[1, 1, 1, 1],
name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height, in_width, in_channels]` tensor. |
| `filter_sizes` | A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor. |
| `out_backprop` | A `Tensor`. Must have the same type as `input`. 4-D with shape based on `data_format`. For example, if `data_format` is 'NHWC' then out\_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution. |
| `strides` | A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. |
| `padding` | Controls how to pad the image before applying the convolution. Can be the string `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. When explicit padding is used and data\_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`. |
| `data_format` | An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width]. |
| `dilations` | An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D tensor of length 4. The dilation factor for each dimension of `input`. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of `data_format`, see above for details. Dilations in the batch and depth dimensions must be 1. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.nn.max_pool_with_argmax tf.nn.max\_pool\_with\_argmax
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L5091-L5161) |
Performs max pooling on the input and outputs both max values and indices.
```
tf.nn.max_pool_with_argmax(
input,
ksize,
strides,
padding,
data_format='NHWC',
output_dtype=tf.dtypes.int64,
include_batch_in_index=False,
name=None
)
```
The indices in `argmax` are flattened, so that a maximum value at position `[b, y, x, c]` becomes flattened index: `(y * width + x) * channels + c` if `include_batch_in_index` is False; `((b * height + y) * width + x) * channels + c` if `include_batch_in_index` is True.
The indices returned are always in `[0, height) x [0, width)` before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening.
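For example (a minimal sketch; the single window's max is 4.0, at `y = 1, x = 1`):
```
import tensorflow as tf

x = tf.reshape(tf.constant([1., 3., 2., 4.]), [1, 2, 2, 1])
out, argmax = tf.nn.max_pool_with_argmax(x, ksize=2, strides=2, padding="VALID")
print(out.numpy().squeeze())     # 4.0
print(argmax.numpy().squeeze())  # 3 == (y * width + x) * channels + c = (1*2 + 1)*1 + 0
```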
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. 4-D with shape `[batch, height, width, channels]`. Input to pool over. |
| `ksize` | An int or list of `ints` that has length `1`, `2` or `4`. The size of the window for each dimension of the input tensor. |
| `strides` | An int or list of `ints` that has length `1`, `2` or `4`. The stride of the sliding window for each dimension of the input tensor. |
| `padding` | A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | An optional `string`, must be set to `"NHWC"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. |
| `output_dtype` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.int32, tf.int64`. Defaults to [`tf.int64`](../../tf#int64). The dtype of the returned argmax tensor. |
| `include_batch_in_index` | An optional `boolean`. Defaults to `False`. Whether to include batch dimension in flattened index of `argmax`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (output, argmax). |
| `output` | A `Tensor`. Has the same type as `input`. |
| `argmax` | A `Tensor` of type `output_dtype`. |
tensorflow tf.nn.compute_accidental_hits tf.nn.compute\_accidental\_hits
===============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/candidate_sampling_ops.py#L344-L391) |
Compute the position ids in `sampled_candidates` matching `true_classes`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.compute_accidental_hits`](https://www.tensorflow.org/api_docs/python/tf/nn/compute_accidental_hits)
```
tf.nn.compute_accidental_hits(
true_classes, sampled_candidates, num_true, seed=None, name=None
)
```
In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic.
See our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
We presuppose that the `sampled_candidates` are unique.
We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples `(index, id, weight)`, where `index` represents the row number in `true_classes`, `id` represents the position in `sampled_candidates`, and weight is `-FLOAT_MAX`.
The result of this op should be passed through a `sparse_to_dense` operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.
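For example (a minimal sketch in which sampled class `2` collides with a target class):
```
import tensorflow as tf

true_classes = tf.constant([[1, 2]], dtype=tf.int64)  # [batch_size, num_true]
sampled_candidates = tf.constant([2, 5, 7], dtype=tf.int64)
indices, ids, weights = tf.nn.compute_accidental_hits(
    true_classes, sampled_candidates, num_true=2)
print(indices.numpy(), ids.numpy(), weights.numpy())
# [0] [0] [-3.4028235e+38] -- row 0 of true_classes hit candidate position 0
```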
| Args |
| `true_classes` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. |
| `sampled_candidates` | A tensor of type `int64` and shape `[num_sampled]`. The sampled\_candidates output of CandidateSampler. |
| `num_true` | An `int`. The number of target classes per training example. |
| `seed` | An `int`. An operation-specific seed. Default is 0. |
| `name` | A name for the operation (optional). |
| Returns |
| `indices` | A `Tensor` of type `int32` and shape `[num_accidental_hits]`. Values indicate rows in `true_classes`. |
| `ids` | A `Tensor` of type `int64` and shape `[num_accidental_hits]`. Values indicate positions in `sampled_candidates`. |
| `weights` | A `Tensor` of type `float` and shape `[num_accidental_hits]`. Each value is `-FLOAT_MAX`. |
tensorflow tf.nn.safe_embedding_lookup_sparse tf.nn.safe\_embedding\_lookup\_sparse
=====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/embedding_ops.py#L682-L781) |
Lookup embedding results, accounting for invalid IDs and empty features.
```
tf.nn.safe_embedding_lookup_sparse(
embedding_weights,
sparse_ids,
sparse_weights=None,
combiner='mean',
default_id=None,
max_norm=None,
name=None
)
```
The partitioned embedding tensors in `embedding_weights` must all have the same shape except for the first dimension. The first dimension is allowed to vary because the vocabulary size is not necessarily a multiple of the number of shards.
Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs with non-positive weight. For an entry with no features, the embedding vector for `default_id` is returned, or the 0-vector if `default_id` is not supplied.
The ids and weights may be multi-dimensional. Embeddings are always aggregated along the last dimension.
If `len(embedding_weights) > 1`, each element `id` of `ids` is partitioned between the elements of `embedding_weights` according to the "div" partition strategy, which means we assign ids to partitions in a contiguous manner. For instance, 13 ids are split across 5 partitions as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`.
If the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(embedding_weights)` partitions will be assigned one more id.
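A plain-Python sketch of how the "div" strategy assigns ids to shards (`div_partition` is a hypothetical helper for illustration, not part of the API):
```
def div_partition(num_ids, num_shards):
    # Contiguous blocks; the first (num_ids % num_shards) shards get one extra id.
    base, extra = divmod(num_ids, num_shards)
    shards, start = [], 0
    for s in range(num_shards):
        size = base + (1 if s < extra else 0)
        shards.append(list(range(start, start + size)))
        start += size
    return shards

print(div_partition(13, 5))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
```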
| Args |
| `embedding_weights` | A single tensor representing the complete embedding tensor, or a list of tensors all of same shape except for the first dimension, representing sharded embedding tensors following "div" partition strategy. |
| `sparse_ids` | `SparseTensor` of shape `[d_0, d_1, ..., d_n]` containing the ids. `d_0` is typically batch size. |
| `sparse_weights` | `SparseTensor` of same shape as `sparse_ids`, containing float weights corresponding to `sparse_ids`, or `None` if all weights are assumed to be 1.0. |
| `combiner` | A string specifying how to combine embedding results for each entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the default. |
| `default_id` | The id to use for an entry with no features. Defaults to 0-vector. |
| `max_norm` | If not `None`, all embeddings are l2-normalized to max\_norm before combining. |
| `name` | A name for this operation (optional). |
| Returns |
| A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sparse_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified. In other words, if `shape(combined embedding_weights) = [p0, p1, ..., pm]` and `shape(sparse_ids) = shape(sparse_weights) = [d0, d1, ..., dn]` then `shape(output) = [d0, d1, ... dn-1, p1, ..., pm]`. For instance, if `embedding_weights` (`params` below) is a 10x20 matrix, and `sparse_ids` / `sparse_weights` are
```
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id -1, weight 1.0
[2, 3]: id 1, weight 3.0
```
and `default_id` is 0, then with `combiner="mean"` the output will be a 3x20 matrix where
```
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
```
|
| Raises |
| `ValueError` | if `embedding_weights` is empty. |
tensorflow tf.nn.weighted_cross_entropy_with_logits tf.nn.weighted\_cross\_entropy\_with\_logits
============================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L252-L341) |
Computes a weighted cross entropy.
```
tf.nn.weighted_cross_entropy_with_logits(
labels, logits, pos_weight, name=None
)
```
This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight` allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.
The usual cross-entropy cost is defined as:
```
labels * -log(sigmoid(logits)) +
(1 - labels) * -log(1 - sigmoid(logits))
```
A value `pos_weight > 1` decreases the false negative count, hence increasing the recall. Conversely setting `pos_weight < 1` decreases the false positive count and increases the precision. This can be seen from the fact that `pos_weight` is introduced as a multiplicative coefficient for the positive labels term in the loss expression:
```
labels * -log(sigmoid(logits)) * pos_weight +
(1 - labels) * -log(1 - sigmoid(logits))
```
For brevity, let `x = logits`, `z = labels`, `q = pos_weight`. The loss is:
```
qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
= (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))
```
Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow, the implementation uses
```
(1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))
```
`logits` and `labels` must have the same type and shape.
```
labels = tf.constant([1., 0.5, 0.])
logits = tf.constant([1.5, -0.1, -10.])
tf.nn.weighted_cross_entropy_with_logits(
labels=labels, logits=logits, pos_weight=tf.constant(1.5)).numpy()
array([3.0211994e-01, 8.8049585e-01, 4.5776367e-05], dtype=float32)
tf.nn.weighted_cross_entropy_with_logits(
labels=labels, logits=logits, pos_weight=tf.constant(0.5)).numpy()
array([1.00706644e-01, 5.08297503e-01, 4.57763672e-05], dtype=float32)
```
| Args |
| `labels` | A `Tensor` of the same type and shape as `logits`, with values between 0 and 1 inclusive. |
| `logits` | A `Tensor` of type `float32` or `float64`, any real numbers. |
| `pos_weight` | A coefficient to use on the positive examples, typically a scalar but otherwise broadcastable to the shape of `logits`. Its value should be non-negative. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of the same shape as `logits` with the componentwise weighted logistic losses. |
| Raises |
| `ValueError` | If `logits` and `labels` do not have the same shape. |
tensorflow tf.nn.conv1d_transpose tf.nn.conv1d\_transpose
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L2132-L2220) |
The transpose of `conv1d`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.conv1d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv1d_transpose)
```
tf.nn.conv1d_transpose(
input,
filters,
output_shape,
strides,
padding='SAME',
data_format='NWC',
dilations=None,
name=None
)
```
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is actually the transpose (gradient) of `conv1d` rather than an actual deconvolution.
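A minimal usage sketch (shapes are illustrative): with the default `'SAME'` padding and stride 2, the width dimension doubles.
```
import tensorflow as tf

x = tf.random.normal([1, 4, 3])  # [batch, in_width, in_channels]
f = tf.random.normal([3, 8, 3])  # [filter_width, output_channels, in_channels]
y = tf.nn.conv1d_transpose(x, f, output_shape=[1, 8, 8], strides=2)
print(y.shape)  # (1, 8, 8)
```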
| Args |
| `input` | A 3-D `Tensor` of type `float` and shape `[batch, in_width, in_channels]` for `NWC` data format or `[batch, in_channels, in_width]` for `NCW` data format. |
| `filters` | A 3-D `Tensor` with the same type as `input` and shape `[filter_width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `input`. |
| `output_shape` | A 1-D `Tensor`, containing three elements, representing the output shape of the deconvolution op. |
| `strides` | An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A string. `'NWC'` and `'NCW'` are supported. |
| `dilations` | An int or list of `ints` that has length `1` or `3` which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1. |
| `name` | Optional name for the returned tensor. |
| Returns |
| A `Tensor` with the same type as `input`. |
| Raises |
| `ValueError` | If input/output depth does not match `filter`'s shape, if `output_shape` is not a 3-element vector, if `padding` is other than `'VALID'` or `'SAME'`, or if `data_format` is invalid. |
#### References:
Deconvolutional Networks: [Zeiler et al., 2010](https://ieeexplore.ieee.org/abstract/document/5539957) ([pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))
tensorflow tf.nn.avg_pool tf.nn.avg\_pool
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L4434-L4498) |
Performs the avg pooling on the input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.avg_pool_v2`](https://www.tensorflow.org/api_docs/python/tf/nn/avg_pool)
```
tf.nn.avg_pool(
input, ksize, strides, padding, data_format=None, name=None
)
```
Each entry in `output` is the mean of the corresponding size `ksize` window in `input`.
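For example, averaging 2x2 windows of a 4x4 input:
```
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
y = tf.nn.avg_pool(x, ksize=2, strides=2, padding="VALID")
print(y.numpy().squeeze())
# [[ 2.5  4.5]
#  [10.5 12.5]]
```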
| Args |
| `input` | Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + [num_channels]` if `data_format` does not start with "NC" (default), or `[batch_size, num_channels] + input_spatial_shape` if data\_format starts with "NC". Pooling happens over the spatial dimensions only. |
| `ksize` | An int or list of `ints` that has length `1`, `N` or `N+2`. The size of the window for each dimension of the input tensor. |
| `strides` | An int or list of `ints` that has length `1`, `N` or `N+2`. The stride of the sliding window for each dimension of the input tensor. |
| `padding` | A string, either `'VALID'` or `'SAME'`. The padding algorithm. See [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2) for more information. |
| `data_format` | A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW". |
| `name` | Optional name for the operation. |
| Returns |
| A `Tensor` of format specified by `data_format`. The average pooled output tensor. |
tensorflow tf.nn.ctc_loss tf.nn.ctc\_loss
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ctc_ops.py#L880-L986) |
Computes CTC (Connectionist Temporal Classification) loss.
```
tf.nn.ctc_loss(
labels,
logits,
label_length,
logit_length,
logits_time_major=True,
unique=None,
blank_index=None,
name=None
)
```
This op implements the CTC loss as presented in (Graves et al., 2006).
#### Notes:
* Same as the "Classic CTC" in TensorFlow 1.x's tf.compat.v1.nn.ctc\_loss, with preprocess\_collapse\_repeated=False and ctc\_merge\_repeated=True.
* Labels may be supplied as either a dense, zero-padded tensor with a vector of label sequence lengths OR as a SparseTensor.
* On TPU and GPU: Only dense padded labels are supported.
* On CPU: Caller may use SparseTensor or dense padded labels but calling with a SparseTensor will be significantly faster.
* Default blank label is 0 rather than num\_classes - 1, unless overridden by blank\_index (see the sketch below).
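A minimal sketch with dense, zero-padded labels (shapes are illustrative; labels are drawn from `1..num_labels-1` so they never collide with the blank index 0):
```
import tensorflow as tf

batch, frames, num_labels, max_label_len = 2, 50, 10, 12
logits = tf.random.normal([frames, batch, num_labels])  # time-major (the default)
labels = tf.random.uniform([batch, max_label_len],
                           minval=1, maxval=num_labels, dtype=tf.int64)
label_length = tf.constant([12, 9])
logit_length = tf.fill([batch], frames)
loss = tf.nn.ctc_loss(labels, logits, label_length, logit_length, blank_index=0)
print(loss.shape)  # (2,)
```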
| Args |
| `labels` | tensor of shape [batch\_size, max\_label\_seq\_length] or SparseTensor |
| `logits` | tensor of shape [frames, batch\_size, num\_labels], if logits\_time\_major == False, shape is [batch\_size, frames, num\_labels]. |
| `label_length` | tensor of shape [batch\_size], None if labels is SparseTensor Length of reference label sequence in labels. |
| `logit_length` | tensor of shape [batch\_size] Length of input sequence in logits. |
| `logits_time_major` | (optional) If True (default), logits is shaped [time, batch, logits]. If False, shape is [batch, time, logits] |
| `unique` | (optional) Unique label indices as computed by ctc\_unique\_labels(labels). If supplied, enable a faster, memory efficient implementation on TPU. |
| `blank_index` | (optional) Set the class index to use for the blank label. Negative values will start from num\_classes, i.e., -1 will reproduce the ctc\_loss behavior of using num\_classes - 1 for the blank symbol. There is some memory/performance overhead to switching from the default of 0, as an additional shifted copy of the logits may be created. |
| `name` | A name for this `Op`. Defaults to "ctc\_loss\_dense". |
| Returns |
| `loss` | tensor of shape [batch\_size], negative log probabilities. |
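As an illustration, here is a minimal sketch (all shapes and label values below are made up for the example) using dense zero-padded labels and time-major logits:
```
import tensorflow as tf

batch_size, frames, num_labels = 2, 10, 5
logits = tf.random.normal([frames, batch_size, num_labels])  # time-major
labels = tf.constant([[1, 2, 3, 0], [2, 4, 0, 0]])           # zero-padded
label_length = tf.constant([3, 2])                            # true lengths
logit_length = tf.fill([batch_size], frames)
loss = tf.nn.ctc_loss(labels, logits, label_length, logit_length,
                      blank_index=0)
print(loss.shape)  # (2,) -- one negative log probability per example
```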
#### References:
Connectionist Temporal Classification - Labeling Unsegmented Sequence Data with Recurrent Neural Networks: [Graves et al., 2006](https://dl.acm.org/citation.cfm?id=1143891) ([pdf](http://www.cs.toronto.edu/%7Egraves/icml_2006.pdf))
tensorflow tf.nn.nce_loss tf.nn.nce\_loss
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L2007-L2109) |
Computes and returns the noise-contrastive estimation training loss.
```
tf.nn.nce_loss(
weights,
biases,
labels,
inputs,
num_sampled,
num_classes,
num_true=1,
sampled_values=None,
remove_accidental_hits=False,
name='nce_loss'
)
```
See [Noise-contrastive estimation: A new estimation principle for unnormalized statistical models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). Also see our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf)
A common use case is to use this method for training, and calculate the full sigmoid loss for evaluation or inference as in the following example:
```
if mode == "train":
loss = tf.nn.nce_loss(
weights=weights,
biases=biases,
labels=labels,
inputs=inputs,
...)
elif mode == "eval":
logits = tf.matmul(inputs, tf.transpose(weights))
logits = tf.nn.bias_add(logits, biases)
labels_one_hot = tf.one_hot(labels, n_classes)
loss = tf.nn.sigmoid_cross_entropy_with_logits(
labels=labels_one_hot,
logits=logits)
loss = tf.reduce_sum(loss, axis=1)
```
>
> **Note:** when doing embedding lookup on `weights` and `bias`, "div" partition strategy will be used. Support for other partition strategy will be added later.
>
>
> **Note:** By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see [`tf.random.log_uniform_candidate_sampler`](../random/log_uniform_candidate_sampler).
>
>
> **Note:** In the case where `num_true` > 1, we assign to each target class the target probability 1 / `num_true` so that the target probabilities sum to 1 per-example.
>
>
> **Note:** It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class.
>
| Args |
| `weights` | A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape [num\_classes, dim]. The (possibly-partitioned) class embeddings. |
| `biases` | A `Tensor` of shape `[num_classes]`. The class biases. |
| `labels` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. |
| `inputs` | A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network. |
| `num_sampled` | An `int`. The number of negative classes to randomly sample per batch. This single sample of negative classes is evaluated for each element in the batch. |
| `num_classes` | An `int`. The number of possible classes. |
| `num_true` | An `int`. The number of target classes per training example. |
| `sampled_values` | a tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. (if None, we default to `log_uniform_candidate_sampler`) |
| `remove_accidental_hits` | A `bool`. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to `True`, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf). Default is False. |
| `name` | A name for the operation (optional). |
| Returns |
| A `batch_size` 1-D tensor of per-example NCE losses. |
tensorflow tf.nn.separable_conv2d tf.nn.separable\_conv2d
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L1102-L1173) |
2-D convolution with separable filters.
```
tf.nn.separable_conv2d(
input,
depthwise_filter,
pointwise_filter,
strides,
padding,
data_format=None,
dilations=None,
name=None
)
```
Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.
In detail, with the default NHWC format,
```
output[b, i, j, k] = sum_{di, dj, q, r}
input[b, strides[1] * i + di, strides[2] * j + dj, q] *
depthwise_filter[di, dj, q, r] *
pointwise_filter[0, 0, q * channel_multiplier + r, k]
```
`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. If any value in `dilations` is greater than 1, we perform atrous depthwise convolution, in which case all values in the `strides` tensor must be equal to 1.
| Args |
| `input` | 4-D `Tensor` with shape according to `data_format`. |
| `depthwise_filter` | 4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1. |
| `pointwise_filter` | 4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially. |
| `strides` | 1-D of size 4. The strides for the depthwise convolution for each dimension of `input`. |
| `padding` | Controls how to pad the image before applying the depthwise convolution. Can be the string `"SAME"` or `"VALID"` indicating the type of padding algorithm to use, or a Python list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data\_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used and data\_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`. |
| `data_format` | The data format for input. Either "NHWC" (default) or "NCHW". |
| `dilations` | 1-D of size 2. The dilation rate in which we sample input values across the `height` and `width` dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1. |
| `name` | A name for this operation (optional). |
| Returns |
| A 4-D `Tensor` with shape according to 'data\_format'. For example, with data\_format="NHWC", shape is [batch, out\_height, out\_width, out\_channels]. |
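For example, a minimal sketch (filter contents are random and purely illustrative) of a depthwise 3x3 convolution followed by a pointwise 1x1 mix on an NHWC input:
```
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])
depthwise = tf.random.normal([3, 3, 3, 2])       # channel_multiplier = 2
pointwise = tf.random.normal([1, 1, 3 * 2, 16])  # mixes 6 channels into 16
y = tf.nn.separable_conv2d(x, depthwise, pointwise,
                           strides=[1, 1, 1, 1], padding='SAME')
print(y.shape)  # (1, 8, 8, 16)
```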
tensorflow tf.nn.dropout tf.nn.dropout
=============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L5388-L5480) |
Computes dropout: randomly sets elements to zero to prevent overfitting.
```
tf.nn.dropout(
x, rate, noise_shape=None, seed=None, name=None
)
```
>
> **Note:** The behavior of dropout has changed between TensorFlow 1.x and 2.x. When converting 1.x code, please use named arguments to ensure behavior stays consistent.
>
See also: [`tf.keras.layers.Dropout`](../keras/layers/dropout) for a dropout layer.
[Dropout](https://arxiv.org/abs/1207.0580) is useful for regularizing DNN models. Input elements are randomly set to zero (and the other elements are rescaled). This encourages each node to be independently useful, as it cannot rely on the output of other nodes.
More precisely: With probability `rate` elements of `x` are set to `0`. The remaining elements are scaled up by `1.0 / (1 - rate)`, so that the expected value is preserved.
```
tf.random.set_seed(0)
x = tf.ones([3,5])
tf.nn.dropout(x, rate = 0.5, seed = 1).numpy()
array([[2., 0., 0., 2., 2.],
[2., 2., 2., 2., 2.],
[2., 0., 2., 0., 2.]], dtype=float32)
```
```
tf.random.set_seed(0)
x = tf.ones([3,5])
tf.nn.dropout(x, rate = 0.8, seed = 1).numpy()
array([[0., 0., 0., 5., 5.],
[0., 5., 0., 5., 0.],
[5., 0., 5., 0., 5.]], dtype=float32)
```
```
tf.nn.dropout(x, rate = 0.0) == x
<tf.Tensor: shape=(3, 5), dtype=bool, numpy=
array([[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True]])>
```
By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. This is useful for dropping whole channels from an image or sequence. For example:
```
tf.random.set_seed(0)
x = tf.ones([3,10])
tf.nn.dropout(x, rate = 2/3, noise_shape=[1,10], seed=1).numpy()
array([[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.],
[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.],
[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.]], dtype=float32)
```
| Args |
| `x` | A floating point tensor. |
| `rate` | A scalar `Tensor` with the same type as x. The probability that each element is dropped. For example, setting rate=0.1 would drop 10% of input elements. |
| `noise_shape` | A 1-D integer `Tensor`, representing the shape for randomly generated keep/drop flags. |
| `seed` | A Python integer. Used to create random seeds. See [`tf.random.set_seed`](../random/set_seed) for behavior. |
| `name` | A name for this operation (optional). |
| Returns |
| A Tensor of the same shape as `x`. |
| Raises |
| `ValueError` | If `rate` is not in `[0, 1)` or if `x` is not a floating point tensor. `rate=1` is disallowed, because the output would be all zeros, which is likely not what was intended. |
tensorflow tf.nn.RNNCellResidualWrapper tf.nn.RNNCellResidualWrapper
============================
RNNCell wrapper that ensures cell inputs are added to the outputs.
Inherits From: [`Module`](../module)
```
tf.nn.RNNCellResidualWrapper(
*args, **kwargs
)
```
| Args |
| `cell` | An instance of `RNNCell`. |
| `residual_fn` | (Optional) The function to map raw cell inputs and raw cell outputs to the actual cell outputs of the residual network. Defaults to calling nest.map\_structure on (lambda i, o: i + o), inputs and outputs. |
| `**kwargs` | dict of keyword arguments for base layer. |
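As a usage sketch (assuming a Keras `LSTMCell` whose output size matches the input size, which the residual connection requires):
```
import tensorflow as tf

cell = tf.nn.RNNCellResidualWrapper(tf.keras.layers.LSTMCell(8))
x = tf.random.normal([4, 8])  # input size must equal output size
state = cell.get_initial_state(batch_size=4, dtype=tf.float32)
output, new_state = cell(x, state)
print(output.shape)  # (4, 8); output = residual_fn(x, cell_output)
```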
| Attributes |
| `activity_regularizer` | Optional regularizer function for the output of this layer. |
| `compute_dtype` | The dtype of the layer's computations. This is equivalent to `Layer.dtype_policy.compute_dtype`. Unless mixed precision is used, this is the same as [`Layer.dtype`](../keras/layers/layer#dtype), the dtype of the weights. Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in [`Layer.__call__`](../keras/layers/layer#__call__), so you do not have to insert these casts if implementing your own layer. Layers often perform certain internal computations in higher precision when `compute_dtype` is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases. |
| `dtype` | The dtype of the layer weights. This is equivalent to `Layer.dtype_policy.variable_dtype`. Unless mixed precision is used, this is the same as [`Layer.compute_dtype`](../keras/layers/layer#compute_dtype), the dtype of the layer's computations. |
| `dtype_policy` | The dtype policy associated with this layer. This is an instance of a [`tf.keras.mixed_precision.Policy`](../keras/mixed_precision/policy). |
| `dynamic` | Whether the layer is dynamic (eager-only); set in the constructor. |
| `input` | Retrieves the input tensor(s) of a layer. Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer. |
| `input_spec` | `InputSpec` instance(s) describing the input format for this layer. When you create a layer subclass, you can set `self.input_spec` to enable the layer to run input compatibility checks when it is called. Consider a `Conv2D` layer: it can only be called on a single input tensor of rank 4. As such, you can set, in `__init__()`:
```
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```
Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape `(2,)`), it will raise a nicely-formatted error:
```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```
Input checks that can be specified via `input_spec` include:
* Structure (e.g. a single input, a list of 2 inputs, etc.)
* Shape
* Rank (ndim)
* Dtype
For more information, see [`tf.keras.layers.InputSpec`](../keras/layers/inputspec). |
| `losses` | List of losses added using the `add_loss()` API. Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a [`tf.GradientTape`](../gradienttape) will propagate gradients back to the corresponding variables.
```
class MyLayer(tf.keras.layers.Layer):
def call(self, inputs):
self.add_loss(tf.abs(tf.reduce_mean(inputs)))
return inputs
l = MyLayer()
l(np.ones((10, 1)))
l.losses
[1.0]
```
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
len(model.losses)
0
model.add_loss(tf.abs(tf.reduce_mean(x)))
len(model.losses)
1
```
```
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10, kernel_initializer='ones')
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
```
|
| `metrics` | List of metrics added using the `add_metric()` API.
```
input = tf.keras.layers.Input(shape=(3,))
d = tf.keras.layers.Dense(2)
output = d(input)
d.add_metric(tf.reduce_max(output), name='max')
d.add_metric(tf.reduce_min(output), name='min')
[m.name for m in d.metrics]
['max', 'min']
```
|
| `non_trainable_weights` | List of all non-trainable weights tracked by this layer. Non-trainable weights are *not* updated during training. They are expected to be updated manually in `call()`. |
| `output` | Retrieves the output tensor(s) of a layer. Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer. |
| `output_size` | |
| `state_size` | |
| `supports_masking` | Whether this layer supports computing a mask using `compute_mask`. |
| `trainable` | |
| `trainable_weights` | List of all trainable weights tracked by this layer. Trainable weights are updated via gradient descent during training. |
| `variable_dtype` | Alias of [`Layer.dtype`](../keras/layers/layer#dtype), the dtype of the weights. |
| `weights` | Returns the list of all layer variables/weights. |
Methods
-------
### `add_loss`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1412-L1530)
```
add_loss(
losses, **kwargs
)
```
Add loss tensor(s), potentially dependent on layer inputs.
Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs `a` and `b`, some entries in `layer.losses` may be dependent on `a` and some on `b`. This method automatically keeps track of dependencies.
This method can be used inside a subclassed layer or model's `call` function, in which case `losses` should be a Tensor or list of Tensors.
#### Example:
```
class MyLayer(tf.keras.layers.Layer):
def call(self, inputs):
self.add_loss(tf.abs(tf.reduce_mean(inputs)))
return inputs
```
This method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These losses become part of the model's topology and are tracked in `get_config`.
#### Example:
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```
If this is not the case for your loss (if, for example, your loss references a `Variable` of one of the model's layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model's topology since they can't be serialized.
#### Example:
```
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```
| Args |
| `losses` | Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor. |
| `**kwargs` | Additional keyword arguments for backward compatibility. Accepted values: inputs - Deprecated, will be automatically inferred. |
### `add_metric`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1565-L1684)
```
add_metric(
value, name=None, **kwargs
)
```
Adds metric tensor to the layer.
This method can be used inside the `call()` method of a subclassed layer or model.
```
class MyMetricLayer(tf.keras.layers.Layer):
def __init__(self):
super(MyMetricLayer, self).__init__(name='my_metric_layer')
self.mean = tf.keras.metrics.Mean(name='metric_1')
def call(self, inputs):
self.add_metric(self.mean(inputs))
self.add_metric(tf.reduce_sum(inputs), name='metric_2')
return inputs
```
This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These metrics become part of the model's topology and are tracked when you save the model via `save()`.
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.reduce_sum(x), name='metric_1')
```
>
> **Note:** Calling `add_metric()` with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model's inputs.
>
```
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```
| Args |
| `value` | Metric tensor. |
| `name` | String metric name. |
| `**kwargs` | Additional keyword arguments for backward compatibility. Accepted values: `aggregation` - When the `value` tensor provided is not the result of calling a `keras.Metric` instance, it will be aggregated by default using a `keras.Metric.Mean`. |
### `build`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/rnn_cell_wrapper_v2.py#L70-L73)
```
build(
inputs_shape
)
```
Builds the wrapped cell.
### `compute_mask`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L910-L930)
```
compute_mask(
inputs, mask=None
)
```
Computes an output mask tensor.
| Args |
| `inputs` | Tensor or list of tensors. |
| `mask` | Tensor or list of tensors. |
| Returns |
| None or a tensor (or list of tensors, one per output tensor of the layer). |
### `compute_output_shape`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L756-L799)
```
compute_output_shape(
input_shape
)
```
Computes the output shape of the layer.
If the layer has not been built, this method will call `build` on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.
| Args |
| `input_shape` | Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer. |
| Returns |
| An input shape tuple. |
### `count_params`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L2129-L2148)
```
count_params()
```
Count the total number of scalars composing the weights.
| Returns |
| An integer count. |
| Raises |
| `ValueError` | if the layer isn't yet built (in which case its weights aren't yet defined). |
### `from_config`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L394-L404)
```
@classmethod
from_config(
config, custom_objects=None
)
```
Creates a layer from its config.
This method is the reverse of `get_config`, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by `set_weights`).
| Args |
| `config` | A Python dictionary, typically the output of get\_config. |
| Returns |
| A layer instance. |
### `get_config`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L379-L392)
```
get_config()
```
Returns the config of the residual wrapper.
### `get_initial_state`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/recurrent.py#L1092-L1093)
```
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
```
### `get_weights`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1808-L1850)
```
get_weights()
```
Returns the current weights of the layer, as NumPy arrays.
The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.
For example, a `Dense` layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another `Dense` layer:
```
layer_a = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
[2.],
[2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
```
| Returns |
| Weights values as a list of NumPy arrays. |
### `set_weights`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L1723-L1806)
```
set_weights(
weights
)
```
Sets the weights of the layer, from NumPy arrays.
The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, by calling the layer.
For example, a `Dense` layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another `Dense` layer:
```
layer_a = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
[2.],
[2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
```
| Args |
| `weights` | a list of NumPy arrays. The number of arrays and their shape must match number of the dimensions of the weights of the layer (i.e. it should match the output of `get_weights`). |
| Raises |
| `ValueError` | If the provided weights list does not match the layer's specifications. |
### `zero_state`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/layers/legacy_rnn/rnn_cell_wrapper_impl.py#L344-L346)
```
zero_state(
batch_size, dtype
)
```
### `__call__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/keras/engine/base_layer.py#L932-L1053)
```
__call__(
*args, **kwargs
)
```
Wraps `call`, applying pre- and post-processing steps.
| Args |
| `*args` | Positional arguments to be passed to `self.call`. |
| `**kwargs` | Keyword arguments to be passed to `self.call`. |
| Returns |
| Output tensor(s). |
#### Note:
* The following optional keyword arguments are reserved for specific uses:
+ `training`: Boolean scalar tensor of Python boolean indicating whether the `call` is meant for training or inference.
+ `mask`: Boolean input mask.
* If the layer's `call` method takes a `mask` argument (as some Keras layers do), its default value will be set to the mask generated for `inputs` by the previous layer (if `input` did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
* If the layer is not built, the method will call `build`.
| Raises |
| `ValueError` | if the layer's `call` method returns None (an invalid value). |
| `RuntimeError` | if `super().__init__()` was not called in the constructor. |
tensorflow tf.nn.log_softmax tf.nn.log\_softmax
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L3915-L3940) |
Computes log softmax activations.
#### View aliases
**Main aliases**
[`tf.math.log_softmax`](https://www.tensorflow.org/api_docs/python/tf/nn/log_softmax)
```
tf.nn.log_softmax(
logits, axis=None, name=None
)
```
For each batch `i` and class `j` we have
```
logsoftmax = logits - log(reduce_sum(exp(logits), axis))
```
| Args |
| `logits` | A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. |
| `axis` | The dimension softmax would be performed on. The default is -1 which indicates the last dimension. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `logits`. Same shape as `logits`. |
| Raises |
| `InvalidArgumentError` | if `logits` is empty or `axis` is beyond the last dimension of `logits`. |
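For instance, a small sketch showing that exponentiating the result recovers softmax probabilities:
```
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
log_probs = tf.nn.log_softmax(logits)
print(tf.reduce_sum(tf.exp(log_probs), axis=-1))  # ~1.0 per row
```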
tensorflow tf.nn.experimental.stateless_dropout tf.nn.experimental.stateless\_dropout
=====================================
Computes dropout: randomly sets elements to zero to prevent overfitting.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.nn.experimental.stateless_dropout`](https://www.tensorflow.org/api_docs/python/tf/nn/experimental/stateless_dropout)
```
tf.nn.experimental.stateless_dropout(
x, rate, seed, rng_alg=None, noise_shape=None, name=None
)
```
[Dropout](https://arxiv.org/abs/1207.0580) is useful for regularizing DNN models. Input elements are randomly set to zero (and the other elements are rescaled). This encourages each node to be independently useful, as it cannot rely on the output of other nodes.
More precisely: With probability `rate` elements of `x` are set to `0`. The remaining elements are scaled up by `1.0 / (1 - rate)`, so that the expected value is preserved.
```
x = tf.ones([3,5])
tf.nn.experimental.stateless_dropout(x, rate=0.5, seed=[1, 0])
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[2., 0., 2., 0., 0.],
[0., 0., 2., 0., 2.],
[0., 0., 0., 0., 2.]], dtype=float32)>
```
```
x = tf.ones([3,5])
tf.nn.experimental.stateless_dropout(x, rate=0.8, seed=[1, 0])
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[5., 0., 0., 0., 0.],
[0., 0., 0., 0., 5.],
[0., 0., 0., 0., 5.]], dtype=float32)>
```
```
tf.nn.experimental.stateless_dropout(x, rate=0.0, seed=[1, 0]) == x
<tf.Tensor: shape=(3, 5), dtype=bool, numpy=
array([[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True]])>
```
This function is a stateless version of [`tf.nn.dropout`](../dropout), in the sense that no matter how many times you call this function, the same `seed` will lead to the same results, and different `seed` will lead to different results.
```
x = tf.ones([3,5])
tf.nn.experimental.stateless_dropout(x, rate=0.8, seed=[1, 0])
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[5., 0., 0., 0., 0.],
[0., 0., 0., 0., 5.],
[0., 0., 0., 0., 5.]], dtype=float32)>
tf.nn.experimental.stateless_dropout(x, rate=0.8, seed=[1, 0])
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[5., 0., 0., 0., 0.],
[0., 0., 0., 0., 5.],
[0., 0., 0., 0., 5.]], dtype=float32)>
tf.nn.experimental.stateless_dropout(x, rate=0.8, seed=[2, 0])
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[5., 0., 0., 0., 0.],
[0., 0., 0., 5., 0.],
[0., 0., 0., 0., 0.]], dtype=float32)>
tf.nn.experimental.stateless_dropout(x, rate=0.8, seed=[2, 0])
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[5., 0., 0., 0., 0.],
[0., 0., 0., 5., 0.],
[0., 0., 0., 0., 0.]], dtype=float32)>
```
Compare the above results to those of [`tf.nn.dropout`](../dropout) below. The second time [`tf.nn.dropout`](../dropout) is called with the same seed, it will give a different output.
```
tf.random.set_seed(0)
x = tf.ones([3,5])
tf.nn.dropout(x, rate=0.8, seed=1)
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[0., 0., 0., 5., 5.],
[0., 5., 0., 5., 0.],
[5., 0., 5., 0., 5.]], dtype=float32)>
tf.nn.dropout(x, rate=0.8, seed=1)
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[0., 0., 0., 0., 0.],
[0., 0., 0., 5., 0.],
[0., 0., 0., 0., 0.]], dtype=float32)>
tf.nn.dropout(x, rate=0.8, seed=2)
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[0., 0., 0., 0., 0.],
[0., 5., 0., 5., 0.],
[0., 0., 0., 0., 0.]], dtype=float32)>
tf.nn.dropout(x, rate=0.8, seed=2)
<tf.Tensor: shape=(3, 5), dtype=float32, numpy=
array([[0., 0., 0., 0., 0.],
[5., 0., 5., 0., 5.],
[0., 5., 0., 0., 5.]], dtype=float32)>
```
The difference between this function and [`tf.nn.dropout`](../dropout) is analogous to the difference between [`tf.random.stateless_uniform`](../../random/stateless_uniform) and [`tf.random.uniform`](../../random/uniform). Please see [Random number generation](https://www.tensorflow.org/guide/random_numbers) guide for a detailed description of the various RNG systems in TF. As the guide states, legacy stateful RNG ops like [`tf.random.uniform`](../../random/uniform) and [`tf.nn.dropout`](../dropout) are not deprecated yet but highly discouraged, because their states are hard to control.
By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. This is useful for dropping whole channels from an image or sequence. For example:
```
x = tf.ones([3,10])
tf.nn.experimental.stateless_dropout(x, rate=2/3, noise_shape=[1,10],
seed=[1, 0])
<tf.Tensor: shape=(3, 10), dtype=float32, numpy=
array([[3., 0., 0., 0., 0., 0., 0., 3., 0., 3.],
[3., 0., 0., 0., 0., 0., 0., 3., 0., 3.],
[3., 0., 0., 0., 0., 0., 0., 3., 0., 3.]], dtype=float32)>
```
| Args |
| `x` | A floating point tensor. |
| `rate` | A scalar `Tensor` with the same type as x. The probability that each element is dropped. For example, setting rate=0.1 would drop 10% of input elements. |
| `seed` | An integer tensor of shape `[2]`. The seed of the random numbers. |
| `rng_alg` | The algorithm used to generate the random numbers (default to `"auto_select"`). See the `alg` argument of [`tf.random.stateless_uniform`](../../random/stateless_uniform) for the supported values. |
| `noise_shape` | A 1-D integer `Tensor`, representing the shape for randomly generated keep/drop flags. |
| `name` | A name for this operation. |
| Returns |
| A Tensor of the same shape and dtype as `x`. |
| Raises |
| `ValueError` | If `rate` is not in `[0, 1)` or if `x` is not a floating point tensor. `rate=1` is disallowed, because the output would be all zeros, which is likely not what was intended. |
tensorflow tf.ragged.stack_dynamic_partitions tf.ragged.stack\_dynamic\_partitions
====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ragged/ragged_array_ops.py#L573-L673) |
Stacks dynamic partitions of a Tensor or RaggedTensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.stack_dynamic_partitions`](https://www.tensorflow.org/api_docs/python/tf/ragged/stack_dynamic_partitions)
```
tf.ragged.stack_dynamic_partitions(
data, partitions, num_partitions, name=None
)
```
Returns a RaggedTensor `output` with `num_partitions` rows, where the row `output[i]` is formed by stacking all slices `data[j1...jN]` such that `partitions[j1...jN] = i`. Slices of `data` are stacked in row-major order.
If `num_partitions` is an `int` (not a `Tensor`), then this is equivalent to `tf.ragged.stack(tf.dynamic_partition(data, partitions, num_partitions))`.
#### Example:
```
data = ['a', 'b', 'c', 'd', 'e']
partitions = [ 3, 0, 2, 2, 3]
num_partitions = 5
tf.ragged.stack_dynamic_partitions(data, partitions, num_partitions)
<tf.RaggedTensor [[b'b'], [], [b'c', b'd'], [b'a', b'e'], []]>
```
| Args |
| `data` | A `Tensor` or `RaggedTensor` containing the values to stack. |
| `partitions` | An `int32` or `int64` `Tensor` or `RaggedTensor` specifying the partition that each slice of `data` should be added to. `partitions.shape` must be a prefix of `data.shape`. Values must be greater than or equal to zero, and less than `num_partitions`. `partitions` is not required to be sorted. |
| `num_partitions` | An `int32` or `int64` scalar specifying the number of partitions to output. This determines the number of rows in `output`. |
| `name` | A name prefix for the returned tensor (optional). |
| Returns |
| A `RaggedTensor` containing the stacked partitions. The returned tensor has the same dtype as `data`, and its shape is `[num_partitions, (D)] + data.shape[partitions.rank:]`, where `(D)` is a ragged dimension whose length is the number of data slices stacked for each `partition`. |
tensorflow tf.ragged.boolean_mask tf.ragged.boolean\_mask
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ragged/ragged_array_ops.py#L47-L209) |
Applies a boolean mask to `data` without flattening the mask dimensions.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.boolean_mask`](https://www.tensorflow.org/api_docs/python/tf/ragged/boolean_mask)
```
tf.ragged.boolean_mask(
data, mask, name=None
)
```
Returns a potentially ragged tensor that is formed by retaining the elements in `data` where the corresponding value in `mask` is `True`.
* `output[a1...aA, i, b1...bB] = data[a1...aA, j, b1...bB]`
Where `j` is the `i`th `True` entry of `mask[a1...aA]`.
Note that `output` preserves the mask dimensions `a1...aA`; this differs from [`tf.boolean_mask`](../boolean_mask), which flattens those dimensions.
| Args |
| `data` | A potentially ragged tensor. |
| `mask` | A potentially ragged boolean tensor. `mask`'s shape must be a prefix of `data`'s shape. `rank(mask)` must be known statically. |
| `name` | A name prefix for the returned tensor (optional). |
| Returns |
| A potentially ragged tensor that is formed by retaining the elements in `data` where the corresponding value in `mask` is `True`. * `rank(output) = rank(data)`.
* `output.ragged_rank = max(data.ragged_rank, rank(mask) - 1)`.
|
| Raises |
| `ValueError` | if `rank(mask)` is not known statically; or if `mask.shape` is not a prefix of `data.shape`. |
#### Examples:
```
# Aliases for True & False so data and mask line up.
T, F = (True, False)
```
```
tf.ragged.boolean_mask( # Mask a 2D Tensor.
data=[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
mask=[[T, F, T], [F, F, F], [T, F, F]]).to_list()
[[1, 3], [], [7]]
```
```
tf.ragged.boolean_mask( # Mask a 2D RaggedTensor.
tf.ragged.constant([[1, 2, 3], [4], [5, 6]]),
tf.ragged.constant([[F, F, T], [F], [T, T]])).to_list()
[[3], [], [5, 6]]
```
```
tf.ragged.boolean_mask( # Mask rows of a 2D RaggedTensor.
tf.ragged.constant([[1, 2, 3], [4], [5, 6]]),
tf.ragged.constant([True, False, True])).to_list()
[[1, 2, 3], [5, 6]]
```
tensorflow tf.ragged.cross tf.ragged.cross
===============
Generates feature cross from a list of tensors.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.cross`](https://www.tensorflow.org/api_docs/python/tf/ragged/cross)
```
tf.ragged.cross(
inputs, name=None
)
```
The input tensors must have `rank=2`, and must all have the same number of rows. The result is a `RaggedTensor` with the same number of rows as the inputs, where `result[row]` contains a list of all combinations of values formed by taking a single value from each input's corresponding row (`inputs[i][row]`). Values are combined by joining their strings with `'_X_'`. E.g.:
```
tf.ragged.cross([tf.ragged.constant([['a'], ['b', 'c']]),
tf.ragged.constant([['d'], ['e']]),
tf.ragged.constant([['f'], ['g']])])
<tf.RaggedTensor [[b'a_X_d_X_f'], [b'b_X_e_X_g', b'c_X_e_X_g']]>
```
| Args |
| `inputs` | A list of `RaggedTensor` or `Tensor` or `SparseTensor`. |
| `name` | Optional name for the op. |
| Returns |
| A 2D `RaggedTensor` of type `string`. |
tensorflow tf.ragged.segment_ids_to_row_splits tf.ragged.segment\_ids\_to\_row\_splits
=======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ragged/segment_id_ops.py#L74-L133) |
Generates the RaggedTensor `row_splits` corresponding to a segmentation.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.segment_ids_to_row_splits`](https://www.tensorflow.org/api_docs/python/tf/ragged/segment_ids_to_row_splits)
```
tf.ragged.segment_ids_to_row_splits(
segment_ids, num_segments=None, out_type=None, name=None
)
```
Returns an integer vector `splits`, where `splits[0] = 0` and `splits[i] = splits[i-1] + count(segment_ids==i)`. Example:
```
print(tf.ragged.segment_ids_to_row_splits([0, 0, 0, 2, 2, 3, 4, 4, 4]))
tf.Tensor([0 3 3 5 6 9], shape=(6,), dtype=int64)
```
| Args |
| `segment_ids` | A 1-D integer Tensor. |
| `num_segments` | A scalar integer indicating the number of segments. Defaults to `max(segment_ids) + 1` (or zero if `segment_ids` is empty). |
| `out_type` | The dtype for the return value. Defaults to `segment_ids.dtype`, or [`tf.int64`](../../tf#int64) if `segment_ids` does not have a dtype. |
| `name` | A name prefix for the returned tensor (optional). |
| Returns |
| A sorted 1-D integer Tensor, with `shape=[num_segments + 1]`. |
tensorflow tf.ragged.constant tf.ragged.constant
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ragged/ragged_factory_ops.py#L33-L83) |
Constructs a constant RaggedTensor from a nested Python list.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.constant`](https://www.tensorflow.org/api_docs/python/tf/ragged/constant)
```
tf.ragged.constant(
pylist,
dtype=None,
ragged_rank=None,
inner_shape=None,
name=None,
row_splits_dtype=tf.dtypes.int64
)
```
#### Example:
```
tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
<tf.RaggedTensor [[1, 2], [3], [4, 5, 6]]>
```
All scalar values in `pylist` must have the same nesting depth `K`, and the returned `RaggedTensor` will have rank `K`. If `pylist` contains no scalar values, then `K` is one greater than the maximum depth of empty lists in `pylist`. All scalar values in `pylist` must be compatible with `dtype`.
| Args |
| `pylist` | A nested `list`, `tuple` or `np.ndarray`. Any nested element that is not a `list`, `tuple` or `np.ndarray` must be a scalar value compatible with `dtype`. |
| `dtype` | The type of elements for the returned `RaggedTensor`. If not specified, then a default is chosen based on the scalar values in `pylist`. |
| `ragged_rank` | An integer specifying the ragged rank of the returned `RaggedTensor`. Must be nonnegative and less than `K`. Defaults to `max(0, K - 1)` if `inner_shape` is not specified. Defaults to `max(0, K - 1 - len(inner_shape))` if `inner_shape` is specified. |
| `inner_shape` | A tuple of integers specifying the shape for individual inner values in the returned `RaggedTensor`. Defaults to `()` if `ragged_rank` is not specified. If `ragged_rank` is specified, then a default is chosen based on the contents of `pylist`. |
| `name` | A name prefix for the returned tensor (optional). |
| `row_splits_dtype` | data type for the constructed `RaggedTensor`'s row\_splits. One of [`tf.int32`](../../tf#int32) or [`tf.int64`](../../tf#int64). |
| Returns |
| A potentially ragged tensor with rank `K` and the specified `ragged_rank`, containing the values from `pylist`. |
| Raises |
| `ValueError` | If the scalar values in `pylist` have inconsistent nesting depth; or if ragged\_rank or inner\_shape are incompatible with `pylist`. |
tensorflow tf.ragged.cross_hashed tf.ragged.cross\_hashed
=======================
Generates hashed feature cross from a list of tensors.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.cross_hashed`](https://www.tensorflow.org/api_docs/python/tf/ragged/cross_hashed)
```
tf.ragged.cross_hashed(
inputs, num_buckets=0, hash_key=None, name=None
)
```
The input tensors must have `rank=2`, and must all have the same number of rows. The result is a `RaggedTensor` with the same number of rows as the inputs, where `result[row]` contains a list of all combinations of values formed by taking a single value from each input's corresponding row (`inputs[i][row]`). Values are combined by hashing together their fingerprints. E.g.:
```
tf.ragged.cross_hashed([tf.ragged.constant([['a'], ['b', 'c']]),
tf.ragged.constant([['d'], ['e']]),
tf.ragged.constant([['f'], ['g']])],
num_buckets=100)
<tf.RaggedTensor [[78], [66, 74]]>
```
| Args |
| `inputs` | A list of `RaggedTensor` or `Tensor` or `SparseTensor`. |
| `num_buckets` | A non-negative `int` used to bucket the hashed values. If `num_buckets != 0`, then `output = hashed_value % num_buckets`. |
| `hash_key` | Integer hash\_key that will be used by the `FingerprintCat64` function. If not given, a default key is used. |
| `name` | Optional name for the op. |
| Returns |
| A 2D `RaggedTensor` of type `int64`. |
tensorflow tf.ragged.range tf.ragged.range
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ragged/ragged_math_ops.py#L41-L112) |
Returns a `RaggedTensor` containing the specified sequences of numbers.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.range`](https://www.tensorflow.org/api_docs/python/tf/ragged/range)
```
tf.ragged.range(
starts,
limits=None,
deltas=1,
dtype=None,
name=None,
row_splits_dtype=tf.dtypes.int64
)
```
Each row of the returned `RaggedTensor` contains a single sequence:
```
ragged.range(starts, limits, deltas)[i] ==
tf.range(starts[i], limits[i], deltas[i])
```
If `start[i] >= limits[i]` and `deltas[i] > 0`, then `output[i]` will be an empty list. Similarly, if `start[i] <= limits[i]` and `deltas[i] < 0`, then `output[i]` will be an empty list. This behavior is consistent with the Python `range` function, but differs from the [`tf.range`](../range) op, which returns an error for these cases.
#### Examples:
```
tf.ragged.range([3, 5, 2]).to_list()
[[0, 1, 2], [0, 1, 2, 3, 4], [0, 1]]
tf.ragged.range([0, 5, 8], [3, 3, 12]).to_list()
[[0, 1, 2], [], [8, 9, 10, 11]]
tf.ragged.range([0, 5, 8], [3, 3, 12], 2).to_list()
[[0, 2], [], [8, 10]]
```
The input tensors `starts`, `limits`, and `deltas` may be scalars or vectors. The vector inputs must all have the same size. Scalar inputs are broadcast to match the size of the vector inputs.
| Args |
| `starts` | Vector or scalar `Tensor`. Specifies the first entry for each range if `limits` is not `None`; otherwise, specifies the range limits, and the first entries default to `0`. |
| `limits` | Vector or scalar `Tensor`. Specifies the exclusive upper limits for each range. |
| `deltas` | Vector or scalar `Tensor`. Specifies the increment for each range. Defaults to `1`. |
| `dtype` | The type of the elements of the resulting tensor. If not specified, then a value is chosen based on the other args. |
| `name` | A name for the operation. |
| `row_splits_dtype` | `dtype` for the returned `RaggedTensor`'s `row_splits` tensor. One of [`tf.int32`](../../tf#int32) or [`tf.int64`](../../tf#int64). |
| Returns |
| A `RaggedTensor` of type `dtype` with `ragged_rank=1`. |
tensorflow tf.ragged.map_flat_values tf.ragged.map\_flat\_values
===========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ragged/ragged_functional_ops.py#L25-L136) |
Applies `op` to the `flat_values` of one or more RaggedTensors.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.map_flat_values`](https://www.tensorflow.org/api_docs/python/tf/ragged/map_flat_values)
```
tf.ragged.map_flat_values(
op, *args, **kwargs
)
```
Replaces any `RaggedTensor` in `args` or `kwargs` with its `flat_values` tensor (which collapses all ragged dimensions), and then calls `op`. Returns a `RaggedTensor` that is constructed from the input `RaggedTensor`s' `nested_row_splits` and the value returned by the `op`.
If the input arguments contain multiple `RaggedTensor`s, then they must have identical `nested_row_splits`.
This operation is generally used to apply elementwise operations to each value in a `RaggedTensor`.
#### Examples:
```
rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])
tf.ragged.map_flat_values(tf.ones_like, rt)
<tf.RaggedTensor [[1, 1, 1], [], [1, 1], [1]]>
tf.ragged.map_flat_values(tf.multiply, rt, rt)
<tf.RaggedTensor [[1, 4, 9], [], [16, 25], [36]]>
tf.ragged.map_flat_values(tf.add, rt, 5)
<tf.RaggedTensor [[6, 7, 8], [], [9, 10], [11]]>
```
Example with a non-elementwise operation (note that `map_flat_values` and `map_fn` return different results):
```
rt = tf.ragged.constant([[1.0, 3.0], [], [3.0, 6.0, 3.0]])
def normalized(x):
return x / tf.reduce_sum(x)
tf.ragged.map_flat_values(normalized, rt)
<tf.RaggedTensor [[0.0625, 0.1875], [], [0.1875, 0.375, 0.1875]]>
tf.map_fn(normalized, rt)
<tf.RaggedTensor [[0.25, 0.75], [], [0.25, 0.5, 0.25]]>
```
| Args |
| `op` | The operation that should be applied to the RaggedTensor `flat_values`. `op` is typically an element-wise operation (such as `tf.add`), but any operation that preserves the size of the outermost dimension can be used. I.e., `shape[0]` of the value returned by `op` must match `shape[0]` of the `RaggedTensor`s' `flat_values` tensors. |
| `*args` | Arguments for `op`. |
| `**kwargs` | Keyword arguments for `op`. |
| Returns |
| A `RaggedTensor` whose `ragged_rank` matches the `ragged_rank` of all input `RaggedTensor`s. |
| Raises |
| `ValueError` | If args contains no `RaggedTensors`, or if the `nested_splits` of the input `RaggedTensor`s are not identical. |
tensorflow tf.ragged.row_splits_to_segment_ids tf.ragged.row\_splits\_to\_segment\_ids
=======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ragged/segment_id_ops.py#L30-L69) |
Generates the segmentation corresponding to a RaggedTensor `row_splits`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.row_splits_to_segment_ids`](https://www.tensorflow.org/api_docs/python/tf/ragged/row_splits_to_segment_ids)
```
tf.ragged.row_splits_to_segment_ids(
splits, name=None, out_type=None
)
```
Returns an integer vector `segment_ids`, where `segment_ids[i] == j` if `splits[j] <= i < splits[j+1]`. Example:
```
print(tf.ragged.row_splits_to_segment_ids([0, 3, 3, 5, 6, 9]))
tf.Tensor([0 0 0 2 2 3 4 4 4], shape=(9,), dtype=int64)
```
| Args |
| `splits` | A sorted 1-D integer Tensor. `splits[0]` must be zero. |
| `name` | A name prefix for the returned tensor (optional). |
| `out_type` | The dtype for the return value. Defaults to `splits.dtype`, or [`tf.int64`](../../tf#int64) if `splits` does not have a dtype. |
| Returns |
| A sorted 1-D integer Tensor, with `shape=[splits[-1]]` |
| Raises |
| `ValueError` | If `splits` is invalid. |
tensorflow tf.ragged.stack tf.ragged.stack
===============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/ragged/ragged_concat_ops.py#L72-L123) |
Stacks a list of rank-`R` tensors into one rank-`(R+1)` `RaggedTensor`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.ragged.stack`](https://www.tensorflow.org/api_docs/python/tf/ragged/stack)
```
tf.ragged.stack(
values: typing.List[ragged_tensor.RaggedOrDense], axis=0, name=None
)
```
Given a list of tensors or ragged tensors with the same rank `R` (`R >= axis`), returns a rank-`R+1` `RaggedTensor` `result` such that `result[i0...iaxis]` is `[value[i0...iaxis] for value in values]`.
#### Examples:
```
# Stacking two ragged tensors.
t1 = tf.ragged.constant([[1, 2], [3, 4, 5]])
t2 = tf.ragged.constant([[6], [7, 8, 9]])
tf.ragged.stack([t1, t2], axis=0)
<tf.RaggedTensor [[[1, 2], [3, 4, 5]], [[6], [7, 8, 9]]]>
tf.ragged.stack([t1, t2], axis=1)
<tf.RaggedTensor [[[1, 2], [6]], [[3, 4, 5], [7, 8, 9]]]>
```
```
# Stacking two dense tensors with different sizes.
t3 = tf.constant([[1, 2, 3], [4, 5, 6]])
t4 = tf.constant([[5], [6], [7]])
tf.ragged.stack([t3, t4], axis=0)
<tf.RaggedTensor [[[1, 2, 3], [4, 5, 6]], [[5], [6], [7]]]>
```
| Args |
| `values` | A list of [`tf.Tensor`](../tensor) or [`tf.RaggedTensor`](../raggedtensor). May not be empty. All `values` must have the same rank and the same dtype; but unlike [`tf.stack`](../stack), they can have arbitrary dimension sizes. |
| `axis` | A python integer, indicating the dimension along which to stack. (Note: Unlike [`tf.stack`](../stack), the `axis` parameter must be statically known.) Negative values are supported only if the rank of at least one `values` value is statically known. |
| `name` | A name prefix for the returned tensor (optional). |
| Returns |
| A `RaggedTensor` with rank `R+1` (if `R>0`). If `R==0`, then the result will be returned as a 1D `Tensor`, since `RaggedTensor` can only be used when `rank>1`. `result.ragged_rank = 1 + max(axis, max(rt.ragged_rank for rt in values))`. |
| Raises |
| `ValueError` | If `values` is empty, if `axis` is out of bounds or if the input tensors have different ranks. |
tensorflow tf.test.is_built_with_xla tf.test.is\_built\_with\_xla
============================
Returns whether TensorFlow was built with XLA support.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.is_built_with_xla`](https://www.tensorflow.org/api_docs/python/tf/test/is_built_with_xla)
```
tf.test.is_built_with_xla()
```
This method should only be used in tests written with [`tf.test.TestCase`](testcase). A typical usage is to skip tests that should only run with XLA.
```
class MyTest(tf.test.TestCase):

  def test_add_on_xla(self):
    if not tf.test.is_built_with_xla():
      self.skipTest("test is only applicable on XLA")

    @tf.function(jit_compile=True)
    def add(x, y):
      return tf.math.add(x, y)

    self.assertEqual(add(tf.ones(()), tf.ones(())), 2.0)
```
TensorFlow official binary is built with XLA.
tensorflow tf.test.is_built_with_rocm tf.test.is\_built\_with\_rocm
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/test.py#L113-L131) |
Returns whether TensorFlow was built with ROCm (GPU) support.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.is_built_with_rocm`](https://www.tensorflow.org/api_docs/python/tf/test/is_built_with_rocm)
```
tf.test.is_built_with_rocm()
```
This method should only be used in tests written with [`tf.test.TestCase`](testcase). A typical usage is to skip tests that should only run with ROCm (GPU).
```
class MyTest(tf.test.TestCase):
def test_add_on_gpu(self):
if not tf.test.is_built_with_rocm():
self.skipTest("test is only applicable on GPU")
with tf.device("GPU:0"):
self.assertEqual(tf.math.add(1.0, 2.0), 3.0)
```
TensorFlow official binary is NOT built with ROCm.
tensorflow tf.test.with_eager_op_as_function tf.test.with\_eager\_op\_as\_function
=====================================
Adds methods that call original methods with eager\_op\_as\_function enabled.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.with_eager_op_as_function`](https://www.tensorflow.org/api_docs/python/tf/test/with_eager_op_as_function)
```
tf.test.with_eager_op_as_function(
cls=None, only_as_function=False
)
```
#### Example:
```
@test_util.with_eager_op_as_function
class SessionTest(test.TestCase):

  def testEnabledForEagerOpAsFunction(self):
    ...

  @disable_eager_op_as_function("b/xyzabc")
  def testDisabledForEagerOpAsFunction(self):
    ...
```
#### Generated class:
```
class SessionTest(test.TestCase):

  def testEnabledForEagerOpAsFunction(self):
    ...

  def testEnabledForEagerOpAsFunctionWithEagerOpAsFunctionEnabled(self):
    # Enable run_eager_op_as_function
    # Reset context
    testEnabledForEagerOpAsFunction(self)
    # Disable run_eager_op_as_function
    # Reset context

  def testDisabledForEagerOpAsFunction(self):
    ...
```
| Args |
| `cls` | class to decorate. |
| `only_as_function` | whether to run all the tests in the TestCase in eager mode and in eager\_op\_as\_function mode. By default it will run all tests in both modes. When `only_as_function=True` tests will not be run in eager mode. |
| Returns |
| cls with new test methods added. |
tensorflow tf.test.is_built_with_gpu_support tf.test.is\_built\_with\_gpu\_support
=====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/test.py#L152-L170) |
Returns whether TensorFlow was built with GPU (CUDA or ROCm) support.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.is_built_with_gpu_support`](https://www.tensorflow.org/api_docs/python/tf/test/is_built_with_gpu_support)
```
tf.test.is_built_with_gpu_support()
```
This method should only be used in tests written with [`tf.test.TestCase`](testcase). A typical usage is to skip tests that should only run with GPU.
```
class MyTest(tf.test.TestCase):
def test_add_on_gpu(self):
if not tf.test.is_built_with_gpu_support():
self.skipTest("test is only applicable on GPU")
with tf.device("GPU:0"):
self.assertEqual(tf.math.add(1.0, 2.0), 3.0)
```
TensorFlow official binary is built with CUDA GPU support.
tensorflow tf.test.benchmark_config tf.test.benchmark\_config
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/benchmark.py#L291-L301)
Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.benchmark_config`](https://www.tensorflow.org/api_docs/python/tf/test/benchmark_config)
```
tf.test.benchmark_config()
```
| Returns |
| A TensorFlow ConfigProto object. |
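For illustration, a minimal sketch of how this config might be used inside a `tf.test.Benchmark` subclass; the op, sizes, and benchmark name below are made up:

```
import tensorflow as tf

class MatmulBenchmark(tf.test.Benchmark):

  def benchmark_matmul(self):
    # Hypothetical benchmark; op, sizes, and name are illustrative.
    with tf.Graph().as_default():
      a = tf.random.normal([256, 256])
      product = tf.linalg.matmul(a, a)
      with tf.compat.v1.Session(config=tf.test.benchmark_config()) as sess:
        self.run_op_benchmark(sess, product, min_iters=25, name="matmul_256")
```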
tensorflow tf.test.Benchmark tf.test.Benchmark
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/benchmark.py#L305-L437)
Abstract class that provides helpers for TensorFlow benchmarks.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.Benchmark`](https://www.tensorflow.org/api_docs/python/tf/test/Benchmark)
```
tf.test.Benchmark()
```
Methods
-------
### `evaluate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/benchmark.py#L427-L437)
```
evaluate(
tensors
)
```
Evaluates tensors and returns numpy values.
| Args |
| `tensors` | A Tensor or a nested list/tuple of Tensors. |
| Returns |
| tensors numpy values. |
### `is_abstract`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/benchmark.py#L314-L318)
```
@classmethod
is_abstract()
```
### `report_benchmark`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/benchmark.py#L259-L288)
```
report_benchmark(
iters=None,
cpu_time=None,
wall_time=None,
throughput=None,
extras=None,
name=None,
metrics=None
)
```
Report a benchmark.
| Args |
| `iters` | (optional) How many iterations were run |
| `cpu_time` | (optional) Median or mean cpu time in seconds. |
| `wall_time` | (optional) Median or mean wall time in seconds. |
| `throughput` | (optional) Throughput (in MB/s) |
| `extras` | (optional) Dict mapping string keys to additional benchmark info. Values may be either floats or values that are convertible to strings. |
| `name` | (optional) Override the BenchmarkEntry name with `name`. Otherwise it is inferred from the top-level method name. |
| `metrics` | (optional) A list of dict, where each dict has the keys below name (required), string, metric name value (required), double, metric value min\_value (optional), double, minimum acceptable metric value max\_value (optional), double, maximum acceptable metric value |
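A hedged sketch of a `report_benchmark` call; all numbers and key names below are illustrative, not measured values:

```
# Inside a tf.test.Benchmark subclass method; all values are illustrative.
self.report_benchmark(
    iters=100,
    wall_time=0.042,  # mean wall time per iteration, in seconds
    throughput=12.5,  # MB/s
    extras={"batch_size": 32},
    metrics=[{"name": "examples_per_sec", "value": 760.0}])
```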
### `run_op_benchmark`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/benchmark.py#L320-L425)
```
run_op_benchmark(
sess,
op_or_tensor,
feed_dict=None,
burn_iters=2,
min_iters=10,
store_trace=False,
store_memory_usage=True,
name=None,
extras=None,
mbs=0
)
```
Run an op or tensor in the given session. Report the results.
| Args |
| `sess` | `Session` object to use for timing. |
| `op_or_tensor` | `Operation` or `Tensor` to benchmark. |
| `feed_dict` | A `dict` of values to feed for each op iteration (see the `feed_dict` parameter of `Session.run`). |
| `burn_iters` | Number of burn-in iterations to run. |
| `min_iters` | Minimum number of iterations to use for timing. |
| `store_trace` | Boolean, whether to run an extra untimed iteration and store the trace of iteration in returned extras. The trace will be stored as a string in Google Chrome trace format in the extras field "full\_trace\_chrome\_format". Note that trace will not be stored in test\_log\_pb2.TestResults proto. |
| `store_memory_usage` | Boolean, whether to run an extra untimed iteration, calculate memory usage, and store that in extras fields. |
| `name` | (optional) Override the BenchmarkEntry name with `name`. Otherwise it is inferred from the top-level method name. |
| `extras` | (optional) Dict mapping string keys to additional benchmark info. Values may be either floats or values that are convertible to strings. |
| `mbs` | (optional) The number of megabytes moved by this op, used to calculate the ops throughput. |
| Returns |
| A `dict` containing the key-value pairs that were passed to `report_benchmark`. If the `store_trace` option is used, then `full_trace_chrome_format` will be included in the returned dictionary even though it is not passed to `report_benchmark` with `extras`. |
tensorflow tf.test.main tf.test.main
============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/test.py#L52-L56)
Runs all unit tests.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.main`](https://www.tensorflow.org/api_docs/python/tf/test/main)
```
tf.test.main(
argv=None
)
```
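A typical test file ends by delegating to `tf.test.main`; the test class below is a minimal illustrative example:

```
import tensorflow as tf

class SquareTest(tf.test.TestCase):

  def test_square(self):
    self.assertAllEqual(tf.square([2, 3]), [4, 9])

if __name__ == "__main__":
  tf.test.main()
```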
tensorflow tf.test.assert_equal_graph_def tf.test.assert\_equal\_graph\_def
=================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L199-L217)
Asserts that two `GraphDef`s are (mostly) the same.
```
tf.test.assert_equal_graph_def(
expected, actual
)
```
Compares two `GraphDef` protos for equality, ignoring versions and ordering of nodes, attrs, and control inputs. Node names are used to match up nodes between the graphs, so the naming of nodes must be consistent. This function ignores randomized attribute values that may appear in V2 checkpoints.
| Args |
| `expected` | The `GraphDef` we expected. |
| `actual` | The `GraphDef` we have. |
| Raises |
| `AssertionError` | If the `GraphDef`s do not match. |
| `TypeError` | If either argument is not a `GraphDef`. |
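A minimal sketch: two graphs built by the same function should compare equal, node names included (the `build_graph` helper is illustrative):

```
import tensorflow as tf

def build_graph():  # Illustrative helper.
  g = tf.Graph()
  with g.as_default():
    x = tf.constant(1.0, name="x")
    tf.identity(x, name="y")
  return g

tf.test.assert_equal_graph_def(build_graph().as_graph_def(),
                               build_graph().as_graph_def())
```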
tensorflow tf.test.gpu_device_name tf.test.gpu\_device\_name
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L150-L169)
Returns the name of a GPU device if available or an empty string.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.gpu_device_name`](https://www.tensorflow.org/api_docs/python/tf/test/gpu_device_name)
```
tf.test.gpu_device_name()
```
This method should only be used in tests written with [`tf.test.TestCase`](testcase).
```
class MyTest(tf.test.TestCase):
def test_add_on_gpu(self):
if not tf.test.is_built_with_gpu_support():
self.skipTest("test is only applicable on GPU")
with tf.device(tf.test.gpu_device_name()):
self.assertEqual(tf.math.add(1.0, 2.0), 3.0)
```
tensorflow tf.test.disable_with_predicate tf.test.disable\_with\_predicate
================================
Disables the test if pred is true.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.disable_with_predicate`](https://www.tensorflow.org/api_docs/python/tf/test/disable_with_predicate)
```
tf.test.disable_with_predicate(
pred, skip_message
)
```
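A plausible usage sketch, assuming `pred` is a zero-argument callable that is evaluated when the test runs and `skip_message` is the skip reason; the test body is illustrative:

```
class MyTest(tf.test.TestCase):

  # pred is assumed to be a zero-argument callable checked at test time.
  @tf.test.disable_with_predicate(
      pred=lambda: not tf.test.is_built_with_gpu_support(),
      skip_message="Test requires a GPU-enabled build.")
  def test_gpu_only_behavior(self):
    with tf.device("GPU:0"):
      self.assertEqual(tf.math.add(1.0, 2.0), 3.0)
```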
tensorflow tf.test.TestCase tf.test.TestCase
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2395-L3716)
Base class for tests that need to test TensorFlow.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.TestCase`](https://www.tensorflow.org/api_docs/python/tf/test/TestCase)
```
tf.test.TestCase(
methodName='runTest'
)
```
Child Classes
-------------
[`class failureException`](testcase/failureexception)
Methods
-------
### `addCleanup`
```
addCleanup(
*args, **kwargs
)
```
Add a function, with arguments, to be called when the test is completed. Functions added are called on a LIFO basis and are called after tearDown on test failure or success.
Cleanup items are called even if setUp fails (unlike tearDown).
### `addTypeEqualityFunc`
```
addTypeEqualityFunc(
typeobj, function
)
```
Add a type specific assertEqual style function to compare a type.
This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages.
| Args |
| `typeobj` | The data type to call this function on when both values are of the same type in assertEqual(). |
| `function` | The callable taking two arguments and an optional msg= argument that raises self.failureException with a useful error message when the two arguments are not equal. |
### `assertAllClose`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3077-L3108)
```
assertAllClose(
a, b, rtol=1e-06, atol=1e-06, msg=None
)
```
Asserts that two structures of numpy arrays or Tensors have near values.
`a` and `b` can be arbitrarily nested structures. A layer of a nested structure can be a `dict`, `namedtuple`, `tuple` or `list`.
>
> **Note:** the implementation follows [`numpy.allclose`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html) (and numpy.testing.assert\_allclose). It checks whether two arrays are element-wise equal within a tolerance. The relative difference (`rtol * abs(b)`) and the absolute difference `atol` are added together to compare against the absolute difference between `a` and `b`.
>
| Args |
| `a` | The expected numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor), or any arbitrarily nested structure of these. |
| `b` | The actual numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor), or any arbitrarily nested structure of these. |
| `rtol` | relative tolerance. |
| `atol` | absolute tolerance. |
| `msg` | Optional message to report on failure. |
| Raises |
| `ValueError` | if only one of `a[p]` and `b[p]` is a dict or `a[p]` and `b[p]` have different length, where `[p]` denotes a path to the nested structure, e.g. given `a = [(1, 1), {'d': (6, 7)}]` and `[p] = [1]['d']`, then `a[p] = (6, 7)`. |
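For illustration, a short sketch comparing nested structures that mix Python lists, tuples, dicts, and Tensors (names and values are made up):

```
# Inside a tf.test.TestCase method; names and values are illustrative.
expected = {"logits": [0.1, 0.2], "labels": (1.0, 0.0)}
actual = {"logits": tf.constant([0.1000004, 0.2]), "labels": (1.0, 0.0)}
self.assertAllClose(expected, actual, rtol=1e-5, atol=1e-5)
```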
### `assertAllCloseAccordingToType`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3110-L3157)
```
assertAllCloseAccordingToType(
a,
b,
rtol=1e-06,
atol=1e-06,
float_rtol=1e-06,
float_atol=1e-06,
half_rtol=0.001,
half_atol=0.001,
bfloat16_rtol=0.01,
bfloat16_atol=0.01,
msg=None
)
```
Like assertAllClose, but also suitable for comparing fp16 arrays.
In particular, the tolerance is reduced to 1e-3 if at least one of the arguments is of type float16.
| Args |
| `a` | the expected numpy ndarray, or anything that can be converted to one. |
| `b` | the actual numpy ndarray, or anything that can be converted to one. |
| `rtol` | relative tolerance. |
| `atol` | absolute tolerance. |
| `float_rtol` | relative tolerance for float32. |
| `float_atol` | absolute tolerance for float32. |
| `half_rtol` | relative tolerance for float16. |
| `half_atol` | absolute tolerance for float16. |
| `bfloat16_rtol` | relative tolerance for bfloat16. |
| `bfloat16_atol` | absolute tolerance for bfloat16. |
| `msg` | Optional message to report on failure. |
### `assertAllEqual`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3184-L3248)
```
assertAllEqual(
a, b, msg=None
)
```
Asserts that two numpy arrays or Tensors have the same values.
| Args |
| `a` | the expected numpy ndarray, or anything that can be converted to one. |
| `b` | the actual numpy ndarray, or anything that can be converted to one. |
| `msg` | Optional message to report on failure. |
### `assertAllGreater`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3265-L3276)
```
assertAllGreater(
a, comparison_target
)
```
Assert element values are all greater than a target value.
| Args |
| `a` | The numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor). |
| `comparison_target` | The target value of comparison. |
### `assertAllGreaterEqual`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3291-L3302)
```
assertAllGreaterEqual(
a, comparison_target
)
```
Assert element values are all greater than or equal to a target value.
| Args |
| `a` | The numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor). |
| `comparison_target` | The target value of comparison. |
### `assertAllInRange`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3350-L3407)
```
assertAllInRange(
target,
lower_bound,
upper_bound,
open_lower_bound=False,
open_upper_bound=False
)
```
Assert that elements in a Tensor are all in a given range.
| Args |
| `target` | The numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor). |
| `lower_bound` | lower bound of the range |
| `upper_bound` | upper bound of the range |
| `open_lower_bound` | (`bool`) whether the lower bound is open (i.e., > rather than the default >=) |
| `open_upper_bound` | (`bool`) whether the upper bound is open (i.e., < rather than the default <=) |
| Raises |
| `AssertionError` | if the value tensor does not have an ordered numeric type (float\* or int\*), or if there are nan values, or if any of the elements do not fall in the specified range. |
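A small illustrative check, relying on the fact that softmax outputs lie in [0, 1]:

```
# Inside a tf.test.TestCase method: softmax outputs must lie in [0, 1].
probs = tf.nn.softmax(tf.random.normal([8]))
self.assertAllInRange(probs, 0.0, 1.0)
```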
### `assertAllInSet`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3409-L3429)
```
assertAllInSet(
target, expected_set
)
```
Assert that elements of a Tensor are all in a given closed set.
| Args |
| `target` | The numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor). |
| `expected_set` | (`list`, `tuple` or `set`) The closed set that the elements of the value of `target` are expected to fall into. |
| Raises |
| `AssertionError` | if any of the elements do not fall into `expected_set`. |
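A small illustrative check that predicted class ids fall in a closed set:

```
# Inside a tf.test.TestCase method: class ids must be 0 or 1.
predictions = tf.constant([0, 1, 1, 0])
self.assertAllInSet(predictions, {0, 1})
```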
### `assertAllLess`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3278-L3289)
```
assertAllLess(
a, comparison_target
)
```
Assert element values are all less than a target value.
| Args |
| `a` | The numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor). |
| `comparison_target` | The target value of comparison. |
### `assertAllLessEqual`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3304-L3315)
```
assertAllLessEqual(
a, comparison_target
)
```
Assert element values are all less than or equal to a target value.
| Args |
| `a` | The numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor). |
| `comparison_target` | The target value of comparison. |
### `assertAlmostEqual`
```
assertAlmostEqual(
first, second, places=None, msg=None, delta=None
)
```
Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is more than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
If the two objects compare equal then they will automatically compare almost equal.
### `assertAlmostEquals`
```
assertAlmostEquals(
*args, **kwargs
)
```
### `assertArrayNear`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2901-L2916)
```
assertArrayNear(
farray1, farray2, err, msg=None
)
```
Asserts that two float arrays are near each other.
Checks that for all elements of farray1 and farray2 |f1 - f2| < err. Asserts a test failure if not.
| Args |
| `farray1` | a list of float values. |
| `farray2` | a list of float values. |
| `err` | a float value. |
| `msg` | Optional message to report on failure. |
### `assertBetween`
```
assertBetween(
value, minv, maxv, msg=None
)
```
Asserts that value is between minv and maxv (inclusive).
### `assertCommandFails`
```
assertCommandFails(
command, regexes, env=None, close_fds=True, msg=None
)
```
Asserts a shell command fails and the error matches a regex in a list.
| Args |
| `command` | List or string representing the command to run. |
| `regexes` | the list of regular expression strings. |
| `env` | Dictionary of environment variable settings. If None, no environment variables will be set for the child process. This is to make tests more hermetic. NOTE: this behavior is different than the standard subprocess module. |
| `close_fds` | Whether or not to close all open fd's in the child after forking. |
| `msg` | Optional message to report on failure. |
### `assertCommandSucceeds`
```
assertCommandSucceeds(
command, regexes=(b'',), env=None, close_fds=True, msg=None
)
```
Asserts that a shell command succeeds (i.e. exits with code 0).
| Args |
| `command` | List or string representing the command to run. |
| `regexes` | List of regular expression byte strings that match success. |
| `env` | Dictionary of environment variable settings. If None, no environment variables will be set for the child process. This is to make tests more hermetic. NOTE: this behavior is different than the standard subprocess module. |
| `close_fds` | Whether or not to close all open fd's in the child after forking. |
| `msg` | Optional message to report on failure. |
### `assertContainsExactSubsequence`
```
assertContainsExactSubsequence(
container, subsequence, msg=None
)
```
Asserts that "container" contains "subsequence" as an exact subsequence.
Asserts that "container" contains all the elements of "subsequence", in order, and without other elements interspersed. For example, [1, 2, 3] is an exact subsequence of [0, 0, 1, 2, 3, 0] but not of [0, 0, 1, 2, 0, 3, 0].
| Args |
| `container` | the list we're testing for subsequence inclusion. |
| `subsequence` | the list we hope will be an exact subsequence of container. |
| `msg` | Optional message to report on failure. |
### `assertContainsInOrder`
```
assertContainsInOrder(
strings, target, msg=None
)
```
Asserts that the strings provided are found in the target in order.
This may be useful for checking HTML output.
| Args |
| `strings` | A list of strings, such as [ 'fox', 'dog' ] |
| `target` | A target string in which to look for the strings, such as 'The quick brown fox jumped over the lazy dog'. |
| `msg` | Optional message to report on failure. |
### `assertContainsSubsequence`
```
assertContainsSubsequence(
container, subsequence, msg=None
)
```
Asserts that "container" contains "subsequence" as a subsequence.
Asserts that "container" contains all the elements of "subsequence", in order, but possibly with other elements interspersed. For example, [1, 2, 3] is a subsequence of [0, 0, 1, 2, 0, 3, 0] but not of [0, 0, 1, 3, 0, 2, 0].
| Args |
| `container` | the list we're testing for subsequence inclusion. |
| `subsequence` | the list we hope will be a subsequence of container. |
| `msg` | Optional message to report on failure. |
### `assertContainsSubset`
```
assertContainsSubset(
expected_subset, actual_set, msg=None
)
```
Checks whether actual iterable is a superset of expected iterable.
### `assertCountEqual`
```
assertCountEqual(
first, second, msg=None
)
```
An unordered sequence comparison asserting that the two sequences contain the same elements, regardless of order. If the same element occurs more than once, it verifies that the elements occur the same number of times.
```
self.assertEqual(Counter(list(first)),
Counter(list(second)))
```
Example:
```
- [0, 1, 1] and [1, 0, 1] compare equal.
- [0, 0, 1] and [0, 1] compare unequal.
```
### `assertDTypeEqual`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3431-L3444)
```
assertDTypeEqual(
target, expected_dtype
)
```
Assert ndarray data type is equal to expected.
| Args |
| `target` | The numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor). |
| `expected_dtype` | Expected data type. |
### `assertDeviceEqual`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3528-L3540)
```
assertDeviceEqual(
device1, device2, msg=None
)
```
Asserts that the two given devices are the same.
| Args |
| `device1` | A string device name or TensorFlow `DeviceSpec` object. |
| `device2` | A string device name or TensorFlow `DeviceSpec` object. |
| `msg` | Optional message to report on failure. |
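A minimal sketch; both strings below are expected to canonicalize to the same device name, so the assertion passes:

```
# Inside a tf.test.TestCase method; short and fully spelled device
# strings are expected to canonicalize to the same name.
self.assertDeviceEqual("/GPU:0", "/device:GPU:0")
```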
### `assertDictContainsSubset`
```
assertDictContainsSubset(
subset, dictionary, msg=None
)
```
Checks whether dictionary is a superset of subset.
### `assertDictEqual`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3542-L3566)
```
assertDictEqual(
a, b, msg=None
)
```
Assert that two given dictionaries of tensors are the same.
| Args |
| `a` | Expected dictionary with numpy ndarray or anything else that can be converted to one as values. |
| `b` | Actual dictionary with numpy ndarray or anything else that can be converted to one as values. |
| `msg` | Optional message to report on failure. |
### `assertEmpty`
```
assertEmpty(
container, msg=None
)
```
Asserts that an object has zero length.
| Args |
| `container` | Anything that implements the collections.abc.Sized interface. |
| `msg` | Optional message to report on failure. |
### `assertEndsWith`
```
assertEndsWith(
actual, expected_end, msg=None
)
```
Asserts that actual.endswith(expected\_end) is True.
| Args |
| `actual` | str |
| `expected_end` | str |
| `msg` | Optional message to report on failure. |
### `assertEqual`
```
assertEqual(
first, second, msg=None
)
```
Fail if the two objects are unequal as determined by the '==' operator.
### `assertEquals`
```
assertEquals(
*args, **kwargs
)
```
### `assertFalse`
```
assertFalse(
expr, msg=None
)
```
Check that the expression is false.
### `assertGreater`
```
assertGreater(
a, b, msg=None
)
```
Just like self.assertTrue(a > b), but with a nicer default message.
### `assertGreaterEqual`
```
assertGreaterEqual(
a, b, msg=None
)
```
Just like self.assertTrue(a >= b), but with a nicer default message.
### `assertIn`
```
assertIn(
member, container, msg=None
)
```
Just like self.assertTrue(a in b), but with a nicer default message.
### `assertIs`
```
assertIs(
expr1, expr2, msg=None
)
```
Just like self.assertTrue(a is b), but with a nicer default message.
### `assertIsInstance`
```
assertIsInstance(
obj, cls, msg=None
)
```
Same as self.assertTrue(isinstance(obj, cls)), with a nicer default message.
### `assertIsNone`
```
assertIsNone(
obj, msg=None
)
```
Same as self.assertTrue(obj is None), with a nicer default message.
### `assertIsNot`
```
assertIsNot(
expr1, expr2, msg=None
)
```
Just like self.assertTrue(a is not b), but with a nicer default message.
### `assertIsNotNone`
```
assertIsNotNone(
obj, msg=None
)
```
Included for symmetry with assertIsNone.
### `assertItemsEqual`
```
assertItemsEqual(
first, second, msg=None
)
```
An unordered sequence comparison asserting that the two sequences contain the same elements, regardless of order. If the same element occurs more than once, it verifies that the elements occur the same number of times.
```
self.assertEqual(Counter(list(first)),
Counter(list(second)))
```
Example:
```
- [0, 1, 1] and [1, 0, 1] compare equal.
- [0, 0, 1] and [0, 1] compare unequal.
```
### `assertJsonEqual`
```
assertJsonEqual(
first, second, msg=None
)
```
Asserts that the JSON objects defined in two strings are equal.
A summary of the differences will be included in the failure message using assertSameStructure.
| Args |
| `first` | A string containing JSON to decode and compare to second. |
| `second` | A string containing JSON to decode and compare to first. |
| `msg` | Additional text to include in the failure message. |
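A short illustrative check; key order and whitespace differences do not matter:

```
# Inside a tf.test.TestCase method: key order does not matter.
self.assertJsonEqual('{"a": 1, "b": [2, 3]}',
                     '{"b": [2, 3], "a": 1}')
```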
### `assertLen`
```
assertLen(
container, expected_len, msg=None
)
```
Asserts that an object has the expected length.
| Args |
| `container` | Anything that implements the collections.abc.Sized interface. |
| `expected_len` | The expected length of the container. |
| `msg` | Optional message to report on failure. |
### `assertLess`
```
assertLess(
a, b, msg=None
)
```
Just like self.assertTrue(a < b), but with a nicer default message.
### `assertLessEqual`
```
assertLessEqual(
a, b, msg=None
)
```
Just like self.assertTrue(a <= b), but with a nicer default message.
### `assertListEqual`
```
assertListEqual(
list1, list2, msg=None
)
```
A list-specific equality assertion.
| Args |
| `list1` | The first list to compare. |
| `list2` | The second list to compare. |
| `msg` | Optional message to use on failure instead of a list of differences. |
### `assertLogs`
```
assertLogs(
logger=None, level=None
)
```
Fail unless a log message of level *level* or higher is emitted on *logger\_name* or its children. If omitted, *level* defaults to INFO and *logger* defaults to the root logger.
This method must be used as a context manager, and will yield a recording object with two attributes: `output` and `records`. At the end of the context manager, the `output` attribute will be a list of the matching formatted log messages and the `records` attribute will be a list of the corresponding LogRecord objects.
Example::
```
with self.assertLogs('foo', level='INFO') as cm:
logging.getLogger('foo').info('first message')
logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
'ERROR:foo.bar:second message'])
```
### `assertMultiLineEqual`
```
assertMultiLineEqual(
first, second, msg=None, **kwargs
)
```
Asserts that two multi-line strings are equal.
### `assertNDArrayNear`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2921-L2931)
```
assertNDArrayNear(
ndarray1, ndarray2, err, msg=None
)
```
Asserts that two numpy arrays have near values.
| Args |
| `ndarray1` | a numpy ndarray. |
| `ndarray2` | a numpy ndarray. |
| `err` | a float. The maximum absolute difference allowed. |
| `msg` | Optional message to report on failure. |
### `assertNear`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2883-L2899)
```
assertNear(
f1, f2, err, msg=None
)
```
Asserts that two floats are near each other.
Checks that |f1 - f2| < err and asserts a test failure if not.
| Args |
| `f1` | A float value. |
| `f2` | A float value. |
| `err` | A float value. |
| `msg` | An optional string message to append to the failure message. |
### `assertNoCommonElements`
```
assertNoCommonElements(
expected_seq, actual_seq, msg=None
)
```
Checks whether actual iterable and expected iterable are disjoint.
### `assertNotAllClose`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3159-L3182)
```
assertNotAllClose(
a, b, rtol=1e-06, atol=1e-06, msg=None
)
```
Assert that two numpy arrays, or Tensors, do not have near values.
| Args |
| `a` | The expected numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor), or any arbitrarily nested structure of these. |
| `b` | The actual numpy `ndarray`, or anything that can be converted into a numpy `ndarray` (including Tensor), or any arbitrarily nested structure of these. |
| `rtol` | relative tolerance. |
| `atol` | absolute tolerance. |
| `msg` | Optional message to report on failure. |
| Raises |
| `AssertionError` | If `a` and `b` are unexpectedly close at all elements. |
### `assertNotAllEqual`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3250-L3263)
```
assertNotAllEqual(
a, b, msg=None
)
```
Asserts that two numpy arrays or Tensors do not have the same values.
| Args |
| `a` | the expected numpy ndarray, or anything that can be converted to one. |
| `b` | the actual numpy ndarray, or anything that can be converted to one. |
| `msg` | Optional message to report on failure. |
### `assertNotAlmostEqual`
```
assertNotAlmostEqual(
first, second, places=None, msg=None, delta=None
)
```
Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is less than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
Objects that are equal automatically fail.
### `assertNotAlmostEquals`
```
assertNotAlmostEquals(
*args, **kwargs
)
```
### `assertNotEmpty`
```
assertNotEmpty(
container, msg=None
)
```
Asserts that an object has non-zero length.
| Args |
| `container` | Anything that implements the collections.abc.Sized interface. |
| `msg` | Optional message to report on failure. |
### `assertNotEndsWith`
```
assertNotEndsWith(
actual, unexpected_end, msg=None
)
```
Asserts that actual.endswith(unexpected\_end) is False.
| Args |
| `actual` | str |
| `unexpected_end` | str |
| `msg` | Optional message to report on failure. |
### `assertNotEqual`
```
assertNotEqual(
first, second, msg=None
)
```
Fail if the two objects are equal as determined by the '!=' operator.
### `assertNotEquals`
```
assertNotEquals(
*args, **kwargs
)
```
### `assertNotIn`
```
assertNotIn(
member, container, msg=None
)
```
Just like self.assertTrue(a not in b), but with a nicer default message.
### `assertNotIsInstance`
```
assertNotIsInstance(
obj, cls, msg=None
)
```
Included for symmetry with assertIsInstance.
### `assertNotRegex`
```
assertNotRegex(
text, unexpected_regex, msg=None
)
```
Fail the test if the text matches the regular expression.
### `assertNotRegexpMatches`
```
assertNotRegexpMatches(
*args, **kwargs
)
```
### `assertNotStartsWith`
```
assertNotStartsWith(
actual, unexpected_start, msg=None
)
```
Asserts that actual.startswith(unexpected\_start) is False.
| Args |
| `actual` | str |
| `unexpected_start` | str |
| `msg` | Optional message to report on failure. |
### `assertProtoEquals`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2550-L2573)
```
assertProtoEquals(
expected_message_maybe_ascii, message, msg=None
)
```
Asserts that message is the same as the parsed expected\_message\_ascii.
Creates another prototype of message, reads the ascii message into it and then compares them using self.\_AssertProtoEqual().
| Args |
| `expected_message_maybe_ascii` | proto message in original or ascii form. |
| `message` | the message to validate. |
| `msg` | Optional message to report on failure. |
### `assertProtoEqualsVersion`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2575-L2584)
```
assertProtoEqualsVersion(
expected,
actual,
producer=versions.GRAPH_DEF_VERSION,
min_consumer=versions.GRAPH_DEF_VERSION_MIN_CONSUMER,
msg=None
)
```
### `assertRaises`
```
assertRaises(
expected_exception, *args, **kwargs
)
```
Fail unless an exception of class expected\_exception is raised by the callable when invoked with specified positional and keyword arguments. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.
If called with the callable and arguments omitted, will return a context object used like this::
```
with self.assertRaises(SomeException):
do_something()
```
An optional keyword argument 'msg' can be provided when assertRaises is used as a context object.
The context manager keeps a reference to the exception as the 'exception' attribute. This allows you to inspect the exception after the assertion::
```
with self.assertRaises(SomeException) as cm:
do_something()
the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)
```
### `assertRaisesIncompatibleShapesError`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3494-L3498)
```
assertRaisesIncompatibleShapesError(
exception_type=tf.errors.InvalidArgumentError
)
```
### `assertRaisesOpError`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3490-L3492)
```
assertRaisesOpError(
expected_err_re_or_predicate
)
```
### `assertRaisesRegex`
```
assertRaisesRegex(
expected_exception, expected_regex, *args, **kwargs
)
```
Asserts that the message in a raised exception matches a regex.
| Args |
| `expected_exception` | Exception class expected to be raised. |
| `expected_regex` | Regex (re.Pattern object or string) expected to be found in error message. |
| `args` | Function to be called and extra positional args. |
| `kwargs` | Extra kwargs. |
| `msg` | Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager. |
### `assertRaisesRegexp`
```
assertRaisesRegexp(
expected_exception, expected_regex, *args, **kwargs
)
```
Asserts that the message in a raised exception matches a regex.
| Args |
| `expected_exception` | Exception class expected to be raised. |
| `expected_regex` | Regex (re.Pattern object or string) expected to be found in error message. |
| `args` | Function to be called and extra positional args. |
| `kwargs` | Extra kwargs. |
| `msg` | Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager. |
### `assertRaisesWithLiteralMatch`
```
assertRaisesWithLiteralMatch(
expected_exception,
expected_exception_message,
callable_obj=None,
*args,
**kwargs
)
```
Asserts that the message in a raised exception equals the given string.
Unlike assertRaisesRegex, this method takes a literal string, not a regular expression.
```
with self.assertRaisesWithLiteralMatch(ExType, 'message'):
  DoSomething()
```
| Args |
| `expected_exception` | Exception class expected to be raised. |
| `expected_exception_message` | String message expected in the raised exception. For a raised exception e, expected\_exception\_message must equal str(e). |
| `callable_obj` | Function to be called, or None to return a context. |
| `*args` | Extra args. |
| `**kwargs` | Extra kwargs. |
| Returns |
| A context manager if callable\_obj is None. Otherwise, None. |
| Raises |
| self.failureException if callable\_obj does not raise a matching exception. |
### `assertRaisesWithPredicateMatch`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3447-L3486)
```
@contextlib.contextmanager
assertRaisesWithPredicateMatch(
exception_type, expected_err_re_or_predicate
)
```
Returns a context manager to enclose code expected to raise an exception.
If the exception is an OpError, the op stack is also included in the message predicate search.
| Args |
| `exception_type` | The expected type of exception that should be raised. |
| `expected_err_re_or_predicate` | If this is callable, it should be a function of one argument that inspects the passed-in exception and returns True (success) or False (please fail the test). Otherwise, the error message is expected to match this regular expression partially. |
| Returns |
| A context manager to surround code that is expected to raise an exception. |
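A hedged sketch using a predicate to inspect the raised error; the exact error message text is an assumption, so the predicate below only looks for a fragment of it:

```
# Inside a tf.test.TestCase method; the exact error text is an
# assumption, so the predicate only checks for a fragment of it.
with self.assertRaisesWithPredicateMatch(
    tf.errors.InvalidArgumentError,
    lambda e: "incompatible" in str(e).lower()):
  self.evaluate(tf.linalg.matmul(tf.ones([2, 3]), tf.ones([2, 3])))
```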
### `assertRegex`
```
assertRegex(
text, expected_regex, msg=None
)
```
Fail the test unless the text matches the regular expression.
### `assertRegexMatch`
```
assertRegexMatch(
actual_str, regexes, message=None
)
```
Asserts that at least one regex in regexes matches str.
If possible you should use `assertRegex`, which is a simpler version of this method. `assertRegex` takes a single regular expression (a string or re compiled object) instead of a list.
#### Notes:
1. This function uses substring matching, i.e. the matching succeeds if *any* substring of the error message matches *any* regex in the list. This is more convenient for the user than full-string matching.
2. If regexes is the empty list, the matching will always fail.
3. Use regexes=[''] for a regex that will always pass.
4. '.' matches any single character *except* the newline. To match any character, use '(.|\n)'.
5. '^' matches the beginning of each line, not just the beginning of the string. Similarly, '$' matches the end of each line.
6. An exception will be thrown if regexes contains an invalid regex.
| Args |
| `actual_str` | The string we try to match with the items in regexes. |
| `regexes` | The regular expressions we want to match against str. See "Notes" above for detailed notes on how this is interpreted. |
| `message` | The message to be printed if the test fails. |
### `assertRegexpMatches`
```
assertRegexpMatches(
*args, **kwargs
)
```
### `assertSameElements`
```
assertSameElements(
expected_seq, actual_seq, msg=None
)
```
Asserts that two sequences have the same elements (in any order).
This method, unlike assertCountEqual, doesn't care about any duplicates in the expected and actual sequences.
```
assertSameElements([1, 1, 1, 0, 0, 0], [0, 1])  # Doesn't raise an AssertionError
```
If possible, you should use assertCountEqual instead of assertSameElements.
| Args |
| `expected_seq` | A sequence containing elements we are expecting. |
| `actual_seq` | The sequence that we are testing. |
| `msg` | The message to be printed if the test fails. |
### `assertSameStructure`
```
assertSameStructure(
a, b, aname='a', bname='b', msg=None
)
```
Asserts that two values contain the same structural content.
The two arguments should be data trees consisting of trees of dicts and lists. They will be deeply compared by walking into the contents of dicts and lists; other items will be compared using the == operator. If the two structures differ in content, the failure message will indicate the location within the structures where the first difference is found. This may be helpful when comparing large structures.
Mixed Sequence and Set types are supported. Mixed Mapping types are supported, but the order of the keys will not be considered in the comparison.
| Args |
| `a` | The first structure to compare. |
| `b` | The second structure to compare. |
| `aname` | Variable name to use for the first structure in assertion messages. |
| `bname` | Variable name to use for the second structure. |
| `msg` | Additional text to include in the failure message. |
### `assertSequenceAlmostEqual`
```
assertSequenceAlmostEqual(
expected_seq, actual_seq, places=None, msg=None, delta=None
)
```
An approximate equality assertion for ordered sequences.
Fail if the two sequences are unequal as determined by their value differences rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between each value in the two sequences is more than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
If the two sequences compare equal then they will automatically compare almost equal.
| Args |
| `expected_seq` | A sequence containing elements we are expecting. |
| `actual_seq` | The sequence that we are testing. |
| `places` | The number of decimal places to compare. |
| `msg` | The message to be printed if the test fails. |
| `delta` | The OK difference between compared values. |
### `assertSequenceEqual`
```
assertSequenceEqual(
seq1, seq2, msg=None, seq_type=None
)
```
An equality assertion for ordered sequences (like lists and tuples).
For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator.
| Args |
| `seq1` | The first sequence to compare. |
| `seq2` | The second sequence to compare. |
| `seq_type` | The expected datatype of the sequences, or None if no datatype should be enforced. |
| `msg` | Optional message to use on failure instead of a list of differences. |
### `assertSequenceStartsWith`
```
assertSequenceStartsWith(
prefix, whole, msg=None
)
```
An equality assertion for the beginning of ordered sequences.
If prefix is an empty sequence, it will raise an error unless whole is also an empty sequence.
If prefix is not a sequence, it will raise an error if the first element of whole does not match.
| Args |
| `prefix` | A sequence expected at the beginning of the whole parameter. |
| `whole` | The sequence in which to look for prefix. |
| `msg` | Optional message to report on failure. |
### `assertSetEqual`
```
assertSetEqual(
set1, set2, msg=None
)
```
A set-specific equality assertion.
| Args |
| `set1` | The first set to compare. |
| `set2` | The second set to compare. |
| `msg` | Optional message to use on failure instead of a list of differences. |
assertSetEqual uses ducktyping to support different types of sets, and is optimized for sets specifically (parameters must support a difference method).
### `assertShapeEqual`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3500-L3526)
```
assertShapeEqual(
input_a, input_b, msg=None
)
```
Asserts that two Numpy or TensorFlow objects have the same shape.
For Tensors, this compares statically known shapes at compile time, not dynamic shapes at runtime.
| Args |
| `input_a` | A Numpy ndarray, Numpy scalar, or a Tensor. |
| `input_b` | A Numpy ndarray, Numpy scalar, or a Tensor. |
| `msg` | Optional message to report on failure. |
| Raises |
| `TypeError` | If the arguments have the wrong type. |
### `assertStartsWith`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2586-L2597)
```
assertStartsWith(
actual, expected_start, msg=None
)
```
Assert that actual.startswith(expected\_start) is True.
| Args |
| `actual` | str |
| `expected_start` | str |
| `msg` | Optional message to report on failure. |
### `assertTotallyOrdered`
```
assertTotallyOrdered(
*groups, **kwargs
)
```
Asserts that total ordering has been implemented correctly.
For example, say you have a class A that compares only on its attribute x. Comparators other than `__lt__` are omitted for brevity.
```
class A(object):

  def __init__(self, x, y):
    self.x = x
    self.y = y

  def __hash__(self):
    return hash(self.x)

  def __lt__(self, other):
    try:
      return self.x < other.x
    except AttributeError:
      return NotImplemented
```
assertTotallyOrdered will check that instances can be ordered correctly. For example,
```
self.assertTotallyOrdered(
    [None],  # None should come before everything else.
    [1],  # Integers sort earlier.
    [A(1, 'a')],
    [A(2, 'b')],  # 2 is after 1.
    [A(3, 'c'), A(3, 'd')],  # The second argument is irrelevant.
    [A(4, 'z')],
    ['foo'])  # Strings sort last.
```
| Args |
| `*groups` | A list of groups of elements. Each group of elements is a list of objects that are equal. The elements in each group must be less than the elements in the group after it. For example, these groups are totally ordered: [None], [1], [2, 2], [3]. |
| `**kwargs` | An optional msg keyword argument can be passed. |
### `assertTrue`
```
assertTrue(
expr, msg=None
)
```
Check that the expression is true.
### `assertTupleEqual`
```
assertTupleEqual(
tuple1, tuple2, msg=None
)
```
A tuple-specific equality assertion.
| Args |
| `tuple1` | The first tuple to compare. |
| `tuple2` | The second tuple to compare. |
| `msg` | Optional message to use on failure instead of a list of differences. |
### `assertUrlEqual`
```
assertUrlEqual(
a, b, msg=None
)
```
Asserts that urls are equal, ignoring ordering of query params.
### `assertWarns`
```
assertWarns(
expected_warning, *args, **kwargs
)
```
Fail unless a warning of class warnClass is triggered by the callable when invoked with specified positional and keyword arguments. If a different type of warning is triggered, it will not be handled: depending on the other warning filtering rules in effect, it might be silenced, printed out, or raised as an exception.
If called with the callable and arguments omitted, will return a context object used like this::
```
with self.assertWarns(SomeWarning):
do_something()
```
An optional keyword argument 'msg' can be provided when assertWarns is used as a context object.
The context manager keeps a reference to the first matching warning as the 'warning' attribute; similarly, the 'filename' and 'lineno' attributes give you information about the line of Python code from which the warning was triggered. This allows you to inspect the warning after the assertion::
```
with self.assertWarns(SomeWarning) as cm:
do_something()
the_warning = cm.warning
self.assertEqual(the_warning.some_attribute, 147)
```
### `assertWarnsRegex`
```
assertWarnsRegex(
expected_warning, expected_regex, *args, **kwargs
)
```
Asserts that the message in a triggered warning matches a regexp. Basic functioning is similar to assertWarns() with the addition that only warnings whose messages also match the regular expression are considered successful matches.
| Args |
| `expected_warning` | Warning class expected to be triggered. |
| `expected_regex` | Regex (re.Pattern object or string) expected to be found in error message. |
| `args` | Function to be called and extra positional args. |
| `kwargs` | Extra kwargs. |
| `msg` | Optional message used in case of failure. Can only be used when assertWarnsRegex is used as a context manager. |
### `assert_`
```
assert_(
*args, **kwargs
)
```
### `cached_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2694-L2744)
```
@contextlib.contextmanager
cached_session(
graph=None, config=None, use_gpu=True, force_gpu=False
)
```
Returns a TensorFlow Session for use in executing tests.
This method behaves differently than self.session(): for performance reasons `cached_session` will by default reuse the same session within the same test. The session returned by this function will only be closed at the end of the test (in the TearDown function).
Use the `use_gpu` and `force_gpu` options to control where ops are run. If `force_gpu` is True, all ops are pinned to `/device:GPU:0`. Otherwise, if `use_gpu` is True, TensorFlow tries to run as many ops on the GPU as possible. If both `force_gpu` and `use_gpu` are False, all ops are pinned to the CPU.
#### Example:
```
class MyOperatorTest(test_util.TensorFlowTestCase):
def testMyOperator(self):
with self.cached_session() as sess:
valid_input = [1.0, 2.0, 3.0, 4.0, 5.0]
result = MyOperator(valid_input).eval()
      self.assertEqual(result, [1.0, 2.0, 3.0, 5.0, 8.0])
invalid_input = [-1.0, 2.0, 7.0]
with self.assertRaisesOpError("negative input not supported"):
MyOperator(invalid_input).eval()
```
| Args |
| `graph` | Optional graph to use during the returned session. |
| `config` | An optional config\_pb2.ConfigProto to use to configure the session. |
| `use_gpu` | If True, attempt to run as many ops as possible on GPU. |
| `force_gpu` | If True, pin all ops to `/device:GPU:0`. |
| Yields |
| A Session object that should be used as a context manager to surround the graph building and execution code in a test case. |
### `captureWritesToStream`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2490-L2533)
```
@contextlib.contextmanager
captureWritesToStream(
stream
)
```
A context manager that captures the writes to a given stream.
This context manager captures all writes to a given stream inside of a `CapturedWrites` object. When this context manager is created, it yields the `CapturedWrites` object. The captured contents can be accessed by calling `.contents()` on the `CapturedWrites`.
For this function to work, the stream must have a file descriptor that can be modified using `os.dup` and `os.dup2`, and the stream must support a `.flush()` method. The default python sys.stdout and sys.stderr are examples of this. Note that this does not work in Colab or Jupyter notebooks, because those use alternate stdout streams.
#### Example:
```
class MyOperatorTest(test_util.TensorFlowTestCase):
def testMyOperator(self):
input = [1.0, 2.0, 3.0, 4.0, 5.0]
with self.captureWritesToStream(sys.stdout) as captured:
result = MyOperator(input).eval()
self.assertStartsWith(captured.contents(), "This was printed.")
```
| Args |
| `stream` | The stream whose writes should be captured. This stream must have a file descriptor, support writing via using that file descriptor, and must have a `.flush()` method. |
| Yields |
| A `CapturedWrites` object that contains all writes to the specified stream made during this context. |
### `checkedThread`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2862-L2880)
```
checkedThread(
target, args=None, kwargs=None
)
```
Returns a Thread wrapper that asserts 'target' completes successfully.
This method should be used to create all threads in test cases, as otherwise there is a risk that a thread will silently fail, and/or assertions made in the thread will not be respected.
| Args |
| `target` | A callable object to be executed in the thread. |
| `args` | The argument tuple for the target invocation. Defaults to (). |
| `kwargs` | A dictionary of keyword arguments for the target invocation. Defaults to {}. |
| Returns |
| A wrapper for threading.Thread that supports start() and join() methods. |
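A minimal sketch: assertions made inside the worker are surfaced when `join()` is called instead of failing silently (the worker body is illustrative):

```
# Inside a tf.test.TestCase method; the worker body is illustrative.
def worker():
  self.assertEqual(self.evaluate(tf.reduce_sum(tf.ones([4]))), 4.0)

t = self.checkedThread(target=worker)
t.start()
t.join()  # Re-raises any assertion failure from the worker thread.
```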
### `countTestCases`
```
countTestCases()
```
### `create_tempdir`
```
create_tempdir(
name=None, cleanup=None
)
```
Create a temporary directory specific to the test.
>
> **Note:** The directory and its contents will be recursively cleared before creation. This ensures that there is no pre-existing state.
>
This creates a named directory on disk that is isolated to this test, and will be properly cleaned up by the test. This avoids several pitfalls of creating temporary directories for test purposes, as well as makes it easier to setup directories and verify their contents. For example:
```
def test_foo(self):
out_dir = self.create_tempdir()
out_log = out_dir.create_file('output.log')
  expected_paths = [
os.path.join(out_dir, 'data-0.txt'),
os.path.join(out_dir, 'data-1.txt'),
]
code_under_test(out_dir)
self.assertTrue(os.path.exists(expected_paths[0]))
self.assertTrue(os.path.exists(expected_paths[1]))
self.assertEqual('foo', out_log.read_text())
```
See also: `create_tempfile()` for creating temporary files.
| Args |
| `name` | Optional name of the directory. If not given, a unique name will be generated and used. |
| `cleanup` | Optional cleanup policy on when/if to remove the directory (and all its contents) at the end of the test. If None, then uses `self.tempfile_cleanup`. |
| Returns |
| A \_TempDir representing the created directory; see \_TempDir class docs for usage. |
### `create_tempfile`
```
create_tempfile(
file_path=None,
content=None,
mode='w',
encoding='utf8',
errors='strict',
cleanup=None
)
```
Create a temporary file specific to the test.
This creates a named file on disk that is isolated to this test, and will be properly cleaned up by the test. This avoids several pitfalls of creating temporary files for test purposes, as well as makes it easier to setup files, their data, read them back, and inspect them when a test fails. For example:
```
def test_foo(self):
output = self.create_tempfile()
code_under_test(output)
self.assertGreater(os.path.getsize(output), 0)
self.assertEqual('foo', output.read_text())
```
>
> **Note:** This will zero-out the file. This ensures there is no pre-existing state. NOTE: If the file already exists, it will be made writable and overwritten.
>
See also: `create_tempdir()` for creating temporary directories, and `_TempDir.create_file` for creating files within a temporary directory.
| Args |
| `file_path` | Optional file path for the temp file. If not given, a unique file name will be generated and used. Slashes are allowed in the name; any missing intermediate directories will be created. NOTE: This path is the path that will be cleaned up, including any directories in the path, e.g., 'foo/bar/baz.txt' will `rm -r foo`. |
| `content` | Optional string or bytes to initially write to the file. If not specified, then an empty file is created. |
| `mode` | Mode string to use when writing content. Only used if `content` is non-empty. |
| `encoding` | Encoding to use when writing string content. Only used if `content` is text. |
| `errors` | How to handle text to bytes encoding errors. Only used if `content` is text. |
| `cleanup` | Optional cleanup policy on when/if to remove the directory (and all its contents) at the end of the test. If None, then uses `self.tempfile_cleanup`. |
| Returns |
| A \_TempFile representing the created file; see \_TempFile class docs for usage. |
### `debug`
```
debug()
```
Run the test without collecting errors in a TestResult
### `defaultTestResult`
```
defaultTestResult()
```
### `doCleanups`
```
doCleanups()
```
Execute all cleanup functions. Normally called for you after tearDown.
### `enter_context`
```
enter_context()
```
### `evaluate`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2630-L2647)
```
evaluate(
tensors
)
```
Evaluates tensors and returns numpy values.
| Args |
| `tensors` | A Tensor or a nested list/tuple of Tensors. |
| Returns |
| tensors numpy values. |
### `evaluate_if_both_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2944-L2950)
```
evaluate_if_both_tensors(
a, b
)
```
### `fail`
```
fail(
msg=None, prefix=None
)
```
Fail immediately with the given message, optionally prefixed.
### `failIf`
```
failIf(
*args, **kwargs
)
```
### `failIfAlmostEqual`
```
failIfAlmostEqual(
*args, **kwargs
)
```
### `failIfEqual`
```
failIfEqual(
*args, **kwargs
)
```
### `failUnless`
```
failUnless(
*args, **kwargs
)
```
### `failUnlessAlmostEqual`
```
failUnlessAlmostEqual(
*args, **kwargs
)
```
### `failUnlessEqual`
```
failUnlessEqual(
*args, **kwargs
)
```
### `failUnlessRaises`
```
failUnlessRaises(
*args, **kwargs
)
```
### `get_temp_dir`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2472-L2488)
```
get_temp_dir()
```
Returns a unique temporary directory for the test to use.
If you call this method multiple times in a test, it will return the same folder. However, across different runs the directories will be different. This ensures that tests cannot pollute each other's environment across runs. If you need multiple unique directories within a single test, you should use tempfile.mkdtemp as follows: tempfile.mkdtemp(dir=self.get\_temp\_dir()).
| Returns |
| string, the path to the unique temporary directory created for this test. |
### `id`
```
id()
```
### `run`
```
run(
result=None
)
```
### `session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2650-L2692)
```
@contextlib.contextmanager
session(
graph=None, config=None, use_gpu=True, force_gpu=False
)
```
A context manager for a TensorFlow Session for use in executing tests.
Note that this will set this session and the graph as global defaults.
Use the `use_gpu` and `force_gpu` options to control where ops are run. If `force_gpu` is True, all ops are pinned to `/device:GPU:0`. Otherwise, if `use_gpu` is True, TensorFlow tries to run as many ops on the GPU as possible. If both `force_gpu` and `use_gpu` are False, all ops are pinned to the CPU.
#### Example:
```
class MyOperatorTest(test_util.TensorFlowTestCase):
def testMyOperator(self):
with self.session():
valid_input = [1.0, 2.0, 3.0, 4.0, 5.0]
result = MyOperator(valid_input).eval()
      self.assertEqual(result, [1.0, 2.0, 3.0, 5.0, 8.0])
invalid_input = [-1.0, 2.0, 7.0]
with self.assertRaisesOpError("negative input not supported"):
MyOperator(invalid_input).eval()
```
| Args |
| `graph` | Optional graph to use during the returned session. |
| `config` | An optional config\_pb2.ConfigProto to use to configure the session. |
| `use_gpu` | If True, attempt to run as many ops as possible on GPU. |
| `force_gpu` | If True, pin all ops to `/device:GPU:0`. |
| Yields |
| A Session object that should be used as a context manager to surround the graph building and execution code in a test case. |
### `setUp`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2428-L2452)
```
setUp()
```
Hook method for setting up the test fixture before exercising it.
### `setUpClass`
```
@classmethod
setUpClass()
```
Hook method for setting up class fixture before running tests in the class.
### `shortDescription`
```
shortDescription()
```
Formats both the test method name and the first line of its docstring.
If no docstring is given, only returns the method name.
This method overrides unittest.TestCase.shortDescription(), which only returns the first line of the docstring, obscuring the name of the test upon failure.
| Returns |
| `desc` | A short description of a test method. |
### `skipTest`
```
skipTest(
reason
)
```
Skip this test.
### `subTest`
```
@contextlib.contextmanager
subTest(
msg=_subtest_msg_sentinel, **params
)
```
Return a context manager that will return the enclosed block of code in a subtest identified by the optional message and keyword parameters. A failure in the subtest marks the test case as failed but resumes execution at the end of the enclosed block, allowing further test code to be executed.
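For example, a short sketch of parameterizing one check with `subTest` (the test class and values are illustrative):
```
class NonNegativeTest(tf.test.TestCase):  # hypothetical test class for illustration
  def test_squares_are_non_negative(self):
    for x in [-2.0, 0.0, 3.0]:
      with self.subTest(x=x):
        # A failure is reported for this x, but the loop keeps running.
        self.assertGreaterEqual(self.evaluate(tf.square(x)), 0.0)
```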
### `tearDown`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2454-L2465)
```
tearDown()
```
Hook method for deconstructing the test fixture after testing it.
### `tearDownClass`
```
@classmethod
tearDownClass()
```
Hook method for deconstructing the class fixture after running all tests in the class.
### `test_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L2746-L2772)
```
@contextlib.contextmanager
test_session(
graph=None, config=None, use_gpu=True, force_gpu=False
)
```
Use cached\_session instead. (deprecated)
### `__call__`
```
__call__(
*args, **kwds
)
```
Call self as a function.
### `__eq__`
```
__eq__(
other
)
```
Return self==value.
| Class Variables |
| longMessage | `True` |
| maxDiff | `1600` |
| tempfile\_cleanup | `<TempFileCleanup.ALWAYS: 'always'>` |
tensorflow tf.test.is_gpu_available tf.test.is\_gpu\_available
==========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L1880-L1935)
Returns whether TensorFlow can access a GPU. (deprecated)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.is_gpu_available`](https://www.tensorflow.org/api_docs/python/tf/test/is_gpu_available)
```
tf.test.is_gpu_available(
cuda_only=False, min_cuda_compute_capability=None
)
```
For example,
```
>>> gpu_available = tf.test.is_gpu_available()
>>> is_cuda_gpu_available = tf.test.is_gpu_available(cuda_only=True)
>>> is_cuda_gpu_min_3 = tf.test.is_gpu_available(True, (3,0))
```
| Args |
| `cuda_only` | limit the search to CUDA GPUs. |
| `min_cuda_compute_capability` | a (major,minor) pair that indicates the minimum CUDA compute capability required, or None if no requirement. |
Note that the keyword arg name "cuda\_only" is misleading: the routine will return True when a GPU device is available, irrespective of whether TF was built with CUDA support or ROCm support. However, the name is left unchanged because:
* Changing the name "cuda\_only" to something more generic would break backward compatibility.
* Adding an equivalent "rocm\_only" would require the implementation to check the build type. This in turn would require doing the same for CUDA, and thus potentially break backward compatibility.
* Adding a new "cuda\_or\_rocm\_only" would not break backward compatibility, but would require most (if not all) callers to update their calls to use "cuda\_or\_rocm\_only" instead of "cuda\_only".
| Returns |
| True if a GPU device of the requested kind is available. |
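Since this function is deprecated, the availability check can typically be written with the non-deprecated `tf.config.list_physical_devices` instead; a minimal sketch:
```
# Non-deprecated equivalent of tf.test.is_gpu_available():
gpu_available = len(tf.config.list_physical_devices('GPU')) > 0
```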
tensorflow tf.test.compute_gradient tf.test.compute\_gradient
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/gradient_checker_v2.py#L293-L342)
Computes the theoretical and numeric Jacobian of `f`.
```
tf.test.compute_gradient(
f, x, delta=None
)
```
With y = f(x), computes the theoretical and numeric Jacobian dy/dx.
| Args |
| `f` | the function. |
| `x` | the arguments for the function as a list or tuple of values convertible to a Tensor. |
| `delta` | (optional) perturbation used to compute numeric Jacobian. |
| Returns |
| A pair of lists, where the first is a list of 2-d numpy arrays representing the theoretical Jacobians for each argument, and the second list is the numerical ones. Each 2-d array has "y\_size" rows and "x\_size" columns where "x\_size" is the number of elements in the corresponding argument and "y\_size" is the number of elements in f(x). |
| Raises |
| `ValueError` | If result is empty but the gradient is nonzero. |
| `ValueError` | If `x` is not a list but some other type. |
#### Example:
```
@tf.function
def test_func(x):
return x*x
class MyTest(tf.test.TestCase):
def test_gradient_of_test_func(self):
theoretical, numerical = tf.test.compute_gradient(test_func, [1.0])
# ((array([[2.]], dtype=float32),),
# (array([[2.000004]], dtype=float32),))
self.assertAllClose(theoretical, numerical)
```
tensorflow tf.test.create_local_cluster tf.test.create\_local\_cluster
==============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/test_util.py#L3719-L3805)
Create and start local servers and return the associated `Server` objects.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.create_local_cluster`](https://www.tensorflow.org/api_docs/python/tf/test/create_local_cluster)
```
tf.test.create_local_cluster(
num_workers,
num_ps,
protocol='grpc',
worker_config=None,
ps_config=None
)
```
"PS" stands for "parameter server": a task responsible for storing and updating the model's parameters. Other tasks send updates to these parameters as they work on optimizing the parameters. This particular division of labor between tasks is not required, but is common for distributed training.
Read more at <https://www.tensorflow.org/guide/extend/architecture>
A figure in the original documentation illustrates the interaction of these components; "/job:worker/task:0" and "/job:ps/task:0" are both tasks with worker services.
#### Example:
```
workers, _ = tf.test.create_local_cluster(num_workers=2, num_ps=2)
worker_sessions = [tf.compat.v1.Session(w.target) for w in workers]
with tf.device("/job:ps/task:0"):
...
with tf.device("/job:ps/task:1"):
...
with tf.device("/job:worker/task:0"):
...
with tf.device("/job:worker/task:1"):
...
worker_sessions[0].run(...)
```
| Args |
| `num_workers` | Number of worker servers to start. |
| `num_ps` | Number of PS servers to start. |
| `protocol` | Communication protocol. Allowed values are documented in the documentation of [`tf.distribute.Server`](../distribute/server). |
| `worker_config` | (optional) `tf.ConfigProto` to initialize workers. Can be used to instantiate multiple devices etc. |
| `ps_config` | (optional) `tf.ConfigProto` to initialize PS servers. |
| Returns |
| A tuple `(worker_servers, ps_servers)`. `worker_servers` is a list of `num_workers` objects of type [`tf.distribute.Server`](../distribute/server) (all running locally); and `ps_servers` is a list of `num_ps` objects of similar type. |
| Raises |
| `ImportError` | if portpicker module was not found at load time |
tensorflow tf.test.is_built_with_cuda tf.test.is\_built\_with\_cuda
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/test.py#L92-L110)
Returns whether TensorFlow was built with CUDA (GPU) support.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.is_built_with_cuda`](https://www.tensorflow.org/api_docs/python/tf/test/is_built_with_cuda)
```
tf.test.is_built_with_cuda()
```
This method should only be used in tests written with [`tf.test.TestCase`](testcase). A typical usage is to skip tests that should only run with CUDA (GPU).
```
class MyTest(tf.test.TestCase):
def test_add_on_gpu(self):
if not tf.test.is_built_with_cuda():
self.skipTest("test is only applicable on GPU")
with tf.device("GPU:0"):
self.assertEqual(tf.math.add(1.0, 2.0), 3.0)
```
The official TensorFlow binary is built with CUDA.
tensorflow tf.test.TestCase.failureException tf.test.TestCase.failureException
=================================
Assertion failed.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.test.TestCase.failureException`](https://www.tensorflow.org/api_docs/python/tf/test/TestCase/failureException)
```
tf.test.TestCase.failureException(
*args, **kwargs
)
```
tensorflow tf.Variable.SaveSliceInfo tf.Variable.SaveSliceInfo
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/variables.py#L1250-L1329)
Information on how to save this Variable as a slice.
#### View aliases
**Main aliases**
[`tf.experimental.dtensor.DVariable.SaveSliceInfo`](https://www.tensorflow.org/api_docs/python/tf/Variable/SaveSliceInfo)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.Variable.SaveSliceInfo`](https://www.tensorflow.org/api_docs/python/tf/Variable/SaveSliceInfo)
```
tf.Variable.SaveSliceInfo(
full_name=None,
full_shape=None,
var_offset=None,
var_shape=None,
save_slice_info_def=None,
import_scope=None
)
```
Provides internal support for saving variables as slices of a larger variable. This API is not public and is subject to change.
#### Available properties:
* full\_name
* full\_shape
* var\_offset
* var\_shape
| Args |
| `full_name` | Name of the full variable of which this `Variable` is a slice. |
| `full_shape` | Shape of the full variable, as a list of int. |
| `var_offset` | Offset of this `Variable` into the full variable, as a list of int. |
| `var_shape` | Shape of this `Variable`, as a list of int. |
| `save_slice_info_def` | `SaveSliceInfoDef` protocol buffer. If not `None`, recreates the SaveSliceInfo object from its contents. `save_slice_info_def` and other arguments are mutually exclusive. |
| `import_scope` | Optional `string`. Name scope to add. Only used when initializing from protocol buffer. |
| Attributes |
| `spec` | Computes the spec string used for saving. |
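A minimal sketch of describing a slice of a larger variable; since this API is internal, treat the exact `spec` string shown in the comment as an assumption about its format:
```
info = tf.Variable.SaveSliceInfo(
    full_name="weights",
    full_shape=[4, 3],    # shape of the full variable
    var_offset=[0, 0],    # where this slice starts in the full variable
    var_shape=[2, 3])     # shape of this slice
# Under the assumed "full_shape offset,length:..." format this prints "4 3 0,2:0,3".
print(info.spec)
```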
Methods
-------
### `to_proto`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/variables.py#L1307-L1329)
```
to_proto(
export_scope=None
)
```
Returns a SaveSliceInfoDef() proto.
| Args |
| `export_scope` | Optional `string`. Name scope to remove. |
| Returns |
| A `SaveSliceInfoDef` protocol buffer, or None if the `Variable` is not in the specified name scope. |
tensorflow tf.autodiff.ForwardAccumulator tf.autodiff.ForwardAccumulator
==============================
Computes Jacobian-vector products ("JVP"s) using forward-mode autodiff.
```
tf.autodiff.ForwardAccumulator(
primals, tangents
)
```
Compare to [`tf.GradientTape`](../gradienttape) which computes vector-Jacobian products ("VJP"s) using reverse-mode autodiff (backprop). Reverse mode is more attractive when computing gradients of a scalar-valued function with respect to many inputs (e.g. a neural network with many parameters and a scalar loss). Forward mode works best on functions with many outputs and few inputs. Since it does not hold on to intermediate activations, it is much more memory efficient than backprop where it is applicable.
Consider a simple linear regression:
```
x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
targets = tf.constant([[1.], [-1.]])
dense = tf.keras.layers.Dense(1)
dense.build([None, 2])
with tf.autodiff.ForwardAccumulator(
primals=dense.kernel,
tangents=tf.constant([[1.], [0.]])) as acc:
loss = tf.reduce_sum((dense(x) - targets) ** 2.)
acc.jvp(loss)
<tf.Tensor: shape=(), dtype=float32, numpy=...>
```
The example has two variables containing parameters, `dense.kernel` (2 parameters) and `dense.bias` (1 parameter). Considering the training data `x` as a constant, this means the Jacobian matrix for the function mapping from parameters to loss has one row and three columns.
With forwardprop, we specify a length-three vector in advance which multiplies the Jacobian. The `primals` constructor argument is the parameter (a [`tf.Tensor`](../tensor) or [`tf.Variable`](../variable)) we're specifying a vector for, and the `tangents` argument is the "vector" in Jacobian-vector product. If our goal is to compute the entire Jacobian matrix, forwardprop computes one column at a time while backprop computes one row at a time. Since the Jacobian in the linear regression example has only one row, backprop requires fewer invocations:
```
x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
targets = tf.constant([[1.], [-1.]])
dense = tf.keras.layers.Dense(1)
dense.build([None, 2])
loss_fn = lambda: tf.reduce_sum((dense(x) - targets) ** 2.)
kernel_fprop = []
with tf.autodiff.ForwardAccumulator(
dense.kernel, tf.constant([[1.], [0.]])) as acc:
kernel_fprop.append(acc.jvp(loss_fn()))
with tf.autodiff.ForwardAccumulator(
dense.kernel, tf.constant([[0.], [1.]])) as acc:
kernel_fprop.append(acc.jvp(loss_fn()))
with tf.autodiff.ForwardAccumulator(dense.bias, tf.constant([1.])) as acc:
bias_fprop = acc.jvp(loss_fn())
with tf.GradientTape() as tape:
loss = loss_fn()
kernel_grad, bias_grad = tape.gradient(loss, (dense.kernel, dense.bias))
np.testing.assert_allclose(
kernel_grad, tf.stack(kernel_fprop)[:, tf.newaxis])
np.testing.assert_allclose(bias_grad, bias_fprop[tf.newaxis])
```
Implicit in the `tape.gradient` call is a length-one vector which left-multiplies the Jacobian, a vector-Jacobian product.
`ForwardAccumulator` maintains JVPs corresponding to the primal tensors it is watching, derived from the original `primals` specified in the constructor. As soon as a primal tensor is deleted, `ForwardAccumulator` deletes the corresponding JVP.
`acc.jvp(x)` retrieves `acc`'s JVP corresponding to the primal tensor `x`. It does not perform any computation. `acc.jvp` calls can be repeated as long as `acc` is accessible, whether the context manager is active or not. New JVPs are only computed while the context manager is active.
Note that `ForwardAccumulator`s are always applied in the order their context managers were entered, so inner accumulators will not see JVP computation from outer accumulators. Take higher-order JVPs from outer accumulators:
```
primal = tf.constant(1.1)
with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as outer:
with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as inner:
primal_out = primal ** tf.constant(3.5)
inner_jvp = inner.jvp(primal_out)
inner_jvp # 3.5 * 1.1 ** 2.5
<tf.Tensor: shape=(), dtype=float32, numpy=4.4417057>
outer.jvp(inner_jvp) # 3.5 * 2.5 * 1.1 ** 1.5
<tf.Tensor: shape=(), dtype=float32, numpy=10.094786>
```
Reversing the collection in the last line to instead retrieve `inner.jvp(outer.jvp(primal_out))` will not work.
Strict nesting also applies to combinations of `ForwardAccumulator` and [`tf.GradientTape`](../gradienttape). More deeply nested `GradientTape` objects will ignore the products of outer `ForwardAccumulator` objects. This allows (for example) memory-efficient forward-over-backward computation of Hessian-vector products, where the inner `GradientTape` would otherwise hold on to all intermediate JVPs:
```
v = tf.Variable([1., 2.])
with tf.autodiff.ForwardAccumulator(
v,
# The "vector" in Hessian-vector product.
tf.constant([1., 0.])) as acc:
with tf.GradientTape() as tape:
y = tf.reduce_sum(v ** 3.)
backward = tape.gradient(y, v)
backward # gradient from backprop
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([ 3., 12.], dtype=float32)>
acc.jvp(backward) # forward-over-backward Hessian-vector product
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([6., 0.], dtype=float32)>
```
| Args |
| `primals` | A tensor or nested structure of tensors to watch. |
| `tangents` | A tensor or nested structure of tensors, with the same nesting structure as `primals`, with each element being a vector with the same size as the corresponding primal element. |
| Raises |
| `ValueError` | If the same tensor or variable is specified multiple times in `primals`. |
Methods
-------
### `jvp`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/eager/forwardprop.py#L413-L446)
```
jvp(
primals,
unconnected_gradients=tf.UnconnectedGradients.NONE
)
```
Fetches the Jacobian-vector product computed for `primals`.
Note that this method performs no computation, and simply looks up a JVP that was already computed (unlike backprop using a [`tf.GradientTape`](../gradienttape), where the computation happens on the call to `tape.gradient`).
| Args |
| `primals` | A watched Tensor or structure of Tensors to fetch the JVPs for. |
| `unconnected_gradients` | A value which can either hold `'none'` or `'zero'` and alters the value returned if no JVP was computed for `primals`. The possible values and effects are detailed in `tf.UnconnectedGradients`; defaults to `'none'`. |
| Returns |
| Tensors with the same shapes and dtypes as `primals`, or None if no JVP is available. |
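A small sketch of the lookup behavior, including the effect of `unconnected_gradients` (values are illustrative):
```
x = tf.constant(2.0)
with tf.autodiff.ForwardAccumulator(primals=x, tangents=tf.constant(1.0)) as acc:
  y = x * x             # JVP recorded while the context manager is active
  z = tf.constant(5.0)  # never connected to x, so no JVP exists for it
acc.jvp(y)  # dy/dx * tangent = 4.0
acc.jvp(z)  # None, since no JVP was computed for z
acc.jvp(z, unconnected_gradients=tf.UnconnectedGradients.ZERO)  # 0.0 instead of None
```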
### `__enter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/eager/forwardprop.py#L362-L364)
```
__enter__()
```
### `__exit__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/eager/forwardprop.py#L366-L368)
```
__exit__(
typ, value, traceback
)
```
tensorflow tf.config.set_logical_device_configuration tf.config.set\_logical\_device\_configuration
=============================================
Set the logical device configuration for a [`tf.config.PhysicalDevice`](physicaldevice).
#### View aliases
**Main aliases**
[`tf.config.experimental.set_virtual_device_configuration`](https://www.tensorflow.org/api_docs/python/tf/config/set_logical_device_configuration)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.set_virtual_device_configuration`](https://www.tensorflow.org/api_docs/python/tf/config/set_logical_device_configuration), [`tf.compat.v1.config.set_logical_device_configuration`](https://www.tensorflow.org/api_docs/python/tf/config/set_logical_device_configuration)
```
tf.config.set_logical_device_configuration(
device, logical_devices
)
```
A visible [`tf.config.PhysicalDevice`](physicaldevice) will by default have a single [`tf.config.LogicalDevice`](logicaldevice) associated with it once the runtime is initialized. Specifying a list of [`tf.config.LogicalDeviceConfiguration`](logicaldeviceconfiguration) objects allows multiple devices to be created on the same [`tf.config.PhysicalDevice`](physicaldevice).
Logical device configurations can be modified by calling this function as long as the runtime is uninitialized. After the runtime is initialized, calling this function raises a RuntimeError.
The following example splits the CPU into 2 logical devices:
```
physical_devices = tf.config.list_physical_devices('CPU')
assert len(physical_devices) == 1, "No CPUs found"
# Specify 2 virtual CPUs. Note currently memory limit is not supported.
try:
tf.config.set_logical_device_configuration(
physical_devices[0],
[tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration()])
logical_devices = tf.config.list_logical_devices('CPU')
assert len(logical_devices) == 2
tf.config.set_logical_device_configuration(
physical_devices[0],
[tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration()])
except:
# Cannot modify logical devices once initialized.
pass
```
The following example splits the GPU into 2 logical devices with 100 MB each:
```
physical_devices = tf.config.list_physical_devices('GPU')
try:
tf.config.set_logical_device_configuration(
physical_devices[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=100),
tf.config.LogicalDeviceConfiguration(memory_limit=100)])
logical_devices = tf.config.list_logical_devices('GPU')
assert len(logical_devices) == len(physical_devices) + 1
tf.config.set_logical_device_configuration(
physical_devices[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=10),
tf.config.LogicalDeviceConfiguration(memory_limit=10)])
except:
# Invalid device or cannot modify logical devices once initialized.
pass
```
| Args |
| `device` | The `PhysicalDevice` to configure. |
| `logical_devices` | (optional) List of [`tf.config.LogicalDeviceConfiguration`](logicaldeviceconfiguration) objects to allocate for the specified `PhysicalDevice`. If None, the default configuration will be used. |
| Raises |
| `ValueError` | If argument validation fails. |
| `RuntimeError` | Runtime is already initialized. |
tensorflow Module: tf.config.optimizer Module: tf.config.optimizer
===========================
Public API for tf.config.optimizer namespace.
Functions
---------
[`get_experimental_options(...)`](optimizer/get_experimental_options): Get experimental optimizer options.
[`get_jit(...)`](optimizer/get_jit): Returns JIT compilation configuration for code inside [`tf.function`](../function).
[`set_experimental_options(...)`](optimizer/set_experimental_options): Set experimental optimizer options.
[`set_jit(...)`](optimizer/set_jit): Configure JIT compilation. (deprecated argument values)
tensorflow tf.config.get_visible_devices tf.config.get\_visible\_devices
===============================
Get the list of visible physical devices.
#### View aliases
**Main aliases**
[`tf.config.experimental.get_visible_devices`](https://www.tensorflow.org/api_docs/python/tf/config/get_visible_devices)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.get_visible_devices`](https://www.tensorflow.org/api_docs/python/tf/config/get_visible_devices), [`tf.compat.v1.config.get_visible_devices`](https://www.tensorflow.org/api_docs/python/tf/config/get_visible_devices)
```
tf.config.get_visible_devices(
device_type=None
)
```
Returns the list of `PhysicalDevice`s currently marked as visible to the runtime. A visible device will have at least one `LogicalDevice` associated with it once the runtime is initialized.
The following example verifies all visible GPUs have been disabled:
```
physical_devices = tf.config.list_physical_devices('GPU')
try:
# Disable all GPUS
tf.config.set_visible_devices([], 'GPU')
visible_devices = tf.config.get_visible_devices()
for device in visible_devices:
assert device.device_type != 'GPU'
except:
# Invalid device or cannot modify virtual devices once initialized.
pass
```
| Args |
| `device_type` | (optional string) Only include devices matching this device type. For example "CPU" or "GPU". |
| Returns |
| List of visible `PhysicalDevice`s |
tensorflow tf.config.PhysicalDevice tf.config.PhysicalDevice
========================
Abstraction for a locally visible physical device.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.PhysicalDevice`](https://www.tensorflow.org/api_docs/python/tf/config/PhysicalDevice)
```
tf.config.PhysicalDevice(
name, device_type
)
```
TensorFlow can utilize various devices such as the CPU or multiple GPUs for computation. Before initializing a local device for use, the user can customize certain properties of the device such as its visibility or memory configuration.
Once a visible [`tf.config.PhysicalDevice`](physicaldevice) is initialized one or more [`tf.config.LogicalDevice`](logicaldevice) objects are created. Use [`tf.config.set_visible_devices`](set_visible_devices) to configure the visibility of a physical device and [`tf.config.set_logical_device_configuration`](set_logical_device_configuration) to configure multiple [`tf.config.LogicalDevice`](logicaldevice) objects for a [`tf.config.PhysicalDevice`](physicaldevice). This is useful when separation between models is needed or to simulate a multi-device environment.
#### Fields:
* **`name`**: Unique identifier for device.
* **`device_type`**: String declaring the type of device such as "CPU" or "GPU".
| Attributes |
| `name` | A `namedtuple` alias for field number 0 |
| `device_type` | A `namedtuple` alias for field number 1 |
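The fields can be read directly off the namedtuple; a minimal sketch:
```
for device in tf.config.list_physical_devices():
  # e.g. "/physical_device:CPU:0" and "CPU"
  print(device.name, device.device_type)
```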
tensorflow tf.config.get_logical_device_configuration tf.config.get\_logical\_device\_configuration
=============================================
Get the virtual device configuration for a [`tf.config.PhysicalDevice`](physicaldevice).
#### View aliases
**Main aliases**
[`tf.config.experimental.get_virtual_device_configuration`](https://www.tensorflow.org/api_docs/python/tf/config/get_logical_device_configuration)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.get_virtual_device_configuration`](https://www.tensorflow.org/api_docs/python/tf/config/get_logical_device_configuration), [`tf.compat.v1.config.get_logical_device_configuration`](https://www.tensorflow.org/api_docs/python/tf/config/get_logical_device_configuration)
```
tf.config.get_logical_device_configuration(
device
)
```
Returns the list of [`tf.config.LogicalDeviceConfiguration`](logicaldeviceconfiguration) objects previously configured by a call to [`tf.config.set_logical_device_configuration`](set_logical_device_configuration).
#### For example:
```
physical_devices = tf.config.list_physical_devices('CPU')
assert len(physical_devices) == 1, "No CPUs found"
configs = tf.config.get_logical_device_configuration(
physical_devices[0])
try:
assert configs is None
tf.config.set_logical_device_configuration(
physical_devices[0],
[tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration()])
configs = tf.config.get_logical_device_configuration(
physical_devices[0])
assert len(configs) == 2
except:
# Cannot modify virtual devices once initialized.
pass
```
| Args |
| `device` | `PhysicalDevice` to query |
| Returns |
| List of [`tf.config.LogicalDeviceConfiguration`](logicaldeviceconfiguration) objects or `None` if no virtual device configuration has been set for this physical device. |
tensorflow tf.config.LogicalDevice tf.config.LogicalDevice
=======================
Abstraction for a logical device initialized by the runtime.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.LogicalDevice`](https://www.tensorflow.org/api_docs/python/tf/config/LogicalDevice)
```
tf.config.LogicalDevice(
name, device_type
)
```
A [`tf.config.LogicalDevice`](logicaldevice) corresponds to an initialized logical device on a [`tf.config.PhysicalDevice`](physicaldevice) or a remote device visible to the cluster. Tensors and operations can be placed on a specific logical device by calling [`tf.device`](../device) with a specified [`tf.config.LogicalDevice`](logicaldevice).
#### Fields:
* **`name`**: The fully qualified name of the device. Can be used for Op or function placement.
* **`device_type`**: String declaring the type of device such as "CPU" or "GPU".
| Attributes |
| `name` | A `namedtuple` alias for field number 0 |
| `device_type` | A `namedtuple` alias for field number 1 |
tensorflow Module: tf.config.threading Module: tf.config.threading
===========================
Public API for tf.config.threading namespace.
Functions
---------
[`get_inter_op_parallelism_threads(...)`](threading/get_inter_op_parallelism_threads): Get number of threads used for parallelism between independent operations.
[`get_intra_op_parallelism_threads(...)`](threading/get_intra_op_parallelism_threads): Get number of threads used within an individual op for parallelism.
[`set_inter_op_parallelism_threads(...)`](threading/set_inter_op_parallelism_threads): Set number of threads used for parallelism between independent operations.
[`set_intra_op_parallelism_threads(...)`](threading/set_intra_op_parallelism_threads): Set number of threads used within an individual op for parallelism.
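A minimal sketch of configuring these thread pools; as with other runtime settings, this is assumed to require being called before the runtime is initialized:
```
# Configure thread pools before any op initializes the runtime.
tf.config.threading.set_inter_op_parallelism_threads(2)
tf.config.threading.set_intra_op_parallelism_threads(4)
assert tf.config.threading.get_inter_op_parallelism_threads() == 2
assert tf.config.threading.get_intra_op_parallelism_threads() == 4
```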
tensorflow tf.config.LogicalDeviceConfiguration tf.config.LogicalDeviceConfiguration
====================================
Configuration class for a logical device.
#### View aliases
**Main aliases**
[`tf.config.experimental.VirtualDeviceConfiguration`](https://www.tensorflow.org/api_docs/python/tf/config/LogicalDeviceConfiguration)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.LogicalDeviceConfiguration`](https://www.tensorflow.org/api_docs/python/tf/config/LogicalDeviceConfiguration), [`tf.compat.v1.config.experimental.VirtualDeviceConfiguration`](https://www.tensorflow.org/api_docs/python/tf/config/LogicalDeviceConfiguration)
```
tf.config.LogicalDeviceConfiguration(
memory_limit=None, experimental_priority=None
)
```
The class specifies the parameters to configure a [`tf.config.PhysicalDevice`](physicaldevice) as it is initialized to a [`tf.config.LogicalDevice`](logicaldevice) during runtime initialization. Not all fields are valid for all device types.
See [`tf.config.get_logical_device_configuration`](get_logical_device_configuration) and [`tf.config.set_logical_device_configuration`](set_logical_device_configuration) for usage examples.
#### Fields:
* **`memory_limit`**: (optional) Maximum memory (in MB) to allocate on the virtual device. Currently only supported for GPUs.
* **`experimental_priority`**: (optional) Priority to assign to a virtual device. Lower values have higher priorities and 0 is the default. Within a physical GPU, the GPU scheduler will prioritize ops on virtual devices with higher priority. Currently only supported for Nvidia GPUs.
| Attributes |
| `memory_limit` | A `namedtuple` alias for field number 0 |
| `experimental_priority` | A `namedtuple` alias for field number 1 |
tensorflow tf.config.list_logical_devices tf.config.list\_logical\_devices
================================
Return a list of logical devices created by runtime.
#### View aliases
**Main aliases**
[`tf.config.experimental.list_logical_devices`](https://www.tensorflow.org/api_docs/python/tf/config/list_logical_devices)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.list_logical_devices`](https://www.tensorflow.org/api_docs/python/tf/config/list_logical_devices), [`tf.compat.v1.config.list_logical_devices`](https://www.tensorflow.org/api_docs/python/tf/config/list_logical_devices)
```
tf.config.list_logical_devices(
device_type=None
)
```
Logical devices may correspond to physical devices or remote devices in the cluster. Operations and tensors may be placed on these devices by using the `name` of the [`tf.config.LogicalDevice`](logicaldevice).
Calling [`tf.config.list_logical_devices`](list_logical_devices) triggers the runtime to configure any [`tf.config.PhysicalDevice`](physicaldevice) visible to the runtime, thereby preventing further configuration. To avoid runtime initialization, call [`tf.config.list_physical_devices`](list_physical_devices) instead.
#### For example:
```
logical_devices = tf.config.list_logical_devices('GPU')
if len(logical_devices) > 0:
# Allocate on GPU:0
with tf.device(logical_devices[0].name):
one = tf.constant(1)
# Allocate on GPU:1
with tf.device(logical_devices[1].name):
two = tf.constant(2)
```
| Args |
| `device_type` | (optional string) Only include devices matching this device type. For example "CPU" or "GPU". |
| Returns |
| List of initialized `LogicalDevice`s |
tensorflow tf.config.list_physical_devices tf.config.list\_physical\_devices
=================================
Return a list of physical devices visible to the host runtime.
#### View aliases
**Main aliases**
[`tf.config.experimental.list_physical_devices`](https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.list_physical_devices`](https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices), [`tf.compat.v1.config.list_physical_devices`](https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices)
```
tf.config.list_physical_devices(
device_type=None
)
```
Physical devices are hardware devices present on the host machine. By default all discovered CPU and GPU devices are considered visible.
This API allows querying the physical hardware resources prior to runtime initialization, thus giving an opportunity to call any additional configuration APIs. This is in contrast to [`tf.config.list_logical_devices`](list_logical_devices), which triggers runtime initialization in order to list the configured devices.
The following example lists the number of visible GPUs on the host.
```
physical_devices = tf.config.list_physical_devices('GPU')
print("Num GPUs:", len(physical_devices))
Num GPUs: ...
```
However, the number of GPUs available to the runtime may change during runtime initialization due to marking certain devices as not visible or configuring multiple logical devices.
| Args |
| `device_type` | (optional string) Only include devices matching this device type. For example "CPU" or "GPU". |
| Returns |
| List of discovered [`tf.config.PhysicalDevice`](physicaldevice) objects |
tensorflow tf.config.get_soft_device_placement tf.config.get\_soft\_device\_placement
======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L244-L259)
Return status of soft device placement flag.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.get_soft_device_placement`](https://www.tensorflow.org/api_docs/python/tf/config/get_soft_device_placement)
```
tf.config.get_soft_device_placement()
```
If enabled, an op will be placed on CPU if any of the following are true
1. there's no GPU implementation for the OP
2. no GPU devices are known or registered
3. need to co-locate with reftype input(s) which are from CPU
If disabled, the placement is strict and CPU fallback is not allowed. An error is raised when an Op cannot be placed onto its intended device.
| Returns |
| A boolean indicating if soft placement is enabled. |
tensorflow Module: tf.config.experimental Module: tf.config.experimental
==============================
Public API for tf.config.experimental namespace.
Classes
-------
[`class ClusterDeviceFilters`](experimental/clusterdevicefilters): Represent a collection of device filters for the remote workers in cluster.
[`class VirtualDeviceConfiguration`](logicaldeviceconfiguration): Configuration class for a logical device.
Functions
---------
[`disable_mlir_bridge(...)`](experimental/disable_mlir_bridge): Disables experimental MLIR-Based TensorFlow Compiler Bridge.
[`disable_mlir_graph_optimization(...)`](experimental/disable_mlir_graph_optimization): Disables experimental MLIR-Based TensorFlow Compiler Optimizations.
[`enable_mlir_bridge(...)`](experimental/enable_mlir_bridge): Enables experimental MLIR-Based TensorFlow Compiler Bridge.
[`enable_mlir_graph_optimization(...)`](experimental/enable_mlir_graph_optimization): Enables experimental MLIR-Based TensorFlow Compiler Optimizations.
[`enable_op_determinism(...)`](experimental/enable_op_determinism): Configures TensorFlow ops to run deterministically.
[`enable_tensor_float_32_execution(...)`](experimental/enable_tensor_float_32_execution): Enable or disable the use of TensorFloat-32 on supported hardware.
[`get_device_details(...)`](experimental/get_device_details): Returns details about a physical device.
[`get_device_policy(...)`](experimental/get_device_policy): Gets the current device policy.
[`get_memory_growth(...)`](experimental/get_memory_growth): Get if memory growth is enabled for a `PhysicalDevice`.
[`get_memory_info(...)`](experimental/get_memory_info): Get memory info for the chosen device, as a dict.
[`get_memory_usage(...)`](experimental/get_memory_usage): Get the current memory usage, in bytes, for the chosen device. (deprecated)
[`get_synchronous_execution(...)`](experimental/get_synchronous_execution): Gets whether operations are executed synchronously or asynchronously.
[`get_virtual_device_configuration(...)`](get_logical_device_configuration): Get the virtual device configuration for a [`tf.config.PhysicalDevice`](physicaldevice).
[`get_visible_devices(...)`](get_visible_devices): Get the list of visible physical devices.
[`list_logical_devices(...)`](list_logical_devices): Return a list of logical devices created by runtime.
[`list_physical_devices(...)`](list_physical_devices): Return a list of physical devices visible to the host runtime.
[`reset_memory_stats(...)`](experimental/reset_memory_stats): Resets the tracked memory stats for the chosen device.
[`set_device_policy(...)`](experimental/set_device_policy): Sets the current thread device policy.
[`set_memory_growth(...)`](experimental/set_memory_growth): Set if memory growth should be enabled for a `PhysicalDevice`.
[`set_synchronous_execution(...)`](experimental/set_synchronous_execution): Specifies whether operations are executed synchronously or asynchronously.
[`set_virtual_device_configuration(...)`](set_logical_device_configuration): Set the logical device configuration for a [`tf.config.PhysicalDevice`](physicaldevice).
[`set_visible_devices(...)`](set_visible_devices): Set the list of visible devices.
[`tensor_float_32_execution_enabled(...)`](experimental/tensor_float_32_execution_enabled): Returns whether TensorFloat-32 is enabled.
tensorflow tf.config.functions_run_eagerly tf.config.functions\_run\_eagerly
=================================
Returns the value of the `run_functions_eagerly` setting.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.functions_run_eagerly`](https://www.tensorflow.org/api_docs/python/tf/config/functions_run_eagerly)
```
tf.config.functions_run_eagerly()
```
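A minimal sketch pairing this getter with `tf.config.run_functions_eagerly`:
```
tf.config.run_functions_eagerly(True)
assert tf.config.functions_run_eagerly()
tf.config.run_functions_eagerly(False)   # restore graph execution
assert not tf.config.functions_run_eagerly()
```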
tensorflow tf.config.experimental_connect_to_cluster tf.config.experimental\_connect\_to\_cluster
============================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/eager/remote.py#L76-L234)
Connects to the given cluster.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental_connect_to_cluster`](https://www.tensorflow.org/api_docs/python/tf/config/experimental_connect_to_cluster)
```
tf.config.experimental_connect_to_cluster(
cluster_spec_or_resolver,
job_name='localhost',
task_index=0,
protocol=None,
make_master_device_default=True,
cluster_device_filters=None
)
```
Will make devices on the cluster available to use. Note that calling this more than once will work, but will invalidate any tensor handles on the old remote devices.
If the given local job name is not present in the cluster specification, it will be automatically added, using an unused port on the localhost.
Device filters can be specified to isolate groups of remote tasks to avoid undesired accesses between workers. Workers accessing resources or launching ops / functions on filtered remote devices will result in errors (unknown devices). For any remote task, if no device filter is present, all cluster devices will be visible; if any device filter is specified, it can only see devices matching at least one filter. Devices on the task itself are always visible. Device filters can be partially specified.
For example, for a cluster set up for parameter server training, the following device filters might be specified:
```
cdf = tf.config.experimental.ClusterDeviceFilters()
# For any worker, only the devices on PS nodes and itself are visible
for i in range(num_workers):
cdf.set_device_filters('worker', i, ['/job:ps'])
# Similarly for any ps, only the devices on workers and itself are visible
for i in range(num_ps):
cdf.set_device_filters('ps', i, ['/job:worker'])
tf.config.experimental_connect_to_cluster(cluster_def,
cluster_device_filters=cdf)
```
| Args |
| `cluster_spec_or_resolver` | A `ClusterSpec` or `ClusterResolver` describing the cluster. |
| `job_name` | The name of the local job. |
| `task_index` | The local task index. |
| `protocol` | The communication protocol, such as `"grpc"`. If unspecified, will use the default from `python/platform/remote_utils.py`. |
| `make_master_device_default` | If True and a cluster resolver is passed, will automatically enter the master task device scope, which indicates the master becomes the default device to run ops. It won't do anything if a cluster spec is passed. Will throw an error if the caller is currently already in some device scope. |
| `cluster_device_filters` | an instance of `tf.config.experimental.ClusterDeviceFilters` that specifies device filters for the remote tasks in the cluster. |
tensorflow tf.config.run_functions_eagerly tf.config.run\_functions\_eagerly
=================================
Enables / disables eager execution of [`tf.function`](../function)s.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.run_functions_eagerly`](https://www.tensorflow.org/api_docs/python/tf/config/run_functions_eagerly)
```
tf.config.run_functions_eagerly(
run_eagerly
)
```
Calling [`tf.config.run_functions_eagerly(True)`](run_functions_eagerly) will make all invocations of [`tf.function`](../function) run eagerly instead of running as a traced graph function.
This can be useful for debugging.
```
def my_func(a):
print("Python side effect")
return a + a
a_fn = tf.function(my_func)
```
```
# A side effect the first time the function is traced
a_fn(tf.constant(1))
Python side effect
<tf.Tensor: shape=(), dtype=int32, numpy=2>
```
```
# No further side effect, as the traced function is called
a_fn(tf.constant(2))
<tf.Tensor: shape=(), dtype=int32, numpy=4>
```
```
# Now, switch to eager running
tf.config.run_functions_eagerly(True)
# Side effect, as the function is called directly
a_fn(tf.constant(2))
Python side effect
<tf.Tensor: shape=(), dtype=int32, numpy=4>
```
```
# Turn this back off
tf.config.run_functions_eagerly(False)
```
>
> **Note:** This flag has no effect on functions passed into tf.data transformations as arguments. tf.data functions are never executed eagerly and are always executed as a compiled Tensorflow Graph.
>
| Args |
| `run_eagerly` | Boolean. Whether to run functions eagerly. |
tensorflow tf.config.experimental_functions_run_eagerly tf.config.experimental\_functions\_run\_eagerly
===============================================
Returns the value of the `experimental_run_functions_eagerly` setting. (deprecated)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental_functions_run_eagerly`](https://www.tensorflow.org/api_docs/python/tf/config/experimental_functions_run_eagerly)
```
tf.config.experimental_functions_run_eagerly()
```
tensorflow tf.config.experimental_run_functions_eagerly tf.config.experimental\_run\_functions\_eagerly
===============================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/eager/def_function.py#L389-L410)
Enables / disables eager execution of [`tf.function`](../function)s. (deprecated)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental_run_functions_eagerly`](https://www.tensorflow.org/api_docs/python/tf/config/experimental_run_functions_eagerly)
```
tf.config.experimental_run_functions_eagerly(
run_eagerly
)
```
Calling [`tf.config.experimental_run_functions_eagerly(True)`](experimental_run_functions_eagerly) will make all invocations of [`tf.function`](../function) run eagerly instead of running as a traced graph function.
See [`tf.config.run_functions_eagerly`](run_functions_eagerly) for an example.
>
> **Note:** This flag has no effect on functions passed into tf.data transformations as arguments. tf.data functions are never executed eagerly and are always executed as a compiled Tensorflow Graph.
>
| Args |
| `run_eagerly` | Boolean. Whether to run functions eagerly. |
tensorflow tf.config.set_soft_device_placement tf.config.set\_soft\_device\_placement
======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L262-L277)
Enable or disable soft device placement.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.set_soft_device_placement`](https://www.tensorflow.org/api_docs/python/tf/config/set_soft_device_placement)
```
tf.config.set_soft_device_placement(
enabled
)
```
If enabled, an op will be placed on CPU if any of the following are true
1. there's no GPU implementation for the OP
2. no GPU devices are known or registered
3. need to co-locate with reftype input(s) which are from CPU
>
> **Note:** by default soft device placement is enabled when running in eager mode (for convenience) and disabled in graph mode (for performance).
>
| Args |
| `enabled` | A boolean indicating whether to enable soft placement. |
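A minimal sketch pairing the setter with its getter:
```
tf.config.set_soft_device_placement(True)
assert tf.config.get_soft_device_placement()
```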
tensorflow tf.config.set_visible_devices tf.config.set\_visible\_devices
===============================
Set the list of visible devices.
#### View aliases
**Main aliases**
[`tf.config.experimental.set_visible_devices`](https://www.tensorflow.org/api_docs/python/tf/config/set_visible_devices)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.set_visible_devices`](https://www.tensorflow.org/api_docs/python/tf/config/set_visible_devices), [`tf.compat.v1.config.set_visible_devices`](https://www.tensorflow.org/api_docs/python/tf/config/set_visible_devices)
```
tf.config.set_visible_devices(
devices, device_type=None
)
```
Specifies which `PhysicalDevice` objects are visible to the runtime. TensorFlow will only allocate memory and place operations on visible physical devices, as otherwise no `LogicalDevice` will be created on them. By default all discovered devices are marked as visible.
The following example demonstrates disabling the first GPU on the machine.
```
physical_devices = tf.config.list_physical_devices('GPU')
try:
# Disable first GPU
tf.config.set_visible_devices(physical_devices[1:], 'GPU')
logical_devices = tf.config.list_logical_devices('GPU')
# Logical device was not created for first GPU
assert len(logical_devices) == len(physical_devices) - 1
except:
# Invalid device or cannot modify virtual devices once initialized.
pass
```
| Args |
| `devices` | List of `PhysicalDevice`s to make visible |
| `device_type` | (optional) Only configure devices matching this device type. For example "CPU" or "GPU". Other devices will be left unaltered. |
| Raises |
| `ValueError` | If argument validation fails. |
| `RuntimeError` | Runtime is already initialized. |
tensorflow tf.config.experimental_connect_to_host tf.config.experimental\_connect\_to\_host
=========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/eager/remote.py#L37-L73)
Connects to a single machine to enable remote execution on it.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental_connect_to_host`](https://www.tensorflow.org/api_docs/python/tf/config/experimental_connect_to_host)
```
tf.config.experimental_connect_to_host(
remote_host=None, job_name='worker'
)
```
Will make devices on the remote host available to use. Note that calling this more than once will work, but will invalidate any tensor handles on the old remote devices.
Using the default job\_name of worker, you can schedule ops to run remotely as follows:
```
# When eager execution is enabled, connect to the remote host.
tf.config.experimental_connect_to_host("exampleaddr.com:9876")
with ops.device("job:worker/replica:0/task:1/device:CPU:0"):
# The following tensors should be resident on the remote device, and the op
# will also execute remotely.
x1 = array_ops.ones([2, 2])
x2 = array_ops.ones([2, 2])
y = math_ops.matmul(x1, x2)
```
| Args |
| `remote_host` | a single remote server address, or a list of addresses, in host-port format. |
| `job_name` | The job name under which the new server will be accessible. |
| Raises |
| `ValueError` | if remote\_host is None. |
tensorflow tf.config.experimental.get_synchronous_execution tf.config.experimental.get\_synchronous\_execution
==================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L354-L364)
Gets whether operations are executed synchronously or asynchronously.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.get_synchronous_execution`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/get_synchronous_execution)
```
tf.config.experimental.get_synchronous_execution()
```
TensorFlow can execute operations synchronously or asynchronously. If asynchronous execution is enabled, operations may return "non-ready" handles.
| Returns |
| Current thread execution mode |
tensorflow tf.config.experimental.disable_mlir_graph_optimization tf.config.experimental.disable\_mlir\_graph\_optimization
=========================================================
Disables experimental MLIR-Based TensorFlow Compiler Optimizations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.disable_mlir_graph_optimization`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/disable_mlir_graph_optimization)
```
tf.config.experimental.disable_mlir_graph_optimization()
```
tensorflow tf.config.experimental.reset_memory_stats tf.config.experimental.reset\_memory\_stats
===========================================
Resets the tracked memory stats for the chosen device.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.reset_memory_stats`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/reset_memory_stats)
```
tf.config.experimental.reset_memory_stats(
device
)
```
This function sets the tracked peak memory for a device to the device's current memory usage. This allows you to measure the peak memory usage for a specific part of your program. For example:
```
if tf.config.list_physical_devices('GPU'):
# Sets the peak memory to the current memory.
tf.config.experimental.reset_memory_stats('GPU:0')
# Creates the first peak memory usage.
x1 = tf.ones(1000 * 1000, dtype=tf.float64)
del x1 # Frees the memory referenced by `x1`.
peak1 = tf.config.experimental.get_memory_info('GPU:0')['peak']
# Sets the peak memory to the current memory again.
tf.config.experimental.reset_memory_stats('GPU:0')
# Creates the second peak memory usage.
x2 = tf.ones(1000 * 1000, dtype=tf.float32)
del x2
peak2 = tf.config.experimental.get_memory_info('GPU:0')['peak']
assert peak2 < peak1 # tf.float32 consumes less memory than tf.float64.
```
Currently only supports GPU and TPU. If called on a CPU device, an exception will be raised.
| Args |
| `device` | Device string to reset the memory stats, e.g. `"GPU:0"`, `"TPU:0"`. See <https://www.tensorflow.org/api_docs/python/tf/device> for specifying device strings. |
| Raises |
| `ValueError` | No device found with the device name, like '"nonexistent"'. |
| `ValueError` | Invalid device name, like '"GPU"', '"CPU:GPU"', '"CPU:"'. |
| `ValueError` | Multiple devices matched with the device name. |
| `ValueError` | Memory statistics not tracked or clearing memory statistics not supported, like '"CPU:0"'. |
tensorflow tf.config.experimental.get_device_details tf.config.experimental.get\_device\_details
===========================================
Returns details about a physical device.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.get_device_details`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/get_device_details)
```
tf.config.experimental.get_device_details(
device
)
```
This API takes in a [`tf.config.PhysicalDevice`](../physicaldevice) returned by [`tf.config.list_physical_devices`](../list_physical_devices). It returns a dict with string keys containing various details about the device. Each key is only supported by a subset of devices, so you should not assume the returned dict will have any particular key.
```
gpu_devices = tf.config.list_physical_devices('GPU')
if gpu_devices:
details = tf.config.experimental.get_device_details(gpu_devices[0])
details.get('device_name', 'Unknown GPU')
```
Currently, details are only returned for GPUs. This function returns an empty dict if passed a non-GPU device.
The returned dict may have the following keys:
* `'device_name'`: A human-readable name of the device as a string, e.g. "Titan V". Unlike [`tf.config.PhysicalDevice.name`](../physicaldevice#name), this will be the same for multiple devices if each device is the same model. Currently only available for GPUs.
* `'compute_capability'`: The [compute capability](https://developer.nvidia.com/cuda-gpus) of the device as a tuple of two ints, in the form `(major_version, minor_version)`. Only available for NVIDIA GPUs
>
> **Note:** This is similar to [`tf.sysconfig.get_build_info`](../../sysconfig/get_build_info) in that both functions can return information relating to GPUs. However, this function returns run-time information about a specific device (such as a GPU's compute capability), while [`tf.sysconfig.get_build_info`](../../sysconfig/get_build_info) returns compile-time information about how TensorFlow was built (such as what version of CUDA TensorFlow was built for).
>
| Args |
| `device` | A [`tf.config.PhysicalDevice`](../physicaldevice) returned by [`tf.config.list_physical_devices`](../list_physical_devices) or [`tf.config.get_visible_devices`](../get_visible_devices). |
| Returns |
| A dict with string keys. |
tensorflow tf.config.experimental.ClusterDeviceFilters tf.config.experimental.ClusterDeviceFilters
===========================================
Represent a collection of device filters for the remote workers in cluster.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.ClusterDeviceFilters`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/ClusterDeviceFilters)
```
tf.config.experimental.ClusterDeviceFilters()
```
>
> **Note:** this is an experimental API and subject to changes.
>
Set device filters for selective jobs and tasks. For each remote worker, the device filters are a list of strings. When any filters are present, the remote worker will ignore all devices which do not match any of its filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc. Note that a device is always visible to the worker it is located on.
For example, to set the device filters for a parameter server cluster:
```
cdf = tf.config.experimental.ClusterDeviceFilters()
for i in range(num_workers):
cdf.set_device_filters('worker', i, ['/job:ps'])
for i in range(num_ps):
cdf.set_device_filters('ps', i, ['/job:worker'])
tf.config.experimental_connect_to_cluster(cluster_def,
cluster_device_filters=cdf)
```
The device filters can be partially specified. For remote tasks that do not have device filters specified, all devices will be visible to them.
Methods
-------
### `set_device_filters`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L533-L539)
```
set_device_filters(
job_name, task_index, device_filters
)
```
Set the device filters for given job name and task id.
tensorflow tf.config.experimental.get_memory_usage tf.config.experimental.get\_memory\_usage
=========================================
Get the current memory usage, in bytes, for the chosen device. (deprecated)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.get_memory_usage`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/get_memory_usage)
```
tf.config.experimental.get_memory_usage(
device
)
```
This function is deprecated in favor of [`tf.config.experimental.get_memory_info`](get_memory_info). Calling this function is equivalent to calling `tf.config.experimental.get_memory_info()['current']`.
See <https://www.tensorflow.org/api_docs/python/tf/device> for specifying device strings.
#### For example:
```
gpu_devices = tf.config.list_physical_devices('GPU')
if gpu_devices:
  tf.config.experimental.get_memory_usage('GPU:0')
```
Does not work for CPU.
For GPUs, TensorFlow will allocate all the memory by default, unless changed with [`tf.config.experimental.set_memory_growth`](set_memory_growth). This function only returns the memory that TensorFlow is actually using, not the memory that TensorFlow has allocated on the GPU.
| Args |
| `device` | Device string to get the bytes in use for, e.g. `"GPU:0"` |
| Returns |
| Total memory usage in bytes. |
| Raises |
| `ValueError` | Non-existent or CPU device specified. |
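As a rough sketch of the deprecation note above (assuming a visible GPU, and that nothing is allocated between the two calls), the deprecated call and the `'current'` entry of [`tf.config.experimental.get_memory_info`](get_memory_info) should report the same value:
```
import tensorflow as tf

if tf.config.list_physical_devices('GPU'):
  current = tf.config.experimental.get_memory_usage('GPU:0')  # deprecated
  info = tf.config.experimental.get_memory_info('GPU:0')
  # The two values match as long as nothing is allocated in between.
  print(current, info['current'])
```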
tensorflow tf.config.experimental.set_synchronous_execution tf.config.experimental.set\_synchronous\_execution
==================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L367-L389) |
Specifies whether operations are executed synchronously or asynchronously.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.set_synchronous_execution`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/set_synchronous_execution)
```
tf.config.experimental.set_synchronous_execution(
enable
)
```
TensorFlow can execute operations synchronously or asynchronously. If asynchronous execution is enabled, operations may return "non-ready" handles.
When `enable` is set to None, an appropriate value will be picked automatically. The value picked may change between TensorFlow releases.
| Args |
| `enable` | Whether operations should be dispatched synchronously. Valid values: * None: sets the system default.
* True: executes each operation synchronously.
* False: executes each operation asynchronously.
|
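A minimal sketch of toggling execution modes (illustrative only; any measurable effect depends on the ops and hardware involved):
```
import tensorflow as tf

# Dispatch ops asynchronously; returned handles may be "non-ready".
tf.config.experimental.set_synchronous_execution(False)
x = tf.linalg.matmul(tf.ones((512, 512)), tf.ones((512, 512)))
print(x[0, 0].numpy())  # Reading the value waits for the result.
# Restore the system default.
tf.config.experimental.set_synchronous_execution(None)
```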
tensorflow tf.config.experimental.tensor_float_32_execution_enabled tf.config.experimental.tensor\_float\_32\_execution\_enabled
============================================================
Returns whether TensorFloat-32 is enabled.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.tensor_float_32_execution_enabled`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/tensor_float_32_execution_enabled)
```
tf.config.experimental.tensor_float_32_execution_enabled()
```
By default, TensorFloat-32 is enabled, but this can be changed with [`tf.config.experimental.enable_tensor_float_32_execution`](enable_tensor_float_32_execution).
| Returns |
| True if TensorFloat-32 is enabled (the default) and False otherwise |
tensorflow tf.config.experimental.disable_mlir_bridge tf.config.experimental.disable\_mlir\_bridge
============================================
Disables experimental MLIR-Based TensorFlow Compiler Bridge.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.disable_mlir_bridge`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/disable_mlir_bridge)
```
tf.config.experimental.disable_mlir_bridge()
```
tensorflow tf.config.experimental.get_memory_info tf.config.experimental.get\_memory\_info
========================================
Get memory info for the chosen device, as a dict.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.get_memory_info`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/get_memory_info)
```
tf.config.experimental.get_memory_info(
device
)
```
This function returns a dict containing information about the device's memory usage. For example:
```
if tf.config.list_physical_devices('GPU'):
  # Returns a dict in the form {'current': <current mem usage>,
  #                             'peak': <peak mem usage>}
  tf.config.experimental.get_memory_info('GPU:0')
```
Currently returns the following keys:
* `'current'`: The current memory used by the device, in bytes.
* `'peak'`: The peak memory used by the device across the run of the program, in bytes. Can be reset with [`tf.config.experimental.reset_memory_stats`](reset_memory_stats).
More keys may be added in the future, including device-specific keys.
Currently only supports GPU and TPU. If called on a CPU device, an exception will be raised.
For GPUs, TensorFlow will allocate all the memory by default, unless changed with [`tf.config.experimental.set_memory_growth`](set_memory_growth). The dict specifies only the current and peak memory that TensorFlow is actually using, not the memory that TensorFlow has allocated on the GPU.
| Args |
| `device` | Device string to get the memory information for, e.g. `"GPU:0"`, `"TPU:0"`. See <https://www.tensorflow.org/api_docs/python/tf/device> for specifying device strings. |
| Returns |
| A dict with keys `'current'` and `'peak'`, specifying the current and peak memory usage respectively. |
| Raises |
| `ValueError` | No device found with the device name, like '"nonexistent"'. |
| `ValueError` | Invalid device name, like '"GPU"', '"CPU:GPU"', '"CPU:"'. |
| `ValueError` | Multiple devices matched with the device name. |
| `ValueError` | Memory statistics not tracked, like '"CPU:0"'. |
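A short sketch (assuming a visible GPU) that allocates a tensor, reads the current and peak usage, and then resets the peak statistic with [`tf.config.experimental.reset_memory_stats`](reset_memory_stats) so later measurements only cover subsequent code:
```
import tensorflow as tf

if tf.config.list_physical_devices('GPU'):
  _ = tf.random.normal((4096, 4096))  # allocate something on the GPU
  info = tf.config.experimental.get_memory_info('GPU:0')
  print('current:', info['current'], 'peak:', info['peak'])
  tf.config.experimental.reset_memory_stats('GPU:0')
```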
tensorflow tf.config.experimental.enable_mlir_graph_optimization tf.config.experimental.enable\_mlir\_graph\_optimization
========================================================
Enables experimental MLIR-Based TensorFlow Compiler Optimizations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.enable_mlir_graph_optimization`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_mlir_graph_optimization)
```
tf.config.experimental.enable_mlir_graph_optimization()
```
DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.
>
> **Note:** MLIR-Based TensorFlow Compiler is under active development and has missing features, please refrain from using. This API exists for development and testing only.
>
TensorFlow Compiler Optimizations are responsible for general graph-level optimizations that, in the current stack, are mostly done by Grappler graph optimizers.
tensorflow tf.config.experimental.enable_mlir_bridge tf.config.experimental.enable\_mlir\_bridge
===========================================
Enables experimental MLIR-Based TensorFlow Compiler Bridge.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.enable_mlir_bridge`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_mlir_bridge)
```
tf.config.experimental.enable_mlir_bridge()
```
DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.
>
> **Note:** MLIR-Based TensorFlow Compiler is under active development and has missing features, please refrain from using. This API exists for development and testing only.
>
TensorFlow Compiler Bridge (TF Bridge) is responsible for translating parts of TensorFlow graph into a form that can be accepted as an input by a backend compiler such as XLA.
tensorflow tf.config.experimental.enable_tensor_float_32_execution tf.config.experimental.enable\_tensor\_float\_32\_execution
===========================================================
Enable or disable the use of TensorFloat-32 on supported hardware.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.enable_tensor_float_32_execution`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_tensor_float_32_execution)
```
tf.config.experimental.enable_tensor_float_32_execution(
enabled
)
```
[TensorFloat-32](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format), or TF32 for short, is a math mode for NVIDIA Ampere GPUs. TensorFloat-32 execution causes certain float32 ops, such as matrix multiplications and convolutions, to run much faster on Ampere GPUs but with reduced precision. This reduced precision should not impact convergence of deep learning models in practice.
TensorFloat-32 is enabled by default. TensorFloat-32 is only supported on Ampere GPUs, so all other hardware will use the full float32 precision regardless of whether TensorFloat-32 is enabled or not. If you want to use the full float32 precision on Ampere, you can disable TensorFloat-32 execution with this function. For example:
```
x = tf.fill((2, 2), 1.0001)
y = tf.fill((2, 2), 1.)
# TensorFloat-32 is enabled, so matmul is run with reduced precision
print(tf.linalg.matmul(x, y)) # [[2., 2.], [2., 2.]]
tf.config.experimental.enable_tensor_float_32_execution(False)
# Matmul is run with full precision
print(tf.linalg.matmul(x, y)) # [[2.0002, 2.0002], [2.0002, 2.0002]]
```
To check whether TensorFloat-32 execution is currently enabled, use [`tf.config.experimental.tensor_float_32_execution_enabled`](tensor_float_32_execution_enabled).
If TensorFloat-32 is enabled, float32 inputs of supported ops, such as [`tf.linalg.matmul`](../../linalg/matmul), will be rounded from 23 bits of precision to 10 bits of precision in most cases. This allows the ops to execute much faster by utilizing the GPU's tensor cores. TensorFloat-32 has the same dynamic range as float32, meaning it is no more likely to underflow or overflow than float32. Ops still use float32 accumulation when TensorFloat-32 is enabled. Enabling or disabling TensorFloat-32 only affects Ampere GPUs and subsequent GPUs that support TensorFloat-32.
Note TensorFloat-32 is not always used in supported ops, as only inputs of certain shapes are supported. Support for more input shapes and more ops may be added in the future. As a result, precision of float32 ops may decrease in minor versions of TensorFlow.
TensorFloat-32 is also used for some complex64 ops. Currently, TensorFloat-32 is used in fewer cases for complex64 as it is for float32.
| Args |
| `enabled` | Bool indicating whether to enable TensorFloat-32 execution. |
tensorflow tf.config.experimental.set_device_policy tf.config.experimental.set\_device\_policy
==========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L308-L351) |
Sets the current thread device policy.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.set_device_policy`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/set_device_policy)
```
tf.config.experimental.set_device_policy(
device_policy
)
```
The device policy controls how operations requiring inputs on a specific device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1).
When using the default, an appropriate policy will be picked automatically. The default policy may change over time.
This function only sets the device policy for the current thread. Any subsequently started thread will again use the default policy.
| Args |
| `device_policy` | A device policy. Valid values: * None: Switch to a system default.
* 'warn': Copies the tensors which are not on the right device and logs a warning.
* 'explicit': Raises an error if the placement is not as required.
* 'silent': Silently copies the tensors. Note that this may hide performance problems as there is no notification provided when operations are blocked on the tensor being copied between devices.
* 'silent\_for\_int32': Silently copies `int32` tensors, raising errors on the other ones.
|
| Raises |
| `ValueError` | If an invalid `device_policy` is passed. |
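A minimal sketch of setting and restoring the policy for the current thread (the 'explicit' policy makes cross-device input placement an error instead of a silent copy):
```
import tensorflow as tf

tf.config.experimental.set_device_policy('explicit')
print(tf.config.experimental.get_device_policy())  # 'explicit'
# Restore the system default for this thread.
tf.config.experimental.set_device_policy(None)
```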
tensorflow tf.config.experimental.set_memory_growth tf.config.experimental.set\_memory\_growth
==========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L691-L716) |
Set if memory growth should be enabled for a `PhysicalDevice`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.set_memory_growth`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/set_memory_growth)
```
tf.config.experimental.set_memory_growth(
device, enable
)
```
If memory growth is enabled for a `PhysicalDevice`, the runtime initialization will not allocate all memory on the device. Memory growth cannot be configured on a `PhysicalDevice` with virtual devices configured.
#### For example:
```
physical_devices = tf.config.list_physical_devices('GPU')
try:
  tf.config.experimental.set_memory_growth(physical_devices[0], True)
except:
  # Invalid device or cannot modify virtual devices once initialized.
  pass
```
| Args |
| `device` | `PhysicalDevice` to configure |
| `enable` | (Boolean) Whether to enable or disable memory growth |
| Raises |
| `ValueError` | Invalid `PhysicalDevice` specified. |
| `RuntimeError` | Runtime is already initialized. |
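A common pattern (a sketch, not part of the original docstring) is to enable growth on every visible GPU before the runtime initializes, so TensorFlow does not reserve all device memory up front:
```
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
try:
  for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
except RuntimeError as e:
  # Memory growth must be set before GPUs have been initialized.
  print(e)
```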
tensorflow tf.config.experimental.enable_op_determinism tf.config.experimental.enable\_op\_determinism
==============================================
Configures TensorFlow ops to run deterministically.
```
tf.config.experimental.enable_op_determinism()
```
When op determinism is enabled, TensorFlow ops will be deterministic. This means that if an op is run multiple times with the same inputs on the same hardware, it will have the exact same outputs each time. This is useful for debugging models. Note that determinism in general comes at the expense of lower performance and so your model may run slower when op determinism is enabled.
If you want your TensorFlow program to run deterministically, put the following code near the start of your program.
```
tf.keras.utils.set_random_seed(1)
tf.config.experimental.enable_op_determinism()
```
Calling [`tf.keras.utils.set_random_seed`](../../keras/utils/set_random_seed) sets the Python seed, the NumPy seed, and the TensorFlow seed. Setting these seeds is necessary to ensure any random numbers your program generates are also deterministic.
By default, op determinism is not enabled, so ops might return different results when run with the same inputs. These differences are often caused by the use of asynchronous threads within the op nondeterministically changing the order in which floating-point numbers are added. Most of these cases of nondeterminism occur on GPUs, which have thousands of hardware threads that are used to run ops. Enabling determinism directs such ops to use a different algorithm, one that does not use threads in a nondeterministic way.
Another potential source of nondeterminism is [`tf.data`](../../data) based data processing. Typically, this can introduce nondeterminsm due to the use of parallelism in methods such as [`Dataset.map`](../../data/dataset#map) producing inputs or running stateful ops in a nondeterministic order. Enabling determinism will remove such sources of nondeterminism.
Enabling determinism will likely make your model or your [`tf.data`](../../data) data processing slower. For example, [`Dataset.map`](../../data/dataset#map) can become several orders of magnitude slower when the map function has random ops or other stateful ops. See the “Determinism and tf.data” section below for more details. In future TensorFlow releases, we plan on improving the performance of determinism, especially for common scenarios such as [`Dataset.map`](../../data/dataset#map).
Certain ops will raise an `UnimplementedError` because they do not yet have a deterministic implementation. Additionally, due to bugs, some ops might be nondeterministic and not raise an `UnimplementedError`. If you encounter such ops, please [file an issue](https://github.com/tensorflow/tensorflow/issues).
An example of enabling determinism follows. The [`tf.nn.softmax_cross_entropy_with_logits`](../../nn/softmax_cross_entropy_with_logits) op is run multiple times and the output is shown to be the same each time. This example would likely fail when run on a GPU if determinism were not enabled, because [`tf.nn.softmax_cross_entropy_with_logits`](../../nn/softmax_cross_entropy_with_logits) uses a nondeterministic algorithm on GPUs by default.
```
labels = tf.random.normal((1, 10000))
logits = tf.random.normal((1, 10000))
output = tf.nn.softmax_cross_entropy_with_logits(labels=labels,
                                                 logits=logits)
for _ in range(5):
  output2 = tf.nn.softmax_cross_entropy_with_logits(labels=labels,
                                                    logits=logits)
  tf.debugging.assert_equal(output, output2)
```
Writing deterministic models
----------------------------
You can make your models deterministic by enabling op determinism. This means that you can train a model and finish each run with exactly the same trainable variables. This also means that the inferences of your previously-trained model will be exactly the same on each run. Typically, models can be made deterministic by simply setting the seeds and enabling op determinism, as in the example above. However, to guarantee that your model operates deterministically, you must meet all the following requirements:
* Call [`tf.config.experimental.enable_op_determinism()`](enable_op_determinism), as mentioned above.
* Reproducibly reset any pseudorandom number generators (PRNGs) you’re using, such as by setting the seeds for the default PRNGs in TensorFlow, Python, and NumPy, as mentioned above. Note that certain newer NumPy classes like `numpy.random.default_rng` ignore the global NumPy seed, so a seed must be explicitly passed to such classes, if used.
* Use the same hardware configuration in every run.
* Use the same software environment in every run (OS, checkpoints, version of CUDA and TensorFlow, environmental variables, etc). Note that determinism is not guaranteed across different versions of TensorFlow.
* Do not use constructs outside TensorFlow that are nondeterministic, such as reading from `/dev/random` or using multiple threads/processes in ways that influence TensorFlow’s behavior.
* Ensure your input pipeline is deterministic. If you use [`tf.data`](../../data), this is done automatically (at the expense of performance). See "Determinism and tf.data" below for more information.
* Do not use [`tf.compat.v1.Session`](../../compat/v1/session) and [`tf.distribute.experimental.ParameterServerStrategy`](../../distribute/experimental/parameterserverstrategy), which can introduce nondeterminism. Besides ops (including [`tf.data`](../../data) ops), these are the only known potential sources of nondeterminism within TensorFlow, (if you find more, please file an issue). Note that [`tf.compat.v1.Session`](../../compat/v1/session) is required to use the TF1 API, so determinism cannot be guaranteed when using the TF1 API.
* Do not use nondeterministic custom ops.
Additional details on determinism
---------------------------------
For stateful ops to be deterministic, the state of the system must be the same every time the op is run. For example the output of [`tf.Variable.sparse_read`](../../variable#sparse_read) (obviously) depends on both the variable value and the `indices` function parameter. When determinism is enabled, the side effects of stateful ops are deterministic.
TensorFlow’s random ops, such as [`tf.random.normal`](../../random/normal), will raise a `RuntimeError` if determinism is enabled and a seed has not been set. However, attempting to generate nondeterministic random numbers using Python or NumPy will not raise such errors. Make sure you remember to set the Python and NumPy seeds. Calling [`tf.keras.utils.set_random_seed`](../../keras/utils/set_random_seed) is an easy way to set all three seeds.
Note that latency, memory consumption, throughput, and other performance characteristics are *not* made deterministic by enabling op determinism. Only op outputs and side effects are made deterministic. Additionally, a model may nondeterministically raise a [`tf.errors.ResourceExhaustedError`](../../errors/resourceexhaustederror) from a lack of memory due to the fact that memory consumption is nondeterministic.
Determinism and tf.data
-----------------------
Enabling deterministic ops makes [`tf.data`](../../data) deterministic in several ways:
1. For dataset methods with a `deterministic` argument, such as [`Dataset.map`](../../data/dataset#map) and [`Dataset.batch`](../../data/dataset#batch), the `deterministic` argument is overridden to be `True` irrespective of its setting.
2. The `tf.data.Options.experimental_deterministic` option is overridden to be `True` irrespective of its setting.
3. In [`Dataset.map`](../../data/dataset#map) and [`Dataset.interleave`](../../data/dataset#interleave), if the map or interleave function has stateful random ops or other stateful ops, the function will run serially instead of in parallel. This means the `num_parallel_calls` argument to `map` and `interleave` is effectively ignored.
4. Prefetching with [`Dataset.prefetch`](../../data/dataset#prefetch) will be disabled if any function run as part of the input pipeline has certain stateful ops. Similarly, any dataset method with a `num_parallel_calls` argument will be made to run serially if any function in the input pipeline has such stateful ops. Legacy random ops such as [`tf.random.normal`](../../random/normal) will *not* cause such datasets to be changed, but most other stateful ops will.
Unfortunately, due to (3), performance can be greatly reduced when stateful ops are used in [`Dataset.map`](../../data/dataset#map) due to no longer running the map function in parallel. A common example of stateful ops used in [`Dataset.map`](../../data/dataset#map) are random ops, such as [`tf.random.normal`](../../random/normal), which are typically used for distortions. One way to work around this is to use stateless random ops instead (see the sketch below). Alternatively, you can hoist all random ops into their own separate [`Dataset.map`](../../data/dataset#map) call, making the original [`Dataset.map`](../../data/dataset#map) call stateless and thus avoiding the need to serialize its execution.
(4) can also cause performance to be reduced, but occurs less frequently than (3) because legacy random ops do not cause (4) to take effect. However, unlike (3), when there are non-random stateful ops in a user-defined function, every `map` and `interleave` dataset is affected, instead of just the `map` or `interleave` dataset with the function that has stateful ops. Additionally, `prefetch` datasets and any dataset with the `num_parallel_calls` argument are also affected.
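A minimal sketch of the stateless-random workaround described above: deriving a per-element seed from the element itself keeps the map function free of stateful ops, so it can still run in parallel under op determinism (the distortion here is a stand-in for real preprocessing):
```
import tensorflow as tf

tf.config.experimental.enable_op_determinism()
ds = tf.data.Dataset.range(100)

def distort(i):
  # Stateless random op seeded by the element index: deterministic and
  # safe to run in parallel, unlike tf.random.uniform.
  seed = tf.stack([i, tf.constant(0, tf.int64)])
  return tf.random.stateless_uniform([], seed=seed)

ds = ds.map(distort, num_parallel_calls=tf.data.AUTOTUNE)
```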
tensorflow tf.config.experimental.get_memory_growth tf.config.experimental.get\_memory\_growth
==========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L662-L688) |
Get if memory growth is enabled for a `PhysicalDevice`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.get_memory_growth`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/get_memory_growth)
```
tf.config.experimental.get_memory_growth(
device
)
```
If memory growth is enabled for a `PhysicalDevice`, the runtime initialization will not allocate all memory on the device.
#### For example:
```
physical_devices = tf.config.list_physical_devices('GPU')
try:
  tf.config.experimental.set_memory_growth(physical_devices[0], True)
  assert tf.config.experimental.get_memory_growth(physical_devices[0])
except:
  # Invalid device or cannot modify virtual devices once initialized.
  pass
```
| Args |
| `device` | `PhysicalDevice` to query |
| Returns |
| A boolean indicating the memory growth setting for the `PhysicalDevice`. |
| Raises |
| `ValueError` | Invalid `PhysicalDevice` specified. |
tensorflow tf.config.experimental.get_device_policy tf.config.experimental.get\_device\_policy
==========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L280-L305) |
Gets the current device policy.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.experimental.get_device_policy`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/get_device_policy)
```
tf.config.experimental.get_device_policy()
```
The device policy controls how operations requiring inputs on a specific device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1).
This function only gets the device policy for the current thread. Any subsequently started thread will again use the default policy.
| Returns |
| Current thread device policy |
tensorflow tf.config.optimizer.set_experimental_options tf.config.optimizer.set\_experimental\_options
==============================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L203-L241) |
Set experimental optimizer options.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.optimizer.set_experimental_options`](https://www.tensorflow.org/api_docs/python/tf/config/optimizer/set_experimental_options)
```
tf.config.optimizer.set_experimental_options(
options
)
```
Note that optimizations are only applied in graph mode (within tf.function). In addition, as these are experimental options, the list is subject to change.
| Args |
| `options` | Dictionary of experimental optimizer options to configure. Valid keys: * layout\_optimizer: Optimize tensor layouts, e.g. try to use the NCHW layout on GPUs, which is faster.
* constant\_folding: Fold constants. Statically infer the value of tensors when possible and materialize the result using constants.
* shape\_optimization: Simplify computations made on shapes.
* remapping: Remap subgraphs onto more efficient implementations.
* arithmetic\_optimization: Simplify arithmetic ops with common sub-expression elimination and arithmetic simplification.
* dependency\_optimization: Control dependency optimizations. Remove redundant control dependencies, which may enable other optimizations. This optimizer is also essential for pruning Identity and NoOp nodes.
* loop\_optimization: Loop optimizations.
* function\_optimization: Function optimizations and inlining.
* debug\_stripper: Strips debug-related nodes from the graph.
* disable\_model\_pruning: Disable removal of unnecessary ops from the graph.
* scoped\_allocator\_optimization: Try to allocate some independent Op outputs contiguously in order to merge or eliminate downstream Ops.
* pin\_to\_host\_optimization: Force small ops onto the CPU.
* implementation\_selector: Enable the swap of kernel implementations based on the device placement.
* auto\_mixed\_precision: Change certain float32 ops to float16 on Volta GPUs and above. Without the use of loss scaling, this can cause numerical underflow (see [`keras.mixed_precision.experimental.LossScaleOptimizer`](https://www.tensorflow.org/api_docs/python/tf/keras/mixed_precision/experimental/LossScaleOptimizer)).
* disable\_meta\_optimizer: Disable the entire meta optimizer.
* min\_graph\_nodes: The minimum number of nodes in a graph for the optimizer to run. For smaller graphs, optimization is skipped.
|
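For example, a short sketch that turns a couple of these options on and reads the resulting configuration back:
```
import tensorflow as tf

tf.config.optimizer.set_experimental_options({
    'constant_folding': True,
    'debug_stripper': True,
})
print(tf.config.optimizer.get_experimental_options())
```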
tensorflow tf.config.optimizer.get_jit tf.config.optimizer.get\_jit
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L147-L158) |
Returns JIT compilation configuration for code inside [`tf.function`](../../function).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.optimizer.get_jit`](https://www.tensorflow.org/api_docs/python/tf/config/optimizer/get_jit)
```
tf.config.optimizer.get_jit() -> str
```
Possible return values:
* `"autoclustering"` if [autoclustering](https://www.tensorflow.org/xla#auto-clustering) is enabled.
* `""` when no default compilation is applied.
tensorflow tf.config.optimizer.set_jit tf.config.optimizer.set\_jit
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L161-L184) |
Configure JIT compilation. (deprecated argument values)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.optimizer.set_jit`](https://www.tensorflow.org/api_docs/python/tf/config/optimizer/set_jit)
```
tf.config.optimizer.set_jit(
enabled: Union[bool, str]
)
```
>
> **Note:** compilation is only applied to code that is compiled into a graph (in TF2 that's only a code inside [`tf.function`](../../function)).
>
| Args |
| `enabled` | JIT compilation configuration. Possible values: * `"autoclustering"` (`True` is a deprecated alias): perform [autoclustering](https://www.tensorflow.org/xla#auto-clustering) (automatically identify and compile clusters of nodes) on all graphs using [XLA](https://www.tensorflow.org/xla).
* `False`: do not automatically compile any graphs.
|
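A minimal sketch of enabling autoclustering and checking the setting (the function body is an arbitrary example; compilation only applies inside [`tf.function`](../../function)):
```
import tensorflow as tf

tf.config.optimizer.set_jit('autoclustering')
print(tf.config.optimizer.get_jit())  # 'autoclustering'

@tf.function
def f(x):
  return x * x + x  # eligible for XLA autoclustering

print(f(tf.constant(2.0)))
```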
tensorflow tf.config.optimizer.get_experimental_options tf.config.optimizer.get\_experimental\_options
==============================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L187-L200) |
Get experimental optimizer options.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.optimizer.get_experimental_options`](https://www.tensorflow.org/api_docs/python/tf/config/optimizer/get_experimental_options)
```
tf.config.optimizer.get_experimental_options()
```
Refer to tf.config.optimizer.set\_experimental\_options for a list of current options.
Note that optimizations are only applied in graph mode (within tf.function). In addition, as these are experimental options, the list is subject to change.
| Returns |
| Dictionary of configured experimental optimizer options |
tensorflow tf.config.threading.set_inter_op_parallelism_threads tf.config.threading.set\_inter\_op\_parallelism\_threads
========================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L134-L144) |
Set number of threads used for parallelism between independent operations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.threading.set_inter_op_parallelism_threads`](https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads)
```
tf.config.threading.set_inter_op_parallelism_threads(
num_threads
)
```
Determines the number of threads used by independent non-blocking operations. 0 means the system picks an appropriate number.
| Args |
| `num_threads` | Number of parallel threads |
tensorflow tf.config.threading.set_intra_op_parallelism_threads tf.config.threading.set\_intra\_op\_parallelism\_threads
========================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L107-L118) |
Set number of threads used within an individual op for parallelism.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.threading.set_intra_op_parallelism_threads`](https://www.tensorflow.org/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads)
```
tf.config.threading.set_intra_op_parallelism_threads(
num_threads
)
```
Certain operations like matrix multiplication and reductions can utilize parallel threads for speed ups. A value of 0 means the system picks an appropriate number.
| Args |
| `num_threads` | Number of parallel threads |
tensorflow tf.config.threading.get_inter_op_parallelism_threads tf.config.threading.get\_inter\_op\_parallelism\_threads
========================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L121-L131) |
Get number of threads used for parallelism between independent operations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.threading.get_inter_op_parallelism_threads`](https://www.tensorflow.org/api_docs/python/tf/config/threading/get_inter_op_parallelism_threads)
```
tf.config.threading.get_inter_op_parallelism_threads()
```
Determines the number of threads used by independent non-blocking operations. 0 means the system picks an appropriate number.
| Returns |
| Number of parallel threads |
tensorflow tf.config.threading.get_intra_op_parallelism_threads tf.config.threading.get\_intra\_op\_parallelism\_threads
========================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/config.py#L93-L104) |
Get number of threads used within an individual op for parallelism.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.config.threading.get_intra_op_parallelism_threads`](https://www.tensorflow.org/api_docs/python/tf/config/threading/get_intra_op_parallelism_threads)
```
tf.config.threading.get_intra_op_parallelism_threads()
```
Certain operations like matrix multiplication and reductions can utilize parallel threads for speed ups. A value of 0 means the system picks an appropriate number.
| Returns |
| Number of parallel threads |
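The four threading functions are typically used together near the start of a program, before TensorFlow initializes its thread pools; a minimal sketch (the thread counts are arbitrary):
```
import tensorflow as tf

tf.config.threading.set_intra_op_parallelism_threads(4)
tf.config.threading.set_inter_op_parallelism_threads(2)
print(tf.config.threading.get_intra_op_parallelism_threads())  # 4
print(tf.config.threading.get_inter_op_parallelism_threads())  # 2
```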
tensorflow tf.lite.Optimize tf.lite.Optimize
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L101-L156) |
Enum defining the optimizations to apply when generating a tflite model.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.Optimize`](https://www.tensorflow.org/api_docs/python/tf/lite/Optimize)
* `DEFAULT`: Default optimization strategy that quantizes model weights. Enhanced optimizations are gained by providing a representative dataset that quantizes biases and activations as well. The converter will do its best to reduce size and latency, while minimizing the loss in accuracy.
* `OPTIMIZE_FOR_SIZE`: Deprecated. Does the same as `DEFAULT`.
* `OPTIMIZE_FOR_LATENCY`: Deprecated. Does the same as `DEFAULT`.
* `EXPERIMENTAL_SPARSITY`: Experimental flag, subject to change. Enables optimization by taking advantage of sparse model weights trained with pruning. The converter will inspect the sparsity pattern of the model weights and do its best to improve size and latency. The flag can be used alone to optimize float32 models with sparse weights, and together with the `DEFAULT` optimization mode to optimize quantized models with sparse weights.
| Class Variables |
| DEFAULT | `<Optimize.DEFAULT: 'DEFAULT'>` |
| EXPERIMENTAL\_SPARSITY | `<Optimize.EXPERIMENTAL_SPARSITY: 'EXPERIMENTAL_SPARSITY'>` |
| OPTIMIZE\_FOR\_LATENCY | `<Optimize.OPTIMIZE_FOR_LATENCY: 'OPTIMIZE_FOR_LATENCY'>` |
| OPTIMIZE\_FOR\_SIZE | `<Optimize.OPTIMIZE_FOR_SIZE: 'OPTIMIZE_FOR_SIZE'>` |
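A minimal sketch of applying the default optimization during conversion (`saved_model_dir` is a hypothetical path to an existing SavedModel):
```
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = {tf.lite.Optimize.DEFAULT}
tflite_model = converter.convert()
```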
tensorflow tf.lite.TargetSpec tf.lite.TargetSpec
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L185-L229) |
Specification of target device used to optimize the model.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.TargetSpec`](https://www.tensorflow.org/api_docs/python/tf/lite/TargetSpec)
```
tf.lite.TargetSpec(
supported_ops=None,
supported_types=None,
experimental_select_user_tf_ops=None,
experimental_supported_backends=None
)
```
| Attributes |
| `supported_ops` | Experimental flag, subject to change. Set of [`tf.lite.OpsSet`](opsset) options, where each option represents a set of operators supported by the target device. (default {tf.lite.OpsSet.TFLITE\_BUILTINS}) |
| `supported_types` | Set of [`tf.dtypes.DType`](../dtypes/dtype) data types supported on the target device. If initialized, optimization might be driven by the smallest type in this set. (default set()) |
| `experimental_select_user_tf_ops` | Experimental flag, subject to change. Set of user's TensorFlow operators' names that are required in the TensorFlow Lite runtime. These ops will be exported as select TensorFlow ops in the model (in conjunction with the tf.lite.OpsSet.SELECT\_TF\_OPS flag). This is an advanced feature that should only be used if the client is using TF ops that may not be linked in by default with the TF ops that are provided when using the SELECT\_TF\_OPS path. The client is responsible for linking these ops into the target runtime. |
| `experimental_supported_backends` | Experimental flag, subject to change. Set containing names of supported backends. Currently only "GPU" is supported, more options will be available later. |
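A minimal sketch of configuring a target spec on a converter (`saved_model_dir` is again a hypothetical path; the chosen ops sets and types are illustrative):
```
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer built-in TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to select TF ops
]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
```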
tensorflow tf.lite.TFLiteConverter tf.lite.TFLiteConverter
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L1625-L1860) |
Converts a TensorFlow model into a TensorFlow Lite model.
```
tf.lite.TFLiteConverter(
funcs, trackable_obj=None
)
```
#### Example usage:
```
# Converting a SavedModel to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
# Converting a tf.Keras model to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Converting ConcreteFunctions to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_concrete_functions([func], model)
tflite_model = converter.convert()
# Converting a Jax model to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.experimental_from_jax([func], [[
    ('input1', input1), ('input2', input2)]])
tflite_model = converter.convert()
```
| Args |
| `funcs` | List of TensorFlow ConcreteFunctions. The list should not contain duplicate elements. |
| `trackable_obj` | tf.AutoTrackable object associated with `funcs`. A reference to this object needs to be maintained so that Variables do not get garbage collected since functions have a weak reference to Variables. This is only required when the tf.AutoTrackable object is not maintained by the user (e.g. `from_saved_model`). |
| Attributes |
| `optimizations` | Experimental flag, subject to change. Set of optimizations to apply. e.g {tf.lite.Optimize.DEFAULT}. (default None, must be None or a set of values of type [`tf.lite.Optimize`](optimize)) |
| `representative_dataset` | A generator function used for integer quantization where each generated sample has the same order, type and shape as the inputs to the model. Usually, this is a small subset of a few hundred samples randomly chosen, in no particular order, from the training or evaluation dataset. This is an optional attribute, but required for full integer quantization, i.e., if [`tf.int8`](../../tf#int8) is the only supported type in `target_spec.supported_types`. Refer to [`tf.lite.RepresentativeDataset`](representativedataset). (default None) |
| `target_spec` | Experimental flag, subject to change. Specifications of target device, including supported ops set, supported types and a set of user's defined TensorFlow operators required in the TensorFlow Lite runtime. Refer to [`tf.lite.TargetSpec`](targetspec). |
| `inference_input_type` | Data type of the input layer. Note that integer types (tf.int8 and tf.uint8) are currently only supported for post training integer quantization and quantization aware training. (default tf.float32, must be in {tf.float32, tf.int8, tf.uint8}) |
| `inference_output_type` | Data type of the output layer. Note that integer types (tf.int8 and tf.uint8) are currently only supported for post training integer quantization and quantization aware training. (default tf.float32, must be in {tf.float32, tf.int8, tf.uint8}) |
| `allow_custom_ops` | Boolean indicating whether to allow custom operations. When False, any unknown operation is an error. When True, custom ops are created for any op that is unknown. The developer needs to provide these to the TensorFlow Lite runtime with a custom resolver. (default False) |
| `exclude_conversion_metadata` | Whether not to embed the conversion metadata into the converted model. (default False) |
| `experimental_new_converter` | Experimental flag, subject to change. Enables MLIR-based conversion. (default True) |
| `experimental_new_quantizer` | Experimental flag, subject to change. Enables MLIR-based quantization conversion instead of Flatbuffer-based conversion. (default True) |
| `experimental_enable_resource_variables` | Experimental flag, subject to change. Enables resource variables to be converted by this converter. This is only allowed if from\_saved\_model interface is used. (default True) |
Methods
-------
### `convert`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L1847-L1860)
```
convert()
```
Converts a TensorFlow GraphDef based on instance variables.
| Returns |
| The converted data in serialized format. |
| Raises |
| `ValueError` | No concrete function is specified. Multiple concrete functions are specified. Input shape is not specified. Invalid quantization parameters. |
### `experimental_from_jax`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L1824-L1844)
```
@classmethod
experimental_from_jax(
serving_funcs, inputs
)
```
Creates a TFLiteConverter object from a Jax model with its inputs.
| Args |
| `serving_funcs` | An array of Jax functions with all the weights already applied. |
| `inputs` | An array of lists of Jax input placeholder tuples, e.g. `('input1', jnp.zeros(INPUT_SHAPE))`. Each list of tuples should correspond to one serving function. |
| Returns |
| TFLiteConverter object. |
### `from_concrete_functions`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L1704-L1740)
```
@classmethod
from_concrete_functions(
funcs, trackable_obj=None
)
```
Creates a TFLiteConverter object from ConcreteFunctions.
| Args |
| `funcs` | List of TensorFlow ConcreteFunctions. The list should not contain duplicate elements. Currently converter can only convert a single ConcreteFunction. Converting multiple functions is under development. |
| `trackable_obj` | An `AutoTrackable` object (typically `tf.Module`) associated with `funcs`. A reference to this object needs to be maintained so that Variables do not get garbage collected since functions have a weak reference to Variables. |
| Returns |
| TFLiteConverter object. |
| Raises |
| Invalid input type. |
### `from_keras_model`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L1808-L1822)
```
@classmethod
from_keras_model(
model
)
```
Creates a TFLiteConverter object from a Keras model.
| Args |
| `model` | A `tf.keras.Model` instance. |
| Returns |
| TFLiteConverter object. |
### `from_saved_model`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L1742-L1806)
```
@classmethod
from_saved_model(
saved_model_dir, signature_keys=None, tags=None
)
```
Creates a TFLiteConverter object from a SavedModel directory.
| Args |
| `saved_model_dir` | SavedModel directory to convert. |
| `signature_keys` | List of keys identifying SignatureDef containing inputs and outputs. Elements should not be duplicated. By default the `signatures` attribute of the MetaGraphDef is used. (default saved\_model.signatures) |
| `tags` | Set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be present. (default {tf.saved\_model.SERVING} or {'serve'}) |
| Returns |
| TFLiteConverter object. |
| Raises |
| Invalid signature keys. |
tensorflow tf.lite.Interpreter tf.lite.Interpreter
===================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L355-L940) |
Interpreter interface for running TensorFlow Lite models.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.Interpreter`](https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter)
```
tf.lite.Interpreter(
model_path=None,
model_content=None,
experimental_delegates=None,
num_threads=None,
experimental_op_resolver_type=tf.lite.experimental.OpResolverType.AUTO,
experimental_preserve_all_tensors=False
)
```
Models obtained from `TFLiteConverter` can be run in Python with `Interpreter`.
As an example, let's generate a simple Keras model and convert it to TFLite (`TFLiteConverter` also supports other input formats with `from_saved_model` and `from_concrete_functions`):
```
import numpy as np
import tensorflow as tf

x = np.array([[1.], [2.]])
y = np.array([[2.], [4.]])
model = tf.keras.models.Sequential([
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(x, y, epochs=1)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```
`tflite_model` can be saved to a file and loaded later, or directly into the `Interpreter`. Since TensorFlow Lite pre-plans tensor allocations to optimize inference, the user needs to call `allocate_tensors()` before any inference.
```
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors() # Needed before execution!
```
#### Sample execution:
```
output = interpreter.get_output_details()[0] # Model has single output.
input = interpreter.get_input_details()[0] # Model has single input.
input_data = tf.constant(1., shape=[1, 1])
interpreter.set_tensor(input['index'], input_data)
interpreter.invoke()
interpreter.get_tensor(output['index']).shape
(1, 1)
```
Use `get_signature_runner()` for a more user-friendly inference API.
| Args |
| `model_path` | Path to TF-Lite Flatbuffer file. |
| `model_content` | Content of model. |
| `experimental_delegates` | Experimental. Subject to change. List of [TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates) objects returned by lite.load\_delegate(). |
| `num_threads` | Sets the number of threads used by the interpreter and available to CPU kernels. If not set, the interpreter will use an implementation-dependent default number of threads. Currently, only a subset of kernels, such as conv, support multi-threading. num\_threads should be >= -1. Setting num\_threads to 0 has the effect to disable multithreading, which is equivalent to setting num\_threads to 1. If set to the value -1, the number of threads used will be implementation-defined and platform-dependent. |
| `experimental_op_resolver_type` | The op resolver used by the interpreter. It must be an instance of OpResolverType. By default, we use the built-in op resolver which corresponds to tflite::ops::builtin::BuiltinOpResolver in C++. |
| `experimental_preserve_all_tensors` | If true, then intermediate tensors used during computation are preserved for inspection, and if the passed op resolver type is AUTO or BUILTIN, the type will be changed to BUILTIN\_WITHOUT\_DEFAULT\_DELEGATES so that no Tensorflow Lite default delegates are applied. If false, getting intermediate tensors could result in undefined values or None, especially when the graph is successfully modified by the Tensorflow Lite default delegate. |
| Raises |
| `ValueError` | If the interpreter was unable to create. |
Methods
-------
### `allocate_tensors`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L511-L513)
```
allocate_tensors()
```
### `get_input_details`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L651-L679)
```
get_input_details()
```
Gets model input tensor details.
| Returns |
| A list in which each item is a dictionary with details about an input tensor. Each dictionary contains the following fields that describe the tensor: * `name`: The tensor name.
* `index`: The tensor index in the interpreter.
* `shape`: The shape of the tensor.
* `shape_signature`: Same as `shape` for models with known/fixed shapes. If any dimension sizes are unknown, they are indicated with `-1`.
* `dtype`: The numpy data type (such as `np.int32` or `np.uint8`).
* `quantization`: Deprecated, use `quantization_parameters`. This field only works for per-tensor quantization, whereas `quantization_parameters` works in all cases.
* `quantization_parameters`: A dictionary of parameters used to quantize the tensor: ~ `scales`: List of scales (one if per-tensor quantization). ~ `zero_points`: List of zero\_points (one if per-tensor quantization). ~ `quantized_dimension`: Specifies the dimension of per-axis quantization, in the case of multiple scales/zero\_points.
* `sparsity_parameters`: A dictionary of parameters used to encode a sparse tensor. This is empty if the tensor is dense.
|
### `get_output_details`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L728-L738)
```
get_output_details()
```
Gets model output tensor details.
| Returns |
| A list in which each item is a dictionary with details about an output tensor. The dictionary contains the same fields as described for `get_input_details()`. |
### `get_signature_list`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L740-L765)
```
get_signature_list()
```
Gets list of SignatureDefs in the model.
For example:
```
signatures = interpreter.get_signature_list()
print(signatures)
# {
#   'add': {'inputs': ['x', 'y'], 'outputs': ['output_0']}
# }
```
Then, using the names in the signature list, you can get a callable from `get_signature_runner()`.
| Returns |
| A list of SignatureDef details in a dictionary structure. It is keyed on the SignatureDef method name, and the value holds dictionary of inputs and outputs. |
### `get_signature_runner`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L790-L835)
```
get_signature_runner(
signature_key=None
)
```
Gets callable for inference of specific SignatureDef.
Example usage,
```
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
fn = interpreter.get_signature_runner('div_with_remainder')
output = fn(x=np.array([3]), y=np.array([2]))
print(output)
# {
#   'quotient': array([1.], dtype=float32),
#   'remainder': array([1.], dtype=float32)
# }
```
`None` can be passed for `signature_key` if the model has only a single SignatureDef.
All names used are the names defined in that specific SignatureDef.
| Args |
| `signature_key` | Signature key for the SignatureDef, it can be None if and only if the model has a single SignatureDef. Default value is None. |
| Returns |
| This returns a callable that can run inference for the SignatureDef defined by the `signature_key` argument. The callable takes keyword arguments corresponding to the inputs of the SignatureDef, which should be numpy values. The callable returns a dictionary that maps output names to the numpy values of the computed results. |
| Raises |
| `ValueError` | If passed signature\_key is invalid. |
### `get_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L837-L850)
```
get_tensor(
tensor_index
)
```
Gets the value of the output tensor (get a copy).
If you wish to avoid the copy, use `tensor()`. This function cannot be used to read intermediate results.
| Args |
| `tensor_index` | Tensor index of tensor to get. This value can be gotten from the 'index' field in get\_output\_details. |
| Returns |
| a numpy array. |
### `get_tensor_details`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L634-L649)
```
get_tensor_details()
```
Gets tensor details for every tensor with valid tensor details.
Tensors where required information about the tensor is not found are not added to the list. This includes temporary tensors without a name.
| Returns |
| A list of dictionaries containing tensor information. |
### `invoke`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L902-L915)
```
invoke()
```
Invoke the interpreter.
Be sure to set the input sizes, allocate tensors and fill values before calling this. Also, note that this function releases the GIL so heavy computation can be done in the background while the Python interpreter continues. No other function on this object should be called while the invoke() call has not finished.
| Raises |
| `ValueError` | When the underlying interpreter fails raise ValueError. |
### `reset_all_variables`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L917-L918)
```
reset_all_variables()
```
### `resize_tensor_input`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L699-L726)
```
resize_tensor_input(
input_index, tensor_size, strict=False
)
```
Resizes an input tensor.
| Args |
| `input_index` | Tensor index of input to set. This value can be gotten from the 'index' field in get\_input\_details. |
| `tensor_size` | The tensor\_shape to resize the input to. |
| `strict` | Only unknown dimensions can be resized when `strict` is True. Unknown dimensions are indicated as `-1` in the `shape_signature` attribute of a given tensor. (default False) |
| Raises |
| `ValueError` | If the interpreter could not resize the input tensor. |
#### Usage:
```
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.resize_tensor_input(0, [num_test_images, 224, 224, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(0, test_images)
interpreter.invoke()
```
### `set_tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L681-L697)
```
set_tensor(
tensor_index, value
)
```
Sets the value of the input tensor.
Note this copies data in `value`.
If you want to avoid copying, you can use the `tensor()` function to get a numpy buffer pointing to the input buffer in the tflite interpreter.
| Args |
| `tensor_index` | Tensor index of tensor to set. This value can be gotten from the 'index' field in get\_input\_details. |
| `value` | Value of tensor to set. |
| Raises |
| `ValueError` | If the interpreter could not set the tensor. |
### `tensor`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L852-L900)
```
tensor(
tensor_index
)
```
Returns function that gives a numpy view of the current tensor buffer.
This allows reading and writing to this tensor without copies. This more closely mirrors the C++ Interpreter class interface's tensor() member, hence the name. Be careful not to hold these output references through calls to `allocate_tensors()` and `invoke()`. This function cannot be used to read intermediate results.
#### Usage:
```
interpreter.allocate_tensors()
input = interpreter.tensor(interpreter.get_input_details()[0]["index"])
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
for i in range(10):
input().fill(3.)
interpreter.invoke()
print("inference %s" % output())
```
Notice how this function avoids making a numpy array directly. This is because it is important not to hold actual numpy views to the data longer than necessary. If you do, then the interpreter can no longer be invoked, because it is possible the interpreter would resize and invalidate the referenced tensors. The NumPy API doesn't allow any mutability of the underlying buffers.
#### WRONG:
```
input = interpreter.tensor(interpreter.get_input_details()[0]["index"])()
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])()
interpreter.allocate_tensors() # This will throw RuntimeError
for i in range(10):
input.fill(3.)
  interpreter.invoke()  # This will throw RuntimeError since input and output still hold numpy views
```
| Args |
| `tensor_index` | Tensor index of tensor to get. This value can be obtained from the 'index' field in get\_output\_details. |
| Returns |
| A function that can return a new numpy array pointing to the internal TFLite tensor state at any point. It is safe to hold the function forever, but it is not safe to hold the numpy array forever. |
tensorflow Module: tf.lite.experimental Module: tf.lite.experimental
============================
Public API for tf.lite.experimental namespace.
Modules
-------
[`authoring`](experimental/authoring) module: Public API for tf.lite.experimental.authoring namespace.
Classes
-------
[`class Analyzer`](experimental/analyzer): Provides a collection of TFLite model analyzer tools.
[`class OpResolverType`](experimental/opresolvertype): Different types of op resolvers for Tensorflow Lite.
[`class QuantizationDebugOptions`](experimental/quantizationdebugoptions): Debug options to set up a given QuantizationDebugger.
[`class QuantizationDebugger`](experimental/quantizationdebugger): Debugger for Quantized TensorFlow Lite debug mode models.
Functions
---------
[`load_delegate(...)`](experimental/load_delegate): Returns loaded Delegate object.
tensorflow tf.lite.RepresentativeDataset tf.lite.RepresentativeDataset
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/lite.py#L161-L181) |
Representative dataset used to optimize the model.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.RepresentativeDataset`](https://www.tensorflow.org/api_docs/python/tf/lite/RepresentativeDataset)
```
tf.lite.RepresentativeDataset(
input_gen
)
```
This is a generator function that provides a small dataset to calibrate or estimate the range, i.e., (min, max), of all floating-point arrays in the model (such as model input, activation outputs of intermediate layers, and model output) for quantization. Usually, this is a small subset of a few hundred samples randomly chosen, in no particular order, from the training or evaluation dataset.
| Args |
| `input_gen` | A generator function that generates input samples for the model and has the same order, type and shape as the inputs to the model. Usually, this is a small subset of a few hundred samples randomly chosen, in no particular order, from the training or evaluation dataset. |
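#### Example:
A sketch of how a representative dataset is typically wired into post-training quantization. Here `calibration_samples` (a small list of float32 numpy arrays shaped like the model input) and `saved_model_dir` are placeholders:
```
import tensorflow as tf

def representative_gen():
  for sample in calibration_samples:  # placeholder: list of float32 arrays
    yield [sample]  # one entry per model input

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)  # placeholder
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.lite.RepresentativeDataset(representative_gen)
tflite_model = converter.convert()
```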
tensorflow tf.lite.OpsSet tf.lite.OpsSet
==============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/convert.py#L160-L197) |
Enum class defining the sets of ops available to generate TFLite models.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.OpsSet`](https://www.tensorflow.org/api_docs/python/tf/lite/OpsSet)
| Class Variables |
| EXPERIMENTAL\_TFLITE\_BUILTINS\_ACTIVATIONS\_INT16\_WEIGHTS\_INT8 | `<OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8: 'EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8'>` |
| SELECT\_TF\_OPS | `<OpsSet.SELECT_TF_OPS: 'SELECT_TF_OPS'>` |
| TFLITE\_BUILTINS | `<OpsSet.TFLITE_BUILTINS: 'TFLITE_BUILTINS'>` |
| TFLITE\_BUILTINS\_INT8 | `<OpsSet.TFLITE_BUILTINS_INT8: 'TFLITE_BUILTINS_INT8'>` |
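These values are typically assigned to a converter's `target_spec.supported_ops`. A minimal sketch (`saved_model_dir` is a placeholder):
```
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)  # placeholder
# Prefer TFLite builtins, but fall back to select TensorFlow ops where needed.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
```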
tensorflow tf.lite.experimental.Analyzer tf.lite.experimental.Analyzer
=============================
Provides a collection of TFLite model analyzer tools.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.experimental.Analyzer`](https://www.tensorflow.org/api_docs/python/tf/lite/experimental/Analyzer)
#### Example:
```
model = tf.keras.applications.MobileNetV3Large()
fb_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
tf.lite.experimental.Analyzer.analyze(model_content=fb_model)
# === TFLite ModelAnalyzer ===
#
# Your TFLite model has '1' subgraph(s). In the subgraph description below,
# T# represents the Tensor numbers. For example, in Subgraph#0, the MUL op
# takes tensor #0 and tensor #19 as input and produces tensor #136 as output.
#
# Subgraph#0 main(T#0) -> [T#263]
# Op#0 MUL(T#0, T#19) -> [T#136]
# Op#1 ADD(T#136, T#18) -> [T#137]
# Op#2 CONV_2D(T#137, T#44, T#93) -> [T#138]
# Op#3 HARD_SWISH(T#138) -> [T#139]
# Op#4 DEPTHWISE_CONV_2D(T#139, T#94, T#24) -> [T#140]
# ...
```
Methods
-------
### `analyze`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/analyzer.py#L63-L105)
```
@staticmethod
analyze(
model_path=None, model_content=None, gpu_compatibility=False, **kwargs
)
```
Analyzes the given tflite\_model by dumping its model structure.
This tool provides a way to understand a TFLite flatbuffer model by dumping its internal graph structure. It also provides additional features such as checking GPU delegate compatibility.
| Args |
| `model_path` | TFLite flatbuffer model path. |
| `model_content` | TFLite flatbuffer model object. |
| `gpu_compatibility` | Whether to check GPU delegate compatibility. |
| `**kwargs` | Experimental keyword arguments to analyze API. |
| Returns |
| Prints the analysis report to console output. |
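For example, checking GPU delegate compatibility for a model file on disk might look like this (`model.tflite` is a placeholder path):
```
tf.lite.experimental.Analyzer.analyze(model_path='model.tflite',
                                      gpu_compatibility=True)
```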
tensorflow tf.lite.experimental.QuantizationDebugger tf.lite.experimental.QuantizationDebugger
=========================================
Debugger for Quantized TensorFlow Lite debug mode models.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.experimental.QuantizationDebugger`](https://www.tensorflow.org/api_docs/python/tf/lite/experimental/QuantizationDebugger)
```
tf.lite.experimental.QuantizationDebugger(
quant_debug_model_path: Optional[str] = None,
quant_debug_model_content: Optional[bytes] = None,
float_model_path: Optional[str] = None,
float_model_content: Optional[bytes] = None,
debug_dataset: Optional[Callable[[], Iterable[Sequence[np.ndarray]]]] = None,
debug_options: Optional[tf.lite.experimental.QuantizationDebugOptions] = None,
converter: Optional[TFLiteConverter] = None
) -> None
```
This can run TensorFlow Lite models converted with debug ops and collect debug information. This debugger calculates statistics from user-defined post-processing functions as well as default ones.
| Args |
| `quant_debug_model_path` | Path to the quantized debug TFLite model file. |
| `quant_debug_model_content` | Content of the quantized debug TFLite model. |
| `float_model_path` | Path to float TFLite model file. |
| `float_model_content` | Content of the float TFLite model. |
| `debug_dataset` | A factory function that returns a dataset generator, used to generate input samples (a list of np.ndarray) for the model. The generated elements must have the same types and shapes as the inputs to the model. |
| `debug_options` | Debug options to debug the given model. |
| `converter` | Optional. If given, the converter is used instead of the quantized model. |
| Raises |
| `ValueError` | If the debugger was unable to be created. |
| Attributes |
| `options` | |
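#### Example:
A sketch of a typical debugging session. The model paths are placeholders, and `representative_gen` is assumed to be a generator factory like the one used for `tf.lite.RepresentativeDataset`:
```
debugger = tf.lite.experimental.QuantizationDebugger(
    quant_debug_model_path='quantized_debug_model.tflite',  # placeholder
    float_model_path='float_model.tflite',                  # placeholder
    debug_dataset=representative_gen)
debugger.run()  # runs both models and collects per-layer statistics
with open('layer_statistics.csv', 'w') as f:
  debugger.layer_statistics_dump(f)
```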
Methods
-------
### `get_debug_quantized_model`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/tools/optimize/debugging/python/debugger.py#L261-L273)
```
get_debug_quantized_model() -> bytes
```
Returns an instrumented quantized model.
Converts the quantized model with the initialized converter and returns the model bytes. The model will be instrumented with numeric verification operations and should only be used for debugging.
| Returns |
| Model bytes corresponding to the model. |
| Raises |
| `ValueError` | if converter is not passed to the debugger. |
### `get_nondebug_quantized_model`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/tools/optimize/debugging/python/debugger.py#L247-L259)
```
get_nondebug_quantized_model() -> bytes
```
Returns a non-instrumented quantized model.
Converts the quantized model with the initialized converter and returns the non-debug model bytes. The model will not be instrumented with numeric verification operations.
| Returns |
| Model bytes corresponding to the model. |
| Raises |
| `ValueError` | if converter is not passed to the debugger. |
### `layer_statistics_dump`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/tools/optimize/debugging/python/debugger.py#L521-L544)
```
layer_statistics_dump(
file: IO[str]
) -> None
```
Dumps layer statistics into `file` in CSV format.
| Args |
| `file` | file, or file-like object to write. |
### `run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/tools/optimize/debugging/python/debugger.py#L326-L330)
```
run() -> None
```
Runs models and gets metrics.
tensorflow tf.lite.experimental.load_delegate tf.lite.experimental.load\_delegate
===================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/lite/python/interpreter.py#L133-L178) |
Returns loaded Delegate object.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.experimental.load_delegate`](https://www.tensorflow.org/api_docs/python/tf/lite/experimental/load_delegate)
```
tf.lite.experimental.load_delegate(
library, options=None
)
```
#### Example usage:
```
import tensorflow as tf
try:
delegate = tf.lite.experimental.load_delegate('delegate.so')
except ValueError:
  delegate = None  # Fall back to CPU.
if delegate:
interpreter = tf.lite.Interpreter(
model_path='model.tflite',
experimental_delegates=[delegate])
else:
interpreter = tf.lite.Interpreter(model_path='model.tflite')
```
This is typically used to leverage EdgeTPU for running TensorFlow Lite models. For more information see: <https://coral.ai/docs/edgetpu/tflite-python/>
| Args |
| `library` | Name of shared library containing the [TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates). |
| `options` | Dictionary of options that are required to load the delegate. All keys and values in the dictionary should be convertible to str. Consult the documentation of the specific delegate for required and legal options. (default None) |
| Returns |
| Delegate object. |
| Raises |
| `ValueError` | Delegate failed to load. |
| `RuntimeError` | If delegate loading is used on an unsupported platform. |
tensorflow tf.lite.experimental.OpResolverType tf.lite.experimental.OpResolverType
===================================
Different types of op resolvers for Tensorflow Lite.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.experimental.OpResolverType`](https://www.tensorflow.org/api_docs/python/tf/lite/experimental/OpResolverType)
* `AUTO`: Indicates the op resolver that is chosen by default in TfLite Python, which is the "BUILTIN" as described below.
* `BUILTIN`: Indicates the op resolver for built-in ops with optimized kernel implementation.
* `BUILTIN_REF`: Indicates the op resolver for built-in ops with reference kernel implementation. It's generally used for testing and debugging.
* `BUILTIN_WITHOUT_DEFAULT_DELEGATES`: Indicates the op resolver for built-in ops with optimized kernel implementation, but it will disable the application of default TfLite delegates (like the XNNPACK delegate) to the model graph. Generally this should not be used unless there are issues with the default configuration.
| Class Variables |
| AUTO | `<OpResolverType.AUTO: 0>` |
| BUILTIN | `<OpResolverType.BUILTIN: 1>` |
| BUILTIN\_REF | `<OpResolverType.BUILTIN_REF: 2>` |
| BUILTIN\_WITHOUT\_DEFAULT\_DELEGATES | `<OpResolverType.BUILTIN_WITHOUT_DEFAULT_DELEGATES: 3>` |
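The resolver is selected when constructing an interpreter. A sketch that forces reference kernels for debugging, assuming the `experimental_op_resolver_type` constructor argument of `tf.lite.Interpreter` (`model.tflite` is a placeholder path):
```
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path='model.tflite',  # placeholder path
    experimental_op_resolver_type=tf.lite.experimental.OpResolverType.BUILTIN_REF)
```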
tensorflow Module: tf.lite.experimental.authoring Module: tf.lite.experimental.authoring
======================================
Public API for tf.lite.experimental.authoring namespace.
Functions
---------
[`compatible(...)`](authoring/compatible): Wraps [`tf.function`](../../function) into a callable function with TFLite compatibility checking.
tensorflow tf.lite.experimental.QuantizationDebugOptions tf.lite.experimental.QuantizationDebugOptions
=============================================
Debug options to set up a given QuantizationDebugger.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.experimental.QuantizationDebugOptions`](https://www.tensorflow.org/api_docs/python/tf/lite/experimental/QuantizationDebugOptions)
```
tf.lite.experimental.QuantizationDebugOptions(
layer_debug_metrics: Optional[Mapping[str, Callable[[np.ndarray], float]]] = None,
model_debug_metrics: Optional[Mapping[str, Callable[[Sequence[np.ndarray], Sequence[np.ndarray]],
float]]] = None,
layer_direct_compare_metrics: Optional[Mapping[str, Callable[[Sequence[np.ndarray], Sequence[np.ndarray],
float, int], float]]] = None,
denylisted_ops: Optional[List[str]] = None,
denylisted_nodes: Optional[List[str]] = None,
fully_quantize: bool = False
) -> None
```
| Args |
| `layer_debug_metrics` | A dict specifying layer debug functions {function\_name\_str: function}, where the function accepts the result of the NumericVerify op, which is the value difference between float and dequantized op results. The function returns a single scalar value. |
| `model_debug_metrics` | A dict specifying model debug functions {function\_name\_str: function}, where the function accepts the outputs from the two models and returns a single scalar value for a metric (e.g. accuracy, IoU). |
| `layer_direct_compare_metrics` | A dict specifying layer debug functions {function\_name\_str: function}. The signature is different from that of `layer_debug_metrics`; this one is passed (original float value, original quantized value, scale, zero point). The function's implementation is responsible for correctly dequantizing the quantized value for comparison. Use this one when comparing the diff alone is not enough. (Note: the quantized value is passed as int8, so a cast to int32 is needed.) |
| `denylisted_ops` | A list of op names to exclude from quantization. |
| `denylisted_nodes` | A list of ops' output tensor names to exclude from quantization. |
| `fully_quantize` | Bool indicating whether to fully quantize the model. Besides the model body, the model inputs/outputs are quantized as well. Corresponds to mlir\_quantize's fully\_quantize parameter. |
| Raises |
| `ValueError` | when there are duplicate keys |
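#### Example:
As an illustration, custom metrics might be registered as below; the metric names and lambdas are hypothetical examples, not built-in defaults:
```
import numpy as np
import tensorflow as tf

debug_options = tf.lite.experimental.QuantizationDebugOptions(
    layer_debug_metrics={
        # Receives the NumericVerify diff (float result minus dequantized result).
        'l1_norm': lambda diffs: float(np.mean(np.abs(diffs))),
    },
    model_debug_metrics={
        # Receives the float model outputs and the quantized model outputs.
        'argmax_match': lambda f_out, q_out: float(
            np.argmax(f_out[0]) == np.argmax(q_out[0])),
    })
```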
tensorflow tf.lite.experimental.authoring.compatible tf.lite.experimental.authoring.compatible
=========================================
Wraps [`tf.function`](../../../function) into a callable function with TFLite compatibility checking.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.lite.experimental.authoring.compatible`](https://www.tensorflow.org/api_docs/python/tf/lite/experimental/authoring/compatible)
```
tf.lite.experimental.authoring.compatible(
target=None, converter_target_spec=None, **kwargs
)
```
#### Example:
```
@tf.lite.experimental.authoring.compatible
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.float32)
])
def f(x):
return tf.cosh(x)
result = f(tf.constant([0.0]))
# COMPATIBILITY WARNING: op 'tf.Cosh' require(s) "Select TF Ops" for model
# conversion for TensorFlow Lite.
# Op: tf.Cosh
# - tensorflow/python/framework/op_def_library.py:748
# - tensorflow/python/ops/gen_math_ops.py:2458
# - <stdin>:6
```
| Args |
| `target` | A [`tf.function`](../../../function) to decorate. |
| `converter_target_spec` | target\_spec of TFLite converter parameter. |
| `**kwargs` | The keyword arguments of the decorator class \_Compatible. |
| Returns |
| A callable object of `tf.lite.experimental.authoring._Compatible`. |
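The decorator can also be parameterized. A sketch that supplies a converter `target_spec` so that ops requiring "Select TF Ops" no longer trigger a warning, assuming the keyword form implied by the signature above:
```
import tensorflow as tf

spec = tf.lite.TargetSpec()
spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

@tf.lite.experimental.authoring.compatible(converter_target_spec=spec)
@tf.function(input_signature=[
    tf.TensorSpec(shape=[None], dtype=tf.float32)
])
def f(x):
  return tf.cosh(x)  # no compatibility warning: Select TF Ops are allowed
```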
tensorflow tf.feature_column.sequence_categorical_column_with_vocabulary_file tf.feature\_column.sequence\_categorical\_column\_with\_vocabulary\_file
========================================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/sequence_feature_column.py#L181-L243) |
A sequence of categorical terms where ids use a vocabulary file.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_file`](https://www.tensorflow.org/api_docs/python/tf/feature_column/sequence_categorical_column_with_vocabulary_file)
```
tf.feature_column.sequence_categorical_column_with_vocabulary_file(
key,
vocabulary_file,
vocabulary_size=None,
num_oov_buckets=0,
default_value=None,
dtype=tf.dtypes.string
)
```
Pass this to `embedding_column` or `indicator_column` to convert sequence categorical data into dense representation for input to sequence NN, such as RNN.
#### Example:
```
states = sequence_categorical_column_with_vocabulary_file(
key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
num_oov_buckets=5)
states_embedding = embedding_column(states, dimension=10)
columns = [states_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
```
| Args |
| `key` | A unique string identifying the input feature. |
| `vocabulary_file` | The vocabulary file name. |
| `vocabulary_size` | Number of elements in the vocabulary. This must be no greater than the length of `vocabulary_file`; if it is less, later values are ignored. If None, it is set to the length of `vocabulary_file`. |
| `num_oov_buckets` | Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` can not be specified with `default_value`. |
| `default_value` | The integer ID value to return for out-of-vocabulary feature values, defaults to `-1`. This can not be specified with a positive `num_oov_buckets`. |
| `dtype` | The type of features. Only string and integer types are supported. |
| Returns |
| A `SequenceCategoricalColumn`. |
| Raises |
| `ValueError` | `vocabulary_file` is missing or cannot be opened. |
| `ValueError` | `vocabulary_size` is missing or < 1. |
| `ValueError` | `num_oov_buckets` is a negative integer. |
| `ValueError` | `num_oov_buckets` and `default_value` are both specified. |
| `ValueError` | `dtype` is neither string nor integer. |
tensorflow tf.feature_column.make_parse_example_spec tf.feature\_column.make\_parse\_example\_spec
=============================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L452-L511) |
Creates parsing spec dictionary from input feature\_columns.
```
tf.feature_column.make_parse_example_spec(
feature_columns
)
```
The returned dictionary can be used as arg 'features' in [`tf.io.parse_example`](../io/parse_example).
#### Typical usage example:
```
# Define features and transformations
feature_a = tf.feature_column.categorical_column_with_vocabulary_file(...)
feature_b = tf.feature_column.numeric_column(...)
feature_c_bucketized = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column("feature_c"), ...)
feature_a_x_feature_c = tf.feature_column.crossed_column(
columns=["feature_a", feature_c_bucketized], ...)
feature_columns = set(
[feature_b, feature_c_bucketized, feature_a_x_feature_c])
features = tf.io.parse_example(
serialized=serialized_examples,
features=tf.feature_column.make_parse_example_spec(feature_columns))
```
For the above example, make\_parse\_example\_spec would return the dict:
```
{
"feature_a": parsing_ops.VarLenFeature(tf.string),
"feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
"feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32)
}
```
| Args |
| `feature_columns` | An iterable containing all feature columns. All items should be instances of classes derived from `FeatureColumn`. |
| Returns |
| A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` value. |
| Raises |
| `ValueError` | If any of the given `feature_columns` is not a `FeatureColumn` instance. |
tensorflow tf.feature_column.shared_embeddings tf.feature\_column.shared\_embeddings
=====================================
List of dense columns that convert from sparse, categorical input.
```
tf.feature_column.shared_embeddings(
categorical_columns,
dimension,
combiner='mean',
initializer=None,
shared_embedding_collection_name=None,
ckpt_to_load_from=None,
tensor_name_in_ckpt=None,
max_norm=None,
trainable=True,
use_safe_embedding_lookup=True
)
```
This is similar to `embedding_column`, except that it produces a list of embedding columns that share the same embedding weights.
Use this when your inputs are sparse and of the same type (e.g. watched and impression video IDs that share the same vocabulary), and you want to convert them to a dense representation (e.g., to feed to a DNN).
Inputs must be a list of categorical columns created by any of the `categorical_column_*` functions. They must all be of the same type and have the same arguments except `key`. E.g. they can be categorical\_column\_with\_vocabulary\_file with the same vocabulary\_file. Some or all columns could also be weighted\_categorical\_column.
Here is an example embedding of two features for a DNNClassifier model:
```
watched_video_id = categorical_column_with_vocabulary_file(
'watched_video_id', video_vocabulary_file, video_vocabulary_size)
impression_video_id = categorical_column_with_vocabulary_file(
'impression_video_id', video_vocabulary_file, video_vocabulary_size)
columns = shared_embedding_columns(
[watched_video_id, impression_video_id], dimension=10)
estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)
label_column = ...
def input_fn():
features = tf.io.parse_example(
..., features=make_parse_example_spec(columns + [label_column]))
labels = features.pop(label_column.name)
return features, labels
estimator.train(input_fn=input_fn, steps=100)
```
Here is an example using `shared_embedding_columns` with model\_fn:
```
def model_fn(features, ...):
watched_video_id = categorical_column_with_vocabulary_file(
'watched_video_id', video_vocabulary_file, video_vocabulary_size)
impression_video_id = categorical_column_with_vocabulary_file(
'impression_video_id', video_vocabulary_file, video_vocabulary_size)
columns = shared_embedding_columns(
[watched_video_id, impression_video_id], dimension=10)
dense_tensor = input_layer(features, columns)
# Form DNN layers, calculate loss, and return EstimatorSpec.
...
```
| Args |
| `categorical_columns` | List of categorical columns created by a `categorical_column_with_*` function. These columns produce the sparse IDs that are inputs to the embedding lookup. All columns must be of the same type and have the same arguments except `key`. E.g. they can be categorical\_column\_with\_vocabulary\_file with the same vocabulary\_file. Some or all columns could also be weighted\_categorical\_column. |
| `dimension` | An integer specifying dimension of the embedding, must be > 0. |
| `combiner` | A string specifying how to reduce if there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see `tf.embedding_lookup_sparse`. |
| `initializer` | A variable initializer function to be used in embedding variable initialization. If not specified, defaults to `truncated_normal_initializer` with mean `0.0` and standard deviation `1/sqrt(dimension)`. |
| `shared_embedding_collection_name` | Optional collective name of these columns. If not given, a reasonable name will be chosen based on the names of `categorical_columns`. |
| `ckpt_to_load_from` | String representing checkpoint name/pattern from which to restore column weights. Required if `tensor_name_in_ckpt` is not `None`. |
| `tensor_name_in_ckpt` | Name of the `Tensor` in `ckpt_to_load_from` from which to restore the column weights. Required if `ckpt_to_load_from` is not `None`. |
| `max_norm` | If not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining. |
| `trainable` | Whether or not the embedding is trainable. Default is True. |
| `use_safe_embedding_lookup` | If true, uses safe\_embedding\_lookup\_sparse instead of embedding\_lookup\_sparse. safe\_embedding\_lookup\_sparse ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. This only applies to rank 2 (NxM) shaped input tensors. Defaults to true, consider turning off if the above checks are not needed. Note that having empty rows will not trigger any error though the output result might be 0 or omitted. |
| Returns |
| A list of dense columns that converts from sparse input. The order of results follows the ordering of `categorical_columns`. |
| Raises |
| `ValueError` | if `dimension` not > 0. |
| `ValueError` | if any of the given `categorical_columns` is of different type or has different arguments than the others. |
| `ValueError` | if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt` is specified. |
| `ValueError` | if `initializer` is specified and is not callable. |
| `RuntimeError` | if eager execution is enabled. |
tensorflow tf.feature_column.categorical_column_with_identity tf.feature\_column.categorical\_column\_with\_identity
======================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L1599-L1672) |
A `CategoricalColumn` that returns identity values.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.categorical_column_with_identity`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_identity)
```
tf.feature_column.categorical_column_with_identity(
key, num_buckets, default_value=None
)
```
Use this when your inputs are integers in the range `[0, num_buckets)`, and you want to use the input value itself as the categorical ID. Values outside this range will result in `default_value` if specified, otherwise it will fail.
Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of the IDs are unused. Consider `categorical_column_with_hash_bucket` in that case.
For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.
In the following examples, each input in the range `[0, 1000000)` is assigned the same value. All other inputs are assigned `default_value` 0. Note that a literal 0 in inputs will result in the same default ID.
#### Linear model:
```
import tensorflow as tf
video_id = tf.feature_column.categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [video_id]
features = {'video_id': tf.sparse.from_dense([[2, 85, 0, 0, 0],
[33,78, 2, 73, 1]])}
linear_prediction = tf.compat.v1.feature_column.linear_model(features,
columns)
```
Embedding for a DNN model:
```
import tensorflow as tf
video_id = tf.feature_column.categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [tf.feature_column.embedding_column(video_id, 9)]
features = {'video_id': tf.sparse.from_dense([[2, 85, 0, 0, 0],
[33,78, 2, 73, 1]])}
input_layer = tf.keras.layers.DenseFeatures(columns)
dense_tensor = input_layer(features)
```
| Args |
| `key` | A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns. |
| `num_buckets` | Range of inputs and outputs is `[0, num_buckets)`. |
| `default_value` | If set, values outside of range `[0, num_buckets)` will be replaced with this value. If not set, values >= num\_buckets will cause a failure while values < 0 will be dropped. |
| Returns |
| A `CategoricalColumn` that returns identity values. |
| Raises |
| `ValueError` | if `num_buckets` is less than one. |
| `ValueError` | if `default_value` is not in range `[0, num_buckets)`. |
tensorflow tf.feature_column.categorical_column_with_vocabulary_list tf.feature\_column.categorical\_column\_with\_vocabulary\_list
==============================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L1482-L1596) |
A `CategoricalColumn` with in-memory vocabulary.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.categorical_column_with_vocabulary_list`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list)
```
tf.feature_column.categorical_column_with_vocabulary_list(
key, vocabulary_list, dtype=None, default_value=-1, num_oov_buckets=0
)
```
Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.
For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.
Example with `num_oov_buckets`: In the following example, each input in `vocabulary_list` is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2). All other inputs are hashed and assigned an ID 4-5.
```
colors = categorical_column_with_vocabulary_list(
key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
```
Example with `default_value`: In the following example, each input in `vocabulary_list` is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3). All other inputs are assigned `default_value` 0.
```
colors = categorical_column_with_vocabulary_list(
key='colors', vocabulary_list=('X', 'R', 'G', 'B', 'Y'), default_value=0)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
```
And to make an embedding with either:
```
columns = [embedding_column(colors, 3),...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
```
| Args |
| `key` | A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns. |
| `vocabulary_list` | An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`. |
| `dtype` | The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`. |
| `default_value` | The integer ID value to return for out-of-vocabulary feature values, defaults to `-1`. This can not be specified with a positive `num_oov_buckets`. |
| `num_oov_buckets` | Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` can not be specified with `default_value`. |
| Returns |
| A `CategoricalColumn` with in-memory vocabulary. |
| Raises |
| `ValueError` | if `vocabulary_list` is empty, or contains duplicate keys. |
| `ValueError` | `num_oov_buckets` is a negative integer. |
| `ValueError` | `num_oov_buckets` and `default_value` are both specified. |
| `ValueError` | if `dtype` is not integer or string. |
tensorflow tf.feature_column.sequence_categorical_column_with_hash_bucket tf.feature\_column.sequence\_categorical\_column\_with\_hash\_bucket
====================================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/sequence_feature_column.py#L135-L178) |
A sequence of categorical terms where ids are set by hashing.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.sequence_categorical_column_with_hash_bucket`](https://www.tensorflow.org/api_docs/python/tf/feature_column/sequence_categorical_column_with_hash_bucket)
```
tf.feature_column.sequence_categorical_column_with_hash_bucket(
key,
hash_bucket_size,
dtype=tf.dtypes.string
)
```
Pass this to `embedding_column` or `indicator_column` to convert sequence categorical data into dense representation for input to sequence NN, such as RNN.
#### Example:
```
tokens = sequence_categorical_column_with_hash_bucket(
'tokens', hash_bucket_size=1000)
tokens_embedding = embedding_column(tokens, dimension=10)
columns = [tokens_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
```
| Args |
| `key` | A unique string identifying the input feature. |
| `hash_bucket_size` | An int > 1. The number of buckets. |
| `dtype` | The type of features. Only string and integer types are supported. |
| Returns |
| A `SequenceCategoricalColumn`. |
| Raises |
| `ValueError` | `hash_bucket_size` is not greater than 1. |
| `ValueError` | `dtype` is neither string nor integer. |
tensorflow tf.feature_column.weighted_categorical_column tf.feature\_column.weighted\_categorical\_column
================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L1718-L1791) |
Applies weight values to a `CategoricalColumn`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.weighted_categorical_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/weighted_categorical_column)
```
tf.feature_column.weighted_categorical_column(
categorical_column,
weight_feature_key,
dtype=tf.dtypes.float32
)
```
Use this when each of your sparse inputs has both an ID and a value. For example, if you're representing text documents as a collection of word frequencies, you can provide 2 parallel sparse input features ('terms' and 'frequencies' below).
#### Example:
Input `tf.Example` objects:
```
[
features {
feature {
key: "terms"
value {bytes_list {value: "very" value: "model"} }
}
feature {
key: "frequencies"
value {float_list {value: 0.3 value: 0.1} }
}
},
features {
feature {
key: "terms"
value {bytes_list {value: "when" value: "course" value: "human"} }
}
feature {
key: "frequencies"
value {float_list {value: 0.4 value: 0.1 value: 0.2} }
}
}
]
```
```
categorical_column = categorical_column_with_hash_bucket(
column_name='terms', hash_bucket_size=1000)
weighted_column = weighted_categorical_column(
categorical_column=categorical_column, weight_feature_key='frequencies')
columns = [weighted_column, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
```
This assumes the input dictionary contains a `SparseTensor` for key 'terms', and a `SparseTensor` for key 'frequencies'. These 2 tensors must have the same indices and dense shape.
| Args |
| `categorical_column` | A `CategoricalColumn` created by `categorical_column_with_*` functions. |
| `weight_feature_key` | String key for weight values. |
| `dtype` | Type of weights, such as [`tf.float32`](../../tf#float32). Only float and integer weights are supported. |
| Returns |
| A `CategoricalColumn` composed of two sparse features: one represents id, the other represents weight (value) of the id feature in that example. |
| Raises |
| `ValueError` | if `dtype` is not convertible to float. |
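A more concrete sketch with inline `SparseTensor` inputs instead of parsed `tf.Example`s. The vocabulary and weights mirror the protos above; wrapping the weighted column in `indicator_column` is one (assumed) way to densify it for a Keras layer:
```
import tensorflow as tf

terms = tf.feature_column.categorical_column_with_vocabulary_list(
    'terms', vocabulary_list=('very', 'model', 'when', 'course', 'human'))
weighted = tf.feature_column.weighted_categorical_column(
    categorical_column=terms, weight_feature_key='frequencies')
features = {
    # from_dense treats '' and 0.0 as implicit zeros, so both SparseTensors
    # end up with identical indices and dense shape, as required.
    'terms': tf.sparse.from_dense([['very', 'model', ''],
                                   ['when', 'course', 'human']]),
    'frequencies': tf.sparse.from_dense([[0.3, 0.1, 0.0],
                                         [0.4, 0.1, 0.2]]),
}
dense_tensor = tf.keras.layers.DenseFeatures(
    [tf.feature_column.indicator_column(weighted)])(features)
```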
tensorflow tf.feature_column.crossed_column tf.feature\_column.crossed\_column
==================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L1794-L1919) |
Returns a column for performing crosses of categorical features.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.crossed_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/crossed_column)
```
tf.feature_column.crossed_column(
keys, hash_bucket_size, hash_key=None
)
```
Crossed features will be hashed according to `hash_bucket_size`. Conceptually, the transformation can be thought of as: Hash(cartesian product of features) % `hash_bucket_size`
For example, if the input features are:
* SparseTensor referred by first key:
```
shape = [2, 2]
{
[0, 0]: "a"
[1, 0]: "b"
[1, 1]: "c"
}
```
* SparseTensor referred by second key:
```
shape = [2, 1]
{
[0, 0]: "d"
[1, 0]: "e"
}
```
then crossed feature will look like:
```
shape = [2, 2]
{
[0, 0]: Hash64("d", Hash64("a")) % hash_bucket_size
[1, 0]: Hash64("e", Hash64("b")) % hash_bucket_size
[1, 1]: Hash64("e", Hash64("c")) % hash_bucket_size
}
```
Here is an example to create a linear model with crosses of string features:
```
keywords_x_doc_terms = crossed_column(['keywords', 'doc_terms'], 50_000)
columns = [keywords_x_doc_terms, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
```
You could also use vocabulary lookup before crossing:
```
keywords = categorical_column_with_vocabulary_file(
    'keywords', '/path/to/vocabulary/file', vocabulary_size=1_000)
keywords_x_doc_terms = crossed_column([keywords, 'doc_terms'], 50_000)
columns = [keywords_x_doc_terms, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
```
If an input feature is of numeric type, you can use `categorical_column_with_identity`, or `bucketized_column`, as in the example:
```
# vertical_id is an integer categorical feature.
vertical_id = categorical_column_with_identity('vertical_id', 10_000)
price = numeric_column('price')
# bucketized_column converts numerical feature to a categorical one.
bucketized_price = bucketized_column(price, boundaries=[...])
vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50_000)
columns = [vertical_id_x_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
```
To use a crossed column in a DNN model, you need to wrap it in an embedding column, as in this example:
```
vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50_000)
vertical_id_x_price_embedded = embedding_column(vertical_id_x_price, 10)
dense_tensor = input_layer(features, [vertical_id_x_price_embedded, ...])
```
| Args |
| `keys` | An iterable identifying the features to be crossed. Each element can be either a string (uses the corresponding feature, which must be of string type) or a `CategoricalColumn` (uses the transformed tensor produced by this column; hashed categorical columns are not supported). |
| `hash_bucket_size` | An int > 1. The number of buckets. |
| `hash_key` | Specify the hash\_key that will be used by the `FingerprintCat64` function to combine the crosses fingerprints on SparseCrossOp (optional). |
| Returns |
| A `CrossedColumn`. |
| Raises |
| `ValueError` | If `len(keys) < 2`. |
| `ValueError` | If any of the keys is neither a string nor `CategoricalColumn`. |
| `ValueError` | If any of the keys is `HashedCategoricalColumn`. |
| `ValueError` | If `hash_bucket_size < 1`. |
tensorflow tf.feature_column.numeric_column tf.feature\_column.numeric\_column
==================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L987-L1083) |
Represents real valued or numerical features.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.numeric_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column)
```
tf.feature_column.numeric_column(
key,
shape=(1,),
default_value=None,
dtype=tf.dtypes.float32,
normalizer_fn=None
)
```
#### Example:
Assume we have data with two features `a` and `b`.
```
data = {'a': [15, 9, 17, 19, 21, 18, 25, 30],
        'b': [5.0, 6.4, 10.5, 13.6, 15.7, 19.9, 20.3, 0.0]}
```
Let us represent the features `a` and `b` as numerical features.
```
a = tf.feature_column.numeric_column('a')
b = tf.feature_column.numeric_column('b')
```
Feature columns describe a set of transformations to the inputs.
For example, to "bucketize" feature `a`, wrap the `a` column in a [`feature_column.bucketized_column`](bucketized_column). Given `5` bucket boundaries, the bucketized\_column API will bucket this feature into a total of `6` buckets.
```
a_buckets = tf.feature_column.bucketized_column(a,
boundaries=[10, 15, 20, 25, 30])
```
Create a `DenseFeatures` layer which will apply the transformations described by the set of [`tf.feature_column`](../feature_column) objects:
```
feature_layer = tf.keras.layers.DenseFeatures([a_buckets, b])
print(feature_layer(data))
tf.Tensor(
[[ 0. 0. 1. 0. 0. 0. 5. ]
[ 1. 0. 0. 0. 0. 0. 6.4]
[ 0. 0. 1. 0. 0. 0. 10.5]
[ 0. 0. 1. 0. 0. 0. 13.6]
[ 0. 0. 0. 1. 0. 0. 15.7]
[ 0. 0. 1. 0. 0. 0. 19.9]
[ 0. 0. 0. 0. 1. 0. 20.3]
[ 0. 0. 0. 0. 0. 1. 0. ]], shape=(8, 7), dtype=float32)
```
| Args |
| `key` | A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns. |
| `shape` | An iterable of integers specifying the shape of the `Tensor`. An integer can be given, which means a single-dimension `Tensor` with the given width. The `Tensor` representing the column will have the shape of [batch\_size] + `shape`. |
| `default_value` | A single value compatible with `dtype` or an iterable of values compatible with `dtype` which the column takes on during `tf.Example` parsing if data is missing. A default value of `None` will cause [`tf.io.parse_example`](../io/parse_example) to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the `default_value` should be equal to the given `shape`. |
| `dtype` | defines the type of values. Default value is [`tf.float32`](../../tf#float32). Must be a non-quantized, real integer or floating point type. |
| `normalizer_fn` | If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation. |
| Returns |
| A `NumericColumn`. |
| Raises |
| `TypeError` | if any dimension in shape is not an int |
| `ValueError` | if any dimension in shape is not a positive integer |
| `TypeError` | if `default_value` is an iterable but not compatible with `shape` |
| `TypeError` | if `default_value` is not compatible with `dtype`. |
| `ValueError` | if `dtype` is not convertible to [`tf.float32`](../../tf#float32). |
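As a small illustration of `normalizer_fn`, the sketch below rescales feature `b` with fixed constants (chosen arbitrarily for illustration) before it reaches the layer:
```
import tensorflow as tf

b_normalized = tf.feature_column.numeric_column(
    'b', normalizer_fn=lambda x: (x - 3.0) / 4.2)
feature_layer = tf.keras.layers.DenseFeatures([b_normalized])
print(feature_layer({'b': [5.0, 6.4]}))  # each value shifted by 3.0, scaled by 1/4.2
```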
tensorflow tf.feature_column.categorical_column_with_vocabulary_file tf.feature\_column.categorical\_column\_with\_vocabulary\_file
==============================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L1348-L1479) |
A `CategoricalColumn` with a vocabulary file.
```
tf.feature_column.categorical_column_with_vocabulary_file(
key,
vocabulary_file,
vocabulary_size=None,
dtype=tf.dtypes.string,
default_value=None,
num_oov_buckets=0,
file_format=None
)
```
Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of `num_oov_buckets` and `default_value` to specify how to include out-of-vocabulary values.
For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.
Example with `num_oov_buckets`: File `'/us/states.txt'` contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs with values in that file are assigned an ID 0-49, corresponding to their line numbers. All other values are hashed and assigned an ID 50-54.
```
states = categorical_column_with_vocabulary_file(
key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
num_oov_buckets=5)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
```
Example with `default_value`: File `'/us/states.txt'` contains 51 lines - the first line is `'XX'`, and the other 50 each have a 2-character U.S. state abbreviation. Both a literal `'XX'` in input, and other values missing from the file, will be assigned ID 0. All others are assigned the corresponding line number 1-50.
```
states = categorical_column_with_vocabulary_file(
key='states', vocabulary_file='/us/states.txt', vocabulary_size=51,
default_value=0)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
```
And to make an embedding with either:
```
columns = [embedding_column(states, 3),...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
```
| Args |
| `key` | A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns. |
| `vocabulary_file` | The vocabulary file name. |
| `vocabulary_size` | Number of elements in the vocabulary. This must be no greater than the length of `vocabulary_file`; if it is less, later values are ignored. If None, it is set to the length of `vocabulary_file`. |
| `dtype` | The type of features. Only string and integer types are supported. |
| `default_value` | The integer ID value to return for out-of-vocabulary feature values, defaults to `-1`. This can not be specified with a positive `num_oov_buckets`. |
| `num_oov_buckets` | Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` can not be specified with `default_value`. |
| `file_format` | The format of the vocabulary file. The format is 'text' by default unless `vocabulary_file` is a string which ends in 'tfrecord.gz'. Accepted alternative value for `file_format` is 'tfrecord\_gzip'. |
| Returns |
| A `CategoricalColumn` with a vocabulary file. |
| Raises |
| `ValueError` | `vocabulary_file` is missing or cannot be opened. |
| `ValueError` | `vocabulary_size` is missing or < 1. |
| `ValueError` | `num_oov_buckets` is a negative integer. |
| `ValueError` | `num_oov_buckets` and `default_value` are both specified. |
| `ValueError` | `dtype` is neither string nor integer. |
tensorflow tf.feature_column.embedding_column tf.feature\_column.embedding\_column
====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L514-L625) |
`DenseColumn` that converts from sparse, categorical input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.embedding_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column)
```
tf.feature_column.embedding_column(
categorical_column,
dimension,
combiner='mean',
initializer=None,
ckpt_to_load_from=None,
tensor_name_in_ckpt=None,
max_norm=None,
trainable=True,
use_safe_embedding_lookup=True
)
```
Use this when your inputs are sparse, but you want to convert them to a dense representation (e.g., to feed to a DNN).
Inputs must be a `CategoricalColumn` created by any of the `categorical_column_*` functions. Here is an example of using `embedding_column` with `DNNClassifier`:
```
video_id = categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [embedding_column(video_id, 9),...]
estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)
label_column = ...
def input_fn():
features = tf.io.parse_example(
..., features=make_parse_example_spec(columns + [label_column]))
labels = features.pop(label_column.name)
return features, labels
estimator.train(input_fn=input_fn, steps=100)
```
Here is an example using `embedding_column` with model\_fn:
```
def model_fn(features, ...):
video_id = categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [embedding_column(video_id, 9),...]
dense_tensor = input_layer(features, columns)
# Form DNN layers, calculate loss, and return EstimatorSpec.
...
```
| Args |
| `categorical_column` | A `CategoricalColumn` created by a `categorical_column_with_*` function. This column produces the sparse IDs that are inputs to the embedding lookup. |
| `dimension` | An integer specifying dimension of the embedding, must be > 0. |
| `combiner` | A string specifying how to reduce if there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see `tf.embedding_lookup_sparse`. |
| `initializer` | A variable initializer function to be used in embedding variable initialization. If not specified, defaults to `truncated_normal_initializer` with mean `0.0` and standard deviation `1/sqrt(dimension)`. |
| `ckpt_to_load_from` | String representing checkpoint name/pattern from which to restore column weights. Required if `tensor_name_in_ckpt` is not `None`. |
| `tensor_name_in_ckpt` | Name of the `Tensor` in `ckpt_to_load_from` from which to restore the column weights. Required if `ckpt_to_load_from` is not `None`. |
| `max_norm` | If not `None`, embedding values are l2-normalized to this value. |
| `trainable` | Whether or not the embedding is trainable. Default is True. |
| `use_safe_embedding_lookup` | If true, uses safe\_embedding\_lookup\_sparse instead of embedding\_lookup\_sparse. safe\_embedding\_lookup\_sparse ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. This only applies to rank 2 (NxM) shaped input tensors. Defaults to true, consider turning off if the above checks are not needed. Note that having empty rows will not trigger any error though the output result might be 0 or omitted. |
| Returns |
| `DenseColumn` that converts from sparse input. |
| Raises |
| `ValueError` | if `dimension` not > 0. |
| `ValueError` | if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt` is specified. |
| `ValueError` | if `initializer` is specified and is not callable. |
| `RuntimeError` | If eager execution is enabled. |
tensorflow tf.feature_column.bucketized_column tf.feature\_column.bucketized\_column
=====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L1086-L1169) |
Represents discretized dense input bucketed by `boundaries`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.bucketized_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column)
```
tf.feature_column.bucketized_column(
source_column, boundaries
)
```
Buckets include the left boundary, and exclude the right boundary. Namely, `boundaries=[0., 1., 2.]` generates buckets `(-inf, 0.)`, `[0., 1.)`, `[1., 2.)`, and `[2., +inf)`.
For example, if the inputs are
```
boundaries = [0, 10, 100]
input tensor = [[-5, 10000]
[150, 10]
[5, 100]]
```
then the output will be
```
output = [[0, 3]
[3, 2]
[1, 3]]
```
#### Example:
```
price = tf.feature_column.numeric_column('price')
bucketized_price = tf.feature_column.bucketized_column(
price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(
..., features=tf.feature_column.make_parse_example_spec(columns))
dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)
```
A `bucketized_column` can also be crossed with another categorical column using `crossed_column`:
```
price = tf.feature_column.numeric_column('price')
# bucketized_column converts numerical feature to a categorical one.
bucketized_price = tf.feature_column.bucketized_column(
price, boundaries=[...])
# 'keywords' is a string feature.
price_x_keywords = tf.feature_column.crossed_column(
    [bucketized_price, 'keywords'], 50_000)
columns = [price_x_keywords, ...]
features = tf.io.parse_example(
..., features=tf.feature_column.make_parse_example_spec(columns))
dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)
linear_model = tf.keras.experimental.LinearModel(units=...)(dense_tensor)
```
| Args |
| `source_column` | A one-dimensional dense column which is generated with `numeric_column`. |
| `boundaries` | A sorted list or tuple of floats specifying the boundaries. |
| Returns |
| A `BucketizedColumn`. |
| Raises |
| `ValueError` | If `source_column` is not a numeric column, or if it is not one-dimensional. |
| `ValueError` | If `boundaries` is not a sorted list or tuple. |
tensorflow tf.feature_column.sequence_numeric_column tf.feature\_column.sequence\_numeric\_column
============================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/sequence_feature_column.py#L308-L368) |
Returns a feature column that represents sequences of numeric data.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.sequence_numeric_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/sequence_numeric_column)
```
tf.feature_column.sequence_numeric_column(
key,
shape=(1,),
default_value=0.0,
dtype=tf.dtypes.float32,
normalizer_fn=None
)
```
#### Example:
```
temperature = sequence_numeric_column('temperature')
columns = [temperature]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
```
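A runnable sketch of the skeletal example above; it assumes TF 2.x, where `SequenceFeatures` is exposed as `tf.keras.experimental.SequenceFeatures`, and feeds the sequence data as a `SparseTensor` directly instead of going through `tf.io.parse_example`:
```
import tensorflow as tf

temperature = tf.feature_column.sequence_numeric_column('temperature')
# tf.sparse.from_dense drops the zero padding, so the two example
# sequences have lengths 2 and 1 respectively.
features = {
    'temperature': tf.sparse.from_dense(
        tf.constant([[21.0, 23.5, 0.0], [18.0, 0.0, 0.0]]))
}
sequence_input, sequence_length = tf.keras.experimental.SequenceFeatures(
    [temperature])(features)
print(sequence_length)  # [2 1]
```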
| Args |
| `key` | A unique string identifying the input features. |
| `shape` | The shape of the input data per sequence id. E.g. if `shape=(2,)`, each example must contain `2 * sequence_length` values. |
| `default_value` | A single value compatible with `dtype` that is used for padding the sparse data into a dense `Tensor`. |
| `dtype` | The type of values. |
| `normalizer_fn` | If not `None`, a function that can be used to normalize the value of the tensor after `default_value` is applied for parsing. The normalizer function takes the input `Tensor` as its argument and returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation. |
| Returns |
| A `SequenceNumericColumn`. |
| Raises |
| `TypeError` | if any dimension in shape is not an int. |
| `ValueError` | if any dimension in shape is not a positive integer. |
| `ValueError` | if `dtype` is not convertible to [`tf.float32`](../../tf#float32). |
tensorflow tf.feature_column.sequence_categorical_column_with_identity tf.feature\_column.sequence\_categorical\_column\_with\_identity
================================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/sequence_feature_column.py#L86-L132) |
Returns a feature column that represents sequences of integers.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.sequence_categorical_column_with_identity`](https://www.tensorflow.org/api_docs/python/tf/feature_column/sequence_categorical_column_with_identity)
```
tf.feature_column.sequence_categorical_column_with_identity(
key, num_buckets, default_value=None
)
```
Pass this to `embedding_column` or `indicator_column` to convert sequence categorical data into a dense representation for input to a sequence NN, such as an RNN.
#### Example:
```
watches = sequence_categorical_column_with_identity(
'watches', num_buckets=1000)
watches_embedding = embedding_column(watches, dimension=10)
columns = [watches_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
```
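A runnable sketch illustrating `default_value` on out-of-range ids; it assumes TF 2.x and feeds a `SparseTensor` directly (id `7` falls outside `[0, 5)` and is replaced by `default_value=0` rather than failing):
```
import tensorflow as tf

watches = tf.feature_column.sequence_categorical_column_with_identity(
    'watches', num_buckets=5, default_value=0)
watches_embedding = tf.feature_column.embedding_column(watches, dimension=4)
features = {
    'watches': tf.sparse.from_dense(tf.constant([[1, 3, 0], [7, 0, 0]]))
}
sequence_input, sequence_length = tf.keras.experimental.SequenceFeatures(
    [watches_embedding])(features)
print(sequence_length)  # [2 1]
```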
| Args |
| `key` | A unique string identifying the input feature. |
| `num_buckets` | Range of inputs. Namely, inputs are expected to be in the range `[0, num_buckets)`. |
| `default_value` | If `None`, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range `[0, num_buckets)`, and will replace out-of-range inputs. |
| Returns |
| A `SequenceCategoricalColumn`. |
| Raises |
| `ValueError` | if `num_buckets` is less than one. |
| `ValueError` | if `default_value` is not in range `[0, num_buckets)`. |
tensorflow tf.feature_column.indicator_column tf.feature\_column.indicator\_column
====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L1675-L1715) |
Represents multi-hot representation of given categorical column.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.indicator_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/indicator_column)
```
tf.feature_column.indicator_column(
categorical_column
)
```
* For DNN models, `indicator_column` can be used to wrap any `categorical_column_*` (e.g., to feed to a DNN). Consider using `embedding_column` if the number of buckets/unique values is large.
* For wide (aka linear) models, `indicator_column` is the internal representation for a categorical column when the categorical column is passed directly (as any element in `feature_columns`) to `linear_model`. See `linear_model` for details.
```
name = indicator_column(categorical_column_with_vocabulary_list(
'name', ['bob', 'george', 'wanda']))
columns = [name, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
dense_tensor == [[1, 0, 0]] # If "name" bytes_list is ["bob"]
dense_tensor == [[1, 0, 1]] # If "name" bytes_list is ["bob", "wanda"]
dense_tensor == [[2, 0, 0]] # If "name" bytes_list is ["bob", "bob"]
```
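The same idea as a self-contained TF 2.x sketch, using `DenseFeatures` in place of the V1 `input_layer`:
```
import tensorflow as tf

name = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_list(
        'name', ['bob', 'george', 'wanda']))
features = {'name': tf.constant([['bob'], ['wanda'], ['bob']])}
dense_tensor = tf.keras.layers.DenseFeatures([name])(features)
print(dense_tensor)  # [[1. 0. 0.] [0. 0. 1.] [1. 0. 0.]]
```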
| Args |
| `categorical_column` | A `CategoricalColumn` which is created by `categorical_column_with_*` or `crossed_column` functions. |
| Returns |
| An `IndicatorColumn`. |
| Raises |
| `ValueError` | If `categorical_column` is not CategoricalColumn type. |
tensorflow tf.feature_column.sequence_categorical_column_with_vocabulary_list tf.feature\_column.sequence\_categorical\_column\_with\_vocabulary\_list
========================================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/sequence_feature_column.py#L246-L305) |
A sequence of categorical terms where ids use an in-memory list.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_list`](https://www.tensorflow.org/api_docs/python/tf/feature_column/sequence_categorical_column_with_vocabulary_list)
```
tf.feature_column.sequence_categorical_column_with_vocabulary_list(
key, vocabulary_list, dtype=None, default_value=-1, num_oov_buckets=0
)
```
Pass this to `embedding_column` or `indicator_column` to convert sequence categorical data into a dense representation for input to a sequence NN, such as an RNN.
#### Example:
```
colors = sequence_categorical_column_with_vocabulary_list(
key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
num_oov_buckets=2)
colors_embedding = embedding_column(colors, dimension=3)
columns = [colors_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
```
| Args |
| `key` | A unique string identifying the input feature. |
| `vocabulary_list` | An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in `vocabulary_list`. Must be castable to `dtype`. |
| `dtype` | The type of features. Only string and integer types are supported. If `None`, it will be inferred from `vocabulary_list`. |
| `default_value` | The integer ID value to return for out-of-vocabulary feature values, defaults to `-1`. This can not be specified with a positive `num_oov_buckets`. |
| `num_oov_buckets` | Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a hash of the input value. A positive `num_oov_buckets` can not be specified with `default_value`. |
| Returns |
| A `SequenceCategoricalColumn`. |
| Raises |
| `ValueError` | if `vocabulary_list` is empty, or contains duplicate keys. |
| `ValueError` | `num_oov_buckets` is a negative integer. |
| `ValueError` | `num_oov_buckets` and `default_value` are both specified. |
| `ValueError` | if `dtype` is not integer or string. |
tensorflow tf.feature_column.categorical_column_with_hash_bucket tf.feature\_column.categorical\_column\_with\_hash\_bucket
==========================================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/feature_column/feature_column_v2.py#L1172-L1239) |
Represents sparse feature where ids are set by hashing.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.feature_column.categorical_column_with_hash_bucket`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket)
```
tf.feature_column.categorical_column_with_hash_bucket(
key,
hash_bucket_size,
dtype=tf.dtypes.string
)
```
Use this when your sparse features are in string or integer format, and you want to distribute your inputs into a finite number of buckets by hashing: for string input, `output_id = Hash(input_feature_string) % bucket_size`. For integer input, the value is first converted to its string representation and then hashed by the same formula.
For input dictionary `features`, `features[key]` is either `Tensor` or `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int and `''` for string, which will be dropped by this feature column.
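The bucketing formula can be sketched with `tf.strings.to_hash_bucket_fast`; whether this matches the column's internal hash exactly is an implementation detail, so treat the resulting ids as illustrative only:
```
import tensorflow as tf

# Maps each string to a bucket id in [0, 10000). Illustrative only:
# the exact ids produced by the feature column may differ.
keywords = tf.constant(['Tensorflow', 'Keras', 'RNN'])
bucket_ids = tf.strings.to_hash_bucket_fast(keywords, num_buckets=10000)
print(bucket_ids)
```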
#### Example:
```
import tensorflow as tf
keywords = tf.feature_column.categorical_column_with_hash_bucket("keywords",
10000)
columns = [keywords]
features = {'keywords': tf.constant([['Tensorflow', 'Keras', 'RNN', 'LSTM',
'CNN'], ['LSTM', 'CNN', 'Tensorflow', 'Keras', 'RNN'], ['CNN', 'Tensorflow',
'LSTM', 'Keras', 'RNN']])}
linear_prediction, _, _ = tf.compat.v1.feature_column.linear_model(features,
columns)
# or
import tensorflow as tf
keywords = tf.feature_column.categorical_column_with_hash_bucket("keywords",
10000)
keywords_embedded = tf.feature_column.embedding_column(keywords, 16)
columns = [keywords_embedded]
features = {'keywords': tf.constant([['Tensorflow', 'Keras', 'RNN', 'LSTM',
'CNN'], ['LSTM', 'CNN', 'Tensorflow', 'Keras', 'RNN'], ['CNN', 'Tensorflow',
'LSTM', 'Keras', 'RNN']])}
input_layer = tf.keras.layers.DenseFeatures(columns)
dense_tensor = input_layer(features)
```
| Args |
| `key` | A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature `Tensor` objects, and feature columns. |
| `hash_bucket_size` | An int > 1. The number of buckets. |
| `dtype` | The type of features. Only string and integer types are supported. |
| Returns |
| A `HashedCategoricalColumn`. |
| Raises |
| `ValueError` | `hash_bucket_size` is not greater than 1. |
| `ValueError` | `dtype` is neither string nor integer. |
tensorflow tf.debugging.disable_check_numerics tf.debugging.disable\_check\_numerics
=====================================
Disable the eager/graph unified numerics checking mechanism.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.disable_check_numerics`](https://www.tensorflow.org/api_docs/python/tf/debugging/disable_check_numerics)
```
tf.debugging.disable_check_numerics()
```
This method can be used after a call to [`tf.debugging.enable_check_numerics()`](enable_check_numerics) to disable the numerics-checking mechanism that catches infinity and NaN values output by ops executed eagerly or in tf.function-compiled graphs.
This method is idempotent. Calling it multiple times has the same effect as calling it once.
This method takes effect only on the thread in which it is called.
tensorflow tf.debugging.set_log_device_placement tf.debugging.set\_log\_device\_placement
========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/eager/context.py#L2398-L2430) |
Turns logging for device placement decisions on or off.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.set_log_device_placement`](https://www.tensorflow.org/api_docs/python/tf/debugging/set_log_device_placement)
```
tf.debugging.set_log_device_placement(
enabled
)
```
Operations execute on a particular device, producing and consuming tensors on that device. This may change the performance of the operation or require TensorFlow to copy data to or from an accelerator, so knowing where operations execute is useful for debugging performance issues.
For more advanced profiling, use the [TensorFlow profiler](https://www.tensorflow.org/guide/profiler).
Device placement for operations is typically controlled by a [`tf.device`](../device) scope, but there are exceptions, for example operations on a [`tf.Variable`](../variable) which follow the initial placement of the variable. Turning off soft device placement (with [`tf.config.set_soft_device_placement`](../config/set_soft_device_placement)) provides more explicit control.
```
tf.debugging.set_log_device_placement(True)
tf.ones([])
# [...] op Fill in device /job:localhost/replica:0/task:0/device:GPU:0
with tf.device("CPU"):
tf.ones([])
# [...] op Fill in device /job:localhost/replica:0/task:0/device:CPU:0
tf.debugging.set_log_device_placement(False)
```
Turning on [`tf.debugging.set_log_device_placement`](set_log_device_placement) also logs the placement of ops inside [`tf.function`](../function) when the function is called.
| Args |
| `enabled` | Whether to enable device placement logging. |
tensorflow tf.debugging.assert_same_float_dtype tf.debugging.assert\_same\_float\_dtype
=======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L2200-L2232) |
Validate and return float type based on `tensors` and `dtype`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.assert_same_float_dtype`](https://www.tensorflow.org/api_docs/python/tf/debugging/assert_same_float_dtype), [`tf.compat.v1.debugging.assert_same_float_dtype`](https://www.tensorflow.org/api_docs/python/tf/debugging/assert_same_float_dtype)
```
tf.debugging.assert_same_float_dtype(
tensors=None, dtype=None
)
```
For ops such as matrix multiplication, inputs and weights must be of the same float type. This function validates that all `tensors` are the same type, validates that type is `dtype` (if supplied), and returns the type. Type must be a floating point type. If neither `tensors` nor `dtype` is supplied, the function will return [`dtypes.float32`](../dtypes#float32).
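For example, a minimal sketch assuming TF 2.x:
```
import tensorflow as tf

x = tf.constant([1.0, 2.0], dtype=tf.float32)
w = tf.constant([[3.0], [4.0]], dtype=tf.float32)
# All inputs are float32, so the validated type is returned.
dtype = tf.debugging.assert_same_float_dtype([x, w])
print(dtype)  # <dtype: 'float32'>
```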
| Args |
| `tensors` | Tensors of input values. Can include `None` elements, which will be ignored. |
| `dtype` | Expected type. |
| Returns |
| Validated type. |
| Raises |
| `ValueError` | if neither `tensors` nor `dtype` is supplied, or result is not float, or the common type of the inputs is not a floating point type. |
tensorflow tf.debugging.assert_non_negative tf.debugging.assert\_non\_negative
==================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L569-L601) |
Assert the condition `x >= 0` holds element-wise.
```
tf.debugging.assert_non_negative(
x, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] >= 0` holds for every element of `x`. If `x` is empty, this is trivially satisfied.
If `x` is not >= 0 everywhere, `message`, as well as the first `summarize` entries of `x` are printed, and `InvalidArgumentError` is raised.
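For example, a minimal eager-mode sketch:
```
import tensorflow as tf

tf.debugging.assert_non_negative(tf.constant([0, 1, 2]))  # passes silently
try:
  tf.debugging.assert_non_negative(
      tf.constant([1, -2, 3]), message='Found a negative entry:')
except tf.errors.InvalidArgumentError as e:
  print(e.message)  # prefix message plus the offending entries
```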
| Args |
| `x` | Numeric `Tensor`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_non\_negative". |
| Returns |
| Op raising `InvalidArgumentError` unless `x` is all non-negative. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x[i] >= 0` is False. The check can be performed immediately during eager execution or if `x` is statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_less tf.debugging.assert\_less
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L914-L947) |
Assert the condition `x < y` holds element-wise.
#### View aliases
**Main aliases**
[`tf.assert_less`](https://www.tensorflow.org/api_docs/python/tf/debugging/assert_less)
```
tf.debugging.assert_less(
x, y, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] < y[i]` holds for every pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is trivially satisfied.
If `x` is not less than `y` element-wise, `message`, as well as the first `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.
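A minimal sketch of the graph-mode pattern described in Returns below, gating downstream computation on the check inside a [`tf.function`](../function):
```
import tensorflow as tf

@tf.function
def scaled(x, limit):
  check = tf.debugging.assert_less(x, limit, message='x must stay below limit')
  # Block the multiply until the assertion has executed.
  with tf.control_dependencies([check]):
    return x * 2.0

print(scaled(tf.constant([1.0, 2.0]), tf.constant(3.0)))  # [2. 4.]
```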
| Args |
| `x` | Numeric `Tensor`. |
| `y` | Numeric `Tensor`, same dtype as and broadcastable to `x`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_less". |
| Returns |
| Op that raises `InvalidArgumentError` if `x < y` is False. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x < y` is False. The check can be performed immediately during eager execution or if `x` and `y` are statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_non_positive tf.debugging.assert\_non\_positive
==================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L625-L657) |
Assert the condition `x <= 0` holds element-wise.
```
tf.debugging.assert_non_positive(
x, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] <= 0` holds for every element of `x`. If `x` is empty, this is trivially satisfied.
If `x` is not <= 0 everywhere, `message`, as well as the first `summarize` entries of `x` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | Numeric `Tensor`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_non\_positive". |
| Returns |
| Op raising `InvalidArgumentError` unless `x` is all non-positive. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x[i] <= 0` is False. The check can be performed immediately during eager execution or if `x` is statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_type tf.debugging.assert\_type
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L1569-L1601) |
Asserts that the given `Tensor` is of the specified type.
```
tf.debugging.assert_type(
tensor, tf_type, message=None, name=None
)
```
This can always be checked statically, so this method returns nothing.
#### Example:
```
a = tf.Variable(1.0)
tf.debugging.assert_type(a, tf_type=tf.float32)
```
```
b = tf.constant(21)
tf.debugging.assert_type(b, tf_type=tf.bool)
Traceback (most recent call last):
TypeError: ...
```
```
c = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2],
dense_shape=[3, 4])
tf.debugging.assert_type(c, tf_type=tf.int32)
```
| Args |
| `tensor` | A `Tensor`, `SparseTensor` or [`tf.Variable`](../variable) . |
| `tf_type` | A tensorflow type ([`dtypes.float32`](../dtypes#float32), [`tf.int64`](../../tf#int64), [`dtypes.bool`](../dtypes#bool), etc). |
| `message` | A string to prefix to the default message. |
| `name` | A name for this operation. Defaults to "assert\_type" |
| Raises |
| `TypeError` | If the tensor's data type doesn't match `tf_type`. |
tensorflow tf.debugging.assert_greater tf.debugging.assert\_greater
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L1004-L1038) |
Assert the condition `x > y` holds element-wise.
#### View aliases
**Main aliases**
[`tf.assert_greater`](https://www.tensorflow.org/api_docs/python/tf/debugging/assert_greater)
```
tf.debugging.assert_greater(
x, y, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] > y[i]` holds for every pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is trivially satisfied.
If `x` is not greater than `y` element-wise, `message`, as well as the first `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | Numeric `Tensor`. |
| `y` | Numeric `Tensor`, same dtype as and broadcastable to `x`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_greater". |
| Returns |
| Op that raises `InvalidArgumentError` if `x > y` is False. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x > y` is False. The check can be performed immediately during eager execution or if `x` and `y` are statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_rank tf.debugging.assert\_rank
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L1145-L1176) |
Assert that `x` has rank equal to `rank`.
#### View aliases
**Main aliases**
[`tf.assert_rank`](https://www.tensorflow.org/api_docs/python/tf/debugging/assert_rank)
```
tf.debugging.assert_rank(
x, rank, message=None, name=None
)
```
This Op checks that the rank of `x` is equal to `rank`.
If `x` has a different rank, `message`, as well as the shape of `x` are printed, and `InvalidArgumentError` is raised.
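For example, a minimal sketch (a statically detected mismatch may surface as `ValueError` rather than `InvalidArgumentError`, so both are caught here):
```
import tensorflow as tf

x = tf.ones([2, 3])
tf.debugging.assert_rank(x, 2)  # passes; the rank is statically known
try:
  tf.debugging.assert_rank(x, 3)
except (ValueError, tf.errors.InvalidArgumentError) as e:
  print(e)
```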
| Args |
| `x` | `Tensor`. |
| `rank` | Scalar integer `Tensor`. |
| `message` | A string to prefix to the default message. |
| `name` | A name for this operation (optional). Defaults to "assert\_rank". |
| Returns |
| Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x` does not have rank `rank`. The check can be performed immediately during eager execution or if the shape of `x` is statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_less_equal tf.debugging.assert\_less\_equal
================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L958-L992) |
Assert the condition `x <= y` holds element-wise.
```
tf.debugging.assert_less_equal(
x, y, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] <= y[i]` holds for every pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is trivially satisfied.
If `x` is not less than or equal to `y` element-wise, `message`, as well as the first `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | Numeric `Tensor`. |
| `y` | Numeric `Tensor`, same dtype as and broadcastable to `x`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_less\_equal". |
| Returns |
| Op that raises `InvalidArgumentError` if `x <= y` is False. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x <= y` is False. The check can be performed immediately during eager execution or if `x` and `y` are statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.enable_traceback_filtering tf.debugging.enable\_traceback\_filtering
=========================================
Enable filtering out TensorFlow-internal frames in exception stack traces.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.enable_traceback_filtering`](https://www.tensorflow.org/api_docs/python/tf/debugging/enable_traceback_filtering)
```
tf.debugging.enable_traceback_filtering()
```
Raw TensorFlow stack traces involve many internal frames, which can be challenging to read through, while not being actionable for end users. By default, TensorFlow filters internal frames in most exceptions that it raises, to keep stack traces short, readable, and focused on what's actionable for end users (their own code).
If you have previously disabled traceback filtering via [`tf.debugging.disable_traceback_filtering()`](disable_traceback_filtering), you can re-enable it via [`tf.debugging.enable_traceback_filtering()`](enable_traceback_filtering).
| Raises |
| `RuntimeError` | If Python version is not at least 3.7. |
tensorflow tf.debugging.enable_check_numerics tf.debugging.enable\_check\_numerics
====================================
Enable tensor numerics checking in an eager/graph unified fashion.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.enable_check_numerics`](https://www.tensorflow.org/api_docs/python/tf/debugging/enable_check_numerics)
```
tf.debugging.enable_check_numerics(
stack_height_limit=30, path_length_limit=50
)
```
The numerics checking mechanism will cause any TensorFlow eager execution or graph execution to error out as soon as an op's output tensor contains infinity or NaN.
This method is idempotent. Calling it multiple times has the same effect as calling it once.
This method takes effect only on the thread in which it is called.
When an op's float-type output tensor contains any Infinity or NaN, an [`tf.errors.InvalidArgumentError`](../errors/invalidargumenterror) will be thrown, with an error message that reveals the following information:
* The type of the op that generated the tensor with bad numerics.
* Data type (dtype) of the tensor.
* Shape of the tensor (to the extent known at the time of eager execution or graph construction).
* Name of the containing graph (if available).
* (Graph mode only): The stack trace of the intra-graph op's creation, with a stack-height limit and a path-length limit for visual clarity. The stack frames that belong to the user's code (as opposed to tensorflow's internal code) are highlighted with a text arrow ("->").
* (Eager mode only): How many of the offending tensor's elements are `Infinity` and `NaN`, respectively.
Once enabled, the check-numerics mechanism can be disabled by using [`tf.debugging.disable_check_numerics()`](disable_check_numerics).
#### Example usage:
1. Catching infinity during the execution of a [`tf.function`](../function) graph:
```
import tensorflow as tf
tf.debugging.enable_check_numerics()
@tf.function
def square_log_x_plus_1(x):
v = tf.math.log(x + 1)
return tf.math.square(v)
x = -1.0
# When the following line runs, a function graph will be compiled
# from the Python function `square_log_x_plus_1()`. Due to the
# `enable_check_numerics()` call above, the graph will contain
# numerics checking ops that will run during the function graph's
# execution. The function call generates an -infinity when the Log
# (logarithm) op operates on the output tensor of the Add op.
# The program errors out at this line, printing an error message.
y = square_log_x_plus_1(x)
z = -y
```
2. Catching NaN during eager execution:
```
import numpy as np
import tensorflow as tf
tf.debugging.enable_check_numerics()
x = np.array([[0.0, -1.0], [4.0, 3.0]])
# The following line executes the Sqrt op eagerly. Due to the negative
# element in the input array, a NaN is generated. Due to the
# `enable_check_numerics()` call above, the program errors immediately
# at this line, printing an error message.
y = tf.math.sqrt(x)
z = tf.matmul(y, y)
```
>
> **Note:** If your code is running on TPUs, be sure to call [`tf.config.set_soft_device_placement(True)`](../config/set_soft_device_placement) before calling [`tf.debugging.enable_check_numerics()`](enable_check_numerics) as this API uses automatic outside compilation on TPUs. For example:
>
```
tf.config.set_soft_device_placement(True)
tf.debugging.enable_check_numerics()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
# ...
```
| Args |
| `stack_height_limit` | Limit to the height of the printed stack trace. Applicable only to ops in [`tf.function`](../function)s (graphs). |
| `path_length_limit` | Limit to the file path included in the printed stack trace. Applicable only to ops in [`tf.function`](../function)s (graphs). |
tensorflow tf.debugging.check_numerics tf.debugging.check\_numerics
============================
Checks a tensor for NaN and Inf values.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.check_numerics`](https://www.tensorflow.org/api_docs/python/tf/debugging/check_numerics), [`tf.compat.v1.debugging.check_numerics`](https://www.tensorflow.org/api_docs/python/tf/debugging/check_numerics)
```
tf.debugging.check_numerics(
tensor, message, name=None
)
```
When run, reports an `InvalidArgument` error if `tensor` has any values that are not a number (NaN) or infinity (Inf). Otherwise, returns the input tensor.
#### Example usage:
```
a = tf.Variable(1.0)
tf.debugging.check_numerics(a, message='')
b = tf.Variable(np.nan)
try:
tf.debugging.check_numerics(b, message='Checking b')
except Exception as e:
assert "Checking b : Tensor had NaN values" in e.message
c = tf.Variable(np.inf)
try:
tf.debugging.check_numerics(c, message='Checking c')
except Exception as e:
assert "Checking c : Tensor had Inf values" in e.message
```
| Args |
| `tensor` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. |
| `message` | A `string`. Prefix of the error message. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `tensor`. |
| programming_docs |
tensorflow tf.debugging.Assert tf.debugging.Assert
===================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/control_flow_ops.py#L115-L179) |
Asserts that the given condition is true.
#### View aliases
**Main aliases**
[`tf.Assert`](https://www.tensorflow.org/api_docs/python/tf/debugging/Assert)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.Assert`](https://www.tensorflow.org/api_docs/python/tf/debugging/Assert), [`tf.compat.v1.debugging.Assert`](https://www.tensorflow.org/api_docs/python/tf/debugging/Assert)
```
tf.debugging.Assert(
condition, data, summarize=None, name=None
)
```
If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.
| Args |
| `condition` | The condition to evaluate. |
| `data` | The tensors to print out when condition is false. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). |
| Returns |
| `assert_op` | An `Operation` that, when executed, raises a [`tf.errors.InvalidArgumentError`](../errors/invalidargumenterror) if `condition` is not true. |
| Raises |
>
> **Note:** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark\_used() method.
>
TF1 compatibility
-----------------
When in TF V1 mode (that is, outside [`tf.function`](../function)), `Assert` needs a control dependency on the output to ensure the assertion executes:
```
# Ensure maximum element of x is smaller or equal to 1
assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
with tf.control_dependencies([assert_op]):
... code using x ...
```
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_rank_at_least tf.debugging.assert\_rank\_at\_least
====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L1244-L1275) |
Assert that `x` has rank of at least `rank`.
```
tf.debugging.assert_rank_at_least(
x, rank, message=None, name=None
)
```
This Op checks that the rank of `x` is greater or equal to `rank`.
If `x` has a rank lower than `rank`, `message`, as well as the shape of `x` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | `Tensor`. |
| `rank` | Scalar integer `Tensor`. |
| `message` | A string to prefix to the default message. |
| `name` | A name for this operation (optional). Defaults to "assert\_rank\_at\_least". |
| Returns |
| Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | `x` does not have rank at least `rank`, but the rank cannot be statically determined. |
| `ValueError` | If static checks determine `x` has mismatched rank. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_rank_in tf.debugging.assert\_rank\_in
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L1411-L1441) |
Assert that `x` has a rank in `ranks`.
```
tf.debugging.assert_rank_in(
x, ranks, message=None, name=None
)
```
This Op checks that the rank of `x` is in `ranks`.
If `x` has a different rank, `message`, as well as the shape of `x` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | `Tensor`. |
| `ranks` | `Iterable` of scalar `Tensor` objects. |
| `message` | A string to prefix to the default message. |
| `name` | A name for this operation (optional). Defaults to "assert\_rank\_in". |
| Returns |
| Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. If static checks determine `x` has matching rank, a `no_op` is returned. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | `x` does not have rank in `ranks`, but the rank cannot be statically determined. |
| `ValueError` | If static checks determine `x` has mismatched rank. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.is_traceback_filtering_enabled tf.debugging.is\_traceback\_filtering\_enabled
==============================================
Check whether traceback filtering is currently enabled.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.is_traceback_filtering_enabled`](https://www.tensorflow.org/api_docs/python/tf/debugging/is_traceback_filtering_enabled)
```
tf.debugging.is_traceback_filtering_enabled()
```
See also [`tf.debugging.enable_traceback_filtering()`](enable_traceback_filtering) and [`tf.debugging.disable_traceback_filtering()`](disable_traceback_filtering). Note that filtering out internal frames from the tracebacks of exceptions raised by TensorFlow code is the default behavior.
| Returns |
| True if traceback filtering is enabled (e.g. if [`tf.debugging.enable_traceback_filtering()`](enable_traceback_filtering) was called), and False otherwise (e.g. if [`tf.debugging.disable_traceback_filtering()`](disable_traceback_filtering) was called). |
tensorflow tf.debugging.assert_none_equal tf.debugging.assert\_none\_equal
================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L728-L764) |
Assert the condition `x != y` holds for all elements.
```
tf.debugging.assert_none_equal(
x, y, summarize=None, message=None, name=None
)
```
This Op checks that `x[i] != y[i]` holds for every pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is trivially satisfied.
If any elements of `x` and `y` are equal, `message`, as well as the first `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | Numeric `Tensor`. |
| `y` | Numeric `Tensor`, same dtype as and broadcastable to `x`. |
| `summarize` | Print this many entries of each tensor. |
| `message` | A string to prefix to the default message. |
| `name` | A name for this operation (optional). Defaults to "assert\_none\_equal". |
| Returns |
| Op that raises `InvalidArgumentError` if `x != y` is ever False. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x != y` is False for any pair of elements in `x` and `y`. The check can be performed immediately during eager execution or if `x` and `y` are statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_positive tf.debugging.assert\_positive
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L516-L546) |
Assert the condition `x > 0` holds element-wise.
```
tf.debugging.assert_positive(
x, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] > 0` holds for every element of `x`. If `x` is empty, this is trivially satisfied.
If `x` is not positive everywhere, `message`, as well as the first `summarize` entries of `x` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | Numeric `Tensor`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_positive". |
| Returns |
| Op raising `InvalidArgumentError` unless `x` is all positive. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x[i] > 0` is False. The check can be performed immediately during eager execution or if `x` is statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.is_numeric_tensor tf.debugging.is\_numeric\_tensor
================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L2033-L2065) |
Returns `True` if the elements of `tensor` are numbers.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.is_numeric_tensor`](https://www.tensorflow.org/api_docs/python/tf/debugging/is_numeric_tensor), [`tf.compat.v1.is_numeric_tensor`](https://www.tensorflow.org/api_docs/python/tf/debugging/is_numeric_tensor)
```
tf.debugging.is_numeric_tensor(
tensor
)
```
Specifically, returns `True` if the dtype of `tensor` is one of the following:
* [`tf.float16`](../../tf#float16)
* [`tf.float32`](../../tf#float32)
* [`tf.float64`](../../tf#float64)
* [`tf.int8`](../../tf#int8)
* [`tf.int16`](../../tf#int16)
* [`tf.int32`](../../tf#int32)
* [`tf.int64`](../../tf#int64)
* [`tf.uint8`](../../tf#uint8)
* [`tf.uint16`](../../tf#uint16)
* [`tf.uint32`](../../tf#uint32)
* [`tf.uint64`](../../tf#uint64)
* [`tf.qint8`](../../tf#qint8)
* [`tf.qint16`](../../tf#qint16)
* [`tf.qint32`](../../tf#qint32)
* [`tf.quint8`](../../tf#quint8)
* [`tf.quint16`](../../tf#quint16)
* [`tf.complex64`](../../tf#complex64)
* [`tf.complex128`](../../tf#complex128)
* [`tf.bfloat16`](../../tf#bfloat16)
Returns `False` if `tensor` is of a non-numeric type or if `tensor` is not a [`tf.Tensor`](../tensor) object.
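For example:
```
import tensorflow as tf

print(tf.debugging.is_numeric_tensor(tf.constant([1.0, 2.0])))  # True
print(tf.debugging.is_numeric_tensor(tf.constant(['a', 'b'])))  # False
print(tf.debugging.is_numeric_tensor([1.0, 2.0]))  # False: not a tf.Tensor
```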
tensorflow tf.debugging.assert_integer tf.debugging.assert\_integer
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L1509-L1527) |
Assert that `x` is of integer dtype.
```
tf.debugging.assert_integer(
x, message=None, name=None
)
```
If `x` has a non-integer type, `message`, as well as the dtype of `x` are printed, and `InvalidArgumentError` is raised.
This can always be checked statically, so this method returns nothing.
| Args |
| `x` | A `Tensor`. |
| `message` | A string to prefix to the default message. |
| `name` | A name for this operation (optional). Defaults to "assert\_integer". |
| Raises |
| `TypeError` | If `x.dtype` is not a non-quantized integer type. |
tensorflow tf.debugging.assert_shapes tf.debugging.assert\_shapes
===========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L1693-L1757) |
Assert tensor shapes and dimension size relationships between tensors.
```
tf.debugging.assert_shapes(
shapes, data=None, summarize=None, message=None, name=None
)
```
This Op checks that a collection of tensors shape relationships satisfies given constraints.
#### Example:
```
n = 10
q = 3
d = 7
x = tf.zeros([n,q])
y = tf.ones([n,d])
param = tf.Variable([1.0, 2.0, 3.0])
scalar = 1.0
tf.debugging.assert_shapes([
(x, ('N', 'Q')),
(y, ('N', 'D')),
(param, ('Q',)),
(scalar, ()),
])
```
```
tf.debugging.assert_shapes([
(x, ('N', 'D')),
(y, ('N', 'D'))
])
Traceback (most recent call last):
ValueError: ...
```
If `x`, `y`, `param` or `scalar` does not have a shape that satisfies all specified constraints, `message`, as well as the first `summarize` entries of the first encountered violating tensor are printed, and `InvalidArgumentError` is raised.
Size entries in the specified shapes are checked against other entries by their **hash**, except:
* a size entry is interpreted as an explicit size if it can be parsed as an integer primitive.
* a size entry is interpreted as *any* size if it is None or '.'.
If the first entry of a shape is `...` (type `Ellipsis`) or `'*'`, that indicates a variable number of outer dimensions of unspecified size, i.e. the constraint applies to the inner-most dimensions only (see the sketch below).
Scalar tensors and specified shapes of length zero (excluding the 'inner-most' prefix) are both treated as having a single dimension of size one.
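A minimal sketch of the `...` prefix mentioned above: the two outer dimensions of `w` are unconstrained, while its inner-most dimension must match the size bound to `'N'`:
```
import tensorflow as tf

w = tf.zeros([4, 2, 3])
b = tf.zeros([3])
tf.debugging.assert_shapes([
    (w, (..., 'N')),  # any number of outer dims; inner-most dim is N
    (b, ('N',)),      # N is consistently 3, so this passes
])
```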
| Args |
| `shapes` | dictionary with (`Tensor` to shape) items, or a list of (`Tensor`, shape) tuples. A shape must be an iterable. |
| `data` | The tensors to print out if the condition is False. Defaults to error message and first few entries of the violating tensor. |
| `summarize` | Print this many entries of the tensor. |
| `message` | A string to prefix to the default message. |
| `name` | A name for this operation (optional). Defaults to "assert\_shapes". |
| Raises |
| `ValueError` | If static checks determine any shape constraint is violated. |
tensorflow Module: tf.debugging.experimental Module: tf.debugging.experimental
=================================
Public API for tf.debugging.experimental namespace.
Functions
---------
[`disable_dump_debug_info(...)`](experimental/disable_dump_debug_info): Disable the currently-enabled debugging dumping.
[`enable_dump_debug_info(...)`](experimental/enable_dump_debug_info): Enable dumping debugging information from a TensorFlow program.
tensorflow tf.debugging.assert_proper_iterable tf.debugging.assert\_proper\_iterable
=====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L430-L459) |
Static assert that values is a "proper" iterable.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.assert_proper_iterable`](https://www.tensorflow.org/api_docs/python/tf/debugging/assert_proper_iterable), [`tf.compat.v1.debugging.assert_proper_iterable`](https://www.tensorflow.org/api_docs/python/tf/debugging/assert_proper_iterable)
```
tf.debugging.assert_proper_iterable(
values
)
```
`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, byte/text type are all iterables themselves.
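For example:
```
import tensorflow as tf

# A list of Tensors is a "proper" iterable.
tf.debugging.assert_proper_iterable([tf.constant(1), tf.constant(2)])
# A single Tensor is iterable but not "proper" here, so this raises.
try:
  tf.debugging.assert_proper_iterable(tf.constant([1, 2]))
except TypeError as e:
  print(e)
```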
| Args |
| `values` | Object to be checked. |
| Raises |
| `TypeError` | If `values` is not iterable or is one of `Tensor`, `SparseTensor`, `np.array`, [`tf.compat.bytes_or_text_types`](../compat#bytes_or_text_types). |
tensorflow tf.debugging.assert_all_finite tf.debugging.assert\_all\_finite
================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/numerics.py#L50-L68) |
Assert that the tensor does not contain any NaN's or Inf's.
```
tf.debugging.assert_all_finite(
x, message, name=None
)
```
| Args |
| `x` | Tensor to check. |
| `message` | Message to log on failure. |
| `name` | A name for this operation (optional). |
| Returns |
| Same tensor as `x`. |
tensorflow tf.debugging.assert_negative tf.debugging.assert\_negative
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L462-L492) |
Assert the condition `x < 0` holds element-wise.
```
tf.debugging.assert_negative(
x, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] < 0` holds for every element of `x`. If `x` is empty, this is trivially satisfied.
If `x` is not negative everywhere, `message`, as well as the first `summarize` entries of `x` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | Numeric `Tensor`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_negative". |
| Returns |
| Op raising `InvalidArgumentError` unless `x` is all negative. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x[i] < 0` is False. The check can be performed immediately during eager execution or if `x` is statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_greater_equal tf.debugging.assert\_greater\_equal
===================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L1049-L1084) |
Assert the condition `x >= y` holds element-wise.
```
tf.debugging.assert_greater_equal(
x, y, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] >= y[i]` holds for every pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is trivially satisfied.
If `x` is not greater or equal to `y` element-wise, `message`, as well as the first `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.
| Args |
| `x` | Numeric `Tensor`. |
| `y` | Numeric `Tensor`, same dtype as and broadcastable to `x`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_greater\_equal". |
| Returns |
| Op that raises `InvalidArgumentError` if `x >= y` is False. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x >= y` is False. The check can be performed immediately during eager execution or if `x` and `y` are statically known. |
eager compatibility
-------------------
returns None
tensorflow tf.debugging.assert_scalar tf.debugging.assert\_scalar
===========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L2235-L2255) |
Asserts that the given `tensor` is a scalar.
```
tf.debugging.assert_scalar(
tensor, message=None, name=None
)
```
This function raises `ValueError` unless it can be certain that the given `tensor` is a scalar. `ValueError` is also raised if the shape of `tensor` is unknown.
This is always checked statically, so this method returns nothing.
| Args |
| `tensor` | A `Tensor`. |
| `message` | A string to prefix to the default message. |
| `name` | A name for this operation. Defaults to "assert\_scalar" |
| Raises |
| `ValueError` | If the tensor is not scalar (rank 0), or if its shape is unknown. |
tensorflow tf.debugging.disable_traceback_filtering tf.debugging.disable\_traceback\_filtering
==========================================
Disable filtering out TensorFlow-internal frames in exception stack traces.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.disable_traceback_filtering`](https://www.tensorflow.org/api_docs/python/tf/debugging/disable_traceback_filtering)
```
tf.debugging.disable_traceback_filtering()
```
Raw TensorFlow stack traces involve many internal frames, which can be challenging to read through, while not being actionable for end users. By default, TensorFlow filters internal frames in most exceptions that it raises, to keep stack traces short, readable, and focused on what's actionable for end users (their own code).
Calling [`tf.debugging.disable_traceback_filtering`](disable_traceback_filtering) disables this filtering mechanism, meaning that the stack traces of TensorFlow exceptions will include all frames, in particular TensorFlow-internal ones.
**If you are debugging a TensorFlow-internal issue, you need to call [`tf.debugging.disable_traceback_filtering`](disable_traceback_filtering)**. To re-enable traceback filtering afterwards, you can call [`tf.debugging.enable_traceback_filtering()`](enable_traceback_filtering).
tensorflow tf.debugging.get_log_device_placement tf.debugging.get\_log\_device\_placement
========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/eager/context.py#L2388-L2395) |
Returns whether device placements are logged.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.get_log_device_placement`](https://www.tensorflow.org/api_docs/python/tf/debugging/get_log_device_placement)
```
tf.debugging.get_log_device_placement()
```
| Returns |
| If device placements are logged. |
tensorflow tf.debugging.assert_near tf.debugging.assert\_near
=========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L777-L828) |
Assert the condition `x` and `y` are close element-wise.
```
tf.debugging.assert_near(
x, y, rtol=None, atol=None, message=None, summarize=None, name=None
)
```
This Op checks that `tf.abs(x[i] - y[i]) < atol + rtol * tf.abs(y[i])` holds for every pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is trivially satisfied.
If any elements of `x` and `y` are not close, `message`, as well as the first `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.
The default `atol` and `rtol` are `10 * eps`, where `eps` is the smallest representable positive number such that `1 + eps != 1`. This is about `1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`. See `numpy.finfo`.
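For example, a minimal sketch in float32, where the default combined tolerance is roughly `2.4e-6` for values near 1:
```
import tensorflow as tf

x = tf.constant([1.0 + 1e-7], dtype=tf.float32)
y = tf.constant([1.0], dtype=tf.float32)
# |x - y| is about 1e-7, within the default float32 tolerance, so this passes.
tf.debugging.assert_near(x, y)
```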
| Args |
| `x` | Float or complex `Tensor`. |
| `y` | Float or complex `Tensor`, same dtype as and broadcastable to `x`. |
| `rtol` | `Tensor`. Same `dtype` as, and broadcastable to, `x`. The relative tolerance. Default is `10 * eps`. |
| `atol` | `Tensor`. Same `dtype` as, and broadcastable to, `x`. The absolute tolerance. Default is `10 * eps`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_near". |
| Returns |
| Op that raises `InvalidArgumentError` if `x` and `y` are not close enough. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x != y` is False for any pair of elements in `x` and `y`. The check can be performed immediately during eager execution or if `x` and `y` are statically known. |
eager compatibility
-------------------
returns None
numpy compatibility
-------------------
Similar to `numpy.testing.assert_allclose`, except tolerance depends on data type. This is due to the fact that `TensorFlow` is often used with `32bit`, `64bit`, and even `16bit` data.
tensorflow tf.debugging.assert_equal tf.debugging.assert\_equal
==========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L681-L713) |
Assert the condition `x == y` holds element-wise.
#### View aliases
**Main aliases**
[`tf.assert_equal`](https://www.tensorflow.org/api_docs/python/tf/debugging/assert_equal)
```
tf.debugging.assert_equal(
x, y, message=None, summarize=None, name=None
)
```
This Op checks that `x[i] == y[i]` holds for every pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is trivially satisfied.
If `x` and `y` are not equal, `message`, as well as the first `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.
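A minimal sketch of both usages (the function and message are illustrative); in eager mode the check runs immediately, while inside a [`tf.function`](../function) the returned op can gate follow-up computation:
```
import tensorflow as tf

# Eager: performed immediately, returns None on success.
tf.debugging.assert_equal(tf.constant([1, 2]), tf.constant([1, 2]))

@tf.function
def checked_reciprocal(t):
  # Values are unknown at trace time, so this builds a runtime assert op.
  check = tf.debugging.assert_equal(
      tf.math.is_finite(t), True, message="non-finite input")
  with tf.control_dependencies([check]):
    return 1.0 / t

print(checked_reciprocal(tf.constant([1.0, 2.0, 4.0])))  # [1. 0.5 0.25]
```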
| Args |
| `x` | Numeric `Tensor`. |
| `y` | Numeric `Tensor`, same dtype as and broadcastable to `x`. |
| `message` | A string to prefix to the default message. |
| `summarize` | Print this many entries of each tensor. |
| `name` | A name for this operation (optional). Defaults to "assert\_equal". |
| Returns |
| Op that raises `InvalidArgumentError` if `x == y` is False. This can be used with [`tf.control_dependencies`](../control_dependencies) inside of [`tf.function`](../function)s to block followup computation until the check has executed. |
| Raises |
| `InvalidArgumentError` | if the check can be performed immediately and `x == y` is False. The check can be performed immediately during eager execution or if `x` and `y` are statically known. |
eager compatibility
-------------------
Returns `None` in eager mode; the check is performed immediately.
tensorflow tf.debugging.experimental.disable_dump_debug_info tf.debugging.experimental.disable\_dump\_debug\_info
====================================================
Disable the currently-enabled debugging dumping.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.experimental.disable_dump_debug_info`](https://www.tensorflow.org/api_docs/python/tf/debugging/experimental/disable_dump_debug_info)
```
tf.debugging.experimental.disable_dump_debug_info()
```
If the `enable_dump_debug_info()` method under the same Python namespace has been invoked before, calling this method disables it. If no call to `enable_dump_debug_info()` has been made, calling this method is a no-op. Calling this method more than once is idempotent.
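A short sketch of the pairing described above (the dump directory is arbitrary):
```
import tensorflow as tf

tf.debugging.experimental.enable_dump_debug_info("/tmp/tfdbg-dumps")
# ... run the TensorFlow code whose execution should be dumped ...
tf.debugging.experimental.disable_dump_debug_info()  # stops the dumping
tf.debugging.experimental.disable_dump_debug_info()  # now a no-op (idempotent)
```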
tensorflow tf.debugging.experimental.enable_dump_debug_info tf.debugging.experimental.enable\_dump\_debug\_info
===================================================
Enable dumping debugging information from a TensorFlow program.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.debugging.experimental.enable_dump_debug_info`](https://www.tensorflow.org/api_docs/python/tf/debugging/experimental/enable_dump_debug_info)
```
tf.debugging.experimental.enable_dump_debug_info(
dump_root,
tensor_debug_mode=DEFAULT_TENSOR_DEBUG_MODE,
circular_buffer_size=1000,
op_regex=None,
tensor_dtypes=None
)
```
The debugging information is dumped to a directory on the file system specified as `dump_root`.
The dumped debugging information can be ingested by debugger UIs.
The files in the dump directory contain the following information:
* TensorFlow Function construction (e.g., compilation of Python functions decorated with @tf.function), the op types, names (if available), context, the input and output tensors, and the associated stack traces.
* Execution of TensorFlow operations (ops) and Functions and their stack traces, op types, names (if available) and contexts. In addition, depending on the value of the `tensor_debug_mode` argument (see Args section below), the value(s) of the output tensors or more concise summaries of the tensor values will be dumped.
* A snapshot of Python source files involved in the execution of the TensorFlow program.
Once enabled, the dumping can be disabled with the corresponding `disable_dump_debug_info()` method under the same Python namespace. Calling this method more than once with the same `dump_root` is idempotent. Calling this method more than once with different `tensor_debug_mode`s leads to a `ValueError`. Calling this method more than once with different `circular_buffer_size`s leads to a `ValueError`. Calling this method with a different `dump_root` abolishes the previously-enabled `dump_root`.
#### Usage example:
```
tf.debugging.experimental.enable_dump_debug_info('/tmp/my-tfdbg-dumps')
# Code to build, train and run your TensorFlow model...
```
>
> **Note:** If your code is running on TPUs, be sure to call [`tf.config.set_soft_device_placement(True)`](../../config/set_soft_device_placement) before calling [`tf.debugging.experimental.enable_dump_debug_info()`](enable_dump_debug_info) as this API uses automatic outside compilation on TPUs. For example:
>
```
tf.config.set_soft_device_placement(True)
tf.debugging.experimental.enable_dump_debug_info(
logdir, tensor_debug_mode="FULL_HEALTH")
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
# ...
```
| Args |
| `dump_root` | The directory path where the dumping information will be written. |
| `tensor_debug_mode` | Debug mode for tensor values, as a string. The currently supported options are: * "NO\_TENSOR": (Default) Only traces the output tensors of all executed ops (including those executed eagerly at the Python level or as a part of a TensorFlow graph) and functions, while not extracting any information from the values of the tensors.
* "CURT\_HEALTH": For each floating-dtype tensor (e.g., tensors of dtypes such as `float32`, `float64` and `bfloat16`), extracts a binary bit indicating whether it contains any -infinity, +infinity or NaN.
* "CONCISE\_HEALTH": For each floating-dtype tensor, extract total element count, and counts of -infinity, +infinity and NaN elements.
* "FULL\_HEALTH": For each floating-dtype tensor, extracts the dtype, rank (number of dimensions), total element count, and counts of -infinity, +infinity and NaN elements.
* "SHAPE": For each tensor (regardless of dtype), extracts its dtype, rank, total element count and shape.
|
| `circular_buffer_size` | Size of the circular buffers for execution events. These circular buffers are designed to reduce the overhead of debugging dumping. They hold the most recent debug events concerning eager execution of ops and [`tf.function`](../../function)s and traces of tensor values computed inside [`tf.function`](../../function)s. They are written to the file system only when the proper flushing method is called (see description of return values below). Expected to be an integer. If <= 0, the circular-buffer behavior will be disabled, i.e., the execution debug events will be written to the file writers in the same way as non-execution events such as op creations and source-file snapshots. |
| `op_regex` | Dump data only from tensors produced by op types that match the regular expression (via Python's `re.match()`). "Op type" refers to the names of the TensorFlow operations (e.g., "MatMul", "LogSoftmax"), which may repeat in a TensorFlow function. It does *not* refer to the names of nodes (e.g., "dense/MatMul", "dense\_1/MatMul\_1"), which are unique within a function. - Example 1: Dump tensor data from only MatMul and Relu ops: `op_regex="^(MatMul|Relu)$"`.
- Example 2: Dump tensors from all ops *except* Relu: `op_regex="(?!^Relu$)"`. This filter operates in a logical AND relation with `tensor_dtypes`.
|
| `tensor_dtypes` | Dump data only from tensors with the specified dtypes. This optional argument can take any of the following formats: - a list or tuple of `DType` objects or strings that can be converted to `DType` objects via [`tf.as_dtype()`](../../dtypes/as_dtype). Examples:
* `tensor_dtype=[tf.float32, tf.float64]`,
* `tensor_dtype=["float32", "float64"]`,
* `tensor_dtypes=(tf.int32, tf.bool)`,
* `tensor_dtypes=("int32", "bool")`
- a callable that takes a single `DType` argument and returns a Python `boolean` indicating whether the dtype is to be included in the data dumping. Examples:
* `tensor_dtype=lambda dtype: dtype.is_integer`. This filter operates in a logical AND relation with `op_regex`.
|
| Returns |
| A DebugEventsWriter instance used by the dumping callback. The caller may use its flushing methods, including `FlushNonExecutionFiles()` and `FlushExecutionFiles()`. |
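Putting the filtering arguments together, a hedged sketch (directory, regex, and dtypes are illustrative) that also exercises the returned writer's flushing methods described above:
```
import tensorflow as tf

writer = tf.debugging.experimental.enable_dump_debug_info(
    "/tmp/tfdbg-filtered",
    tensor_debug_mode="CONCISE_HEALTH",
    circular_buffer_size=-1,       # <= 0 disables the circular buffer
    op_regex="^(MatMul|Relu)$",    # only MatMul and Relu ops...
    tensor_dtypes=[tf.float32])    # ...AND only float32 tensors

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
tf.nn.relu(tf.matmul(x, x))        # execution events for these ops are dumped

writer.FlushExecutionFiles()
tf.debugging.experimental.disable_dump_debug_info()
```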
tensorflow tf.estimator.WarmStartSettings tf.estimator.WarmStartSettings
==============================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L2175-L2369) |
Settings for warm-starting in `tf.estimator.Estimators`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.WarmStartSettings`](https://www.tensorflow.org/api_docs/python/tf/estimator/WarmStartSettings)
```
tf.estimator.WarmStartSettings(
ckpt_to_initialize_from,
vars_to_warm_start='.*',
var_name_to_vocab_info=None,
var_name_to_prev_var_name=None
)
```
Example use with the canned [`tf.estimator.DNNClassifier`](dnnclassifier):
```
emb_vocab_file = tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_vocabulary_file(
"sc_vocab_file", "new_vocab.txt", vocab_size=100),
dimension=8)
emb_vocab_list = tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_vocabulary_list(
"sc_vocab_list", vocabulary_list=["a", "b"]),
dimension=8)
estimator = tf.estimator.DNNClassifier(
hidden_units=[128, 64], feature_columns=[emb_vocab_file, emb_vocab_list],
warm_start_from=ws)
```
where `ws` could be defined as:
Warm-start all weights in the model (input layer and hidden weights). Either the directory or a specific checkpoint can be provided (in the case of the former, the latest checkpoint will be used):
```
ws = WarmStartSettings(ckpt_to_initialize_from="/tmp")
ws = WarmStartSettings(ckpt_to_initialize_from="/tmp/model-1000")
```
Warm-start only the embeddings (input layer):
```
ws = WarmStartSettings(ckpt_to_initialize_from="/tmp",
vars_to_warm_start=".*input_layer.*")
```
Warm-start all weights but the embedding parameters corresponding to `sc_vocab_file` have a different vocab from the one used in the current model:
```
vocab_info = tf.estimator.VocabInfo(
new_vocab=sc_vocab_file.vocabulary_file,
new_vocab_size=sc_vocab_file.vocabulary_size,
num_oov_buckets=sc_vocab_file.num_oov_buckets,
old_vocab="old_vocab.txt"
)
ws = WarmStartSettings(
ckpt_to_initialize_from="/tmp",
var_name_to_vocab_info={
"input_layer/sc_vocab_file_embedding/embedding_weights": vocab_info
})
```
Warm-start only `sc_vocab_file` embeddings (and no other variables), which have a different vocab from the one used in the current model:
```
vocab_info = tf.estimator.VocabInfo(
new_vocab=sc_vocab_file.vocabulary_file,
new_vocab_size=sc_vocab_file.vocabulary_size,
num_oov_buckets=sc_vocab_file.num_oov_buckets,
old_vocab="old_vocab.txt"
)
ws = WarmStartSettings(
ckpt_to_initialize_from="/tmp",
vars_to_warm_start=None,
var_name_to_vocab_info={
"input_layer/sc_vocab_file_embedding/embedding_weights": vocab_info
})
```
Warm-start all weights but the parameters corresponding to `sc_vocab_file` have a different vocab from the one used in current checkpoint, and only 100 of those entries were used:
```
vocab_info = tf.estimator.VocabInfo(
new_vocab=sc_vocab_file.vocabulary_file,
new_vocab_size=sc_vocab_file.vocabulary_size,
num_oov_buckets=sc_vocab_file.num_oov_buckets,
old_vocab="old_vocab.txt",
old_vocab_size=100
)
ws = WarmStartSettings(
ckpt_to_initialize_from="/tmp",
var_name_to_vocab_info={
"input_layer/sc_vocab_file_embedding/embedding_weights": vocab_info
})
```
Warm-start all weights but the parameters corresponding to `sc_vocab_file` have a different vocab from the one used in current checkpoint and the parameters corresponding to `sc_vocab_list` have a different name from the current checkpoint:
```
vocab_info = tf.estimator.VocabInfo(
new_vocab=sc_vocab_file.vocabulary_file,
new_vocab_size=sc_vocab_file.vocabulary_size,
num_oov_buckets=sc_vocab_file.num_oov_buckets,
old_vocab="old_vocab.txt",
old_vocab_size=100
)
ws = WarmStartSettings(
ckpt_to_initialize_from="/tmp",
var_name_to_vocab_info={
"input_layer/sc_vocab_file_embedding/embedding_weights": vocab_info
},
var_name_to_prev_var_name={
"input_layer/sc_vocab_list_embedding/embedding_weights":
"old_tensor_name"
})
```
Warm-start all TRAINABLE variables:
```
ws = WarmStartSettings(ckpt_to_initialize_from="/tmp",
vars_to_warm_start=".*")
```
Warm-start all variables (including non-TRAINABLE):
```
ws = WarmStartSettings(ckpt_to_initialize_from="/tmp",
vars_to_warm_start=[".*"])
```
Warm-start non-TRAINABLE variables "v1", "v1/Momentum", and "v2" but not "v2/momentum":
```
ws = WarmStartSettings(ckpt_to_initialize_from="/tmp",
vars_to_warm_start=["v1", "v2[^/]"])
```
| Attributes |
| `ckpt_to_initialize_from` | [Required] A string specifying the directory with checkpoint file(s) or path to checkpoint from which to warm-start the model parameters. |
| `vars_to_warm_start` | [Optional] One of the following: * A regular expression (string) that captures which variables to warm-start (see tf.compat.v1.get\_collection). This expression will only consider variables in the TRAINABLE\_VARIABLES collection -- if you need to warm-start non-TRAINABLE vars (such as optimizer accumulators or batch norm statistics), please use the below option.
* A list of strings, each a regex scope provided to tf.compat.v1.get\_collection with GLOBAL\_VARIABLES (please see tf.compat.v1.get\_collection). For backwards compatibility reasons, this is separate from the single-string argument type.
* A list of Variables to warm-start. If you do not have access to the `Variable` objects at the call site, please use the above option.
* `None`, in which case only TRAINABLE variables specified in `var_name_to_vocab_info` will be warm-started.
Defaults to `'.*'`, which warm-starts all variables in the TRAINABLE\_VARIABLES collection. Note that this excludes variables such as accumulators and moving statistics from batch norm. |
| `var_name_to_vocab_info` | [Optional] Dict of variable names (strings) to [`tf.estimator.VocabInfo`](vocabinfo). The variable names should be "full" variables, not the names of the partitions. If not explicitly provided, the variable is assumed to have no (changes to) vocabulary. |
| `var_name_to_prev_var_name` | [Optional] Dict of variable names (strings) to name of the previously-trained variable in `ckpt_to_initialize_from`. If not explicitly provided, the name of the variable is assumed to be same between previous checkpoint and current model. Note that this has no effect on the set of variables that is warm-started, and only controls name mapping (use `vars_to_warm_start` for controlling what variables to warm-start). |
tensorflow tf.estimator.StepCounterHook tf.estimator.StepCounterHook
============================
Hook that counts steps per second.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.StepCounterHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/StepCounterHook), [`tf.compat.v1.train.StepCounterHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/StepCounterHook)
```
tf.estimator.StepCounterHook(
every_n_steps=100, every_n_secs=None, output_dir=None, summary_writer=None
)
```
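For context, a brief sketch of attaching this hook to training; `estimator` and `input_fn_train` are assumed to exist and are illustrative:
```
# Logs steps/sec every 50 steps. Summaries go to the estimator's model_dir
# unless output_dir (or summary_writer) is given explicitly.
hook = tf.estimator.StepCounterHook(every_n_steps=50)
estimator.train(input_fn=input_fn_train, hooks=[hook])
```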
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L719-L751)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains results of requested ops/tensors by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L707-L708)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L698-L705)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks can not modify the graph anymore. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L167-L182)
```
end(
session
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than OutOfRangeError or StopIteration then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises OutOfRangeError or StopIteration. In that case `end()` is called but `after_run()` is not called.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.FinalExporter tf.estimator.FinalExporter
==========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/exporter.py#L369-L418) |
This class exports the serving graph and checkpoints at the end.
Inherits From: [`Exporter`](exporter)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.FinalExporter`](https://www.tensorflow.org/api_docs/python/tf/estimator/FinalExporter)
```
tf.estimator.FinalExporter(
name, serving_input_receiver_fn, assets_extra=None, as_text=False
)
```
This class performs a single export at the end of training.
| Args |
| `name` | unique name of this `Exporter` that is going to be used in the export path. |
| `serving_input_receiver_fn` | a function that takes no arguments and returns a `ServingInputReceiver`. |
| `assets_extra` | An optional dict specifying how to populate the assets.extra directory within the exported SavedModel. Each key should give the destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. |
| `as_text` | whether to write the SavedModel proto in text format. Defaults to `False`. |
| Raises |
| `ValueError` | if any argument is invalid. |
| Attributes |
| `name` | Directory name. A directory name under the export base directory where exports of this type are written. Should not be `None` nor empty. |
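A sketch of wiring a `FinalExporter` into [`tf.estimator.train_and_evaluate`](train_and_evaluate); `estimator`, `train_spec`, and `input_fn_eval` are assumed to exist, and the receiver's feature name and shape are illustrative:
```
def serving_input_receiver_fn():
  inputs = {"x": tf.compat.v1.placeholder(tf.float32, [None, 4], name="x")}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

exporter = tf.estimator.FinalExporter(
    name="final", serving_input_receiver_fn=serving_input_receiver_fn)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn_eval, exporters=[exporter])
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```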
Methods
-------
### `export`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/exporter.py#L408-L418)
```
export(
estimator, export_path, checkpoint_path, eval_result, is_the_final_export
)
```
Exports the given `Estimator` to a specific format.
| Args |
| `estimator` | the `Estimator` to export. |
| `export_path` | A string containing a directory where to write the export. |
| `checkpoint_path` | The checkpoint path to export. |
| `eval_result` | The output of [`Estimator.evaluate`](../compat/v1/estimator/estimator#evaluate) on this checkpoint. |
| `is_the_final_export` | This boolean is True when this is an export at the end of training. It is False for the intermediate exports during training. When passing `Exporter` to [`tf.estimator.train_and_evaluate`](train_and_evaluate), `is_the_final_export` is always False if [`TrainSpec.max_steps`](trainspec#max_steps) is `None`. |
| Returns |
| The string path to the exported directory or `None` if export is skipped. |
tensorflow tf.estimator.BaselineEstimator tf.estimator.BaselineEstimator
==============================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/baseline.py#L433-L508) |
An estimator that can establish a simple baseline.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.BaselineEstimator(
head, model_dir=None, optimizer='Ftrl', config=None
)
```
The estimator uses a user-specified head.
This estimator ignores feature values and will learn to predict the average value of each label. E.g. for single-label classification problems, this will predict the probability distribution of the classes as seen in the labels. For multi-label classification problems, it will predict the ratio of examples that contain each class.
#### Example:
```
# Build baseline multi-label classifier.
estimator = tf.estimator.BaselineEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3))
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
# Fit model.
estimator.train(input_fn=input_fn_train)
# Evaluates cross entropy between the test and train labels.
loss = estimator.evaluate(input_fn=input_fn_eval)["loss"]
# For each class, predicts the ratio of training examples that contain the
# class.
predictions = estimator.predict(input_fn=input_fn_eval)
```
Input of `train` and `evaluate` should have following features, otherwise there will be a `KeyError`:
* if `weight_column` is specified in the `head` constructor (and not None) for the head passed to BaselineEstimator's constructor, a feature with `key=weight_column` whose value is a `Tensor`.
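To make the stub input functions in the example above concrete, a minimal sketch for the multi-label case; the feature name and multi-hot labels are illustrative:
```
def input_fn_train():
  features = {"x": tf.constant([[1.0], [2.0]])}
  # Multi-hot labels for n_classes=3, one row per example.
  labels = tf.constant([[1, 0, 1], [0, 1, 0]], dtype=tf.int64)
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)
```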
| Args |
| `head` | A `Head` instance constructed with a method such as [`tf.estimator.MultiLabelHead`](multilabelhead). |
| `model_dir` | Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `optimizer` | String, `tf.keras.optimizers.*` object, or callable that creates the optimizer to use for training. If not specified, will use `Ftrl` as the default optimizer. |
| `config` | `RunConfig` object to configure the runtime settings. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory that contains evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
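A brief sketch, assuming an `estimator` and a `serving_input_receiver_fn` like the one in the `FinalExporter` example earlier in this document:
```
export_dir = estimator.export_saved_model(
    "/tmp/exports", serving_input_receiver_fn)
print(export_dir)  # e.g. b'/tmp/exports/1650000000' (timestamped, bytes)
```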
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If batch length of predictions is not the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for a total of 20 steps. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want this incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
tensorflow tf.estimator.BinaryClassHead tf.estimator.BinaryClassHead
============================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/binary_class_head.py#L34-L604) |
Creates a `Head` for single label binary classification.
Inherits From: [`Head`](head)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.BinaryClassHead`](https://www.tensorflow.org/api_docs/python/tf/estimator/BinaryClassHead)
```
tf.estimator.BinaryClassHead(
weight_column=None,
thresholds=None,
label_vocabulary=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
loss_fn=None,
name=None
)
```
Uses `sigmoid_cross_entropy_with_logits` loss.
The head expects `logits` with shape `[D0, D1, ... DN, 1]`. In many applications, the shape is `[batch_size, 1]`.
`labels` must be a dense `Tensor` with shape matching `logits`, namely `[D0, D1, ... DN, 1]`. If `label_vocabulary` is given, `labels` must be a string `Tensor` with values from the vocabulary. If `label_vocabulary` is not given, `labels` must be a float `Tensor` with values in the interval `[0, 1]`.
If `weight_column` is specified, weights must be of shape `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.
The loss is the weighted sum over the input dimensions. Namely, if the input labels have shape `[batch_size, 1]`, the loss is the weighted sum over `batch_size`.
Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or `(labels, logits, features, loss_reduction)` as arguments and returns loss with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support float `labels` with shape `[D0, D1, ... DN, 1]`. Namely, the head applies `label_vocabulary` to the input labels before passing them to `loss_fn`.
#### Usage:
```
head = tf.estimator.BinaryClassHead()
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.float32)}
# expected_loss = sum(cross_entropy(labels, logits)) / batch_size
# = sum(0, 41) / 2 = 41 / 2 = 20.50
loss = head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
20.50
eval_metrics = head.metrics()
updated_metrics = head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
accuracy : 0.50
accuracy_baseline : 1.00
auc : 0.00
auc_precision_recall : 1.00
average_loss : 20.50
label/mean : 1.00
precision : 1.00
prediction/mean : 0.50
recall : 0.50
preds = head.predictions(logits)
print(preds['logits'])
tf.Tensor(
[[ 45.]
[-41.]], shape=(2, 1), dtype=float32)
```
Usage with a canned estimator:
```
my_head = tf.estimator.BinaryClassHead()
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
```
It can also be used with a custom `model_fn`. Example:
```
def _my_model_fn(features, labels, mode):
my_head = tf.estimator.BinaryClassHead()
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
optimizer=tf.keras.optimizers.Adagrad(lr=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
```
| Args |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. |
| `thresholds` | Iterable of floats in the range `(0, 1)`. For binary classification metrics such as precision and recall, an eval metric is generated for each threshold value. This threshold is applied to the logistic values to determine the binary classification (i.e., above the threshold is `true`, below is `false`). |
| `label_vocabulary` | A list or tuple of strings representing possible label values. If it is not given, that means labels are already encoded within [0, 1]. If given, labels must be string type and have any value in `label_vocabulary`. Note that errors will be raised if `label_vocabulary` is not provided but labels are strings. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Decides how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch size * label_dimension`. |
| `loss_fn` | Optional loss function. |
| `name` | Name of the head. If provided, summary and metrics keys will be suffixed by `"/" + name`. Also used as `name_scope` when creating ops. |
| Attributes |
| `logits_dimension` | See `base_head.Head` for details. |
| `loss_reduction` | See `base_head.Head` for details. |
| `name` | See `base_head.Head` for details. |
Methods
-------
### `create_estimator_spec`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L224-L292)
```
create_estimator_spec(
features,
mode,
logits,
labels=None,
optimizer=None,
trainable_variables=None,
train_op_fn=None,
update_ops=None,
regularization_losses=None
)
```
Returns `EstimatorSpec` that a model\_fn can return.
It is recommended to pass all args via name.
| Args |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. |
| `mode` | Estimator's `ModeKeys`. |
| `logits` | Logits `Tensor` to be used by the head. |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `optimizer` | An [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) instance to optimize the loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, trainable_variables)`, which updates variables to minimize `loss`. |
| `trainable_variables` | A list or tuple of `Variable` objects to update to minimize `loss`. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable\_variables need to be passed explicitly here. |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns an op to optimize the model with the loss in TRAIN mode. Used if `optimizer` is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in TRAIN mode. By default, it is `None` in other modes. If you want to optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use [`EstimatorSpec.loss`](estimatorspec#loss) to compute and apply gradients. |
| `update_ops` | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE\_OPS collection. As Tensorflow 2.x doesn't have collections, update\_ops need to be passed explicitly here. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| `EstimatorSpec`. |
### `loss`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/binary_class_head.py#L274-L297)
```
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
```
Returns regularized training loss. See `base_head.Head` for details.
### `metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/binary_class_head.py#L361-L395)
```
metrics(
regularization_losses=None
)
```
Creates metrics. See `base_head.Head` for details.
### `predictions`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/binary_class_head.py#L299-L359)
```
predictions(
logits, keys=None
)
```
Return predictions based on keys.
See `base_head.Head` for details.
| Args |
| `logits` | logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. For many applications, the shape is `[batch_size, logits_dimension]`. |
| `keys` | a list or tuple of prediction keys. Each key can be either the class variable of prediction\_keys.PredictionKeys or its string value, such as: prediction\_keys.PredictionKeys.CLASSES or 'classes'. If not specified, it will return the predictions for all valid keys. |
| Returns |
| A dict of predictions. |
### `update_metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/binary_class_head.py#L439-L490)
```
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
```
Updates eval metrics. See `base_head.Head` for details.
tensorflow tf.estimator.SummarySaverHook tf.estimator.SummarySaverHook
=============================
Saves summaries every N steps.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.SummarySaverHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/SummarySaverHook), [`tf.compat.v1.train.SummarySaverHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/SummarySaverHook)
```
tf.estimator.SummarySaverHook(
save_steps=None,
save_secs=None,
output_dir=None,
summary_writer=None,
scaffold=None,
summary_op=None
)
```
| Args |
| `save_steps` | `int`, save summaries every N steps. Exactly one of `save_secs` and `save_steps` should be set. |
| `save_secs` | `int`, save summaries every N seconds. |
| `output_dir` | `string`, the directory to save the summaries to. Only used if no `summary_writer` is supplied. |
| `summary_writer` | `SummaryWriter`. If `None` and an `output_dir` was passed, one will be created accordingly. |
| `scaffold` | `Scaffold` to get summary\_op if it's not provided. |
| `summary_op` | `Tensor` of type `string` containing the serialized `Summary` protocol buffer or a list of `Tensor`. They are most likely an output by TF summary methods like [`tf.compat.v1.summary.scalar`](../compat/v1/summary/scalar) or [`tf.compat.v1.summary.merge_all`](../compat/v1/summary/merge_all). It can be passed in as one tensor; if more than one, they must be passed in as a list. |
| Raises |
| `ValueError` | If not exactly one of `scaffold` and `summary_op` is set. |
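A sketch of typical TF1-style graph wiring via [`tf.compat.v1.summary.merge_all`](../compat/v1/summary/merge_all), as mentioned in the `summary_op` description; the directory is illustrative:
```
summary_op = tf.compat.v1.summary.merge_all()  # merges summaries in the graph
hook = tf.estimator.SummarySaverHook(
    save_steps=100, output_dir="/tmp/summaries", summary_op=summary_op)
# Pass via hooks=[hook] to estimator.train(...) or a MonitoredSession.
```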
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L856-L876)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains results of requested ops/tensors by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L845-L854)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L836-L843)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks can not modify the graph anymore. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L878-L880)
```
end(
session=None
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than OutOfRangeError or StopIteration then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises OutOfRangeError or StopIteration. In that case `end()` is called but `after_run()` is not called.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.DNNLinearCombinedEstimator tf.estimator.DNNLinearCombinedEstimator
=======================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/dnn_linear_combined.py#L687-L848) |
An estimator for TensorFlow Linear and DNN joined models with custom head.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.DNNLinearCombinedEstimator(
head,
model_dir=None,
linear_feature_columns=None,
linear_optimizer='Ftrl',
dnn_feature_columns=None,
dnn_optimizer='Adagrad',
dnn_hidden_units=None,
dnn_activation_fn=tf.nn.relu,
dnn_dropout=None,
config=None,
batch_norm=False,
linear_sparse_combiner='sum'
)
```
>
> **Note:** This estimator is also known as wide-n-deep.
>
#### Example:
```
numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_column=categorical_feature_b, ...)
estimator = tf.estimator.DNNLinearCombinedEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
# wide settings
linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
linear_optimizer=tf.keras.optimizers.Ftrl(...),
# deep settings
dnn_feature_columns=[
categorical_feature_a_emb, categorical_feature_b_emb,
numeric_feature],
dnn_hidden_units=[1000, 500, 100],
dnn_optimizer=tf.keras.optimizers.Adagrad(...))
# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001,
l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
      decay_rate=0.96))
# It is the same for linear_optimizer.
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* for each `column` in `dnn_feature_columns` + `linear_feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss and predicted output are determined by the specified `head`.
| Args |
| `head` | A `Head` instance constructed with a method such as [`tf.estimator.MultiLabelHead`](multilabelhead). |
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `linear_feature_columns` | An iterable containing all the feature columns used by linear part of the model. All items in the set must be instances of classes derived from `FeatureColumn`. |
| `linear_optimizer` | An instance of `tf.keras.optimizers.*` used to apply gradients to the linear part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer. |
| `dnn_feature_columns` | An iterable containing all the feature columns used by deep part of the model. All items in the set must be instances of classes derived from `FeatureColumn`. |
| `dnn_optimizer` | An instance of `tf.keras.optimizers.*` used to apply gradients to the deep part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer. |
| `dnn_hidden_units` | List of hidden units per layer. All layers are fully connected. |
| `dnn_activation_fn` | Activation function applied to each layer. If None, will use [`tf.nn.relu`](../nn/relu). |
| `dnn_dropout` | When not None, the probability we will drop out a given coordinate. |
| `config` | RunConfig object to configure the runtime settings. |
| `batch_norm` | Whether to use batch normalization after each hidden layer. |
| `linear_sparse_combiner` | A string specifying how to reduce the linear model if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see `tf.feature_column.linear_model`. |
| Raises |
| `ValueError` | If both linear\_feature\_columns and dnn\_feature\_columns are empty at the same time. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
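As an illustration, here is a hedged export sketch for the estimator above (assuming `feature_columns` is the combined list of linear and DNN columns from the class example; the export directory is hypothetical):
```
# Build a parsing spec from the feature columns and export for serving.
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_path = estimator.export_saved_model(
    '/tmp/exported_model', serving_input_receiver_fn)
```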
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | A string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
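A short sketch of inspecting trained weights with `get_variable_names` and `get_variable_value` (the variable name shown is illustrative):
```
# List checkpointed variables, then fetch one as a NumPy array.
names = estimator.get_variable_names()
print(names)  # e.g. ['dnn/hiddenlayer_0/kernel', 'global_step', ...]
kernel = estimator.get_variable_value('dnn/hiddenlayer_0/kernel')
print(kernel.shape)
```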
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used, then the rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch length of predictions is not the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
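Since `predict` returns a generator, predictions can be consumed lazily; a minimal sketch, reusing `input_fn_predict` from the class example:
```
import itertools

# Take only the first five prediction dicts from the generator.
for pred in itertools.islice(estimator.predict(input_fn=input_fn_predict), 5):
  print(pred)
```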
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead (see the sketch below). If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` mean 200 training iterations; two calls to `train(max_steps=100)` mean the second call will not do any iterations, since the first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
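The contrast between the incremental `steps` and the absolute `max_steps` can be summarized in a short sketch (reusing `input_fn_train` from the class example):
```
# steps is incremental: these two calls train for 20 steps in total.
estimator.train(input_fn=input_fn_train, steps=10)
estimator.train(input_fn=input_fn_train, steps=10)

# max_steps is absolute: the second call is a no-op because the
# global step already reached 100 during the first call.
estimator.train(input_fn=input_fn_train, max_steps=100)
estimator.train(input_fn=input_fn_train, max_steps=100)
```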
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.ProfilerHook tf.estimator.ProfilerHook
=========================
Captures CPU/GPU profiling information every N steps or seconds.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.ProfilerHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/ProfilerHook), [`tf.compat.v1.train.ProfilerHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/ProfilerHook)
```
tf.estimator.ProfilerHook(
save_steps=None,
save_secs=None,
output_dir='',
show_dataflow=True,
show_memory=False
)
```
This produces files called "timeline-<step>.json", which are in Chrome Trace format.
For more information see: <https://github.com/catapult-project/catapult/blob/master/tracing/README.md>
| Args |
| `save_steps` | `int`, save profile traces every N steps. Exactly one of `save_secs` and `save_steps` should be set. |
| `save_secs` | `int` or `float`, save profile traces every N seconds. |
| `output_dir` | `string`, the directory to save the profile traces to. Defaults to the current directory. |
| `show_dataflow` | `bool`, if True, add flow events to the trace connecting producers and consumers of tensors. |
| `show_memory` | `bool`, if True, add object snapshot events to the trace showing the sizes and lifetimes of tensors. |
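A hedged usage sketch (the output directory, step counts, and `input_fn_train` are hypothetical):
```
# Capture a trace every 100 steps and write it under /tmp/profile.
hook = tf.estimator.ProfilerHook(save_steps=100, output_dir='/tmp/profile')
estimator.train(input_fn=input_fn_train, hooks=[hook], steps=1000)
# Open a resulting timeline-<step>.json file in chrome://tracing to view it.
```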
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when a new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L1072-L1087)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains the results of the ops/tensors that were requested by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exception, then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L1061-L1070)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L1055-L1059)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks can no longer modify the graph. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L167-L182)
```
end(
session
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than OutOfRangeError or StopIteration, then `end()` is not called. Note the difference between the `end()` and `after_run()` behavior when `session.run()` raises OutOfRangeError or StopIteration: in that case `end()` is called but `after_run()` is not.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.TrainSpec tf.estimator.TrainSpec
======================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/training.py#L129-L198) |
Configuration for the "train" part for the `train_and_evaluate` call.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.TrainSpec`](https://www.tensorflow.org/api_docs/python/tf/estimator/TrainSpec)
```
tf.estimator.TrainSpec(
input_fn, max_steps=None, hooks=None, saving_listeners=None
)
```
`TrainSpec` determines the input data for the training, as well as the duration. Optional hooks run at various stages of training.
#### Usage:
```
train_spec = tf.estimator.TrainSpec(
input_fn=lambda: 1,
max_steps=100,
hooks=[_StopAtSecsHook(stop_after_secs=10)],
saving_listeners=[_NewCheckpointListenerForEvaluate(None, 20, None)])
train_spec.saving_listeners[0]._eval_throttle_secs
20
train_spec.hooks[0]._stop_after_secs
10
train_spec.max_steps
100
```
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a `Tensor` or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`.
|
| `max_steps` | Int. Positive number of total steps for which to train model. If `None`, train forever. The training `input_fn` is not expected to generate `OutOfRangeError` or `StopIteration` exceptions. See the `train_and_evaluate` stop condition section for details. |
| `hooks` | Iterable of `tf.train.SessionRunHook` objects to run on all workers (including chief) during training. |
| `saving_listeners` | Iterable of [`tf.estimator.CheckpointSaverListener`](checkpointsaverlistener) objects to run on chief during training. |
| Raises |
| `ValueError` | If any of the input arguments is invalid. |
| `TypeError` | If any of the arguments is not of the expected type. |
| Attributes |
| `input_fn` | A `namedtuple` alias for field number 0 |
| `max_steps` | A `namedtuple` alias for field number 1 |
| `hooks` | A `namedtuple` alias for field number 2 |
| `saving_listeners` | A `namedtuple` alias for field number 3 |
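In practice a `TrainSpec` is usually paired with an `EvalSpec` and passed to `tf.estimator.train_and_evaluate`; a minimal sketch, assuming `estimator`, `input_fn_train`, and `input_fn_eval` are already defined:
```
train_spec = tf.estimator.TrainSpec(input_fn=input_fn_train, max_steps=10000)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn_eval)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```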
tensorflow tf.estimator.classifier_parse_example_spec tf.estimator.classifier\_parse\_example\_spec
=============================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/parsing_utils.py#L27-L144) |
Generates parsing spec for tf.parse\_example to be used with classifiers.
```
tf.estimator.classifier_parse_example_spec(
feature_columns,
label_key,
label_dtype=tf.dtypes.int64,
label_default=None,
weight_column=None
)
```
If users keep data in tf.Example format, they need to call tf.parse\_example with a proper feature spec. This utility helps with two main things:
* Users need to combine the parsing spec of features with labels and weights (if any), since they are all parsed from the same tf.Example instance. This utility combines these specs.
* It is difficult to map the label expected by a classifier such as `DNNClassifier` to the corresponding tf.parse\_example spec. This utility encodes it using the related information provided by the user (key, dtype).
Example output of parsing spec:
```
# Define features and transformations
feature_b = tf.feature_column.numeric_column(...)
feature_c_bucketized = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column("feature_c"), ...)
feature_a_x_feature_c = tf.feature_column.crossed_column(
columns=["feature_a", feature_c_bucketized], ...)
feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c]
parsing_spec = tf.estimator.classifier_parse_example_spec(
feature_columns, label_key='my-label', label_dtype=tf.string)
# For the above example, classifier_parse_example_spec would return the dict:
assert parsing_spec == {
"feature_a": parsing_ops.VarLenFeature(tf.string),
"feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
"feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32)
"my-label" : parsing_ops.FixedLenFeature([1], dtype=tf.string)
}
```
Example usage with a classifier:
```
feature_columns = [...]  # define features via tf.feature_column
estimator = DNNClassifier(
n_classes=1000,
feature_columns=feature_columns,
weight_column='example-weight',
label_vocabulary=['photos', 'keep', ...],
hidden_units=[256, 64, 16])
# This label configuration tells the classifier the following:
# * weights are retrieved with key 'example-weight'
# * label is string and can be one of the following ['photos', 'keep', ...]
# * integer id for label 'photos' is 0, 'keep' is 1, ...
# Input builders
def input_fn_train(): # Returns a tuple of features and labels.
features = tf.contrib.learn.read_keyed_batch_features(
file_pattern=train_files,
batch_size=batch_size,
# creates parsing configuration for tf.parse_example
features=tf.estimator.classifier_parse_example_spec(
feature_columns,
label_key='my-label',
label_dtype=tf.string,
weight_column='example-weight'),
reader=tf.RecordIOReader)
labels = features.pop('my-label')
return features, labels
estimator.train(input_fn=input_fn_train)
```
| Args |
| `feature_columns` | An iterable containing all feature columns. All items should be instances of classes derived from `FeatureColumn`. |
| `label_key` | A string identifying the label. It means tf.Example stores labels with this key. |
| `label_dtype` | A `tf.dtype` identifies the type of labels. By default it is [`tf.int64`](../../tf#int64). If user defines a `label_vocabulary`, this should be set as [`tf.string`](../../tf#string). [`tf.float32`](../../tf#float32) labels are only supported for binary classification. |
| `label_default` | Used as the label if `label_key` does not exist in a given tf.Example. An example usage: let's say `label_key` is 'clicked' and the tf.Example contains clicked data only for positive examples, in the format `key:clicked, value:1`. This means that if there is no data with key 'clicked' it should count as a negative example, which is achieved by setting `label_default=0`. The type of this value should be compatible with `label_dtype`. |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the `features`. If it is a `NumericColumn`, raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get weight tensor. |
| Returns |
| A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` value. |
| Raises |
| `ValueError` | If label is used in `feature_columns`. |
| `ValueError` | If weight\_column is used in `feature_columns`. |
| `ValueError` | If any of the given `feature_columns` is not a `_FeatureColumn` instance. |
| `ValueError` | If `weight_column` is not a `NumericColumn` instance. |
| `ValueError` | If `label_key` is `None`. |
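The usage example above relies on the legacy `tf.contrib.learn` reader; a sketch of the same idea with `tf.data` (assuming `feature_columns`, `train_files`, and `batch_size` are defined):
```
parsing_spec = tf.estimator.classifier_parse_example_spec(
    feature_columns, label_key='my-label', label_dtype=tf.string,
    weight_column='example-weight')

def input_fn_train():
  def parse(serialized):
    # Apply the generated spec to a batch of serialized tf.Examples.
    features = tf.io.parse_example(serialized, parsing_spec)
    labels = features.pop('my-label')
    return features, labels
  dataset = tf.data.TFRecordDataset(train_files)
  return dataset.batch(batch_size).map(parse)
```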
tensorflow tf.estimator.StopAtStepHook tf.estimator.StopAtStepHook
===========================
Hook that requests stop at a specified step.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.StopAtStepHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/StopAtStepHook), [`tf.compat.v1.train.StopAtStepHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/StopAtStepHook)
```
tf.estimator.StopAtStepHook(
num_steps=None, last_step=None
)
```
Migrate to TF2
--------------
Please check this [notebook](https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb) on how to migrate the API to TF2.
Description
-----------
| Args |
| `num_steps` | Number of steps to execute. |
| `last_step` | Step after which to stop. |
| Raises |
| `ValueError` | If one of the arguments is invalid. |
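A brief usage sketch (the step counts and `input_fn_train` are illustrative); note that `num_steps` counts steps relative to the current global step, while `last_step` is an absolute global step value:
```
# Stop once the global step reaches 1000.
hook = tf.estimator.StopAtStepHook(last_step=1000)
estimator.train(input_fn=input_fn_train, hooks=[hook])
```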
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L436-L439)
```
after_create_session(
session, coord
)
```
Called when a new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L444-L454)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains the results of the ops/tensors that were requested by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exception, then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L441-L442)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L431-L434)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks can no longer modify the graph. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L167-L182)
```
end(
session
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than OutOfRangeError or StopIteration, then `end()` is not called. Note the difference between the `end()` and `after_run()` behavior when `session.run()` raises OutOfRangeError or StopIteration: in that case `end()` is called but `after_run()` is not.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.DNNRegressor tf.estimator.DNNRegressor
=========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/dnn.py#L1011-L1176) |
A regressor for TensorFlow DNN models.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.DNNRegressor(
hidden_units,
feature_columns,
model_dir=None,
label_dimension=1,
weight_column=None,
optimizer='Adagrad',
activation_fn=tf.nn.relu,
dropout=None,
config=None,
warm_start_from=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
batch_norm=False
)
```
#### Example:
```
categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_column=categorical_feature_b, ...)
estimator = tf.estimator.DNNRegressor(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256])
# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNRegressor(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNRegressor(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
      decay_rate=0.96)))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNRegressor(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
* for each `column` in `feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss is calculated by using mean squared error.
| Args |
| `hidden_units` | Iterable of the number of hidden units per layer. All layers are fully connected. Ex. `[64, 32]` means the first layer has 64 nodes and the second one has 32. |
| `feature_columns` | An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from `FeatureColumn`. |
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `label_dimension` | Number of regression targets per example. This is the size of the last dimension of the labels and logits `Tensor` objects (typically, these have shape `[batch_size, label_dimension]`). |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the `features`. If it is a `NumericColumn`, raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get weight tensor. |
| `optimizer` | An instance of `tf.keras.optimizers.*` used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer. |
| `activation_fn` | Activation function applied to each layer. If `None`, will use [`tf.nn.relu`](../nn/relu). |
| `dropout` | When not `None`, the probability we will drop out a given coordinate. |
| `config` | `RunConfig` object to configure the runtime settings. |
| `warm_start_from` | A string filepath to a checkpoint to warm-start from, or a `WarmStartSettings` object to fully configure warm-starting. If the string filepath is provided instead of a `WarmStartSettings`, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. See the sketch after this table for configuring partial warm-starting. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| `batch_norm` | Whether to use batch normalization after each hidden layer. |
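Beyond a plain checkpoint path, warm-starting can be restricted to a subset of variables; a hedged sketch using `tf.estimator.WarmStartSettings` (the variable-name regex is hypothetical):
```
# Warm-start only variables whose names match the regex.
ws = tf.estimator.WarmStartSettings(
    ckpt_to_initialize_from="/path/to/checkpoint/dir",
    vars_to_warm_start=".*input_layer.*")
estimator = tf.estimator.DNNRegressor(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256],
    warm_start_from=ws)
```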
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | A string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used, then the rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch length of predictions is not the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` mean 200 training iterations; two calls to `train(max_steps=100)` mean the second call will not do any iterations, since the first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
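The incremental semantics of `steps` versus the absolute semantics of `max_steps` can be seen in a short sketch (the `estimator` and `train_input_fn` are hypothetical):
```
estimator.train(input_fn=train_input_fn, steps=100)      # runs steps 1-100
estimator.train(input_fn=train_input_fn, steps=100)      # runs steps 101-200
estimator.train(input_fn=train_input_fn, max_steps=200)  # no-op: global step is already 200
```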
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.LoggingTensorHook tf.estimator.LoggingTensorHook
==============================
Prints the given tensors every N local steps, every N seconds, or at end.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.LoggingTensorHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/LoggingTensorHook), [`tf.compat.v1.train.LoggingTensorHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/LoggingTensorHook)
```
tf.estimator.LoggingTensorHook(
tensors, every_n_iter=None, every_n_secs=None, at_end=False, formatter=None
)
```
Migrate to TF2
--------------
Please check this [notebook](https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb) on how to migrate the API to TF2.
Description
-----------
The tensors will be printed to the log, with `INFO` severity. If you are not seeing the logs, you might want to add the following line after your imports:
```
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
```
Note that if `at_end` is True, `tensors` should not include any tensor whose evaluation produces a side effect such as consuming additional inputs.
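As a hedged sketch, a hook that logs two tensors every 100 steps might look like the following (the tensor names `'loss'` and `'global_step'` are assumptions about the graph, and `estimator`/`train_input_fn` are hypothetical):
```
hook = tf.estimator.LoggingTensorHook(
    # Maps display tags to tensor names in the graph.
    tensors={'loss': 'loss', 'step': 'global_step'},
    every_n_iter=100)
estimator.train(input_fn=train_input_fn, hooks=[hook])
```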
| Args |
| `tensors` | `dict` that maps string-valued tags to tensors/tensor names, or `iterable` of tensors/tensor names. |
| `every_n_iter` | `int`, print the values of `tensors` once every N local steps taken on the current worker. |
| `every_n_secs` | `int` or `float`, print the values of `tensors` once every N seconds. Exactly one of `every_n_iter` and `every_n_secs` should be provided. |
| `at_end` | `bool` specifying whether to print the values of `tensors` at the end of the run. |
| `formatter` | function, takes dict of `tag`->`Tensor` and returns a string. If `None` uses default printing all tensors. |
| Raises |
| `ValueError` | if `every_n_iter` is non-positive. |
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L269-L274)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains results of requested ops/tensors by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L246-L251)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L237-L244)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L276-L279)
```
end(
session
)
```
Called at the end of the session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises exception other than OutOfRangeError or StopIteration then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises OutOfRangeError or StopIteration. In that case `end()` is called but `after_run()` is not called.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.SessionRunContext tf.estimator.SessionRunContext
==============================
Provides information about the `session.run()` call being made.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.SessionRunContext`](https://www.tensorflow.org/api_docs/python/tf/estimator/SessionRunContext), [`tf.compat.v1.train.SessionRunContext`](https://www.tensorflow.org/api_docs/python/tf/estimator/SessionRunContext)
```
tf.estimator.SessionRunContext(
original_args, session
)
```
Provides information about the original request to the `Session.run()` call. `SessionRunHook` objects can stop the loop by calling `request_stop()` on `run_context`. In the future this object may carry more information about the run without changing the Hook API.
| Attributes |
| `original_args` | A `SessionRunArgs` object holding the original arguments of `run()`. If user called `MonitoredSession.run(fetches=a, feed_dict=b)`, then this field is equal to SessionRunArgs(a, b). |
| `session` | A TensorFlow session object which will execute the `run`. |
| `stop_requested` | Whether a stop has been requested (a `bool`). If `True`, `MonitoredSession` stops iterations. |
Methods
-------
### `request_stop`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L253-L259)
```
request_stop()
```
Sets stop requested field.
Hooks can use this function to request stop of iterations. `MonitoredSession` checks whether this is called or not.
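A minimal sketch of a hook that calls `request_stop` (the class, the loss threshold, and the `'loss:0'` tensor name are all hypothetical):
```
class StopBelowLossHook(tf.estimator.SessionRunHook):

  def before_run(self, run_context):
    # Ask the upcoming run() call to also fetch the loss tensor.
    return tf.estimator.SessionRunArgs({'loss': 'loss:0'})

  def after_run(self, run_context, run_values):
    # Stop the MonitoredSession loop once the loss is small enough.
    if run_values.results['loss'] < 0.01:
      run_context.request_stop()
```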
tensorflow tf.estimator.MultiHead tf.estimator.MultiHead
======================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_head.py#L53-L547)
Creates a `Head` for multi-objective learning.
Inherits From: [`Head`](head)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.MultiHead`](https://www.tensorflow.org/api_docs/python/tf/estimator/MultiHead)
```
tf.estimator.MultiHead(
heads, head_weights=None
)
```
This class merges the output of multiple `Head` objects. Specifically:
* For training, sums losses of each head, calls `train_op_fn` with this final loss.
* For eval, merges metrics by adding `head.name` suffix to the keys in eval metrics, such as `precision/head1.name`, `precision/head2.name`.
* For prediction, merges predictions and updates keys in prediction dict to a 2-tuple, `(head.name, prediction_key)`. Merges `export_outputs` such that by default the first head is served.
#### Usage:
```
head1 = tf.estimator.MultiLabelHead(n_classes=2, name='head1')
head2 = tf.estimator.MultiLabelHead(n_classes=3, name='head2')
multi_head = tf.estimator.MultiHead([head1, head2])
logits = {
'head1': np.array([[-10., 10.], [-15., 10.]], dtype=np.float32),
'head2': np.array([[20., -20., 20.], [-30., 20., -20.]],
dtype=np.float32),}
labels = {
'head1': np.array([[1, 0], [1, 1]], dtype=np.int64),
'head2': np.array([[0, 1, 0], [1, 1, 0]], dtype=np.int64),}
features = {'x': np.array(((42,),), dtype=np.float32)}
# For large logits, sigmoid cross entropy loss is approximated as:
# loss = labels * (logits < 0) * (-logits) +
# (1 - labels) * (logits > 0) * logits =>
# head1: expected_unweighted_loss = [[10., 10.], [15., 0.]]
# loss1 = ((10 + 10) / 2 + (15 + 0) / 2) / 2 = 8.75
# head2: expected_unweighted_loss = [[20., 20., 20.], [30., 0., 0]]
# loss2 = ((20 + 20 + 20) / 3 + (30 + 0 + 0) / 3) / 2 = 15.00
# loss = loss1 + loss2 = 8.75 + 15.00 = 23.75
loss = multi_head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
23.75
eval_metrics = multi_head.metrics()
updated_metrics = multi_head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
auc/head1 : 0.17
auc/head2 : 0.33
auc_precision_recall/head1 : 0.60
auc_precision_recall/head2 : 0.40
average_loss/head1 : 8.75
average_loss/head2 : 15.00
loss/head1 : 8.75
loss/head2 : 15.00
preds = multi_head.predictions(logits)
print(preds[('head1', 'logits')])
tf.Tensor(
[[-10. 10.]
[-15. 10.]], shape=(2, 2), dtype=float32)
```
Usage with a canned estimator:
```
# In `input_fn`, specify labels as a dict keyed by head name:
def input_fn():
features = ...
labels1 = ...
labels2 = ...
  return features, {'head1': labels1, 'head2': labels2}
# In `model_fn`, specify logits as a dict keyed by head name:
def model_fn(features, labels, mode):
# Create simple heads and specify head name.
head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1')
head2 = tf.estimator.BinaryClassHead(name='head2')
# Create MultiHead from two simple heads.
head = tf.estimator.MultiHead([head1, head2])
# Create logits for each head, and combine them into a dict.
logits1, logits2 = logit_fn()
  logits = {'head1': logits1, 'head2': logits2}
# Return the merged EstimatorSpec
return head.create_estimator_spec(..., logits=logits, ...)
# Create an estimator with this model_fn.
estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(input_fn=input_fn)
```
Also supports `logits` as a `Tensor` of shape `[D0, D1, ... DN, logits_dimension]`. It will split the `Tensor` along the last dimension and distribute it appropriately among the heads. E.g.:
```
# Input logits.
logits = np.array([[-1., 1., 2., -2., 2.], [-1.5, 1., -3., 2., -2.]],
dtype=np.float32)
# Suppose head1 and head2 have the following logits dimension.
head1.logits_dimension = 2
head2.logits_dimension = 3
# After splitting, the result will be:
logits_dict = {'head1_name': [[-1., 1.], [-1.5, 1.]],
'head2_name': [[2., -2., 2.], [-3., 2., -2.]]}
```
#### Usage:
```
def model_fn(features, labels, mode):
# Create simple heads and specify head name.
head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1')
head2 = tf.estimator.BinaryClassHead(name='head2')
# Create multi-head from two simple heads.
head = tf.estimator.MultiHead([head1, head2])
# Create logits for the multihead. The result of logits is a `Tensor`.
logits = logit_fn(logits_dimension=head.logits_dimension)
# Return the merged EstimatorSpec
return head.create_estimator_spec(..., logits=logits, ...)
```
| Args |
| `heads` | List or tuple of `Head` instances. All heads must have `name` specified. The first head in the list is the default used at serving time. |
| `head_weights` | Optional list of weights, same length as `heads`. Used when merging losses to calculate the weighted sum of losses from each head. If `None`, all losses are weighted equally. |
| Attributes |
| `logits_dimension` | See `base_head.Head` for details. |
| `loss_reduction` | See `base_head.Head` for details. |
| `name` | See `base_head.Head` for details. |
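For example, merging losses with unequal weights is a one-argument change to the constructor (the weights below are arbitrary, for illustration only):
```
# head1's loss counts twice as much as head2's in the summed training loss.
multi_head = tf.estimator.MultiHead([head1, head2], head_weights=[2.0, 1.0])
```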
Methods
-------
### `create_estimator_spec`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_head.py#L396-L511)
```
create_estimator_spec(
features,
mode,
logits,
labels=None,
optimizer=None,
trainable_variables=None,
train_op_fn=None,
update_ops=None,
regularization_losses=None
)
```
Returns a `model_fn.EstimatorSpec`.
| Args |
| `features` | Input `dict` of `Tensor` or `SparseTensor` objects. |
| `mode` | Estimator's `ModeKeys`. |
| `logits` | Input `dict` keyed by head name, or logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. For many applications, the `Tensor` shape is `[batch_size, logits_dimension]`. If logits is a `Tensor`, it will split the `Tensor` along the last dimension and distribute it appropriately among the heads. Check `MultiHead` for examples. |
| `labels` | Input `dict` keyed by head name. For each head, the label value can be an integer or string `Tensor` with shape matching its corresponding `logits`. `labels` is a required argument when `mode` equals `TRAIN` or `EVAL`. |
| `optimizer` | A [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) instance to optimize the loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, trainable_variables)`, which updates variables to minimize `loss`. |
| `trainable_variables` | A list or tuple of `Variable` objects to update to minimize `loss`. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable\_variables need to be passed explicitly here. |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns `train_op`. Used if `optimizer` is `None`. |
| `update_ops` | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE\_OPS collection. As Tensorflow 2.x doesn't have collections, update\_ops need to be passed explicitly here. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. These losses are usually expressed as a batch average, so for best results, in each head, users need to use the default `loss_reduction=SUM_OVER_BATCH_SIZE` to avoid scaling errors. Compared to the per-head regularization losses, this loss regularizes the merged loss of all heads and is added to the overall training loss of the multi head. |
| Returns |
| A `model_fn.EstimatorSpec` instance. |
| Raises |
| `ValueError` | If both `train_op_fn` and `optimizer` are `None` in TRAIN mode, or if both are set. If `mode` is not in Estimator's `ModeKeys`. |
### `loss`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_head.py#L307-L341)
```
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
```
Returns regularized training loss. See `base_head.Head` for details.
### `metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_head.py#L354-L366)
```
metrics(
regularization_losses=None
)
```
Creates metrics. See `base_head.Head` for details.
### `predictions`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_head.py#L343-L352)
```
predictions(
logits, keys=None
)
```
Creates predictions. See `base_head.Head` for details.
### `update_metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_head.py#L368-L394)
```
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
```
Updates eval metrics. See `base_head.Head` for details.
tensorflow tf.estimator.LinearClassifier tf.estimator.LinearClassifier
=============================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear.py#L770-L948)
Linear classifier model.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.LinearClassifier(
feature_columns,
model_dir=None,
n_classes=2,
weight_column=None,
label_vocabulary=None,
optimizer='Ftrl',
config=None,
warm_start_from=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
sparse_combiner='sum'
)
```
Train a linear model to classify instances into one of multiple possible classes. When the number of possible classes is 2, this is binary classification.
#### Example:
```
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=tf.keras.optimizers.Ftrl(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
    optimizer=lambda: tf.keras.optimizers.Ftrl(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
* for each `column` in `feature_columns`:
+ if `column` is a `SparseColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedSparseColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `RealValuedColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss is calculated by using softmax cross entropy.
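As a hedged illustration of the feature requirements above, an `input_fn` for a single hash-bucket column might look like this (the column name `'terms'` and the data are hypothetical):
```
def input_fn_train():
  # A dict keyed by feature-column name, plus integer class labels.
  features = {'terms': tf.constant([['cat', 'dog'], ['fish', 'bird']])}
  labels = tf.constant([1, 0])
  return tf.data.Dataset.from_tensors((features, labels))
```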
| Args |
| `feature_columns` | An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from `FeatureColumn`. |
| `model_dir` | Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model. |
| `n_classes` | number of label classes. Default is binary classification. Note that class labels are integers representing the class index (i.e. values from 0 to n\_classes-1). For arbitrary label values (e.g. string labels), convert to class indices first. |
| `weight_column` | A string or a `_NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the `features`. If it is a `_NumericColumn`, raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get weight tensor. |
| `label_vocabulary` | A list of strings representing possible label values. If given, labels must be of string type and have any value in `label_vocabulary`. If it is not given, that means labels are already encoded as integers or floats within [0, 1] for `n_classes=2` and encoded as integer values in {0, 1,..., n\_classes-1} for `n_classes` > 2. Also there will be errors if the vocabulary is not provided and labels are strings. |
| `optimizer` | An instance of `tf.keras.optimizers.*` or [`tf.estimator.experimental.LinearSDCA`](experimental/linearsdca) used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer. |
| `config` | `RunConfig` object to configure the runtime settings. |
| `warm_start_from` | A string filepath to a checkpoint to warm-start from, or a `WarmStartSettings` object to fully configure warm-starting. If the string filepath is provided instead of a `WarmStartSettings`, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| `sparse_combiner` | A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see `tf.feature_column.linear_model`. |
| Raises |
| `ValueError` | if n\_classes < 2. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs. test data. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs. test data. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
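A short sketch of calling `evaluate` and reading the returned dict (the names are hypothetical; the `accuracy` key is the one documented above for canned classifiers):
```
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=100)
print(metrics['global_step'], metrics['loss'], metrics['accuracy'])
```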
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
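A hedged sketch of exporting only the predict graph through the mode map (`serving_input_receiver_fn` and the export path are assumptions):
```
export_path = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/exports',
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
    })
```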
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
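A minimal sketch of a serving input receiver and an export call (the feature name `'x'` and the shapes are hypothetical):
```
def serving_input_receiver_fn():
  # Serving requests feed this placeholder; here it doubles as the model feature.
  inputs = {'x': tf.compat.v1.placeholder(tf.float32, shape=[None, 1])}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_path = estimator.export_saved_model('/tmp/exports', serving_input_receiver_fn)
```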
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of string, name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
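Together with `get_variable_names`, this allows a quick checkpoint-inspection sketch (assuming the estimator has already produced a checkpoint):
```
for name in estimator.get_variable_names():
  # Each value is returned as a numpy array.
  print(name, estimator.get_variable_value(name).shape)
```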
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used, the rest of the predictions will be filtered out of the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If batch length of predictions is not the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you do not want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.train_and_evaluate tf.estimator.train\_and\_evaluate
=================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/training.py#L297-L504)
Train and evaluate the `estimator`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.train_and_evaluate`](https://www.tensorflow.org/api_docs/python/tf/estimator/train_and_evaluate)
```
tf.estimator.train_and_evaluate(
estimator, train_spec, eval_spec
)
```
This utility function trains, evaluates, and (optionally) exports the model by using the given `estimator`. All training related specification is held in `train_spec`, including training `input_fn` and training max steps, etc. All evaluation and export related specification is held in `eval_spec`, including evaluation `input_fn`, steps, etc.
This utility function provides consistent behavior for both local (non-distributed) and distributed configurations. The default distribution configuration is parameter server-based between-graph replication. For other types of distribution configurations such as all-reduce training, please use [DistributionStrategies](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/distribute).
Overfitting: In order to avoid overfitting, it is recommended to set up the training `input_fn` to shuffle the training data properly.
Stop condition: In order to support both distributed and non-distributed configuration reliably, the only supported stop condition for model training is `train_spec.max_steps`. If `train_spec.max_steps` is `None`, the model is trained forever. *Use with care* if model stop condition is different. For example, assume that the model is expected to be trained with one epoch of training data, and the training `input_fn` is configured to throw `OutOfRangeError` after going through one epoch, which stops the [`Estimator.train`](../compat/v1/estimator/estimator#train). For a three-training-worker distributed configuration, each training worker is likely to go through the whole epoch independently. So, the model will be trained with three epochs of training data instead of one epoch.
Example of local (non-distributed) training:
```
# Set up feature columns.
categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_feature_a, ...)
... # other feature columns
estimator = DNNClassifier(
    feature_columns=[categorical_feature_a_emb, ...],
hidden_units=[1024, 512, 256])
# Or set up the model directory
# estimator = DNNClassifier(
# config=tf.estimator.RunConfig(
# model_dir='/my_model', save_summary_steps=100),
# feature_columns=[categorical_feature_a_emb, ...],
# hidden_units=[1024, 512, 256])
# Input pipeline for train and evaluate.
def train_input_fn(): # returns x, y
# please shuffle the data.
pass
def eval_input_fn(): # returns x, y
pass
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
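To also export during `train_and_evaluate`, an `Exporter` can be attached to the `EvalSpec` (a sketch; `serving_input_receiver_fn` is assumed to exist):
```
exporter = tf.estimator.LatestExporter('saved_model', serving_input_receiver_fn)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, exporters=[exporter])
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```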
Note that in the current implementation `estimator.evaluate` will be called multiple times. This means that the evaluation graph (including eval\_input\_fn) will be re-created for each `evaluate` call. `estimator.train` will be called only once.
Example of distributed training:
For distributed training, the code above can be used without change (please make sure that [`RunConfig.model_dir`](runconfig#model_dir) for all workers is set to the same directory, i.e., a shared file system that all workers can read and write to). The only extra work to do is setting the environment variable `TF_CONFIG` properly for each worker.
Also see [Distributed TensorFlow](https://www.tensorflow.org/deploy/distributed).
Setting the environment variable depends on the platform. For example, on Linux, it can be done as follows (`$` is the shell prompt):
```
$ TF_CONFIG='<replace_with_real_content>' python train_model.py
```
For the content in `TF_CONFIG`, assume that the training cluster spec looks like:
```
cluster = {"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]}
```
Example of `TF_CONFIG` for chief training worker (must have one and only one):
```
# This should be a JSON string, which is set as environment variable. Usually
# the cluster manager handles that.
TF_CONFIG='{
"cluster": {
"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]
},
"task": {"type": "chief", "index": 0}
}'
```
Note that the chief worker also does the model training job, similar to other non-chief training workers (see next paragraph). In addition to the model training, it manages some extra work, e.g., checkpoint saving and restoring, writing summaries, etc.
Example of `TF_CONFIG` for non-chief training worker (optional, could be multiple):
```
# This should be a JSON string, which is set as environment variable. Usually
# the cluster manager handles that.
TF_CONFIG='{
"cluster": {
"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]
},
"task": {"type": "worker", "index": 0}
}'
```
where `task.index` should be set to 0, 1, or 2 in this example, respectively, for the non-chief training workers.
Example of `TF_CONFIG` for parameter server, aka ps (could be multiple):
```
# This should be a JSON string, which is set as environment variable. Usually
# the cluster manager handles that.
TF_CONFIG='{
"cluster": {
"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]
},
"task": {"type": "ps", "index": 0}
}'
```
where `task.index` should be set to 0 or 1 in this example, respectively, for the parameter servers.
Example of `TF_CONFIG` for the evaluator task. The evaluator is a special task that is not part of the training cluster; there can be only one, and it is used for model evaluation.
```
# This should be a JSON string, which is set as environment variable. Usually
# the cluster manager handles that.
TF_CONFIG='{
"cluster": {
"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]
},
"task": {"type": "evaluator", "index": 0}
}'
```
When `distribute` or `experimental_distribute.train_distribute` and `experimental_distribute.remote_cluster` are set, this method will start a client running on the current host which connects to the `remote_cluster` for training and evaluation.
| Args |
| `estimator` | An `Estimator` instance to train and evaluate. |
| `train_spec` | A `TrainSpec` instance to specify the training specification. |
| `eval_spec` | A `EvalSpec` instance to specify the evaluation and export specification. |
| Returns |
| A tuple of the result of the `evaluate` call to the `Estimator` and the export results using the specified `Exporter`s. Currently, the return value is undefined for distributed training mode. |
| Raises |
| `ValueError` | if environment variable `TF_CONFIG` is incorrectly set. |
tensorflow tf.estimator.DNNLinearCombinedClassifier tf.estimator.DNNLinearCombinedClassifier
========================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/dnn_linear_combined.py#L393-L590)
An estimator for TensorFlow Linear and DNN joined classification models.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.DNNLinearCombinedClassifier(
model_dir=None,
linear_feature_columns=None,
linear_optimizer='Ftrl',
dnn_feature_columns=None,
dnn_optimizer='Adagrad',
dnn_hidden_units=None,
dnn_activation_fn=tf.nn.relu,
dnn_dropout=None,
n_classes=2,
weight_column=None,
label_vocabulary=None,
config=None,
warm_start_from=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
batch_norm=False,
linear_sparse_combiner='sum'
)
```
>
> **Note:** This estimator is also known as wide-n-deep.
>
#### Example:
```
numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_column_a, ...)
categorical_feature_b_emb = embedding_column(
    categorical_column=categorical_column_b, ...)
estimator = tf.estimator.DNNLinearCombinedClassifier(
# wide settings
linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
linear_optimizer=tf.keras.optimizers.Ftrl(...),
# deep settings
dnn_feature_columns=[
categorical_feature_a_emb, categorical_feature_b_emb,
numeric_feature],
dnn_hidden_units=[1000, 500, 100],
dnn_optimizer=tf.keras.optimizers.Adagrad(...),
# warm-start settings
warm_start_from="/path/to/checkpoint/dir")
# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001,
l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
        decay_rate=0.96))
# It is the same for linear_optimizer.
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* for each `column` in `dnn_feature_columns` + `linear_feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss is calculated by using softmax cross entropy.
| Args |
| `model_dir` | Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model. |
| `linear_feature_columns` | An iterable containing all the feature columns used by linear part of the model. All items in the set must be instances of classes derived from `FeatureColumn`. |
| `linear_optimizer` | An instance of `tf.keras.optimizers.*` used to apply gradients to the linear part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer. |
| `dnn_feature_columns` | An iterable containing all the feature columns used by deep part of the model. All items in the set must be instances of classes derived from `FeatureColumn`. |
| `dnn_optimizer` | An instance of `tf.keras.optimizers.*` used to apply gradients to the deep part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer. |
| `dnn_hidden_units` | List of hidden units per layer. All layers are fully connected. |
| `dnn_activation_fn` | Activation function applied to each layer. If None, will use [`tf.nn.relu`](../nn/relu). |
| `dnn_dropout` | When not None, the probability we will drop out a given coordinate. |
| `n_classes` | Number of label classes. Defaults to 2, namely binary classification. Must be > 1. |
| `weight_column` | A string or a `_NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the `features`. If it is a `_NumericColumn`, raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get weight tensor. |
| `label_vocabulary` | A list of strings representing possible label values. If given, labels must be of string type and have any value in `label_vocabulary`. If it is not given, that means labels are already encoded as integers or floats within [0, 1] for `n_classes=2` and encoded as integer values in {0, 1,..., n\_classes-1} for `n_classes` > 2. Also there will be errors if the vocabulary is not provided and labels are strings. |
| `config` | RunConfig object to configure the runtime settings. |
| `warm_start_from` | A string filepath to a checkpoint to warm-start from, or a `WarmStartSettings` object to fully configure warm-starting. If the string filepath is provided instead of a `WarmStartSettings`, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| `batch_norm` | Whether to use batch normalization after each hidden layer. |
| `linear_sparse_combiner` | A string specifying how to reduce the linear model if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see `tf.feature_column.linear_model`. |
| Raises |
| `ValueError` | If both linear\_feature\_columns and dnn\_feature\_columns are empty at the same time. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs. test data. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
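As a hedged sketch, an evaluation call might look like the following; the `estimator` object, the feature name `'x'`, and the data are illustrative:

```
import tensorflow as tf

def eval_input_fn():
  # A tf.data.Dataset of (features, labels) tuples, as described above.
  features = {'x': tf.constant([[1.0], [2.0], [3.0], [4.0]])}
  labels = tf.constant([[0], [1], [1], [0]])
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

# `estimator` is assumed to have been constructed and trained earlier.
metrics = estimator.evaluate(input_fn=eval_input_fn, name='test')
print(metrics['loss'], metrics['global_step'])
```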
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
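A hedged sketch of exporting only the serving (`PREDICT`) graph via this method; the feature spec and export directory are illustrative, and `estimator` is assumed to exist:

```
import tensorflow as tf

feature_spec = {'x': tf.io.FixedLenFeature([1], tf.float32)}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

export_path = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/exports',
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
    })
```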
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
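Continuing the sketch above (the receiver function and path remain illustrative), a plain serving export reduces to:

```
export_path = estimator.export_saved_model(
    '/tmp/exports', serving_input_receiver_fn)
```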
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | A string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
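A hedged sketch of inspecting checkpointed variables; the variable name below depends on the model architecture and is purely illustrative:

```
for name in estimator.get_variable_names():
  print(name)

# Returns a numpy array holding the variable's checkpointed value.
weights = estimator.get_variable_value('dnn/hiddenlayer_0/kernel')
```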
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch lengths of `predictions` are not all the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
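As a hedged sketch, prediction over a small in-memory feature set might look like this; the feature name `'x'` and the data are illustrative:

```
import tensorflow as tf

def predict_input_fn():
  features = {'x': tf.constant([[1.0], [2.0]])}
  return tf.data.Dataset.from_tensor_slices(features).batch(1)

for pred in estimator.predict(input_fn=predict_input_fn):
  print(pred)  # A dict of prediction values, keyed as in `predictions`.
```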
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
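A short sketch of the incremental semantics described above, assuming a fresh `model_dir` and a previously defined `train_input_fn`:

```
estimator.train(input_fn=train_input_fn, steps=100)      # global step: 100
estimator.train(input_fn=train_input_fn, steps=100)      # global step: 200
estimator.train(input_fn=train_input_fn, max_steps=200)  # no-op: already at 200
```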
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.MultiLabelHead tf.estimator.MultiLabelHead
===========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_label_head.py#L35-L593)
Creates a `Head` for multi-label classification.
Inherits From: [`Head`](head)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.MultiLabelHead`](https://www.tensorflow.org/api_docs/python/tf/estimator/MultiLabelHead)
```
tf.estimator.MultiLabelHead(
n_classes,
weight_column=None,
thresholds=None,
label_vocabulary=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
loss_fn=None,
classes_for_class_based_metrics=None,
name=None
)
```
Multi-label classification handles the case where each example may have zero or more associated labels, from a discrete set. This is distinct from `MultiClassHead` which has exactly one label per example.
Uses `sigmoid_cross_entropy` loss, averaged over classes and weighted-summed over the batch. Namely, if the input logits have shape `[batch_size, n_classes]`, the loss is the average over `n_classes` and the weighted sum over `batch_size`.
The head expects `logits` with shape `[D0, D1, ... DN, n_classes]`. In many applications, the shape is `[batch_size, n_classes]`.
#### Labels can be:
* A multi-hot tensor of shape `[D0, D1, ... DN, n_classes]`
* An integer `SparseTensor` of class indices. The `dense_shape` must be `[D0, D1, ... DN, ?]` and the values within `[0, n_classes)`.
* If `label_vocabulary` is given, either a string `SparseTensor` whose `dense_shape` is `[D0, D1, ... DN, ?]` and whose values are within `label_vocabulary`, or a multi-hot tensor of shape `[D0, D1, ... DN, n_classes]`.
If `weight_column` is specified, weights must be of shape `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.
Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or `(labels, logits, features)` as arguments and returns unreduced loss with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support indicator `labels` with shape `[D0, D1, ... DN, n_classes]`. Namely, the head applies `label_vocabulary` to the input labels before passing them to `loss_fn`.
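As a hedged sketch, a custom `loss_fn` satisfying this contract might average a per-class sigmoid cross entropy while keeping the example dimension unreduced; the function name is illustrative:

```
import tensorflow as tf

def my_loss_fn(labels, logits):
  # `labels` arrive multi-hot with shape [batch_size, n_classes].
  per_class = tf.nn.sigmoid_cross_entropy_with_logits(
      labels=tf.cast(labels, tf.float32), logits=logits)
  # Average over classes; keep shape [batch_size, 1] (unreduced over examples).
  return tf.reduce_mean(per_class, axis=-1, keepdims=True)

head = tf.estimator.MultiLabelHead(n_classes=3, loss_fn=my_loss_fn)
```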
#### Usage:
```
n_classes = 2
head = tf.estimator.MultiLabelHead(n_classes)
logits = np.array([[-1., 1.], [-1.5, 1.5]], dtype=np.float32)
labels = np.array([[1, 0], [1, 1]], dtype=np.int64)
features = {'x': np.array([[41], [42]], dtype=np.int32)}
# expected_loss = sum(_sigmoid_cross_entropy(labels, logits)) / batch_size
# = sum(1.31326169, 0.9514133) / 2 = 1.13
loss = head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
1.13
eval_metrics = head.metrics()
updated_metrics = head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
auc : 0.33
auc_precision_recall : 0.77
average_loss : 1.13
preds = head.predictions(logits)
print(preds['logits'])
tf.Tensor(
[[-1. 1. ]
[-1.5 1.5]], shape=(2, 2), dtype=float32)
```
Usage with a canned estimator:
```
my_head = tf.estimator.MultiLabelHead(n_classes=3)
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
```
It can also be used with a custom `model_fn`. Example:
```
def _my_model_fn(features, labels, mode):
my_head = tf.estimator.MultiLabelHead(n_classes=3)
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
optimizer=tf.keras.optimizers.Adagrad(lr=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
```
| Args |
| `n_classes` | Number of classes, must be greater than 1 (for 1 class, use `BinaryClassHead`). |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. Per-class weighting is not supported. |
| `thresholds` | Iterable of floats in the range `(0, 1)`. Accuracy, precision and recall metrics are evaluated for each threshold value. The threshold is applied to the predicted probabilities, i.e. above the threshold is `true`, below is `false`. |
| `label_vocabulary` | A list of strings representing possible label values. If it is not given, labels must already be encoded as integers within [0, n\_classes) or as a multi-hot Tensor. If given, labels must be a `SparseTensor` of `string` type with values in `label_vocabulary`. Errors will be raised if the vocabulary is not provided and labels are strings. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Decides how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by batch size. |
| `loss_fn` | Optional loss function. |
| `classes_for_class_based_metrics` | List of integer class IDs or string class names for which per-class metrics are evaluated. If integers, all must be in the range `[0, n_classes - 1]`. If strings, all must be in `label_vocabulary`. |
| `name` | Name of the head. If provided, summary and metrics keys will be suffixed by `"/" + name`. Also used as `name_scope` when creating ops. |
| Attributes |
| `logits_dimension` | See `base_head.Head` for details. |
| `loss_reduction` | See `base_head.Head` for details. |
| `name` | See `base_head.Head` for details. |
Methods
-------
### `create_estimator_spec`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L224-L292)
```
create_estimator_spec(
features,
mode,
logits,
labels=None,
optimizer=None,
trainable_variables=None,
train_op_fn=None,
update_ops=None,
regularization_losses=None
)
```
Returns `EstimatorSpec` that a model\_fn can return.
It is recommended to pass all args via name.
| Args |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often used to fetch the example-weight tensor. |
| `mode` | Estimator's `ModeKeys`. |
| `logits` | Logits `Tensor` to be used by the head. |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `optimizer` | An [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) instance to optimize the loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, trainable_variables)`, which updates variables to minimize `loss`. |
| `trainable_variables` | A list or tuple of `Variable` objects to update to minimize `loss`. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable\_variables need to be passed explicitly here. |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns an op to optimize the model with the loss in TRAIN mode. Used if `optimizer` is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in TRAIN mode. By default, it is `None` in other modes. If you want to optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use [`EstimatorSpec.loss`](estimatorspec#loss) to compute and apply gradients. |
| `update_ops` | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE\_OPS collection. As Tensorflow 2.x doesn't have collections, update\_ops need to be passed explicitly here. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| `EstimatorSpec`. |
### `loss`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_label_head.py#L339-L362)
```
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
```
Returns regularized training loss. See `base_head.Head` for details.
### `metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_label_head.py#L399-L429)
```
metrics(
regularization_losses=None
)
```
Creates metrics. See `base_head.Head` for details.
### `predictions`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_label_head.py#L364-L397)
```
predictions(
logits, keys=None
)
```
Returns predictions based on keys.
See `base_head.Head` for details.
| Args |
| `logits` | logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. For many applications, the shape is `[batch_size, logits_dimension]`. |
| `keys` | A list of prediction keys. A key can be either a class variable of `prediction_keys.PredictionKeys` or its string value, e.g. `prediction_keys.PredictionKeys.LOGITS` or `'logits'`. |
| Returns |
| A dict of predictions. |
### `update_metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_label_head.py#L431-L482)
```
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
```
Updates eval metrics. See `base_head.Head` for details.
tensorflow tf.estimator.CheckpointSaverHook tf.estimator.CheckpointSaverHook
================================
Saves checkpoints every N steps or seconds.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.CheckpointSaverHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/CheckpointSaverHook), [`tf.compat.v1.train.CheckpointSaverHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/CheckpointSaverHook)
```
tf.estimator.CheckpointSaverHook(
checkpoint_dir,
save_secs=None,
save_steps=None,
saver=None,
checkpoint_basename='model.ckpt',
scaffold=None,
listeners=None,
save_graph_def=True
)
```
| Args |
| `checkpoint_dir` | `str`, base directory for the checkpoint files. |
| `save_secs` | `int`, save every N secs. |
| `save_steps` | `int`, save every N steps. |
| `saver` | `Saver` object, used for saving. |
| `checkpoint_basename` | `str`, base name for the checkpoint files. |
| `scaffold` | `Scaffold`, use to get saver object. |
| `listeners` | List of `CheckpointSaverListener` subclass instances. Used for callbacks that run immediately before or after this hook saves the checkpoint. |
| `save_graph_def` | Whether to save the GraphDef and MetaGraphDef to `checkpoint_dir`. The GraphDef is saved after the session is created as `graph.pbtxt`. MetaGraphDefs are saved out for every checkpoint as `model.ckpt-*.meta`. |
| Raises |
| `ValueError` | One of `save_steps` or `save_secs` must be set. |
| `ValueError` | At most one of `saver` or `scaffold` may be set. |
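A minimal sketch of constructing the hook; the directory and save interval are illustrative. The hook is typically passed via the `hooks` argument of an `Estimator`'s `train` call:

```
import tensorflow as tf

saver_hook = tf.estimator.CheckpointSaverHook(
    checkpoint_dir='/tmp/my_model',
    save_steps=500)

# estimator.train(input_fn=train_input_fn, hooks=[saver_hook])
```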
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L586-L603)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L608-L617)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains the results of the ops/tensors requested by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A `SessionRunValues` object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L605-L606)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
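A hedged sketch of a custom hook using this mechanism to fetch the global step on every `run()` call; the class name is illustrative:

```
import tensorflow as tf

class GlobalStepLoggerHook(tf.estimator.SessionRunHook):

  def begin(self):
    # The graph is still mutable here; look up the global step tensor.
    self._global_step = tf.compat.v1.train.get_global_step()

  def before_run(self, run_context):
    # Ask the upcoming session.run() to also evaluate the global step.
    return tf.estimator.SessionRunArgs(self._global_step)

  def after_run(self, run_context, run_values):
    print('global step:', run_values.results)
```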
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L577-L584)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L619-L624)
```
end(
session
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than `OutOfRangeError` or `StopIteration`, then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises `OutOfRangeError` or `StopIteration`: in that case `end()` is called but `after_run()` is not.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.add_metrics tf.estimator.add\_metrics
=========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/extenders.py#L29-L99)
Creates a new [`tf.estimator.Estimator`](estimator) which has given metrics.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.add_metrics`](https://www.tensorflow.org/api_docs/python/tf/estimator/add_metrics)
```
tf.estimator.add_metrics(
estimator, metric_fn
)
```
#### Example:
```
def my_auc(labels, predictions):
auc_metric = tf.keras.metrics.AUC(name="my_auc")
auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'])
return {'auc': auc_metric}
estimator = tf.estimator.DNNClassifier(...)
estimator = tf.estimator.add_metrics(estimator, my_auc)
estimator.train(...)
estimator.evaluate(...)
```
Example usage of a custom metric which uses features:
```
def my_auc(labels, predictions, features):
auc_metric = tf.keras.metrics.AUC(name="my_auc")
auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'],
sample_weight=features['weight'])
return {'auc': auc_metric}
estimator = tf.estimator.DNNClassifier(...)
estimator = tf.estimator.add_metrics(estimator, my_auc)
estimator.train(...)
estimator.evaluate(...)
```
| Args |
| `estimator` | A [`tf.estimator.Estimator`](estimator) object. |
| `metric_fn` | A function which should obey the following signature: * Args: can only have the following four arguments, in any order:
+ predictions: Predictions `Tensor` or dict of `Tensor` created by given `estimator`.
+ features: Input `dict` of `Tensor` objects created by `input_fn` which is given to `estimator.evaluate` as an argument.
+ labels: Labels `Tensor` or dict of `Tensor` created by `input_fn` which is given to `estimator.evaluate` as an argument.
+ config: config attribute of the `estimator`.
+ Returns: Dict of metric results keyed by name. Final metrics are a union of this and the `estimator`'s existing metrics. If there is a name conflict between this and the `estimator`'s existing metrics, this will override the existing one. The values of the dict are the results of calling a metric function, namely a `(metric_tensor, update_op)` tuple.
|
| Returns |
| A new [`tf.estimator.Estimator`](estimator) which has a union of original metrics with given ones. |
tensorflow tf.estimator.DNNClassifier tf.estimator.DNNClassifier
==========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/dnn.py#L590-L763)
A classifier for TensorFlow DNN models.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.DNNClassifier(
hidden_units,
feature_columns,
model_dir=None,
n_classes=2,
weight_column=None,
label_vocabulary=None,
optimizer='Adagrad',
activation_fn=tf.nn.relu,
dropout=None,
config=None,
warm_start_from=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
batch_norm=False
)
```
#### Example:
```
categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_column=categorical_feature_b, ...)
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256])
# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
* for each `column` in `feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss is calculated by using softmax cross entropy.
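As a hedged sketch, a concrete `input_fn_train` matching these expectations might look like the following; the feature names and data are illustrative and must line up with the `feature_columns` passed to the constructor:

```
import tensorflow as tf

def input_fn_train():
  features = {'category_a': tf.constant(['a1', 'a2']),
              'category_b': tf.constant(['b1', 'b2'])}
  labels = tf.constant([0, 1])  # label class indices
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)
```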
| Args |
| `hidden_units` | Iterable of the number of hidden units per layer. All layers are fully connected. Ex. `[64, 32]` means the first layer has 64 nodes and the second one has 32. |
| `feature_columns` | An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from `_FeatureColumn`. |
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `n_classes` | Number of label classes. Defaults to 2, namely binary classification. Must be > 1. |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the `features`. If it is a `_NumericColumn`, raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get weight tensor. |
| `label_vocabulary` | A list of strings representing possible label values. If given, labels must be of string type and have any value in `label_vocabulary`. If it is not given, labels must already be encoded as an integer or float within [0, 1] for `n_classes=2`, or as integer values in {0, 1,..., n\_classes-1} for `n_classes` > 2. Errors will be raised if the vocabulary is not provided and labels are strings. |
| `optimizer` | An instance of `tf.keras.optimizers.*` used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or a callable. Defaults to the Adagrad optimizer. |
| `activation_fn` | Activation function applied to each layer. If `None`, will use [`tf.nn.relu`](../nn/relu). |
| `dropout` | When not `None`, the probability we will drop out a given coordinate. |
| `config` | `RunConfig` object to configure the runtime settings. |
| `warm_start_from` | A string filepath to a checkpoint to warm-start from, or a `WarmStartSettings` object to fully configure warm-starting. If the string filepath is provided instead of a `WarmStartSettings`, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| `batch_norm` | Whether to use batch normalization after each hidden layer. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing the evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | A string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch lengths of `predictions` are not all the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.MultiClassHead tf.estimator.MultiClassHead
===========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_class_head.py#L34-L496)
Creates a `Head` for multi class classification.
Inherits From: [`Head`](head)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.MultiClassHead`](https://www.tensorflow.org/api_docs/python/tf/estimator/MultiClassHead)
```
tf.estimator.MultiClassHead(
n_classes,
weight_column=None,
label_vocabulary=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
loss_fn=None,
name=None
)
```
Uses `sparse_softmax_cross_entropy` loss.
The head expects `logits` with shape `[D0, D1, ... DN, n_classes]`. In many applications, the shape is `[batch_size, n_classes]`.
`labels` must be a dense `Tensor` with shape matching `logits`, namely `[D0, D1, ... DN, 1]`. If `label_vocabulary` is given, `labels` must be a string `Tensor` with values from the vocabulary. If `label_vocabulary` is not given, `labels` must be an integer `Tensor` with values specifying the class index.
If `weight_column` is specified, weights must be of shape `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.
The loss is the weighted sum over the input dimensions. Namely, if the input labels have shape `[batch_size, 1]`, the loss is the weighted sum over `batch_size`.
Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or `(labels, logits, features, loss_reduction)` as arguments and returns unreduced loss with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support integer `labels` with shape `[D0, D1, ... DN, 1]`. Namely, the head applies `label_vocabulary` to the input labels before passing them to `loss_fn`.
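As a hedged sketch, a custom `loss_fn` satisfying this contract could wrap the sparse softmax cross entropy; the function name is illustrative:

```
import tensorflow as tf

def my_loss_fn(labels, logits):
  # `labels` arrive as integer class indices with shape [batch_size, 1].
  loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=tf.squeeze(labels, axis=-1), logits=logits)
  return tf.expand_dims(loss, axis=-1)  # unreduced, back to [batch_size, 1]

head = tf.estimator.MultiClassHead(n_classes=3, loss_fn=my_loss_fn)
```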
#### Usage:
```
n_classes = 3
head = tf.estimator.MultiClassHead(n_classes)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
# expected_loss = sum(cross_entropy(labels, logits)) / batch_size
# = sum(10, 0) / 2 = 5.
loss = head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
5.00
eval_metrics = head.metrics()
updated_metrics = head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
accuracy : 0.50
average_loss : 5.00
preds = head.predictions(logits)
print(preds['logits'])
tf.Tensor(
[[10. 0. 0.]
[ 0. 10. 0.]], shape=(2, 3), dtype=float32)
```
Usage with a canned estimator:
```
my_head = tf.estimator.MultiClassHead(n_classes=3)
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
```
It can also be used with a custom `model_fn`. Example:
```
def _my_model_fn(features, labels, mode):
my_head = tf.estimator.MultiClassHead(n_classes=3)
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
optimizer=tf.keras.optimizers.Adagrad(lr=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
```
| Args |
| `n_classes` | Number of classes, must be greater than 2 (for 2 classes, use `BinaryClassHead`). |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. |
| `label_vocabulary` | A list or tuple of strings representing possible label values. If it is not given, that means labels are already encoded as an integer within [0, n\_classes). If given, labels must be of string type and have any value in `label_vocabulary`. Note that errors will be raised if `label_vocabulary` is not provided but labels are strings. If both `n_classes` and `label_vocabulary` are provided, `label_vocabulary` should contain exactly `n_classes` items. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Decides how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch size * label_dimension`. |
| `loss_fn` | Optional loss function. |
| `name` | Name of the head. If provided, summary and metrics keys will be suffixed by `"/" + name`. Also used as `name_scope` when creating ops. |
| Attributes |
| `logits_dimension` | See `base_head.Head` for details. |
| `loss_reduction` | See `base_head.Head` for details. |
| `name` | See `base_head.Head` for details. |
Methods
-------
### `create_estimator_spec`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L224-L292)
```
create_estimator_spec(
features,
mode,
logits,
labels=None,
optimizer=None,
trainable_variables=None,
train_op_fn=None,
update_ops=None,
regularization_losses=None
)
```
Returns `EstimatorSpec` that a model\_fn can return.
It is recommended to pass all args via name.
| Args |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. |
| `mode` | Estimator's `ModeKeys`. |
| `logits` | Logits `Tensor` to be used by the head. |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `optimizer` | A [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) instance to optimize the loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, trainable_variables)`, which updates variables to minimize `loss`. |
| `trainable_variables` | A list or tuple of `Variable` objects to update to minimize `loss`. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable\_variables need to be passed explicitly here. |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns an op to optimize the model with the loss in TRAIN mode. Used if `optimizer` is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in TRAIN mode. By default, it is `None` in other modes. If you want to optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use [`EstimatorSpec.loss`](estimatorspec#loss) to compute and apply gradients. |
| `update_ops` | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE\_OPS collection. As Tensorflow 2.x doesn't have collections, update\_ops need to be passed explicitly here. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| `EstimatorSpec`. |
### `loss`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_class_head.py#L262-L285)
```
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
```
Returns regularized training loss. See `base_head.Head` for details.
### `metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_class_head.py#L345-L359)
```
metrics(
regularization_losses=None
)
```
Creates metrics. See `base_head.Head` for details.
### `predictions`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_class_head.py#L287-L343)
```
predictions(
logits, keys=None
)
```
Return predictions based on keys.
See `base_head.Head` for details.
| Args |
| `logits` | logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. For many applications, the shape is `[batch_size, logits_dimension]`. |
| `keys` | a list or tuple of prediction keys. Each key can be either the class variable of prediction\_keys.PredictionKeys or its string value, such as: prediction\_keys.PredictionKeys.CLASSES or 'classes'. If not specified, it will return the predictions for all valid keys. |
| Returns |
| A dict of predictions. |
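For example, to compute only a subset of the predictions (a short sketch reusing the `logits` from the usage example above; key names follow `prediction_keys.PredictionKeys`):
```
preds = head.predictions(logits, keys=['probabilities', 'class_ids'])
sorted(preds.keys())
['class_ids', 'probabilities']
```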
### `update_metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/multi_class_head.py#L361-L385)
```
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
```
Updates eval metrics. See `base_head.Head` for details.
tensorflow tf.estimator.FinalOpsHook tf.estimator.FinalOpsHook
=========================
A hook which evaluates `Tensors` at the end of a session.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.FinalOpsHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/FinalOpsHook), [`tf.compat.v1.train.FinalOpsHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/FinalOpsHook)
```
tf.estimator.FinalOpsHook(
final_ops, final_ops_feed_dict=None
)
```
| Args |
| `final_ops` | A single `Tensor`, a list of `Tensors` or a dictionary of names to `Tensors`. |
| `final_ops_feed_dict` | A feed dictionary to use when running `final_ops_dict`. |
| Attributes |
| `final_ops_values` | |
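A typical use is to read back values of tensors evaluated once, at the very end of a session. A minimal sketch, assuming graph mode and a hypothetical metric tensor `accuracy_op` built elsewhere:
```
final_ops_hook = tf.estimator.FinalOpsHook(
    final_ops={'final_accuracy': accuracy_op})
estimator.evaluate(input_fn=input_fn_eval, hooks=[final_ops_hook])
# After the session ends, the evaluated values are available here.
print(final_ops_hook.final_ops_values['final_accuracy'])
```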
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This differs from the situation in which `begin` is called in two essential ways:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L148-L165)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains results of requested ops/tensors by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L125-L146)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
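To illustrate this protocol, a hedged sketch of a custom hook that fetches one extra tensor per step (`loss_tensor` is a hypothetical tensor from the training graph):
```
class LossLoggerHook(tf.estimator.SessionRunHook):

  def before_run(self, run_context):
    # Ask the upcoming run() call to also evaluate `loss_tensor`.
    return tf.estimator.SessionRunArgs(fetches={'loss': loss_tensor})

  def after_run(self, run_context, run_values):
    # `run_values.results` holds the fetches requested in before_run().
    print('loss:', run_values.results['loss'])
```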
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L97-L106)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks can no longer modify the graph. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L972-L992)
```
end(
session
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than `OutOfRangeError` or `StopIteration` then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises `OutOfRangeError` or `StopIteration`. In that case `end()` is called but `after_run()` is not called.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.VocabInfo tf.estimator.VocabInfo
======================
Vocabulary information for warm-starting.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.VocabInfo`](https://www.tensorflow.org/api_docs/python/tf/estimator/VocabInfo), [`tf.compat.v1.train.VocabInfo`](https://www.tensorflow.org/api_docs/python/tf/estimator/VocabInfo)
```
tf.estimator.VocabInfo(
new_vocab,
new_vocab_size,
num_oov_buckets,
old_vocab,
old_vocab_size=-1,
backup_initializer=None,
axis=0
)
```
See [`tf.estimator.WarmStartSettings`](warmstartsettings) for examples of using VocabInfo to warm-start.
| Args |
| `new_vocab` | [Required] A path to the new vocabulary file (used with the model to be trained). |
| `new_vocab_size` | [Required] An integer indicating how many entries of the new vocabulary will be used in training. |
| `num_oov_buckets` | [Required] An integer indicating how many OOV buckets are associated with the vocabulary. |
| `old_vocab` | [Required] A path to the old vocabulary file (used with the checkpoint to be warm-started from). |
| `old_vocab_size` | [Optional] An integer indicating how many entries of the old vocabulary were used in the creation of the checkpoint. If not provided, the entire old vocabulary will be used. |
| `backup_initializer` | [Optional] A variable initializer used for variables corresponding to new vocabulary entries and OOV. If not provided, these entries will be zero-initialized. |
| `axis` | [Optional] Denotes what axis the vocabulary corresponds to. The default, 0, corresponds to the most common use case (embeddings or linear weights for binary classification / regression). An axis of 1 could be used for warm-starting output layers with class vocabularies. |
| Returns |
| A `VocabInfo` which represents the vocabulary information for warm-starting. |
| Raises |
| `ValueError` | If `axis` is neither 0 nor 1. |
#### Example usage:
```
embeddings_vocab_info = tf.estimator.VocabInfo(
new_vocab='embeddings_vocab',
new_vocab_size=100,
num_oov_buckets=1,
old_vocab='pretrained_embeddings_vocab',
old_vocab_size=10000,
backup_initializer=tf.compat.v1.truncated_normal_initializer(
mean=0.0, stddev=(1 / math.sqrt(embedding_dim))),
axis=0)
softmax_output_layer_kernel_vocab_info = tf.estimator.VocabInfo(
new_vocab='class_vocab',
new_vocab_size=5,
num_oov_buckets=0, # No OOV for classes.
old_vocab='old_class_vocab',
old_vocab_size=8,
backup_initializer=tf.compat.v1.glorot_uniform_initializer(),
axis=1)
softmax_output_layer_bias_vocab_info = tf.estimator.VocabInfo(
new_vocab='class_vocab',
new_vocab_size=5,
num_oov_buckets=0, # No OOV for classes.
old_vocab='old_class_vocab',
old_vocab_size=8,
backup_initializer=tf.compat.v1.zeros_initializer(),
axis=0)
# Currently, only axis=0 and axis=1 are supported.
```
| Attributes |
| `new_vocab` | A `namedtuple` alias for field number 0 |
| `new_vocab_size` | A `namedtuple` alias for field number 1 |
| `num_oov_buckets` | A `namedtuple` alias for field number 2 |
| `old_vocab` | A `namedtuple` alias for field number 3 |
| `old_vocab_size` | A `namedtuple` alias for field number 4 |
| `backup_initializer` | A `namedtuple` alias for field number 5 |
| `axis` | A `namedtuple` alias for field number 6 |
tensorflow tf.estimator.LinearEstimator tf.estimator.LinearEstimator
============================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear.py#L998-L1126)
An estimator for TensorFlow linear models with user-specified head.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.LinearEstimator(
head,
feature_columns,
model_dir=None,
optimizer='Ftrl',
config=None,
sparse_combiner='sum',
warm_start_from=None
)
```
#### Example:
```
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=lambda: tf.keras.optimizers.Ftrl(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
            decay_rate=0.96)))
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001))
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have following features, otherwise there will be a `KeyError`:
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
* for each `column` in `feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss and predicted output are determined by the specified head.
| Args |
| `head` | A `Head` instance constructed with a method such as [`tf.estimator.MultiLabelHead`](multilabelhead). |
| `feature_columns` | An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from `FeatureColumn`. |
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `optimizer` | An instance of `tf.keras.optimizers.*` used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer. |
| `config` | `RunConfig` object to configure the runtime settings. |
| `sparse_combiner` | A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see `tf.feature_column.linear_model`. |
| `warm_start_from` | A string filepath to a checkpoint to warm-start from, or a `WarmStartSettings` object to fully configure warm-starting. If the string filepath is provided instead of a `WarmStartSettings`, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing the evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
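A minimal sketch of a call, assuming a trained `estimator` and a `serving_input_receiver_fn` defined elsewhere (both names hypothetical):
```
export_path = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/exports',
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
    },
    assets_extra={'my_asset_file.txt': '/path/to/my_asset_file.txt'})
```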
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
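A minimal sketch, assuming a single dense feature named 'x' (hypothetical) and graph-mode placeholders for the serving inputs:
```
def serving_input_receiver_fn():
  inputs = {'x': tf.compat.v1.placeholder(tf.float32, shape=[None, 1])}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_path = estimator.export_saved_model(
    export_dir_base='/tmp/exports',
    serving_input_receiver_fn=serving_input_receiver_fn)
```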
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of string, name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If batch length of predictions is not the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally. If you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
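To make the incremental vs. absolute semantics concrete (a hedged sketch; `input_fn_train` as defined in the example above):
```
# `steps` is incremental: these two calls train for 20 steps in total.
estimator.train(input_fn=input_fn_train, steps=10)
estimator.train(input_fn=input_fn_train, steps=10)

# `max_steps` is absolute: the second call does nothing because the
# global step has already reached 100.
estimator.train(input_fn=input_fn_train, max_steps=100)
estimator.train(input_fn=input_fn_train, max_steps=100)
```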
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.NanTensorHook tf.estimator.NanTensorHook
==========================
Monitors the loss tensor and stops training if loss is NaN.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.NanTensorHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/NanTensorHook), [`tf.compat.v1.train.NanTensorHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/NanTensorHook)
```
tf.estimator.NanTensorHook(
loss_tensor, fail_on_nan_loss=True
)
```
Can either fail with exception or just stop training.
| Args |
| `loss_tensor` | `Tensor`, the loss tensor. |
| `fail_on_nan_loss` | `bool`, whether to raise exception when loss is NaN. |
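A minimal sketch, assuming a hypothetical `loss_tensor` from the training graph; with `fail_on_nan_loss=False` the hook requests a stop instead of raising:
```
nan_hook = tf.estimator.NanTensorHook(loss_tensor, fail_on_nan_loss=False)
estimator.train(input_fn=input_fn_train, hooks=[nan_hook])
```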
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This differs from the situation in which `begin` is called in two essential ways:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L781-L790)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains results of requested ops/tensors by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L778-L779)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L97-L106)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks can no longer modify the graph. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L167-L182)
```
end(
session
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than `OutOfRangeError` or `StopIteration` then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises `OutOfRangeError` or `StopIteration`. In that case `end()` is called but `after_run()` is not called.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.CheckpointSaverListener tf.estimator.CheckpointSaverListener
====================================
Interface for listeners that take action before or after checkpoint save.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.CheckpointSaverListener`](https://www.tensorflow.org/api_docs/python/tf/estimator/CheckpointSaverListener), [`tf.compat.v1.train.CheckpointSaverListener`](https://www.tensorflow.org/api_docs/python/tf/estimator/CheckpointSaverListener)
`CheckpointSaverListener` triggers only in steps when `CheckpointSaverHook` is triggered, and provides callbacks at the following points:
* before using the session
* before each call to `Saver.save()`
* after each call to `Saver.save()`
* at the end of session
To use a listener, implement a class and pass the listener to a `CheckpointSaverHook`, as in this example:
```
class ExampleCheckpointSaverListener(CheckpointSaverListener):
def begin(self):
# You can add ops to the graph here.
print('Starting the session.')
self.your_tensor = ...
def before_save(self, session, global_step_value):
print('About to write a checkpoint')
def after_save(self, session, global_step_value):
print('Done writing checkpoint.')
if decided_to_stop_training():
return True
def end(self, session, global_step_value):
print('Done with the session.')
...
listener = ExampleCheckpointSaverListener()
saver_hook = tf.estimator.CheckpointSaverHook(
checkpoint_dir, listeners=[listener])
with tf.compat.v1.train.MonitoredTrainingSession(chief_only_hooks=[saver_hook]):
...
```
A `CheckpointSaverListener` may simply take some action after every checkpoint save. It is also possible for the listener to use its own schedule to act less frequently, e.g. based on global\_step\_value. In this case, implementors should implement the `end()` method to handle actions related to the last checkpoint save. But the listener should not act twice if `after_save()` already handled this last checkpoint save.
A `CheckpointSaverListener` can request training to be stopped, by returning True in `after_save`. Please note that, in replicated distributed training setting, only `chief` should use this behavior. Otherwise each worker will do their own evaluation, which may be wasteful of resources.
Methods
-------
### `after_save`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L517-L518)
```
after_save(
session, global_step_value
)
```
### `before_save`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L514-L515)
```
before_save(
session, global_step_value
)
```
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L511-L512)
```
begin()
```
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L520-L521)
```
end(
session, global_step_value
)
```
tensorflow tf.estimator.DNNEstimator tf.estimator.DNNEstimator
=========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/dnn.py#L815-L966)
An estimator for TensorFlow DNN models with user-specified head.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.DNNEstimator(
head,
hidden_units,
feature_columns,
model_dir=None,
optimizer='Adagrad',
activation_fn=tf.nn.relu,
dropout=None,
config=None,
warm_start_from=None,
batch_norm=False
)
```
#### Example:
```
sparse_feature_a = sparse_column_with_hash_bucket(...)
sparse_feature_b = sparse_column_with_hash_bucket(...)
sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
...)
sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
...)
estimator = tf.estimator.DNNEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
hidden_units=[1024, 512, 256])
# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
            decay_rate=0.96)))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
hidden_units=[1024, 512, 256],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have following features, otherwise there will be a `KeyError`:
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
* for each `column` in `feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss and predicted output are determined by the specified head.
| Args |
| `head` | A `Head` instance constructed with a method such as [`tf.estimator.MultiLabelHead`](multilabelhead). |
| `hidden_units` | Iterable of number hidden units per layer. All layers are fully connected. Ex. `[64, 32]` means first layer has 64 nodes and second one has 32. |
| `feature_columns` | An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from `_FeatureColumn`. |
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `optimizer` | An instance of `tf.keras.optimizers.*` used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer. |
| `activation_fn` | Activation function applied to each layer. If `None`, will use [`tf.nn.relu`](../nn/relu). |
| `dropout` | When not `None`, the probability we will drop out a given coordinate. |
| `config` | `RunConfig` object to configure the runtime settings. |
| `warm_start_from` | A string filepath to a checkpoint to warm-start from, or a `WarmStartSettings` object to fully configure warm-starting. If the string filepath is provided instead of a `WarmStartSettings`, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. |
| `batch_norm` | Whether to use batch normalization after each hidden layer. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing the evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
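As a quick sketch, the two methods above are often combined to inspect checkpointed weights (this assumes `estimator` has already produced a checkpoint):
```
# Assumes `estimator` has trained at least one step, so a checkpoint exists.
for name in estimator.get_variable_names():
    value = estimator.get_variable_value(name)  # numpy array
    print(name, value.shape)
```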
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then the rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch lengths of different `predictions` tensors do not match and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
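A minimal usage sketch, assuming a trained `estimator` and a hypothetical `input_fn_predict` that yields features only:
```
# predict() returns a generator; iterate to run inference batch by batch.
for pred in estimator.predict(input_fn=input_fn_predict):
    # With yield_single_examples=True (the default), each `pred` holds the
    # predictions for a single example.
    print(pred)
```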
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: two calls to `train(steps=10)` train for 20 steps in total. If `OutOfRange` or `StopIteration` occurs mid-way, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
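The incremental semantics of `steps` versus the absolute semantics of `max_steps` can be seen in this sketch (assuming `estimator` and `input_fn_train` are defined):
```
# `steps` is incremental: these two calls train for 20 steps in total.
estimator.train(input_fn=input_fn_train, steps=10)
estimator.train(input_fn=input_fn_train, steps=10)

# `max_steps` is absolute: the second call is a no-op because the global
# step already reached 100 in the first call.
estimator.train(input_fn=input_fn_train, max_steps=100)
estimator.train(input_fn=input_fn_train, max_steps=100)
```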
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.GlobalStepWaiterHook tf.estimator.GlobalStepWaiterHook
=================================
Delays execution until global step reaches `wait_until_step`.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.GlobalStepWaiterHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/GlobalStepWaiterHook), [`tf.compat.v1.train.GlobalStepWaiterHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/GlobalStepWaiterHook)
```
tf.estimator.GlobalStepWaiterHook(
wait_until_step
)
```
This hook delays execution until the global step reaches `wait_until_step`. It is used to gradually start workers in distributed settings. One example usage would be setting `wait_until_step=int(K*log(task_id+1))`, assuming that task\_id=0 is the chief.
| Args |
| `wait_until_step` | an `int`, the global step to wait for before allowing execution to proceed. |
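A sketch of the staggered-start usage described above; `K` is an illustrative constant and `task_id` is assumed to come from the cluster configuration (e.g. TF\_CONFIG):
```
import math

K = 30  # illustrative scaling constant
hook = tf.estimator.GlobalStepWaiterHook(
    wait_until_step=int(K * math.log(task_id + 1)))  # chief waits 0 steps
estimator.train(input_fn=input_fn_train, hooks=[hook])
```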
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This differs from the situation in which `begin` is called in two essential ways:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L148-L165)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains results of requested ops/tensors by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L927-L948)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L920-L925)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call to `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L167-L182)
```
end(
session
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than OutOfRangeError or StopIteration then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises OutOfRangeError or StopIteration. In that case `end()` is called but `after_run()` is not called.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.NanLossDuringTrainingError tf.estimator.NanLossDuringTrainingError
=======================================
Error raised when the model loss becomes `NaN` during training (a `RuntimeError` subclass).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.NanLossDuringTrainingError`](https://www.tensorflow.org/api_docs/python/tf/estimator/NanLossDuringTrainingError), [`tf.compat.v1.train.NanLossDuringTrainingError`](https://www.tensorflow.org/api_docs/python/tf/estimator/NanLossDuringTrainingError)
```
tf.estimator.NanLossDuringTrainingError(
*args, **kwargs
)
```
tensorflow tf.estimator.EstimatorSpec tf.estimator.EstimatorSpec
==========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/model_fn.py#L36-L192) |
Ops and objects returned from a `model_fn` and passed to an `Estimator`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.EstimatorSpec`](https://www.tensorflow.org/api_docs/python/tf/estimator/EstimatorSpec)
```
tf.estimator.EstimatorSpec(
mode,
predictions=None,
loss=None,
train_op=None,
eval_metric_ops=None,
export_outputs=None,
training_chief_hooks=None,
training_hooks=None,
scaffold=None,
evaluation_hooks=None,
prediction_hooks=None
)
```
`EstimatorSpec` fully defines the model to be run by an `Estimator`.
| Args |
| `mode` | A `ModeKeys`. Specifies if this is training, evaluation or prediction. |
| `predictions` | Predictions `Tensor` or dict of `Tensor`. |
| `loss` | Training loss `Tensor`. Must be either scalar, or with shape `[1]`. |
| `train_op` | Op for the training step. |
| `eval_metric_ops` | Dict of metric results keyed by name. The values of the dict can be one of the following: (1) an instance of the `Metric` class, or (2) the results of calling a metric function, namely a `(metric_tensor, update_op)` tuple. `metric_tensor` should be evaluated without any impact on state (typically it is a pure computation based on variables). For example, it should not trigger the `update_op` or require any input fetching. |
| `export_outputs` | Describes the output signatures to be exported to `SavedModel` and used during serving. A dict `{name: output}` where: * name: An arbitrary name for this output.
* output: an `ExportOutput` object such as `ClassificationOutput`, `RegressionOutput`, or `PredictOutput`. Single-headed models only need to specify one entry in this dictionary. Multi-headed models should specify one entry for each head, one of which must be named using `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`. If no entry is provided, a default `PredictOutput` mapping to `predictions` will be created.
|
| `training_chief_hooks` | Iterable of `tf.train.SessionRunHook` objects to run on the chief worker during training. |
| `training_hooks` | Iterable of `tf.train.SessionRunHook` objects to run on all workers during training. |
| `scaffold` | A `tf.train.Scaffold` object that can be used to set initialization, saver, and more to be used in training. |
| `evaluation_hooks` | Iterable of `tf.train.SessionRunHook` objects to run during evaluation. |
| `prediction_hooks` | Iterable of `tf.train.SessionRunHook` objects to run during predictions. |
| Raises |
| `ValueError` | If validation fails. |
| `TypeError` | If any of the arguments is not the expected type. |
| Attributes |
| `mode` | A `namedtuple` alias for field number 0 |
| `predictions` | A `namedtuple` alias for field number 1 |
| `loss` | A `namedtuple` alias for field number 2 |
| `train_op` | A `namedtuple` alias for field number 3 |
| `eval_metric_ops` | A `namedtuple` alias for field number 4 |
| `export_outputs` | A `namedtuple` alias for field number 5 |
| `training_chief_hooks` | A `namedtuple` alias for field number 6 |
| `training_hooks` | A `namedtuple` alias for field number 7 |
| `scaffold` | A `namedtuple` alias for field number 8 |
| `evaluation_hooks` | A `namedtuple` alias for field number 9 |
| `prediction_hooks` | A `namedtuple` alias for field number 10 |
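A minimal sketch of a `model_fn` returning an `EstimatorSpec` for each mode; the toy linear model, the `'x'` feature key, and the loss are illustrative, not a prescribed pattern:
```
def model_fn(features, labels, mode):
    # Toy model: a single dense layer over a hypothetical 'x' feature.
    logits = tf.compat.v1.layers.dense(features['x'], units=1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={'logits': logits})
    loss = tf.compat.v1.losses.mean_squared_error(labels, logits)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)
    train_op = tf.compat.v1.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
```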
tensorflow tf.estimator.LatestExporter tf.estimator.LatestExporter
===========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/exporter.py#L422-L508) |
This class regularly exports the serving graph and checkpoints.
Inherits From: [`Exporter`](exporter)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.LatestExporter`](https://www.tensorflow.org/api_docs/python/tf/estimator/LatestExporter)
```
tf.estimator.LatestExporter(
name,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
exports_to_keep=5
)
```
In addition to exporting, this class also garbage collects stale exports.
| Args |
| `name` | unique name of this `Exporter` that is going to be used in the export path. |
| `serving_input_receiver_fn` | a function that takes no arguments and returns a `ServingInputReceiver`. |
| `assets_extra` | An optional dict specifying how to populate the assets.extra directory within the exported SavedModel. Each key should give the destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. |
| `as_text` | whether to write the SavedModel proto in text format. Defaults to `False`. |
| `exports_to_keep` | Number of exports to keep. Older exports will be garbage-collected. Defaults to 5. Set to `None` to disable garbage collection. |
| Raises |
| `ValueError` | if any argument is invalid. |
| Attributes |
| `name` | Directory name. A directory name under the export base directory where exports of this type are written. Must not be `None` or empty. |
Methods
-------
### `export`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/exporter.py#L469-L477)
```
export(
estimator, export_path, checkpoint_path, eval_result, is_the_final_export
)
```
Exports the given `Estimator` to a specific format.
| Args |
| `estimator` | the `Estimator` to export. |
| `export_path` | A string containing a directory where to write the export. |
| `checkpoint_path` | The checkpoint path to export. |
| `eval_result` | The output of [`Estimator.evaluate`](../compat/v1/estimator/estimator#evaluate) on this checkpoint. |
| `is_the_final_export` | This boolean is True when this is an export at the end of training. It is False for intermediate exports during training. When passing `Exporter` to [`tf.estimator.train_and_evaluate`](train_and_evaluate) `is_the_final_export` is always False if [`TrainSpec.max_steps`](trainspec#max_steps) is `None`. |
| Returns |
| The string path to the exported directory or `None` if export is skipped. |
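A sketch of wiring a `LatestExporter` into [`tf.estimator.train_and_evaluate`](train_and_evaluate); the estimator, input functions, and receiver fn are assumed to be defined elsewhere:
```
exporter = tf.estimator.LatestExporter(
    name='latest',
    serving_input_receiver_fn=serving_input_receiver_fn,
    exports_to_keep=3)  # keep only the 3 newest exports
train_spec = tf.estimator.TrainSpec(input_fn=input_fn_train, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn_eval, exporters=[exporter])
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```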
tensorflow Module: tf.estimator.experimental Module: tf.estimator.experimental
=================================
Public API for tf.estimator.experimental namespace.
Classes
-------
[`class InMemoryEvaluatorHook`](experimental/inmemoryevaluatorhook): Hook to run evaluation in training without a checkpoint.
[`class LinearSDCA`](experimental/linearsdca): Stochastic Dual Coordinate Ascent helper for linear estimators.
[`class RNNClassifier`](experimental/rnnclassifier): A classifier for TensorFlow RNN models.
[`class RNNEstimator`](experimental/rnnestimator): An Estimator for TensorFlow RNN models with user-specified head.
Functions
---------
[`build_raw_supervised_input_receiver_fn(...)`](experimental/build_raw_supervised_input_receiver_fn): Build a supervised\_input\_receiver\_fn for raw features and labels.
[`call_logit_fn(...)`](experimental/call_logit_fn): Calls logit\_fn (experimental).
[`make_early_stopping_hook(...)`](experimental/make_early_stopping_hook): Creates early-stopping hook.
[`make_stop_at_checkpoint_step_hook(...)`](experimental/make_stop_at_checkpoint_step_hook): Creates a proper StopAtCheckpointStepHook based on chief status.
[`stop_if_higher_hook(...)`](experimental/stop_if_higher_hook): Creates hook to stop if the given metric is higher than the threshold.
[`stop_if_lower_hook(...)`](experimental/stop_if_lower_hook): Creates hook to stop if the given metric is lower than the threshold.
[`stop_if_no_decrease_hook(...)`](experimental/stop_if_no_decrease_hook): Creates hook to stop if metric does not decrease within given max steps.
[`stop_if_no_increase_hook(...)`](experimental/stop_if_no_increase_hook): Creates hook to stop if metric does not increase within given max steps.
tensorflow tf.estimator.RunConfig tf.estimator.RunConfig
======================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/run_config.py#L345-L959) |
This class specifies the configurations for an `Estimator` run.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.RunConfig`](https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig)
```
tf.estimator.RunConfig(
model_dir=None,
tf_random_seed=None,
save_summary_steps=100,
save_checkpoints_steps=_USE_DEFAULT,
save_checkpoints_secs=_USE_DEFAULT,
session_config=None,
keep_checkpoint_max=5,
keep_checkpoint_every_n_hours=10000,
log_step_count_steps=100,
train_distribute=None,
device_fn=None,
protocol=None,
eval_distribute=None,
experimental_distribute=None,
experimental_max_worker_delay_secs=None,
session_creation_timeout_secs=7200,
checkpoint_save_graph_def=True
)
```
| Args |
| `model_dir` | directory where model parameters, graph, etc are saved. If `PathLike` object, the path will be resolved. If `None`, will use a default value set by the Estimator. |
| `tf_random_seed` | Random seed for TensorFlow initializers. Setting this value allows consistency between reruns. |
| `save_summary_steps` | Save summaries every this many steps. |
| `save_checkpoints_steps` | Save checkpoints every this many steps. Can not be specified with `save_checkpoints_secs`. |
| `save_checkpoints_secs` | Save checkpoints every this many seconds. Can not be specified with `save_checkpoints_steps`. Defaults to 600 seconds if both `save_checkpoints_steps` and `save_checkpoints_secs` are not set in constructor. If both `save_checkpoints_steps` and `save_checkpoints_secs` are `None`, then checkpoints are disabled. |
| `session_config` | a ConfigProto used to set session parameters, or `None`. |
| `keep_checkpoint_max` | The maximum number of recent checkpoint files to keep. As new files are created, older files are deleted. If `None` or 0, all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent checkpoint files are kept). If a saver is passed to the estimator, this argument will be ignored. |
| `keep_checkpoint_every_n_hours` | Number of hours between each checkpoint to be saved. The default value of 10,000 hours effectively disables the feature. |
| `log_step_count_steps` | The frequency, in number of global steps, that the global step and the loss will be logged during training. Also controls the frequency that the global steps / s will be logged (and written to summary) during training. |
| `train_distribute` | An optional instance of [`tf.distribute.Strategy`](../distribute/strategy). If specified, then Estimator will distribute the user's model during training, according to the policy specified by that strategy. Setting `experimental_distribute.train_distribute` is preferred. |
| `device_fn` | A callable invoked for every `Operation` that takes the `Operation` and returns the device string. If `None`, defaults to the device function returned by `tf.train.replica_device_setter` with round-robin strategy. |
| `protocol` | An optional argument which specifies the protocol used when starting server. `None` means default to grpc. |
| `eval_distribute` | An optional instance of [`tf.distribute.Strategy`](../distribute/strategy). If specified, then Estimator will distribute the user's model during evaluation, according to the policy specified by that strategy. Setting `experimental_distribute.eval_distribute` is preferred. |
| `experimental_distribute` | An optional `tf.contrib.distribute.DistributeConfig` object specifying DistributionStrategy-related configuration. The `train_distribute` and `eval_distribute` can be passed as parameters to `RunConfig` or set in `experimental_distribute` but not both. |
| `experimental_max_worker_delay_secs` | An optional integer specifying the maximum time a worker should wait before starting. By default, workers are started at staggered times, with each worker being delayed by up to 60 seconds. This is intended to reduce the risk of divergence, which can occur when many workers simultaneously update the weights of a randomly initialized model. Users who warm-start their models and train them for short durations (a few minutes or less) should consider reducing this default to improve training times. |
| `session_creation_timeout_secs` | Max time workers should wait for a session to become available (on initialization or when recovering a session) with MonitoredTrainingSession. Defaults to 7200 seconds, but users may want to set a lower value to detect problems with variable / session (re)-initialization more quickly. |
| `checkpoint_save_graph_def` | Whether to save the GraphDef and MetaGraphDef to `checkpoint_dir`. The GraphDef is saved after the session is created as `graph.pbtxt`. MetaGraphDefs are saved out for every checkpoint as `model.ckpt-*.meta`. |
| Raises |
| `ValueError` | If both `save_checkpoints_steps` and `save_checkpoints_secs` are set. |
| Attributes |
| `checkpoint_save_graph_def` | |
| `cluster_spec` | |
| `device_fn` | Returns the device\_fn. If device\_fn is not `None`, it overrides the default device function used in `Estimator`. Otherwise the default one is used. |
| `eval_distribute` | Optional [`tf.distribute.Strategy`](../distribute/strategy) for evaluation. |
| `evaluation_master` | |
| `experimental_max_worker_delay_secs` | |
| `global_id_in_cluster` | The global id in the training cluster. All global ids in the training cluster are assigned from an increasing sequence of consecutive integers. The first id is 0.
**Note:** Task id (the property field `task_id`) tracks the index of the node among all nodes with the SAME task type. For example, given the cluster definition as follows:
```
cluster = {'chief': ['host0:2222'],
'ps': ['host1:2222', 'host2:2222'],
'worker': ['host3:2222', 'host4:2222', 'host5:2222']}
```
Nodes with task type `worker` can have id 0, 1, 2. Nodes with task type `ps` can have id 0, 1. So, `task_id` is not unique, but the pair (`task_type`, `task_id`) can uniquely determine a node in the cluster. The global id, i.e., this field, tracks the index of the node among ALL nodes in the cluster. It is uniquely assigned. For example, for the cluster spec given above, the global ids are assigned as:
```
task_type | task_id | global_id
--------------------------------
chief | 0 | 0
worker | 0 | 1
worker | 1 | 2
worker | 2 | 3
ps | 0 | 4
ps | 1 | 5
```
|
| `is_chief` | |
| `keep_checkpoint_every_n_hours` | |
| `keep_checkpoint_max` | |
| `log_step_count_steps` | |
| `master` | |
| `model_dir` | |
| `num_ps_replicas` | |
| `num_worker_replicas` | |
| `protocol` | Returns the optional protocol value. |
| `save_checkpoints_secs` | |
| `save_checkpoints_steps` | |
| `save_summary_steps` | |
| `service` | Returns the platform defined (in TF\_CONFIG) service dict. |
| `session_config` | |
| `session_creation_timeout_secs` | |
| `task_id` | |
| `task_type` | |
| `tf_random_seed` | |
| `train_distribute` | Optional [`tf.distribute.Strategy`](../distribute/strategy) for training. |
Methods
-------
### `replace`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/run_config.py#L885-L923)
```
replace(
**kwargs
)
```
Returns a new instance of `RunConfig` replacing specified properties.
Only the properties in the following list are allowed to be replaced:
* `model_dir`,
* `tf_random_seed`,
* `save_summary_steps`,
* `save_checkpoints_steps`,
* `save_checkpoints_secs`,
* `session_config`,
* `keep_checkpoint_max`,
* `keep_checkpoint_every_n_hours`,
* `log_step_count_steps`,
* `train_distribute`,
* `device_fn`,
* `protocol`,
* `eval_distribute`,
* `experimental_distribute`,
* `experimental_max_worker_delay_secs`.
In addition, either `save_checkpoints_steps` or `save_checkpoints_secs` can be set (should not be both).
| Args |
| `**kwargs` | keyword named properties with new values. |
| Raises |
| `ValueError` | If any property name in `kwargs` does not exist or is not allowed to be replaced, or both `save_checkpoints_steps` and `save_checkpoints_secs` are set. |
| Returns |
| a new instance of `RunConfig`. |
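A short sketch of `replace` in use (the base values are illustrative):
```
config = tf.estimator.RunConfig(
    model_dir='/tmp/model', save_checkpoints_steps=500)
# Returns a new RunConfig; `config` itself is unchanged.
verbose_config = config.replace(
    save_summary_steps=10, log_step_count_steps=10)
```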
| programming_docs |
tensorflow tf.estimator.regressor_parse_example_spec tf.estimator.regressor\_parse\_example\_spec
============================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/parsing_utils.py#L147-L261) |
Generates parsing spec for tf.parse\_example to be used with regressors.
```
tf.estimator.regressor_parse_example_spec(
feature_columns,
label_key,
label_dtype=tf.dtypes.float32,
label_default=None,
label_dimension=1,
weight_column=None
)
```
If users keep data in tf.Example format, they need to call tf.parse\_example with a proper feature spec. This utility helps with two main things:
* Users need to combine the parsing spec of features with labels and weights (if any), since they are all parsed from the same tf.Example instance. This utility combines these specs.
* It is difficult to map the label expected by a regressor such as `DNNRegressor` to the corresponding tf.parse\_example spec. This utility encodes it by getting related information from users (key, dtype).
Example output of parsing spec:
```
# Define features and transformations
feature_b = tf.feature_column.numeric_column(...)
feature_c_bucketized = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column("feature_c"), ...)
feature_a_x_feature_c = tf.feature_column.crossed_column(
columns=["feature_a", feature_c_bucketized], ...)
feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c]
parsing_spec = tf.estimator.regressor_parse_example_spec(
feature_columns, label_key='my-label')
# For the above example, regressor_parse_example_spec would return the dict:
assert parsing_spec == {
"feature_a": parsing_ops.VarLenFeature(tf.string),
"feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
"feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32)
"my-label" : parsing_ops.FixedLenFeature([1], dtype=tf.float32)
}
```
Example usage with a regressor:
```
feature_columns = [...]  # define features via tf.feature_column
estimator = DNNRegressor(
hidden_units=[256, 64, 16],
feature_columns=feature_columns,
weight_column='example-weight',
label_dimension=3)
# This label configuration tells the regressor the following:
# * weights are retrieved with key 'example-weight'
# * label is a 3-dimensional tensor with float32 dtype.
# Input builders
def input_fn_train(): # Returns a tuple of features and labels.
features = tf.contrib.learn.read_keyed_batch_features(
file_pattern=train_files,
batch_size=batch_size,
# creates parsing configuration for tf.parse_example
features=tf.estimator.regressor_parse_example_spec(
feature_columns,
label_key='my-label',
label_dimension=3,
weight_column='example-weight'),
reader=tf.RecordIOReader)
labels = features.pop('my-label')
return features, labels
estimator.train(input_fn=input_fn_train)
```
| Args |
| `feature_columns` | An iterable containing all feature columns. All items should be instances of classes derived from `_FeatureColumn`. |
| `label_key` | A string identifying the label. It means tf.Example stores labels with this key. |
| `label_dtype` | A `tf.dtype` identifies the type of labels. By default it is [`tf.float32`](../../tf#float32). |
| `label_default` | used as the label if `label_key` does not exist in a given tf.Example. By default it is `None`, which means `tf.parse_example` will error out if the label is missing. |
| `label_dimension` | Number of regression targets per example. This is the size of the last dimension of the labels and logits `Tensor` objects (typically, these have shape `[batch_size, label_dimension]`). |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down-weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch the weight tensor from the `features`. If it is a `NumericColumn`, the raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get the weight tensor. |
| Returns |
| A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` value. |
| Raises |
| `ValueError` | If label is used in `feature_columns`. |
| `ValueError` | If weight\_column is used in `feature_columns`. |
| `ValueError` | If any of the given `feature_columns` is not a `_FeatureColumn` instance. |
| `ValueError` | If `weight_column` is not a `NumericColumn` instance. |
| `ValueError` | if label\_key is None. |
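The returned dict plugs directly into `tf.io.parse_example`; a minimal sketch, assuming `feature_columns` from the example above and a `serialized` batch of tf.Example protos:
```
parsing_spec = tf.estimator.regressor_parse_example_spec(
    feature_columns, label_key='my-label')
parsed = tf.io.parse_example(serialized, parsing_spec)  # dict of tensors
labels = parsed.pop('my-label')  # everything left in `parsed` is a feature
```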
tensorflow tf.estimator.BestExporter tf.estimator.BestExporter
=========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/exporter.py#L165-L365) |
This class exports the serving graph and checkpoints of the best models.
Inherits From: [`Exporter`](exporter)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.BestExporter`](https://www.tensorflow.org/api_docs/python/tf/estimator/BestExporter)
```
tf.estimator.BestExporter(
name='best_exporter',
serving_input_receiver_fn=None,
event_file_pattern='eval/*.tfevents.*',
compare_fn=_loss_smaller,
assets_extra=None,
as_text=False,
exports_to_keep=5
)
```
This class performs a model export every time the new model is better than any existing model.
| Args |
| `name` | unique name of this `Exporter` that is going to be used in the export path. |
| `serving_input_receiver_fn` | a function that takes no arguments and returns a `ServingInputReceiver`. |
| `event_file_pattern` | event file name pattern relative to model\_dir. If `None`, the exporter is not preemption-safe; to be preemption-safe, event\_file\_pattern must be specified. |
| `compare_fn` | a function that compares two evaluation results and returns true if current evaluation result is better. Follows the signature: * Args:
+ `best_eval_result`: This is the evaluation result of the best model.
+ `current_eval_result`: This is the evaluation result of current candidate model.
* Returns: True if current evaluation result is better; otherwise, False.
|
| `assets_extra` | An optional dict specifying how to populate the assets.extra directory within the exported SavedModel. Each key should give the destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. |
| `as_text` | whether to write the SavedModel proto in text format. Defaults to `False`. |
| `exports_to_keep` | Number of exports to keep. Older exports will be garbage-collected. Defaults to 5. Set to `None` to disable garbage collection. |
| Raises |
| `ValueError` | if any argument is invalid. |
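For illustration, a sketch of a custom `compare_fn` that prefers higher accuracy over the default lower loss; the `'accuracy'` key is illustrative (canned classifiers report it), and the receiver fn is assumed to be defined elsewhere:
```
def accuracy_higher(best_eval_result, current_eval_result):
    # Both arguments are dicts of evaluation metrics.
    return current_eval_result['accuracy'] > best_eval_result['accuracy']

exporter = tf.estimator.BestExporter(
    serving_input_receiver_fn=serving_input_receiver_fn,  # assumed defined
    compare_fn=accuracy_higher,
    exports_to_keep=3)
```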
| Attributes |
| `name` | Directory name. A directory name under the export base directory where exports of this type are written. Must not be `None` or empty. |
Methods
-------
### `export`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/exporter.py#L279-L307)
```
export(
estimator, export_path, checkpoint_path, eval_result, is_the_final_export
)
```
Exports the given `Estimator` to a specific format.
| Args |
| `estimator` | the `Estimator` to export. |
| `export_path` | A string containing a directory where to write the export. |
| `checkpoint_path` | The checkpoint path to export. |
| `eval_result` | The output of [`Estimator.evaluate`](../compat/v1/estimator/estimator#evaluate) on this checkpoint. |
| `is_the_final_export` | This boolean is True when this is an export in the end of training. It is False for the intermediate exports during the training. When passing `Exporter` to [`tf.estimator.train_and_evaluate`](train_and_evaluate) `is_the_final_export` is always False if [`TrainSpec.max_steps`](trainspec#max_steps) is `None`. |
| Returns |
| The string path to the exported directory or `None` if export is skipped. |
tensorflow tf.estimator.DNNLinearCombinedRegressor tf.estimator.DNNLinearCombinedRegressor
=======================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/dnn_linear_combined.py#L899-L1087) |
An estimator for TensorFlow Linear and DNN joined models for regression.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.DNNLinearCombinedRegressor(
model_dir=None,
linear_feature_columns=None,
linear_optimizer='Ftrl',
dnn_feature_columns=None,
dnn_optimizer='Adagrad',
dnn_hidden_units=None,
dnn_activation_fn=tf.nn.relu,
dnn_dropout=None,
label_dimension=1,
weight_column=None,
config=None,
warm_start_from=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
batch_norm=False,
linear_sparse_combiner='sum'
)
```
>
> **Note:** This estimator is also known as wide-n-deep.
>
#### Example:
```
numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_column_a, ...)
categorical_feature_b_emb = embedding_column(
    categorical_column=categorical_column_b, ...)
estimator = tf.estimator.DNNLinearCombinedRegressor(
# wide settings
linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
linear_optimizer=tf.keras.optimizers.Ftrl(...),
# deep settings
dnn_feature_columns=[
categorical_feature_a_emb, categorical_feature_b_emb,
numeric_feature],
dnn_hidden_units=[1000, 500, 100],
dnn_optimizer=tf.keras.optimizers.Adagrad(...),
# warm-start settings
warm_start_from="/path/to/checkpoint/dir")
# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001,
l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
      decay_rate=0.96))
# It is the same for linear_optimizer.
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y is the regression target.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y is the regression target.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* for each `column` in `dnn_feature_columns` + `linear_feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss is calculated by using mean squared error.
| Args |
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `linear_feature_columns` | An iterable containing all the feature columns used by linear part of the model. All items in the set must be instances of classes derived from `FeatureColumn`. |
| `linear_optimizer` | An instance of `tf.keras.optimizers.*` used to apply gradients to the linear part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer. |
| `dnn_feature_columns` | An iterable containing all the feature columns used by deep part of the model. All items in the set must be instances of classes derived from `FeatureColumn`. |
| `dnn_optimizer` | An instance of `tf.keras.optimizers.*` used to apply gradients to the deep part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer. |
| `dnn_hidden_units` | List of hidden units per layer. All layers are fully connected. |
| `dnn_activation_fn` | Activation function applied to each layer. If None, will use [`tf.nn.relu`](../nn/relu). |
| `dnn_dropout` | When not None, the probability we will drop out a given coordinate. |
| `label_dimension` | Number of regression targets per example. This is the size of the last dimension of the labels and logits `Tensor` objects (typically, these have shape `[batch_size, label_dimension]`). |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down-weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch the weight tensor from the `features`. If it is a `_NumericColumn`, the raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get the weight tensor. |
| `config` | RunConfig object to configure the runtime settings. |
| `warm_start_from` | A string filepath to a checkpoint to warm-start from, or a `WarmStartSettings` object to fully configure warm-starting. If the string filepath is provided instead of a `WarmStartSettings`, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| `batch_norm` | Whether to use batch normalization after each hidden layer. |
| `linear_sparse_combiner` | A string specifying how to reduce the linear model if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see `tf.feature_column.linear_model`. |
| Raises |
| `ValueError` | If both linear\_feature\_columns and dnn\_feature\_columns are empty at the same time. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
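A short sketch of reading the returned metrics dict; the keys follow the canned-regressor conventions described above:
```
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
print('global step:', metrics['global_step'])
print('mean loss per sample:', metrics['average_loss'])
```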
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
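A sketch of exporting predict and eval graphs together; both receiver fns are assumed to be defined (a supervised receiver can be built with `tf.estimator.experimental.build_raw_supervised_input_receiver_fn`):
```
input_receiver_fn_map = {
    tf.estimator.ModeKeys.EVAL: supervised_input_receiver_fn,
    tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
}
# Saves up to two MetaGraphDefs sharing one set of variables.
export_path = estimator.experimental_export_all_saved_models(
    '/tmp/multi_mode_exports', input_receiver_fn_map)
```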
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then the rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch lengths of different `predictions` tensors do not match and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally. If you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
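To make the `steps`/`max_steps` distinction concrete, a sketch assuming a hypothetical `estimator` trained from a fresh `model_dir` in each scenario:
```
# Scenario 1: `steps` is incremental.
estimator.train(input_fn=input_fn_train, steps=100)  # global step is now 100
estimator.train(input_fn=input_fn_train, steps=100)  # global step is now 200

# Scenario 2: `max_steps` is absolute.
estimator.train(input_fn=input_fn_train, max_steps=100)  # global step is now 100
estimator.train(input_fn=input_fn_train, max_steps=100)  # no-op: already at 100
```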
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.BaselineRegressor tf.estimator.BaselineRegressor
==============================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/baseline.py#L531-L623) |
A regressor that can establish a simple baseline.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.BaselineRegressor(
model_dir=None,
label_dimension=1,
weight_column=None,
optimizer='Ftrl',
config=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE
)
```
This regressor ignores feature values and will learn to predict the average value of each label.
#### Example:
```
# Build BaselineRegressor
regressor = tf.estimator.BaselineRegressor()

# Input builders
def input_fn_train():
  # Returns a tf.data.Dataset of (x, y) tuples, where y is the label value.
  pass

def input_fn_eval():
  # Returns a tf.data.Dataset of (x, y) tuples, where y is the label value.
  pass

def input_fn_predict():
  # Returns a tf.data.Dataset of x (features only).
  pass

# Fit model.
regressor.train(input_fn=input_fn_train)

# Evaluate squared loss between the test and train targets.
loss = regressor.evaluate(input_fn=input_fn_eval)["loss"]

# predict outputs the mean value seen during training.
predictions = regressor.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
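As a concrete illustration, a minimal `input_fn` sketch built with `tf.data`; the feature name `'x'` and the values are hypothetical:
```
import numpy as np
import tensorflow as tf

def input_fn_train():
  # Features: dict of feature name -> Tensor; labels: Tensor of targets.
  features = {'x': np.arange(8, dtype=np.float32).reshape(8, 1)}
  labels = np.full(8, 1.5, dtype=np.float32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(4)
```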
| Args |
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `label_dimension` | Number of regression targets per example. This is the size of the last dimension of the labels and logits `Tensor` objects (typically, these have shape `[batch_size, label_dimension]`). |
| `weight_column` | A string or a `_NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It will be multiplied by the loss of the example. |
| `optimizer` | String, `tf.keras.optimizers.*` object, or callable that creates the optimizer to use for training. If not specified, will use `Ftrl` as the default optimizer. |
| `config` | `RunConfig` object to configure the runtime settings. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
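For example, a short sketch of reading the returned metrics; the metric keys shown assume a canned regressor:
```
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
print(metrics['loss'])          # mean loss per mini-batch
print(metrics['average_loss'])  # mean loss per sample
print(metrics['global_step'])   # global step of the evaluated checkpoint
```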
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
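A minimal sketch of building an `input_receiver_fn_map`; the feature spec is hypothetical, and only the predict mode is exported here:
```
serving_input_receiver_fn = (
    tf.estimator.export.build_raw_serving_input_receiver_fn(
        {'x': tf.compat.v1.placeholder(tf.float32, shape=[None, 1])}))

export_path = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/export',
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
    })
```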
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
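For illustration, a sketch with a hand-written `serving_input_receiver_fn`; the feature name `'x'` is hypothetical:
```
def serving_input_receiver_fn():
  inputs = {'x': tf.compat.v1.placeholder(tf.float32, shape=[None, 1])}
  # Here the raw receiver tensors double as the parsed features.
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_path = estimator.export_saved_model('/tmp/export',
                                           serving_input_receiver_fn)
```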
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of string, name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
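A short sketch of inspecting checkpointed variables; it assumes the estimator has already produced a checkpoint:
```
for name in estimator.get_variable_names():
  print(name)

# 'global_step' is present in every Estimator checkpoint.
step = estimator.get_variable_value('global_step')
```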
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch length of predictions is not constant and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally. If you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.Head tf.estimator.Head
=================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L45-L343) |
Interface for the head/top of a model.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.Head`](https://www.tensorflow.org/api_docs/python/tf/estimator/Head)
Head sits on top of the model network and handles computing the outputs of the network. Given logits (or output of a hidden layer), a Head knows how to compute predictions, loss, train\_op, metrics and export outputs. It is meant to:
1. Simplify writing model\_fn and make model\_fn more configurable for Estimator.
2. Simplify creating loss and metrics for the train and test loop in Eager execution.
3. Support a wide range of machine learning models. Since most heads can work with logits, they can support DNN, RNN, Wide, Wide&Deep, Global objectives, Gradient boosted trees and many other types of machine learning models.
#### Common usage:
Here is simplified model\_fn to build a DNN regression model.
```
def _my_dnn_model_fn(features, labels, mode, params, config=None):
# Optionally your callers can pass head to model_fn as a param.
head = tf.estimator.RegressionHead(...)
feature_columns = tf.feature_column.numeric_column(...)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
inputs = feature_layer(features)
# Compute logits with tf.keras.layers API
hidden_layer0 = tf.keras.layers.Dense(
units=1000, activation="relu")(inputs)
hidden_layer1 = tf.keras.layers.Dense(
units=500, activation="relu")(hidden_layer0)
logits = tf.keras.layers.Dense(
units=head.logits_dimension, activation=None)(hidden_layer1)
# Or use Keras model for logits computation
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units=1000, activation="relu"))
model.add(tf.keras.layers.Dense(units=500, activation="relu"))
model.add(tf.keras.layers.Dense(
units=head.logits_dimension, activation=None))
logits = model(inputs)
return head.create_estimator_spec(
features=features,
labels=labels,
mode=mode,
logits=logits,
      optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
```
| Attributes |
| `logits_dimension` | Size of the last dimension of the logits `Tensor`. Often is the number of classes, labels, or real values to be predicted. Typically, logits is of shape `[batch_size, logits_dimension]`. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction). Describes how to reduce training loss over batch, such as mean or sum. |
| `name` | The name of this head. |
Methods
-------
### `create_estimator_spec`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L224-L292)
```
create_estimator_spec(
features,
mode,
logits,
labels=None,
optimizer=None,
trainable_variables=None,
train_op_fn=None,
update_ops=None,
regularization_losses=None
)
```
Returns `EstimatorSpec` that a model\_fn can return.
It is recommended to pass all args via name.
| Args |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. |
| `mode` | Estimator's `ModeKeys`. |
| `logits` | Logits `Tensor` to be used by the head. |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `optimizer` | An [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) instance to optimize the loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, trainable_variables)`, which updates variables to minimize `loss`. |
| `trainable_variables` | A list or tuple of `Variable` objects to update to minimize `loss`. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable\_variables need to be passed explicitly here. |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns an op to optimize the model with the loss in TRAIN mode. Used if `optimizer` is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in TRAIN mode. By default, it is `None` in other modes. If you want to optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use [`EstimatorSpec.loss`](estimatorspec#loss) to compute and apply gradients. |
| `update_ops` | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE\_OPS collection. As Tensorflow 2.x doesn't have collections, update\_ops need to be passed explicitly here. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| `EstimatorSpec`. |
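For illustration, a sketch of a TF2-style `model_fn` body that passes the trainable variables and update ops explicitly; the Keras model and feature name are hypothetical:
```
def _model_fn(features, labels, mode):
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  logits = model(features['x'])
  head = tf.estimator.RegressionHead()
  return head.create_estimator_spec(
      features=features,
      mode=mode,
      logits=logits,
      labels=labels,
      optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
      trainable_variables=model.trainable_variables,
      update_ops=model.updates)  # empty here; needed e.g. for BatchNorm
```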
### `loss`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L130-L158)
```
@abc.abstractmethod
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
```
Returns a loss `Tensor` from provided arguments.
Note that the `features` and `mode` args are most likely not used, but some Head implementations may require them.
| Args |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `logits` | Logits `Tensor` to be used for loss construction. |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. |
| `mode` | Estimator's `ModeKeys`. To be used in case loss calculation is different in Train and Eval mode. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| A scalar `Tensor` representing regularized training loss used in train and eval. |
### `metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L175-L187)
```
@abc.abstractmethod
metrics(
regularization_losses=None
)
```
Returns a `dict` of metric objects.
| Args |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| A `dict` of metrics keyed by string name. The value is an instance of `Metric` class. |
### `predictions`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L160-L173)
```
@abc.abstractmethod
predictions(
logits, keys=None
)
```
Returns a `dict` of predictions from provided logits.
| Args |
| `logits` | Logits `Tensor` to be used for prediction construction. |
| `keys` | A list of `string` for prediction keys. Defaults to `None`, meaning if not specified, predictions will be created for all the pre-defined valid keys in the head. |
| Returns |
| A `dict` of predicted `Tensor` keyed by prediction name. |
### `update_metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L189-L219)
```
@abc.abstractmethod
update_metrics(
eval_metrics,
features,
logits,
labels,
mode=None,
regularization_losses=None
)
```
Updates metric objects and returns a `dict` of the updated metrics.
| Args |
| `eval_metrics` | A `dict` of metrics to be updated. |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. |
| `logits` | logits `Tensor` to be used for metrics update. |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `mode` | Estimator's `ModeKeys`. In most cases, this arg is not used and can be removed in the method implementation. Note that the `mode` arg is not used in the `tf.estimator.*Head` implementations; if the update of the metrics doesn't rely on `mode`, it can be safely ignored in the method signature. |
| `regularization_losses` | A list of additional scalar losses to be added to the training and evaluation loss, such as regularization losses. |
| Returns |
| A `dict` of updated metrics keyed by name. The value is an instance of `Metric` class. |
tensorflow tf.estimator.SessionRunValues tf.estimator.SessionRunValues
=============================
Contains the results of `Session.run()`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.SessionRunValues`](https://www.tensorflow.org/api_docs/python/tf/estimator/SessionRunValues), [`tf.compat.v1.train.SessionRunValues`](https://www.tensorflow.org/api_docs/python/tf/estimator/SessionRunValues)
```
tf.estimator.SessionRunValues(
results, options, run_metadata
)
```
In the future we may use this object to add more information about the result of the run without changing the Hook API.
| Args |
| `results` | The return values from `Session.run()` corresponding to the fetches attribute returned in the RunArgs. Note that this has the same shape as the RunArgs fetches. For example: `fetches = global_step_tensor` => `results = nparray(int)`; `fetches = [train_op, summary_op, global_step_tensor]` => `results = [None, nparray(string), nparray(int)]`; `fetches = {'step': global_step_tensor, 'summ': summary_op}` => `results = {'step': nparray(int), 'summ': nparray(string)}` |
| `options` | `RunOptions` from the `Session.run()` call. |
| `run_metadata` | `RunMetadata` from the `Session.run()` call. |
| Attributes |
| `results` | A `namedtuple` alias for field number 0 |
| `options` | A `namedtuple` alias for field number 1 |
| `run_metadata` | A `namedtuple` alias for field number 2 |
tensorflow tf.estimator.LogisticRegressionHead tf.estimator.LogisticRegressionHead
===================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L500-L583) |
Creates a `Head` for logistic regression.
Inherits From: [`RegressionHead`](regressionhead), [`Head`](head)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.LogisticRegressionHead`](https://www.tensorflow.org/api_docs/python/tf/estimator/LogisticRegressionHead)
```
tf.estimator.LogisticRegressionHead(
weight_column=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
name=None
)
```
Uses `sigmoid_cross_entropy_with_logits` loss, which is the same as `BinaryClassHead`. The differences compared to `BinaryClassHead` are:
* Does not support `label_vocabulary`. Instead, labels must be float in the range [0, 1].
* Does not calculate some metrics that do not make sense, such as AUC.
* In `PREDICT` mode, only returns logits and predictions (`=tf.sigmoid(logits)`), whereas `BinaryClassHead` also returns probabilities, classes, and class\_ids.
* Export output defaults to `RegressionOutput`, whereas `BinaryClassHead` defaults to `PredictOutput`.
The head expects `logits` with shape `[D0, D1, ... DN, 1]`. In many applications, the shape is `[batch_size, 1]`.
The `labels` shape must match `logits`, namely `[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`.
If `weight_column` is specified, weights must be of shape `[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`.
This is implemented as a generalized linear model, see <https://en.wikipedia.org/wiki/Generalized_linear_model>
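As an eager-mode sketch, computing the loss directly; the numbers are illustrative:
```
import numpy as np
import tensorflow as tf

head = tf.estimator.LogisticRegressionHead()
logits = np.array([[-1.0], [1.0]], dtype=np.float32)
labels = np.array([[0.0], [1.0]], dtype=np.float32)  # floats in [0, 1]
loss = head.loss(labels, logits)
# Sigmoid cross-entropy: each example contributes log(1 + exp(-1)) ~= 0.31.
```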
The head can be used with a canned estimator. Example:
```
my_head = tf.estimator.LogisticRegressionHead()
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
```
It can also be used with a custom `model_fn`. Example:
```
def _my_model_fn(features, labels, mode):
my_head = tf.estimator.LogisticRegressionHead()
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
      optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
```
| Args |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Decides how to reduce training loss over batch and label dimension. Defaults to `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch size * label_dimension`. |
| `name` | name of the head. If provided, summary and metrics keys will be suffixed by `"/" + name`. Also used as `name_scope` when creating ops. |
| Attributes |
| `logits_dimension` | See `base_head.Head` for details. |
| `loss_reduction` | See `base_head.Head` for details. |
| `name` | See `base_head.Head` for details. |
Methods
-------
### `create_estimator_spec`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L224-L292)
```
create_estimator_spec(
features,
mode,
logits,
labels=None,
optimizer=None,
trainable_variables=None,
train_op_fn=None,
update_ops=None,
regularization_losses=None
)
```
Returns `EstimatorSpec` that a model\_fn can return.
It is recommended to pass all args via name.
| Args |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. |
| `mode` | Estimator's `ModeKeys`. |
| `logits` | Logits `Tensor` to be used by the head. |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `optimizer` | An [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) instance to optimize the loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, trainable_variables)`, which updates variables to minimize `loss`. |
| `trainable_variables` | A list or tuple of `Variable` objects to update to minimize `loss`. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable\_variables need to be passed explicitly here. |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns an op to optimize the model with the loss in TRAIN mode. Used if `optimizer` is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in TRAIN mode. By default, it is `None` in other modes. If you want to optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use [`EstimatorSpec.loss`](estimatorspec#loss) to compute and apply gradients. |
| `update_ops` | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE\_OPS collection. As Tensorflow 2.x doesn't have collections, update\_ops need to be passed explicitly here. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| `EstimatorSpec`. |
### `loss`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L203-L226)
```
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
```
Returns the regularized training loss. See `base_head.Head` for details.
### `metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L254-L269)
```
metrics(
regularization_losses=None
)
```
Creates metrics. See `base_head.Head` for details.
### `predictions`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L228-L252)
```
predictions(
logits
)
```
Return predictions based on keys.
See `base_head.Head` for details.
| Args |
| `logits` | logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. For many applications, the shape is `[batch_size, logits_dimension]`. |
| Returns |
| A dict of predictions. |
### `update_metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L271-L297)
```
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
```
Updates eval metrics. See `base_head.Head` for details.
tensorflow tf.estimator.PoissonRegressionHead tf.estimator.PoissonRegressionHead
==================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L410-L496) |
Creates a `Head` for poisson regression using [`tf.nn.log_poisson_loss`](../nn/log_poisson_loss).
Inherits From: [`RegressionHead`](regressionhead), [`Head`](head)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.PoissonRegressionHead`](https://www.tensorflow.org/api_docs/python/tf/estimator/PoissonRegressionHead)
```
tf.estimator.PoissonRegressionHead(
label_dimension=1,
weight_column=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
compute_full_loss=True,
name=None
)
```
The loss is the weighted sum over all input dimensions. Namely, if the input labels have shape `[batch_size, label_dimension]`, the loss is the weighted sum over both `batch_size` and `label_dimension`.
The head expects `logits` with shape `[D0, D1, ... DN, label_dimension]`. In many applications, the shape is `[batch_size, label_dimension]`.
The `labels` shape must match `logits`, namely `[D0, D1, ... DN, label_dimension]`. If `label_dimension=1`, shape `[D0, D1, ... DN]` is also supported.
If `weight_column` is specified, weights must be of shape `[D0, D1, ... DN]`, `[D0, D1, ... DN, 1]` or `[D0, D1, ... DN, label_dimension]`.
This is implemented as a generalized linear model, see <https://en.wikipedia.org/wiki/Generalized_linear_model>
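For reference, a sketch of the underlying loss term computed via [`tf.nn.log_poisson_loss`](../nn/log_poisson_loss) directly; the values are illustrative:
```
log_input = tf.constant([[0.0], [1.0]])  # logits, i.e. log of the predicted rate
targets = tf.constant([[1.0], [2.0]])    # observed counts
loss = tf.nn.log_poisson_loss(targets, log_input, compute_full_loss=True)
```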
The head can be used with a canned estimator. Example:
```
my_head = tf.estimator.PoissonRegressionHead()
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
```
It can also be used with a custom `model_fn`. Example:
```
def _my_model_fn(features, labels, mode):
my_head = tf.estimator.PoissonRegressionHead()
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
      optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
```
| Args |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. |
| `label_dimension` | Number of regression labels per example. This is the size of the last dimension of the labels `Tensor` (typically, this has shape `[batch_size, label_dimension]`). |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Decides how to reduce training loss over batch and label dimension. Defaults to `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch size * label_dimension`. |
| `compute_full_loss` | Whether to include the constant `log(z!)` term in computing the poisson loss. See [`tf.nn.log_poisson_loss`](../nn/log_poisson_loss) for the full documentation. |
| `name` | name of the head. If provided, summary and metrics keys will be suffixed by `"/" + name`. Also used as `name_scope` when creating ops. |
| Attributes |
| `logits_dimension` | See `base_head.Head` for details. |
| `loss_reduction` | See `base_head.Head` for details. |
| `name` | See `base_head.Head` for details. |
Methods
-------
### `create_estimator_spec`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L224-L292)
```
create_estimator_spec(
features,
mode,
logits,
labels=None,
optimizer=None,
trainable_variables=None,
train_op_fn=None,
update_ops=None,
regularization_losses=None
)
```
Returns `EstimatorSpec` that a model\_fn can return.
It is recommended to pass all args via name.
| Args |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. |
| `mode` | Estimator's `ModeKeys`. |
| `logits` | Logits `Tensor` to be used by the head. |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `optimizer` | An [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) instance to optimize the loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, trainable_variables)`, which updates variables to minimize `loss`. |
| `trainable_variables` | A list or tuple of `Variable` objects to update to minimize `loss`. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable\_variables need to be passed explicitly here. |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns an op to optimize the model with the loss in TRAIN mode. Used if `optimizer` is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in TRAIN mode. By default, it is `None` in other modes. If you want to optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use [`EstimatorSpec.loss`](estimatorspec#loss) to compute and apply gradients. |
| `update_ops` | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE\_OPS collection. As Tensorflow 2.x doesn't have collections, update\_ops need to be passed explicitly here. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| `EstimatorSpec`. |
### `loss`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L203-L226)
```
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
```
Returns the regularized training loss. See `base_head.Head` for details.
### `metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L254-L269)
```
metrics(
regularization_losses=None
)
```
Creates metrics. See `base_head.Head` for details.
### `predictions`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L228-L252)
```
predictions(
logits
)
```
Return predictions based on keys.
See `base_head.Head` for details.
| Args |
| `logits` | logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. For many applications, the shape is `[batch_size, logits_dimension]`. |
| Returns |
| A dict of predictions. |
### `update_metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L271-L297)
```
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
```
Updates eval metrics. See `base_head.Head` for details.
tensorflow tf.estimator.FeedFnHook tf.estimator.FeedFnHook
=======================
Runs `feed_fn` and sets the `feed_dict` accordingly.
Inherits From: [`SessionRunHook`](sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.FeedFnHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/FeedFnHook), [`tf.compat.v1.train.FeedFnHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/FeedFnHook)
```
tf.estimator.FeedFnHook(
feed_fn
)
```
| Args |
| `feed_fn` | function that takes no arguments and returns `dict` of `Tensor` to feed. |
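A minimal graph-mode sketch; `x_placeholder` is a hypothetical placeholder created elsewhere in the graph:
```
import numpy as np

def feed_fn():
  # Returns the feed_dict applied to every Session.run() call.
  return {x_placeholder: np.random.rand(32, 1)}

hook = tf.estimator.FeedFnHook(feed_fn)
```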
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L148-L165)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains results of requested ops/tensors by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L1008-L1010)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested ops/tensors and the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L97-L106)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L167-L182)
```
end(
session
)
```
Called at the end of session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises an exception other than `OutOfRangeError` or `StopIteration`, then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises `OutOfRangeError` or `StopIteration`. In that case `end()` is called but `after_run()` is not called.
| Args |
| `session` | A TensorFlow Session that will be soon closed. |
tensorflow tf.estimator.SecondOrStepTimer tf.estimator.SecondOrStepTimer
==============================
Timer that triggers at most once every N seconds or once every N steps.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.SecondOrStepTimer`](https://www.tensorflow.org/api_docs/python/tf/estimator/SecondOrStepTimer), [`tf.compat.v1.train.SecondOrStepTimer`](https://www.tensorflow.org/api_docs/python/tf/estimator/SecondOrStepTimer)
```
tf.estimator.SecondOrStepTimer(
every_secs=None, every_steps=None
)
```
This symbol is also exported to v2 in the `tf.estimator` namespace. See <https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/hooks/basic_session_run_hooks.py>
Methods
-------
### `last_triggered_step`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L150-L151)
```
last_triggered_step()
```
Returns the last triggered time step or None if never triggered.
### `reset`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L106-L108)
```
reset()
```
Resets the timer.
### `should_trigger_for_step`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L110-L135)
```
should_trigger_for_step(
step
)
```
Return true if the timer should trigger for the specified step.
| Args |
| `step` | Training step to trigger on. |
| Returns |
| True if the difference between the current time and the time of the last trigger exceeds `every_secs`, or if the difference between the current step and the last triggered step exceeds `every_steps`. False otherwise. |
### `update_last_triggered_step`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/basic_session_run_hooks.py#L137-L148)
```
update_last_triggered_step(
step
)
```
Update the last triggered time and step number.
| Args |
| `step` | The current step. |
| Returns |
| A pair `(elapsed_time, elapsed_steps)`, where `elapsed_time` is the number of seconds between the current trigger and the last one (a float), and `elapsed_steps` is the number of steps between the current trigger and the last one. Both values will be set to `None` on the first trigger. |
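A sketch of the typical trigger loop inside a hook; the step values are illustrative:
```
timer = tf.estimator.SecondOrStepTimer(every_steps=100)
for step in range(1000):
  if timer.should_trigger_for_step(step):
    elapsed_time, elapsed_steps = timer.update_last_triggered_step(step)
    # ... run the periodic work here ...
```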
tensorflow tf.estimator.ModeKeys tf.estimator.ModeKeys
=====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/mode_keys.py#L38-L50) |
Standard names for Estimator model modes.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.ModeKeys`](https://www.tensorflow.org/api_docs/python/tf/estimator/ModeKeys)
The following standard keys are defined:
* `TRAIN`: training/fitting mode.
* `EVAL`: testing/evaluation mode.
* `PREDICT`: prediction/inference mode.
| Class Variables |
| EVAL | `'eval'` |
| PREDICT | `'infer'` |
| TRAIN | `'train'` |
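These keys are typically branched on inside a `model_fn`, e.g. in a sketch like:
```
def model_fn(features, labels, mode):
  if mode == tf.estimator.ModeKeys.TRAIN:
    pass  # build the training graph
  elif mode == tf.estimator.ModeKeys.EVAL:
    pass  # build the evaluation graph
  else:  # tf.estimator.ModeKeys.PREDICT
    pass  # build the inference graph
```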
tensorflow tf.estimator.RegressionHead tf.estimator.RegressionHead
===========================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L34-L406) |
Creates a `Head` for regression using the `mean_squared_error` loss.
Inherits From: [`Head`](head)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.RegressionHead`](https://www.tensorflow.org/api_docs/python/tf/estimator/RegressionHead)
```
tf.estimator.RegressionHead(
label_dimension=1,
weight_column=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
loss_fn=None,
inverse_link_fn=None,
name=None
)
```
The loss is the weighted sum over all input dimensions. Namely, if the input labels have shape `[batch_size, label_dimension]`, the loss is the weighted sum over both `batch_size` and `label_dimension`.
The head expects `logits` with shape `[D0, D1, ... DN, label_dimension]`. In many applications, the shape is `[batch_size, label_dimension]`.
The `labels` shape must match `logits`, namely `[D0, D1, ... DN, label_dimension]`. If `label_dimension=1`, shape `[D0, D1, ... DN]` is also supported.
If `weight_column` is specified, weights must be of shape `[D0, D1, ... DN]`, `[D0, D1, ... DN, 1]` or `[D0, D1, ... DN, label_dimension]`.
Supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or `(labels, logits, features, loss_reduction)` as arguments and returns unreduced loss with shape `[D0, D1, ... DN, label_dimension]`.
Also supports custom `inverse_link_fn`, also known as 'mean function'. `inverse_link_fn` is only used in `PREDICT` mode. It takes `logits` as argument and returns predicted values. This function is the inverse of the link function defined in <https://en.wikipedia.org/wiki/Generalized_linear_model#Link_function> Namely, for Poisson regression, set `inverse_link_fn=tf.exp`.
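For illustration, a sketch of a custom `loss_fn` and `inverse_link_fn` for a Poisson-style head; note the loss must stay unreduced, with shape `[D0, D1, ... DN, label_dimension]`:
```
def my_loss_fn(labels, logits):
  # Unreduced Poisson negative log-likelihood, shape [batch_size, 1].
  return tf.nn.log_poisson_loss(tf.cast(labels, tf.float32), logits)

head = tf.estimator.RegressionHead(
    loss_fn=my_loss_fn,
    inverse_link_fn=tf.exp)  # mean function: predictions = exp(logits)
```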
#### Usage:
```
head = tf.estimator.RegressionHead()
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43,), (44,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.float32)}
# expected_loss = weighted_loss / batch_size
# = ((43-45)^2 + (44-41)^2) / 2 = 6.50
loss = head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
6.50
eval_metrics = head.metrics()
updated_metrics = head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
average_loss : 6.50
label/mean : 43.50
prediction/mean : 43.00
preds = head.predictions(logits)
print(preds['predictions'])
tf.Tensor(
[[45.]
[41.]], shape=(2, 1), dtype=float32)
```
Usage with a canned estimator:
```
my_head = tf.estimator.RegressionHead()
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
```
It can also be used with a custom `model_fn`. Example:
```
def _my_model_fn(features, labels, mode):
my_head = tf.estimator.RegressionHead()
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
      optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
```
| Args |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. |
| `label_dimension` | Number of regression labels per example. This is the size of the last dimension of the labels `Tensor` (typically, this has shape `[batch_size, label_dimension]`). |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Decides how to reduce training loss over batch and label dimension. Defaults to `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch_size * label_dimension`. |
| `loss_fn` | Optional loss function. Defaults to `mean_squared_error`. |
| `inverse_link_fn` | Optional inverse link function, also known as 'mean function'. Defaults to identity. |
| `name` | name of the head. If provided, summary and metrics keys will be suffixed by `"/" + name`. Also used as `name_scope` when creating ops. |
| Attributes |
| `logits_dimension` | See `base_head.Head` for details. |
| `loss_reduction` | See `base_head.Head` for details. |
| `name` | See `base_head.Head` for details. |
Methods
-------
### `create_estimator_spec`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/base_head.py#L224-L292)
```
create_estimator_spec(
features,
mode,
logits,
labels=None,
optimizer=None,
trainable_variables=None,
train_op_fn=None,
update_ops=None,
regularization_losses=None
)
```
Returns `EstimatorSpec` that a model\_fn can return.
It is recommended to pass all args via name.
| Args |
| `features` | Input `dict` mapping string feature names to `Tensor` or `SparseTensor` objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. |
| `mode` | Estimator's `ModeKeys`. |
| `logits` | Logits `Tensor` to be used by the head. |
| `labels` | Labels `Tensor`, or `dict` mapping string label names to `Tensor` objects of the label values. |
| `optimizer` | A [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) instance to optimize the loss in TRAIN mode. Namely, it sets `train_op = optimizer.get_updates(loss, trainable_variables)`, which updates variables to minimize `loss`. |
| `trainable_variables` | A list or tuple of `Variable` objects to update to minimize `loss`. In TensorFlow 1.x, by default these are the variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. As TensorFlow 2.x doesn't have collections or `GraphKeys`, `trainable_variables` must be passed explicitly here (see the sketch below this table). |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns an op to optimize the model with the loss in TRAIN mode. Used if `optimizer` is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in TRAIN mode. By default, it is `None` in other modes. If you want to optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use [`EstimatorSpec.loss`](estimatorspec#loss) to compute and apply gradients. |
| `update_ops` | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE\_OPS collection. As Tensorflow 2.x doesn't have collections, update\_ops need to be passed explicitly here. |
| `regularization_losses` | A list of additional scalar losses to be added to the training loss, such as regularization losses. |
| Returns |
| `EstimatorSpec`. |
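Because TF 2.x has no variable collections, a `model_fn` using this method must hand the trainable variables over itself, as the sketch below illustrates (a minimal example, assuming the features dict carries a numeric key `'x'`):
```
def _my_model_fn(features, labels, mode):
  head = tf.estimator.RegressionHead()
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  logits = model(features['x'])
  return head.create_estimator_spec(
      features=features,
      mode=mode,
      labels=labels,
      logits=logits,
      optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
      # No GraphKeys.TRAINABLE_VARIABLES collection in TF2: pass explicitly.
      trainable_variables=model.trainable_variables)
```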
### `loss`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L203-L226)
```
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
```
Returns regularized training loss. See `base_head.Head` for details.
### `metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L254-L269)
```
metrics(
regularization_losses=None
)
```
Creates metrics. See `base_head.Head` for details.
### `predictions`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L228-L252)
```
predictions(
logits
)
```
Return predictions based on keys.
See `base_head.Head` for details.
| Args |
| `logits` | logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. For many applications, the shape is `[batch_size, logits_dimension]`. |
| Returns |
| A dict of predictions. |
### `update_metrics`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/head/regression_head.py#L271-L297)
```
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
```
Updates eval metrics. See `base_head.Head` for details.
tensorflow tf.estimator.LinearRegressor tf.estimator.LinearRegressor
============================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear.py#L1204-L1366) |
An estimator for TensorFlow Linear regression problems.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.LinearRegressor(
feature_columns,
model_dir=None,
label_dimension=1,
weight_column=None,
optimizer='Ftrl',
config=None,
warm_start_from=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
sparse_combiner='sum'
)
```
Train a linear regression model to predict label value given observation of feature values.
#### Example:
```
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearRegressor(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearRegressor(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=tf.keras.optimizers.Ftrl(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearRegressor(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=lambda: tf.keras.optimizers.Ftrl(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
      decay_rate=0.96)))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearRegressor(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
* for each `column` in `feature_columns`:
+ if `column` is a `SparseColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedSparseColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `RealValuedColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss is calculated using mean squared error.
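For illustration, a minimal input function matching these expectations might look like the following sketch (assuming a single numeric feature column named `'x'`; the values are arbitrary):
```
def input_fn_train():
  # One dict entry per feature column; labels have shape
  # [batch_size, label_dimension].
  features = {'x': tf.constant([[1.0], [2.0]])}
  labels = tf.constant([[3.0], [5.0]])
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)
```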
| Args |
| `feature_columns` | An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from `FeatureColumn`. |
| `model_dir` | Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model. |
| `label_dimension` | Number of regression targets per example. This is the size of the last dimension of the labels and logits `Tensor` objects (typically, these have shape `[batch_size, label_dimension]`). |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the `features`. If it is a `NumericColumn`, raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get weight tensor. |
| `optimizer` | An instance of `tf.keras.optimizers.*` or [`tf.estimator.experimental.LinearSDCA`](experimental/linearsdca) used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer. |
| `config` | `RunConfig` object to configure the runtime settings. |
| `warm_start_from` | A string filepath to a checkpoint to warm-start from, or a `WarmStartSettings` object to fully configure warm-starting. If the string filepath is provided instead of a `WarmStartSettings`, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| `sparse_combiner` | A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see `tf.feature_column.linear_model`. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base`, and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
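For illustration, exporting only a predict-mode graph through this API might be wired up as below (`estimator` and `serving_input_receiver_fn` are assumed to exist already, and the export path is arbitrary):
```
input_receiver_fn_map = {
    tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
}
export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/export',
    input_receiver_fn_map=input_receiver_fn_map)
```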
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of strings, name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If batch length of predictions is not the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally. If you call `train(steps=10)` twice, training occurs in 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, please set `max_steps` instead (see the sketch below this table). If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
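To make the `steps`/`max_steps` semantics concrete, a short sketch (reusing `estimator` and `input_fn_train` from the example above):
```
estimator.train(input_fn=input_fn_train, steps=10)  # trains 10 steps
estimator.train(input_fn=input_fn_train, steps=10)  # trains 10 more (20 total)
# max_steps is absolute: this trains until global step 100, and is a
# no-op if the global step has already reached 100.
estimator.train(input_fn=input_fn_train, max_steps=100)
```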
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.EvalSpec tf.estimator.EvalSpec
=====================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/training.py#L202-L294) |
Configuration for the "eval" part for the `train_and_evaluate` call.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.EvalSpec`](https://www.tensorflow.org/api_docs/python/tf/estimator/EvalSpec)
```
tf.estimator.EvalSpec(
input_fn,
steps=100,
name=None,
hooks=None,
exporters=None,
start_delay_secs=120,
throttle_secs=600
)
```
`EvalSpec` combines details of evaluation of the trained model as well as its export. Evaluation consists of computing metrics to judge the performance of the trained model. Export writes out the trained model on to external storage.
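For illustration, an `EvalSpec` is typically wired into [`tf.estimator.train_and_evaluate`](train_and_evaluate) roughly as follows (a sketch; `estimator`, `input_fn_train`, and `input_fn_eval` are assumed to exist):
```
train_spec = tf.estimator.TrainSpec(input_fn=input_fn_train, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(
    input_fn=input_fn_eval,
    steps=100,             # evaluate 100 batches per evaluation
    start_delay_secs=120,  # wait 2 minutes before the first evaluation
    throttle_secs=600)     # at most one evaluation every 10 minutes
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```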
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A `tf.data.Dataset` object: Outputs of `Dataset` object must be a tuple (features, labels) with same constraints as below.
* A tuple (features, labels): Where features is a `Tensor` or a dictionary of string feature name to `Tensor` and labels is a `Tensor` or a dictionary of string label name to `Tensor`.
|
| `steps` | Int. Positive number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. See [`Estimator.evaluate`](../compat/v1/estimator/estimator#evaluate) for details. |
| `name` | String. Name of the evaluation if user needs to run multiple evaluations on different data sets. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| `hooks` | Iterable of `tf.train.SessionRunHook` objects to run during evaluation. |
| `exporters` | Iterable of `Exporter`s, or a single one, or `None`. `exporters` will be invoked after each evaluation. |
| `start_delay_secs` | Int. Start evaluating after waiting for this many seconds. |
| `throttle_secs` | Int. Do not re-evaluate unless the last evaluation was started at least this many seconds ago. Note that evaluation does not occur if no new checkpoints are available; hence, this is a minimum interval, not a guarantee. |
| Raises |
| `ValueError` | If any of the input arguments is invalid. |
| `TypeError` | If any of the arguments is not of the expected type. |
| Attributes |
| `input_fn` | A `namedtuple` alias for field number 0 |
| `steps` | A `namedtuple` alias for field number 1 |
| `name` | A `namedtuple` alias for field number 2 |
| `hooks` | A `namedtuple` alias for field number 3 |
| `exporters` | A `namedtuple` alias for field number 4 |
| `start_delay_secs` | A `namedtuple` alias for field number 5 |
| `throttle_secs` | A `namedtuple` alias for field number 6 |
tensorflow Module: tf.estimator.export Module: tf.estimator.export
===========================
All public utility methods for exporting Estimator to SavedModel.
This file includes functions and constants from core (model\_utils) and export.py
Classes
-------
[`class ClassificationOutput`](export/classificationoutput): Represents the output of a classification head.
[`class EvalOutput`](export/evaloutput): Represents the output of a supervised eval process.
[`class ExportOutput`](export/exportoutput): Represents an output of a model that can be served.
[`class PredictOutput`](export/predictoutput): Represents the output of a generic prediction head.
[`class RegressionOutput`](export/regressionoutput): Represents the output of a regression head.
[`class ServingInputReceiver`](export/servinginputreceiver): A return type for a serving\_input\_receiver\_fn.
[`class TensorServingInputReceiver`](export/tensorservinginputreceiver): A return type for a serving\_input\_receiver\_fn.
Functions
---------
[`build_parsing_serving_input_receiver_fn(...)`](export/build_parsing_serving_input_receiver_fn): Build a serving\_input\_receiver\_fn expecting fed tf.Examples.
[`build_raw_serving_input_receiver_fn(...)`](export/build_raw_serving_input_receiver_fn): Build a serving\_input\_receiver\_fn expecting feature Tensors.
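As a brief illustration, a parsing serving input receiver might be built and used like this (the feature spec, `estimator`, and export path are assumptions):
```
feature_spec = {'x': tf.io.FixedLenFeature([1], tf.float32)}
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec)
estimator.export_saved_model('/tmp/export', serving_input_fn)
```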
tensorflow tf.estimator.SessionRunArgs tf.estimator.SessionRunArgs
===========================
Represents arguments to be added to a `Session.run()` call.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.SessionRunArgs`](https://www.tensorflow.org/api_docs/python/tf/estimator/SessionRunArgs), [`tf.compat.v1.train.SessionRunArgs`](https://www.tensorflow.org/api_docs/python/tf/estimator/SessionRunArgs)
```
tf.estimator.SessionRunArgs(
fetches, feed_dict=None, options=None
)
```
| Args |
| `fetches` | Exactly like the `fetches` argument to `Session.run()`. Can be a single tensor or op, a list of fetches, or a dictionary of fetches. For example: `fetches = global_step_tensor`, `fetches = [train_op, summary_op, global_step_tensor]`, or `fetches = {'step': global_step_tensor, 'summ': summary_op}`. Note that this can recurse as expected: `fetches = {'step': global_step_tensor, 'ops': [train_op, check_nan_op]}`. |
| `feed_dict` | Exactly like the `feed_dict` argument to `Session.run()`. |
| `options` | Exactly like the `options` argument to `Session.run()`, i.e., a `config_pb2.RunOptions` proto. |
| Attributes |
| `fetches` | A `namedtuple` alias for field number 0 |
| `feed_dict` | A `namedtuple` alias for field number 1 |
| `options` | A `namedtuple` alias for field number 2 |
tensorflow tf.estimator.BaselineClassifier tf.estimator.BaselineClassifier
===============================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/baseline.py#L293-L400) |
A classifier that can establish a simple baseline.
Inherits From: [`Estimator`](estimator), [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.BaselineClassifier(
model_dir=None,
n_classes=2,
weight_column=None,
label_vocabulary=None,
optimizer='Ftrl',
config=None,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE
)
```
This classifier ignores feature values and will learn to predict the average value of each label. For single-label problems, this will predict the probability distribution of the classes as seen in the labels. For multi-label problems, this will predict the fraction of examples that are positive for each class.
#### Example:
```
# Build BaselineClassifier
classifier = tf.estimator.BaselineClassifier(n_classes=3)
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
# Fit model.
classifier.train(input_fn=input_fn_train)
# Evaluate cross entropy between the test and train labels.
loss = classifier.evaluate(input_fn=input_fn_eval)["loss"]
# predict outputs the probability distribution of the classes as seen in
# training.
predictions = classifier.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError`:
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
| Args |
| `model_dir` | Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model. |
| `n_classes` | number of label classes. Default is binary classification. It must be greater than 1. Note: Class labels are integers representing the class index (i.e. values from 0 to n\_classes-1). For arbitrary label values (e.g. string labels), convert to class indices first. |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../feature_column/numeric_column) defining feature column representing weights. It will be multiplied by the loss of the example. |
| `label_vocabulary` | Optional list of strings with size `[n_classes]` defining the label vocabulary. Only supported for `n_classes` > 2. |
| `optimizer` | String, `tf.keras.optimizers.*` object, or callable that creates the optimizer to use for training. If not specified, will use `Ftrl` as the default optimizer. |
| `config` | `RunConfig` object to configure the runtime settings. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| Raises |
| `ValueError` | If `n_classes` < 2. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base`, and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of strings, name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used then rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If batch length of predictions is not the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally. If you call `train(steps=10)` twice, training occurs in 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, please set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
eager compatibility
-------------------
Estimators can be used while eager execution is enabled. Note that `input_fn` and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.SessionRunHook tf.estimator.SessionRunHook
===========================
Hook to extend calls to MonitoredSession.run().
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.SessionRunHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/SessionRunHook), [`tf.compat.v1.train.SessionRunHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/SessionRunHook)
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L108-L123)
```
after_create_session(
session, coord
)
```
Called when new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called:
* When this is called, the graph is finalized and ops can no longer be added to the graph.
* This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
| Args |
| `session` | A TensorFlow Session that has been created. |
| `coord` | A Coordinator object which keeps track of all threads. |
### `after_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L148-L165)
```
after_run(
run_context, run_values
)
```
Called after each call to run().
The `run_values` argument contains results of requested ops/tensors by `before_run()`.
The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration.
If `session.run()` raises any exceptions then `after_run()` is not called.
| Args |
| `run_context` | A `SessionRunContext` object. |
| `run_values` | A SessionRunValues object. |
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L125-L146)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L97-L106)
```
begin()
```
Called once before using the session.
When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of `begin()` on the same graph should not change the graph.
### `end`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L167-L182)
```
end(
session
)
```
Called at the end of the session.
The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint.
If `session.run()` raises exception other than OutOfRangeError or StopIteration then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises OutOfRangeError or StopIteration. In that case `end()` is called but `after_run()` is not called.
| Args |
| `session` | A TensorFlow Session that will soon be closed. |
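Putting these callbacks together, a minimal hypothetical hook that fetches the global step on every `run()` call might look like this (a sketch, not part of the API):
```
class GlobalStepLoggerHook(tf.estimator.SessionRunHook):
  """Hypothetical hook: fetches and prints the global step each run()."""

  def begin(self):
    # The graph is still mutable here; look up tensors to fetch later.
    self._global_step = tf.compat.v1.train.get_global_step()

  def before_run(self, run_context):
    # Ask the upcoming run() call to also evaluate the global step.
    return tf.estimator.SessionRunArgs(fetches=self._global_step)

  def after_run(self, run_context, run_values):
    # run_values.results holds the evaluated fetches from before_run().
    print('global step:', run_values.results)
```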
tensorflow tf.estimator.Exporter tf.estimator.Exporter
=====================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/exporter.py#L31-L62) |
A class representing a type of model export.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.Exporter`](https://www.tensorflow.org/api_docs/python/tf/estimator/Exporter)
| Attributes |
| `name` | Directory name. A directory name under the export base directory where exports of this type are written. Should not be `None` nor empty. |
Methods
-------
### `export`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/exporter.py#L43-L62)
```
@abc.abstractmethod
export(
estimator, export_path, checkpoint_path, eval_result, is_the_final_export
)
```
Exports the given `Estimator` to a specific format.
| Args |
| `estimator` | the `Estimator` to export. |
| `export_path` | A string containing a directory where to write the export. |
| `checkpoint_path` | The checkpoint path to export. |
| `eval_result` | The output of [`Estimator.evaluate`](../compat/v1/estimator/estimator#evaluate) on this checkpoint. |
| `is_the_final_export` | This boolean is True when this is an export in the end of training. It is False for the intermediate exports during the training. When passing `Exporter` to [`tf.estimator.train_and_evaluate`](train_and_evaluate) `is_the_final_export` is always False if [`TrainSpec.max_steps`](trainspec#max_steps) is `None`. |
| Returns |
| The string path to the exported directory or `None` if export is skipped. |
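As a hedged illustration, a custom `Exporter` might skip exports that do not improve a tracked metric; this sketch assumes the eval metrics contain a `'loss'` key and that a `serving_input_receiver_fn` is supplied by the caller:
```
import tensorflow.compat.v1 as tf

class BestLossExporter(tf.estimator.Exporter):
  """Hypothetical exporter that only exports when eval loss improves."""

  def __init__(self, serving_input_receiver_fn):
    self._serving_input_receiver_fn = serving_input_receiver_fn
    self._best_loss = None

  @property
  def name(self):
    return 'best'  # Exports land under <export_base>/best/<timestamp>.

  def export(self, estimator, export_path, checkpoint_path, eval_result,
             is_the_final_export):
    if self._best_loss is not None and eval_result['loss'] >= self._best_loss:
      return None  # Export skipped: the metric did not improve.
    self._best_loss = eval_result['loss']
    return estimator.export_saved_model(
        export_path, self._serving_input_receiver_fn,
        checkpoint_path=checkpoint_path)
```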
tensorflow tf.estimator.Estimator tf.estimator.Estimator
======================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L1766-L1775) |
Estimator class to train and evaluate TensorFlow models.
Inherits From: [`Estimator`](../compat/v1/estimator/estimator)
```
tf.estimator.Estimator(
model_fn, model_dir=None, config=None, params=None, warm_start_from=None
)
```
The `Estimator` object wraps a model which is specified by a `model_fn`, which, given inputs and a number of other parameters, returns the ops necessary to perform training, evaluation, or predictions.
All outputs (checkpoints, event files, etc.) are written to `model_dir`, or a subdirectory thereof. If `model_dir` is not set, a temporary directory is used.
The `config` argument can be passed a [`tf.estimator.RunConfig`](runconfig) object containing information about the execution environment. It is passed on to the `model_fn` if the `model_fn` has a parameter named "config" (and input functions in the same manner). If the `config` parameter is not passed, it is instantiated by the `Estimator`. Not passing config means that defaults useful for local execution are used. `Estimator` makes config available to the model (for instance, to allow specialization based on the number of workers available), and also uses some of its fields to control internals, especially regarding checkpointing.
The `params` argument contains hyperparameters. It is passed to the `model_fn`, if the `model_fn` has a parameter named "params", and to the input functions in the same manner. `Estimator` only passes params along, it does not inspect it. The structure of `params` is therefore entirely up to the developer.
None of `Estimator`'s methods can be overridden in subclasses (its constructor enforces this). Subclasses should use `model_fn` to configure the base class, and may add methods implementing specialized functionality.
See [estimators](https://tensorflow.org/guide/estimator) for more information.
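To make the `model_fn` contract concrete, here is a minimal sketch; the feature key `'x'`, the input dimension, the loss, and the optimizer are all hypothetical choices, not a prescribed recipe:
```
import tensorflow.compat.v1 as tf

def model_fn(features, labels, mode, params):
  # A hypothetical one-layer model; features['x'] is assumed to be a
  # float Tensor of shape [batch_size, params['dim']].
  logits = tf.layers.dense(features['x'], units=1)
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode, predictions={'logits': logits})
  loss = tf.losses.sigmoid_cross_entropy(labels, logits)
  if mode == tf.estimator.ModeKeys.EVAL:
    return tf.estimator.EstimatorSpec(mode, loss=loss)
  optimizer = tf.train.AdagradOptimizer(learning_rate=0.05)
  train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, params={'dim': 4})
```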
To warm-start an `Estimator`:
```
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
warm_start_from="/path/to/checkpoint/dir")
```
For more details on warm-start configuration, see [`tf.estimator.WarmStartSettings`](warmstartsettings).
| Args |
| `model_fn` | Model function. Follows the signature: * `features` -- This is the first item returned from the `input_fn` passed to `train`, `evaluate`, and `predict`. This should be a single [`tf.Tensor`](../tensor) or `dict` of same.
* `labels` -- This is the second item returned from the `input_fn` passed to `train`, `evaluate`, and `predict`. This should be a single [`tf.Tensor`](../tensor) or `dict` of same (for multi-head models). If mode is [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT), `labels=None` will be passed. If the `model_fn`'s signature does not accept `mode`, the `model_fn` must still be able to handle `labels=None`.
* `mode` -- Optional. Specifies if this is training, evaluation or prediction. See [`tf.estimator.ModeKeys`](modekeys).
* `params` -- Optional `dict` of hyperparameters. Will receive what is passed to the Estimator in the `params` parameter. This allows configuring Estimators from hyperparameter tuning.
* `config` -- Optional [`estimator.RunConfig`](runconfig) object. Will receive what is passed to Estimator as its `config` parameter, or a default value. Allows setting up things in your `model_fn` based on configuration such as `num_ps_replicas`, or `model_dir`.
* Returns -- [`tf.estimator.EstimatorSpec`](estimatorspec)
|
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a `PathLike` object, the path will be resolved. If `None`, the model\_dir in `config` will be used if set. If both are set, they must be the same. If both are `None`, a temporary directory will be used. |
| `config` | [`estimator.RunConfig`](runconfig) configuration object. |
| `params` | `dict` of hyper parameters that will be passed into `model_fn`. Keys are names of parameters, values are basic python types. |
| `warm_start_from` | Optional string filepath to a checkpoint or SavedModel to warm-start from, or a [`tf.estimator.WarmStartSettings`](warmstartsettings) object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a [`tf.estimator.WarmStartSettings`](warmstartsettings), then all variables are warm-started, and it is assumed that vocabularies and [`tf.Tensor`](../tensor) names are unchanged. |
| Raises |
| `ValueError` | if parameters of `model_fn` don't match `params`. |
| `ValueError` | if this is called via a subclass and if that class overrides a member of `Estimator`. |
| Attributes |
| `config` | |
| `export_savedmodel` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing the evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
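For instance, a minimal evaluation call under the hypothetical `model_fn` sketched earlier on this page (the feature key `'x'` and the example data are assumptions):
```
def eval_input_fn():
  # Two labeled examples; the Dataset yields a (features, labels) tuple.
  return tf.data.Dataset.from_tensor_slices(
      ({'x': [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]},
       [[0.0], [1.0]])).batch(2)

metrics = estimator.evaluate(input_fn=eval_input_fn, steps=1)
print(metrics['loss'], metrics['global_step'])
```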
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
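A sketch of a prediction-only export through this method; the export directory and placeholder shape are hypothetical:
```
serving_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {'x': tf.placeholder(dtype=tf.float32, shape=[None, 4])})
export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/exports',
    input_receiver_fn_map={tf.estimator.ModeKeys.PREDICT: serving_fn})
```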
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
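A minimal serving-export sketch; the feature key, placeholder shape, and path are assumptions:
```
def serving_input_receiver_fn():
  inputs = {'x': tf.placeholder(dtype=tf.float32, shape=[None, 4])}
  # Here the receiver tensors double as the model features.
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = estimator.export_saved_model('/tmp/exports',
                                          serving_input_receiver_fn)
```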
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns the value of the variable given by name.
| Args |
| `name` | string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is a `dict`. If `predict_keys` is used, the rest of the predictions will be filtered out of the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch lengths of the `predictions` tensors are not all the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](estimatorspec#predictions) is not a `dict`. |
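For example, under the same hypothetical `model_fn` sketched earlier on this page:
```
def predict_input_fn():
  return tf.data.Dataset.from_tensor_slices(
      {'x': [[1.0, 0.0, 0.0, 0.0]]}).batch(1)

for prediction in estimator.predict(input_fn=predict_input_fn):
  print(prediction['logits'])  # One dict per example by default.
```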
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
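To make the `steps`/`max_steps` distinction concrete (assuming a `train_input_fn` shaped like the input functions above):
```
estimator.train(input_fn=train_input_fn, steps=10)      # global step: 0 -> 10
estimator.train(input_fn=train_input_fn, steps=10)      # global step: 10 -> 20
estimator.train(input_fn=train_input_fn, max_steps=20)  # no-op: already at 20
```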
eager compatibility
-------------------
Calling methods of `Estimator` will work while eager execution is enabled. However, the `model_fn` and `input_fn` are not executed eagerly: `Estimator` will switch to graph mode before calling all user-provided functions (including hooks), so their code has to be compatible with graph-mode execution. Note that `input_fn` code using [`tf.data`](../data) generally works in both graph and eager modes.
tensorflow tf.estimator.experimental.call_logit_fn tf.estimator.experimental.call\_logit\_fn
=========================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/model_fn.py#L562-L607) |
Calls logit\_fn (experimental).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.call_logit_fn`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/call_logit_fn)
```
tf.estimator.experimental.call_logit_fn(
logit_fn, features, mode, params, config
)
```
THIS FUNCTION IS EXPERIMENTAL. Keras layers/models are the recommended APIs for logit and model composition.
A utility function that calls the provided logit\_fn with the relevant subset of provided arguments. Similar to tf.estimator.\_call\_model\_fn().
| Args |
| `logit_fn` | A logit\_fn as defined above. |
| `features` | The features dict. |
| `mode` | TRAIN / EVAL / PREDICT ModeKeys. |
| `params` | The hyperparameter dict. |
| `config` | The configuration object. |
| Returns |
| A logit Tensor, the output of logit\_fn. |
| Raises |
| `ValueError` | if logit\_fn does not return a Tensor or a dictionary mapping strings to Tensors. |
tensorflow tf.estimator.experimental.RNNEstimator tf.estimator.experimental.RNNEstimator
======================================
An Estimator for TensorFlow RNN models with user-specified head.
Inherits From: [`Estimator`](../../compat/v1/estimator/estimator)
```
tf.estimator.experimental.RNNEstimator(
head,
sequence_feature_columns,
context_feature_columns=None,
units=None,
cell_type=USE_DEFAULT,
rnn_cell_fn=None,
return_sequences=False,
model_dir=None,
optimizer='Adagrad',
config=None
)
```
#### Example:
```
token_sequence = sequence_categorical_column_with_hash_bucket(...)
token_emb = embedding_column(categorical_column=token_sequence, ...)
estimator = RNNEstimator(
head=tf.estimator.RegressionHead(),
sequence_feature_columns=[token_emb],
units=[32, 16], cell_type='lstm')
# Or with custom RNN cell:
def rnn_cell_fn(_):
cells = [ tf.keras.layers.LSTMCell(size) for size in [32, 16] ]
return tf.keras.layers.StackedRNNCells(cells)
estimator = RNNEstimator(
head=tf.estimator.RegressionHead(),
sequence_feature_columns=[token_emb],
rnn_cell_fn=rnn_cell_fn)
# Input builders
def input_fn_train(): # returns x, y
pass
estimator.train(input_fn=input_fn_train, steps=100)
def input_fn_eval(): # returns x, y
pass
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
def input_fn_predict(): # returns x, None
pass
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features (a matching `input_fn` sketch is shown after this list), otherwise there will be a `KeyError`:
* if the head's `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
* for each `column` in `sequence_feature_columns`:
+ a feature with `key=column.name` whose `value` is a `SparseTensor`.
* for each `column` in `context_feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss and predicted output are determined by the specified head.
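A sketch of an `input_fn` matching this feature specification, assuming a single sequence column named `'tokens'` built from `sequence_categorical_column_with_hash_bucket('tokens', ...)` and no context columns:
```
def input_fn():
  features = {
      # Sequence feature: a SparseTensor of shape [batch_size, max_seq_len].
      'tokens': tf.sparse.SparseTensor(
          indices=[[0, 0], [0, 1], [1, 0]],
          values=['hello', 'world', 'hi'],
          dense_shape=[2, 2]),
  }
  labels = tf.constant([[1.0], [0.0]])
  return features, labels
```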
| Args |
| `head` | A `Head` instance. This specifies the model's output and loss function to be optimized. |
| `sequence_feature_columns` | An iterable containing the `FeatureColumn`s that represent sequential input. All items in the set should either be sequence columns (e.g. `sequence_numeric_column`) or constructed from one (e.g. `embedding_column` with `sequence_categorical_column_*` as input). |
| `context_feature_columns` | An iterable containing the `FeatureColumn`s for contextual input. The data represented by these columns will be replicated and given to the RNN at each timestep. These columns must be instances of classes derived from `DenseColumn` such as `numeric_column`, not the sequential variants. |
| `units` | Iterable of integer number of hidden units per RNN layer. If set, `cell_type` must also be specified and `rnn_cell_fn` must be `None`. |
| `cell_type` | A class producing a RNN cell or a string specifying the cell type. Supported strings are: `'simple_rnn'`, `'lstm'`, and `'gru'`. If set, `units` must also be specified and `rnn_cell_fn` must be `None`. |
| `rnn_cell_fn` | A function that returns a RNN cell instance that will be used to construct the RNN. If set, `units` and `cell_type` cannot be set. This is for advanced users who need additional customization beyond `units` and `cell_type`. Note that [`tf.keras.layers.StackedRNNCells`](../../keras/layers/stackedrnncells) is needed for stacked RNNs. |
| `return_sequences` | A boolean indicating whether to return the last output in the output sequence, or the full sequence. |
| `model_dir` | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. |
| `optimizer` | An instance of `tf.Optimizer` or string specifying optimizer type. Defaults to Adagrad optimizer. |
| `config` | `RunConfig` object to configure the runtime settings. |
| Raises |
| `ValueError` | If `units`, `cell_type`, and `rnn_cell_fn` are not compatible. |
| Attributes |
| `config` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing the evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](../modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](../modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](../modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](../export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](../modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](../export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](../export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](../export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](../modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_savedmodel`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L1688-L1762)
```
export_savedmodel(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
strip_default_attrs=False
)
```
Exports inference graph as a `SavedModel` into the given dir. (deprecated)
For a detailed guide, see [SavedModel from Estimators](https://www.tensorflow.org/guide/estimator#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](../export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](../export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](../export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `strip_default_attrs` | Boolean. If `True`, default-valued attributes will be removed from the `NodeDef`s. For a detailed guide, see [Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns the value of the variable given by name.
| Args |
| `name` | string or a list of strings, the name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](../estimatorspec#predictions) is a `dict`. If `predict_keys` is used, the rest of the predictions will be filtered out of the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If the batch lengths of the `predictions` tensors are not all the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](../estimatorspec#predictions) is not a `dict`. |
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
eager compatibility
-------------------
Estimators are not compatible with eager execution.
tensorflow tf.estimator.experimental.make_early_stopping_hook tf.estimator.experimental.make\_early\_stopping\_hook
=====================================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/early_stopping.py#L30-L96) |
Creates early-stopping hook.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.make_early_stopping_hook`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/make_early_stopping_hook)
```
tf.estimator.experimental.make_early_stopping_hook(
estimator, should_stop_fn, run_every_secs=60, run_every_steps=None
)
```
Returns a `SessionRunHook` that stops training when `should_stop_fn` returns `True`.
#### Usage example:
```
estimator = ...
hook = early_stopping.make_early_stopping_hook(
estimator, should_stop_fn=make_stop_fn(...))
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
```
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in `train_and_evaluate` API and will be addressed in a future revision.
| Args |
| `estimator` | A [`tf.estimator.Estimator`](../estimator) instance. |
| `should_stop_fn` | `callable`, function that takes no arguments and returns a `bool`. If the function returns `True`, stopping will be initiated by the chief. |
| `run_every_secs` | If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set. |
| `run_every_steps` | If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set. |
| Returns |
| A `SessionRunHook` that periodically executes `should_stop_fn` and initiates early stopping if the function returns `True`. |
| Raises |
| `TypeError` | If `estimator` is not of type [`tf.estimator.Estimator`](../estimator). |
| `ValueError` | If both `run_every_secs` and `run_every_steps` are set. |
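A concrete `should_stop_fn` sketch; the sentinel-file path is a hypothetical choice:
```
import os

def should_stop_fn():
  # Stop training once an external process creates this sentinel file.
  return os.path.exists('/tmp/stop_training')

hook = tf.estimator.experimental.make_early_stopping_hook(
    estimator, should_stop_fn=should_stop_fn, run_every_secs=60)
```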
tensorflow tf.estimator.experimental.stop_if_lower_hook tf.estimator.experimental.stop\_if\_lower\_hook
===============================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/early_stopping.py#L156-L210) |
Creates hook to stop if the given metric is lower than the threshold.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.stop_if_lower_hook`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/stop_if_lower_hook)
```
tf.estimator.experimental.stop_if_lower_hook(
estimator,
metric_name,
threshold,
eval_dir=None,
min_steps=0,
run_every_secs=60,
run_every_steps=None
)
```
#### Usage example:
```
estimator = ...
# Hook to stop training if loss becomes lower than 100.
hook = early_stopping.stop_if_lower_hook(estimator, "loss", 100)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
```
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in `train_and_evaluate` API and will be addressed in a future revision.
| Args |
| `estimator` | A [`tf.estimator.Estimator`](../estimator) instance. |
| `metric_name` | `str`, metric to track. "loss", "accuracy", etc. |
| `threshold` | Numeric threshold for the given metric. |
| `eval_dir` | If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used. |
| `min_steps` | `int`, stop is never requested if global step is less than this value. Defaults to 0. |
| `run_every_secs` | If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set. |
| `run_every_steps` | If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set. |
| Returns |
| An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric is lower than the specified threshold and initiates early stopping if true. |
tensorflow tf.estimator.experimental.stop_if_no_increase_hook tf.estimator.experimental.stop\_if\_no\_increase\_hook
======================================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/early_stopping.py#L213-L268) |
Creates hook to stop if metric does not increase within given max steps.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.stop_if_no_increase_hook`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/stop_if_no_increase_hook)
```
tf.estimator.experimental.stop_if_no_increase_hook(
estimator,
metric_name,
max_steps_without_increase,
eval_dir=None,
min_steps=0,
run_every_secs=60,
run_every_steps=None
)
```
#### Usage example:
```
estimator = ...
# Hook to stop training if accuracy does not increase in over 100000 steps.
hook = early_stopping.stop_if_no_increase_hook(estimator, "accuracy", 100000)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
```
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in `train_and_evaluate` API and will be addressed in a future revision.
| Args |
| `estimator` | A [`tf.estimator.Estimator`](../estimator) instance. |
| `metric_name` | `str`, metric to track. "loss", "accuracy", etc. |
| `max_steps_without_increase` | `int`, maximum number of training steps with no increase in the given metric. |
| `eval_dir` | If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used. |
| `min_steps` | `int`, stop is never requested if global step is less than this value. Defaults to 0. |
| `run_every_secs` | If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set. |
| `run_every_steps` | If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set. |
| Returns |
| An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric shows no increase over the given maximum number of training steps, and initiates early stopping if true. |
tensorflow tf.estimator.experimental.LinearSDCA tf.estimator.experimental.LinearSDCA
====================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear.py#L46-L238) |
Stochastic Dual Coordinate Ascent helper for linear estimators.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.LinearSDCA`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/LinearSDCA)
```
tf.estimator.experimental.LinearSDCA(
example_id_column,
num_loss_partitions=1,
num_table_shards=None,
symmetric_l1_regularization=0.0,
symmetric_l2_regularization=1.0,
adaptive=False
)
```
Objects of this class are intended to be provided as the optimizer argument (though LinearSDCA objects do not implement the `tf.train.Optimizer` interface) when creating [`tf.estimator.LinearClassifier`](../linearclassifier) or [`tf.estimator.LinearRegressor`](../linearregressor).
SDCA can only be used with `LinearClassifier` and `LinearRegressor` under the following conditions:
* Feature columns are of type V2.
* Multivalent categorical columns are not normalized. In other words, the `sparse_combiner` argument in the estimator constructor should be "sum".
* For classification: binary label.
* For regression: one-dimensional label.
#### Example usage:
```
real_feature_column = numeric_column(...)
sparse_feature_column = categorical_column_with_hash_bucket(...)
linear_sdca = tf.estimator.experimental.LinearSDCA(
example_id_column='example_id',
num_loss_partitions=1,
num_table_shards=1,
symmetric_l2_regularization=2.0)
classifier = tf.estimator.LinearClassifier(
feature_columns=[real_feature_column, sparse_feature_column],
weight_column=...,
optimizer=linear_sdca)
classifier.train(input_fn_train, steps=50)
classifier.evaluate(input_fn=input_fn_eval)
```
Here the expectation is that the `input_fn_*` functions passed to `train` and `evaluate` return a pair (dict, label\_tensor), where the dict has `example_id_column` as a `key` whose value is a `Tensor` of shape [batch\_size] and dtype string.
`num_loss_partitions` defines sigma' in eq (11) of [3]. Convergence of the (global) loss is guaranteed if `num_loss_partitions` is larger than or equal to the product `(#concurrent train ops/per worker) x (#workers)`. Larger values for `num_loss_partitions` lead to slower convergence. The recommended value for `num_loss_partitions` in [`tf.estimator`](../../estimator) (where currently there is one process per worker) is the number of workers running the train steps. It defaults to 1 (single machine). `num_table_shards` defines the number of shards for the internal state table, typically set to match the number of parameter servers for large data sets.
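A minimal sketch of an `input_fn_*` that satisfies the `example_id_column` expectation above; the feature names and values are hypothetical:
```
def input_fn_train():
  features = {
      # shape [batch_size], dtype string, keyed by example_id_column.
      'example_id': tf.constant(['id_0', 'id_1']),
      'price': tf.constant([[10.0], [20.0]]),
  }
  labels = tf.constant([[1.0], [0.0]])
  return features, labels
```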
The SDCA algorithm was originally introduced in [1] and it was followed by the L1 proximal step [2], a distributed version [3] and adaptive sampling [4]. [1] www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf [2] <https://arxiv.org/pdf/1309.2375.pdf> [3] <https://arxiv.org/pdf/1502.03508.pdf> [4] <https://arxiv.org/pdf/1502.08053.pdf> Details specific to this implementation are provided in: <https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear_optimizer/doc/sdca.ipynb>
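As a minimal sketch of the expected input shape (the `'price'` feature name and the literal ids are illustrative, not part of the API), an `input_fn` might look like:
```
def input_fn_train():
  features = {
      # The example-id feature: a string Tensor of shape [batch_size].
      'example_id': tf.constant(['id_0', 'id_1']),
      # A hypothetical real-valued feature column.
      'price': tf.constant([[0.7], [1.2]]),
  }
  labels = tf.constant([[1.0], [0.0]])
  return features, labels
```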
| Args |
| `example_id_column` | The column name containing the example ids. |
| `num_loss_partitions` | Number of workers. |
| `num_table_shards` | Number of shards of the internal state table, typically set to match the number of parameter servers. |
| `symmetric_l1_regularization` | A float value, must be greater than or equal to zero. |
| `symmetric_l2_regularization` | A float value, must be greater than zero and should typically be greater than 1. |
| `adaptive` | A boolean indicating whether to use adaptive sampling. |
Methods
-------
### `get_train_step`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear.py#L176-L238)
```
get_train_step(
state_manager,
weight_column_name,
loss_type,
feature_columns,
features,
targets,
bias_var,
global_step
)
```
Returns the training operation of an SdcaModel optimizer.
tensorflow tf.estimator.experimental.InMemoryEvaluatorHook tf.estimator.experimental.InMemoryEvaluatorHook
===============================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/hooks/hooks.py#L31-L211) |
Hook to run evaluation in training without a checkpoint.
Inherits From: [`SessionRunHook`](../sessionrunhook)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/InMemoryEvaluatorHook)
```
tf.estimator.experimental.InMemoryEvaluatorHook(
estimator, input_fn, steps=None, hooks=None, name=None, every_n_iter=100
)
```
#### Example:
```
def train_input_fn():
...
return train_dataset
def eval_input_fn():
...
return eval_dataset
estimator = tf.estimator.DNNClassifier(...)
evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
estimator, eval_input_fn)
estimator.train(train_input_fn, hooks=[evaluator])
```
Current limitations of this approach are:
* It doesn't support multi-node distributed mode.
* It doesn't support saveable objects other than variables (such as boosted tree support)
* It doesn't support custom saver logic (such as ExponentialMovingAverage support)
| Args |
| `estimator` | A [`tf.estimator.Estimator`](../estimator) instance to call evaluate. |
| `input_fn` | Equivalent to the `input_fn` arg to `estimator.evaluate`. A function that constructs the input data for evaluation. See [Creating input functions](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A `tf.data.Dataset` object: Outputs of `Dataset` object must be a tuple (features, labels) with same constraints as below.
* A tuple (features, labels): Where `features` is a `Tensor` or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Equivalent to the `steps` arg to `estimator.evaluate`. Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | Equivalent to the `hooks` arg to `estimator.evaluate`. List of `SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `name` | Equivalent to the `name` arg to `estimator.evaluate`. Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| `every_n_iter` | `int`, runs the evaluator once every N training iteration. |
| Raises |
| `ValueError` | if `every_n_iter` is non-positive or the training is not single-machine |
Methods
-------
### `after_create_session`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/hooks/hooks.py#L146-L176)
```
after_create_session(
session, coord
)
```
Runs a first evaluation, showing the eval metrics before training.
### `after_run`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/hooks/hooks.py#L203-L207)
```
after_run(
run_context, run_values
)
```
Runs evaluator.
### `before_run`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/session_run_hook.py#L125-L146)
```
before_run(
run_context
)
```
Called before each call to run().
You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the `run()` call. The run args you return can also contain feeds to be added to the `run()` call.
The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session.
At this point the graph is finalized and you cannot add ops.
| Args |
| `run_context` | A `SessionRunContext` object. |
| Returns |
| None or a `SessionRunArgs` object. |
### `begin`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/hooks/hooks.py#L122-L144)
```
begin()
```
Builds the eval graph and its restoring op.
### `end`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/hooks/hooks.py#L209-L211)
```
end(
session
)
```
Runs evaluator for final model.
tensorflow tf.estimator.experimental.build_raw_supervised_input_receiver_fn tf.estimator.experimental.build\_raw\_supervised\_input\_receiver\_fn
=====================================================================
Build a supervised\_input\_receiver\_fn for raw features and labels.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.build_raw_supervised_input_receiver_fn`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/build_raw_supervised_input_receiver_fn)
```
tf.estimator.experimental.build_raw_supervised_input_receiver_fn(
features, labels, default_batch_size=None
)
```
This function wraps tensor placeholders in a supervised\_receiver\_fn with the expectation that the features and labels appear precisely as the model\_fn expects them. Features and labels can therefore be dicts of tensors, or raw tensors.
| Args |
| `features` | a dict of string to `Tensor` or `Tensor`. |
| `labels` | a dict of string to `Tensor` or `Tensor`. |
| `default_batch_size` | the number of query examples expected per batch. Leave unset for variable batch size (recommended). |
| Returns |
| A supervised\_input\_receiver\_fn. |
| Raises |
| `ValueError` | if features and labels have overlapping keys. |
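A minimal sketch, assuming graph mode (placeholders require `tf.compat.v1`) and an illustrative feature named `'x'`:
```
# Tensors passed in here should appear exactly as the model_fn expects them.
features = {'x': tf.compat.v1.placeholder(tf.float32, shape=[None, 3], name='x')}
labels = tf.compat.v1.placeholder(tf.int64, shape=[None], name='y')
receiver_fn = tf.estimator.experimental.build_raw_supervised_input_receiver_fn(
    features, labels)
```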
tensorflow tf.estimator.experimental.stop_if_no_decrease_hook tf.estimator.experimental.stop\_if\_no\_decrease\_hook
======================================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/early_stopping.py#L271-L326) |
Creates hook to stop if metric does not decrease within given max steps.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.stop_if_no_decrease_hook`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/stop_if_no_decrease_hook)
```
tf.estimator.experimental.stop_if_no_decrease_hook(
estimator,
metric_name,
max_steps_without_decrease,
eval_dir=None,
min_steps=0,
run_every_secs=60,
run_every_steps=None
)
```
#### Usage example:
```
estimator = ...
# Hook to stop training if loss does not decrease in over 100000 steps.
hook = tf.estimator.experimental.stop_if_no_decrease_hook(estimator, "loss", 100000)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
```
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in `train_and_evaluate` API and will be addressed in a future revision.
| Args |
| `estimator` | A [`tf.estimator.Estimator`](../estimator) instance. |
| `metric_name` | `str`, metric to track. "loss", "accuracy", etc. |
| `max_steps_without_decrease` | `int`, maximum number of training steps with no decrease in the given metric. |
| `eval_dir` | If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used. |
| `min_steps` | `int`, stop is never requested if global step is less than this value. Defaults to 0. |
| `run_every_secs` | If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set. |
| `run_every_steps` | If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set. |
| Returns |
| An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric shows no decrease over given maximum number of training steps, and initiates early stopping if true. |
tensorflow tf.estimator.experimental.RNNClassifier tf.estimator.experimental.RNNClassifier
=======================================
A classifier for TensorFlow RNN models.
Inherits From: [`RNNEstimator`](rnnestimator), [`Estimator`](../../compat/v1/estimator/estimator)
```
tf.estimator.experimental.RNNClassifier(
sequence_feature_columns,
context_feature_columns=None,
units=None,
cell_type=USE_DEFAULT,
rnn_cell_fn=None,
return_sequences=False,
model_dir=None,
n_classes=2,
weight_column=None,
label_vocabulary=None,
optimizer='Adagrad',
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
sequence_mask='sequence_mask',
config=None
)
```
Trains a recurrent neural network model to classify instances into one of multiple classes.
#### Example:
```
token_sequence = sequence_categorical_column_with_hash_bucket(...)
token_emb = embedding_column(categorical_column=token_sequence, ...)
estimator = RNNClassifier(
sequence_feature_columns=[token_emb],
units=[32, 16], cell_type='lstm')
# Input builders
def input_fn_train(): # returns x, y
pass
estimator.train(input_fn=input_fn_train, steps=100)
def input_fn_eval(): # returns x, y
pass
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
def input_fn_predict(): # returns x, None
pass
predictions = estimator.predict(input_fn=input_fn_predict)
```
Input of `train` and `evaluate` should have the following features, otherwise there will be a `KeyError` (see the sketch below):
* if `weight_column` is not `None`, a feature with `key=weight_column` whose value is a `Tensor`.
* for each `column` in `sequence_feature_columns`:
+ a feature with `key=column.name` whose `value` is a `SparseTensor`.
* for each `column` in `context_feature_columns`:
+ if `column` is a `CategoricalColumn`, a feature with `key=column.name` whose `value` is a `SparseTensor`.
+ if `column` is a `WeightedCategoricalColumn`, two features: the first with `key` the id column name, the second with `key` the weight column name. Both features' `value` must be a `SparseTensor`.
+ if `column` is a `DenseColumn`, a feature with `key=column.name` whose `value` is a `Tensor`.
Loss is calculated by using softmax cross entropy.
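A minimal sketch of an `input_fn` producing the feature dict described above; `'tokens'` is a hypothetical column name matching a `sequence_categorical_column_with_hash_bucket`:
```
def input_fn_train():
  features = {
      # Sequence features are fed as SparseTensors of strings.
      'tokens': tf.sparse.SparseTensor(
          indices=[[0, 0], [0, 1], [1, 0]],
          values=['the', 'cat', 'sat'],
          dense_shape=[2, 2])
  }
  labels = tf.constant([0, 1])  # one integer label per example
  return features, labels
```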
| Args |
| `sequence_feature_columns` | An iterable containing the `FeatureColumn`s that represent sequential input. All items in the set should either be sequence columns (e.g. `sequence_numeric_column`) or constructed from one (e.g. `embedding_column` with `sequence_categorical_column_*` as input). |
| `context_feature_columns` | An iterable containing the `FeatureColumn`s for contextual input. The data represented by these columns will be replicated and given to the RNN at each timestep. These columns must be instances of classes derived from `DenseColumn` such as `numeric_column`, not the sequential variants. |
| `units` | Iterable of integer number of hidden units per RNN layer. If set, `cell_type` must also be specified and `rnn_cell_fn` must be `None`. |
| `cell_type` | A class producing a RNN cell or a string specifying the cell type. Supported strings are: `'simple_rnn'`, `'lstm'`, and `'gru'`. If set, `units` must also be specified and `rnn_cell_fn` must be `None`. |
| `rnn_cell_fn` | A function that returns a RNN cell instance that will be used to construct the RNN. If set, `units` and `cell_type` cannot be set. This is for advanced users who need additional customization beyond `units` and `cell_type`. Note that [`tf.keras.layers.StackedRNNCells`](../../keras/layers/stackedrnncells) is needed for stacked RNNs. |
| `return_sequences` | A boolean indicating whether to return the last output in the output sequence, or the full sequence. Note that if True, `weight_column` must be None or a string. |
| `model_dir` | Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model. |
| `n_classes` | Number of label classes. Defaults to 2, namely binary classification. Must be > 1. |
| `weight_column` | A string or a `NumericColumn` created by [`tf.feature_column.numeric_column`](../../feature_column/numeric_column) defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the `features`. If it is a `NumericColumn`, raw tensor is fetched by key `weight_column.key`, then weight\_column.normalizer\_fn is applied on it to get weight tensor. |
| `label_vocabulary` | A list of strings representing possible label values. If given, labels must be of string type and take values in `label_vocabulary`. If it is not given, labels are assumed to be already encoded as integer or float within [0, 1] for `n_classes=2`, and encoded as integer values in {0, 1,..., n\_classes-1} for `n_classes` > 2. Errors will be raised if the vocabulary is not provided and labels are strings. |
| `optimizer` | An instance of `tf.Optimizer` or string specifying optimizer type. Defaults to Adagrad optimizer. |
| `loss_reduction` | One of [`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) except `NONE`. Describes how to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. |
| `sequence_mask` | A string with the name of the sequence mask tensor. If `sequence_mask` is in the features dictionary, the provided tensor is used, otherwise the sequence mask is computed from the length of sequential features. The sequence mask is used in evaluation and training mode to aggregate loss and metrics computation while excluding padding steps. It is also added to the predictions dictionary in prediction mode to indicate which steps are padding. |
| `config` | `RunConfig` object to configure the runtime settings. |
| Raises |
| `ValueError` | If `units`, `cell_type`, and `rnn_cell_fn` are not compatible. |
| Attributes |
| `config` | |
| `model_dir` | |
| `model_fn` | Returns the `model_fn` which is bound to `self.params`. |
| `params` | |
Methods
-------
### `eval_dir`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L389-L401)
```
eval_dir(
name=None
)
```
Shows the directory name where evaluation metrics are dumped.
| Args |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A string which is the path of the directory containing evaluation metrics. |
### `evaluate`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L403-L478)
```
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
```
Evaluates the model given evaluation data `input_fn`.
For each step, calls `input_fn`, which returns one batch of data. Evaluates until:
* `steps` batches are processed, or
* `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../../errors/outofrangeerror) or `StopIteration`).
| Args |
| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `steps` | Number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the evaluation call. |
| `checkpoint_path` | Path of a specific checkpoint to evaluate. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, evaluation is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `name` | Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |
| Returns |
| A dict containing the evaluation metrics specified in `model_fn` keyed by name, as well as an entry `global_step` which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the `loss` (mean loss per mini-batch) and the `average_loss` (mean loss per sample). Canned classifiers also return the `accuracy`. Canned regressors also return the `label/mean` and the `prediction/mean`. |
| Raises |
| `ValueError` | If `steps <= 0`. |
### `experimental_export_all_saved_models`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L738-L810)
```
experimental_export_all_saved_models(
export_dir_base,
input_receiver_fn_map,
assets_extra=None,
as_text=False,
checkpoint_path=None
)
```
Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: [`tf.estimator.ModeKeys.TRAIN`](../modekeys#TRAIN), [`tf.estimator.ModeKeys.EVAL`](../modekeys#EVAL), then [`tf.estimator.ModeKeys.PREDICT`](../modekeys#PREDICT)), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory.
For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures.
For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](../export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `input_receiver_fn_map` | dict of [`tf.estimator.ModeKeys`](../modekeys) to `input_receiver_fn` mappings, where the `input_receiver_fn` is a function that takes no arguments and returns the appropriate subclass of `InputReceiver`. |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if any `input_receiver_fn` is `None`, no `export_outputs` are provided, or no checkpoint can be found. |
### `export_saved_model`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L659-L736)
```
export_saved_model(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
experimental_mode=ModeKeys.PREDICT
)
```
Exports inference graph as a `SavedModel` into the given dir.
For a detailed guide on SavedModel, see [Using the SavedModel format](https://tensorflow.org/guide/saved_model#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](../export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
The experimental\_mode parameter can be used to export a single train/eval/predict graph as a `SavedModel`. See `experimental_export_all_saved_models` for full docs.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](../export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](../export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `experimental_mode` | [`tf.estimator.ModeKeys`](../modekeys) value indicating which mode will be exported. Note that this feature is experimental. |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
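A minimal sketch of exporting for serving; the feature spec and export path are illustrative, and `estimator` is assumed to be a trained estimator:
```
feature_spec = {'x': tf.io.FixedLenFeature([3], tf.float32)}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
# Writes a timestamped SavedModel directory under the given base path.
export_dir = estimator.export_saved_model(
    '/tmp/exported_model', serving_input_receiver_fn)
```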
### `export_savedmodel`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L1688-L1762)
```
export_savedmodel(
export_dir_base,
serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None,
strip_default_attrs=False
)
```
Exports inference graph as a `SavedModel` into the given dir. (deprecated)
For a detailed guide, see [SavedModel from Estimators.](https://www.tensorflow.org/guide/estimator#savedmodels_from_estimators).
This method builds a new graph by first calling the `serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling this `Estimator`'s `model_fn` to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given `export_dir_base`, and writes a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this session.
The exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding [`tf.estimator.export.ExportOutput`](../export/exportoutput)s, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`.
Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
| Args |
| `export_dir_base` | A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s. |
| `serving_input_receiver_fn` | A function that takes no argument and returns a [`tf.estimator.export.ServingInputReceiver`](../export/servinginputreceiver) or [`tf.estimator.export.TensorServingInputReceiver`](../export/tensorservinginputreceiver). |
| `assets_extra` | A dict specifying how to populate the assets.extra directory within the exported `SavedModel`, or `None` if no extra assets are needed. |
| `as_text` | whether to write the `SavedModel` proto in text format. |
| `checkpoint_path` | The checkpoint path to export. If `None` (the default), the most recent checkpoint found within the model directory is chosen. |
| `strip_default_attrs` | Boolean. If `True`, default-valued attributes will be removed from the `NodeDef`s. For a detailed guide, see [Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). |
| Returns |
| The path to the exported directory as a bytes object. |
| Raises |
| `ValueError` | if no `serving_input_receiver_fn` is provided, no `export_outputs` are provided, or no checkpoint can be found. |
### `get_variable_names`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L261-L272)
```
get_variable_names()
```
Returns list of all variable names in this model.
| Returns |
| List of names. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `get_variable_value`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L245-L259)
```
get_variable_value(
name
)
```
Returns value of the variable given by name.
| Args |
| `name` | string or a list of string, name of the tensor. |
| Returns |
| Numpy array - value of the tensor. |
| Raises |
| `ValueError` | If the `Estimator` has not produced a checkpoint yet. |
### `latest_checkpoint`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L274-L282)
```
latest_checkpoint()
```
Finds the filename of the latest saved checkpoint file in `model_dir`.
| Returns |
| The full path to the latest checkpoint or `None` if no checkpoint was found. |
### `predict`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L555-L653)
```
predict(
input_fn,
predict_keys=None,
hooks=None,
checkpoint_path=None,
yield_single_examples=True
)
```
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: [issue/20506](https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)
| Args |
| `input_fn` | A function that constructs the features. Prediction continues until `input_fn` raises an end-of-input exception ([`tf.errors.OutOfRangeError`](../../errors/outofrangeerror) or `StopIteration`). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * [`tf.data.Dataset`](../../data/dataset) object -- Outputs of `Dataset` object must have same constraints as below.
* features -- A [`tf.Tensor`](../../tensor) or a dictionary of string feature name to `Tensor`. features are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
* A tuple, in which case the first item is extracted as features.
|
| `predict_keys` | list of `str`, name of the keys to predict. It is used if the [`tf.estimator.EstimatorSpec.predictions`](../estimatorspec#predictions) is a `dict`. If `predict_keys` is used then rest of the predictions will be filtered from the dictionary. If `None`, returns all. |
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the prediction call. |
| `checkpoint_path` | Path of a specific checkpoint to predict. If `None`, the latest checkpoint in `model_dir` is used. If there are no checkpoints in `model_dir`, prediction is run with newly initialized `Variables` instead of ones restored from checkpoint. |
| `yield_single_examples` | If `False`, yields the whole batch as returned by the `model_fn` instead of decomposing the batch into individual elements. This is useful if `model_fn` returns some tensors whose first dimension is not equal to the batch size. |
| Yields |
| Evaluated values of `predictions` tensors. |
| Raises |
| `ValueError` | If batch length of predictions is not the same and `yield_single_examples` is `True`. |
| `ValueError` | If there is a conflict between `predict_keys` and `predictions`. For example if `predict_keys` is not `None` but [`tf.estimator.EstimatorSpec.predictions`](../estimatorspec#predictions) is not a `dict`. |
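Since `predict` returns a generator, results are consumed by iterating over it. A sketch, assuming `input_fn_predict` is defined; the `'probabilities'` key is illustrative, as the actual keys depend on the head and `model_fn`:
```
for pred_dict in estimator.predict(input_fn=input_fn_predict):
  print(pred_dict['probabilities'])
```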
### `train`
[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/estimator.py#L284-L362)
```
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
```
Trains a model given training data `input_fn`.
| Args |
| `input_fn` | A function that provides input data for training as minibatches. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: * A [`tf.data.Dataset`](../../data/dataset) object: Outputs of `Dataset` object must be a tuple `(features, labels)` with same constraints as below.
* A tuple `(features, labels)`: Where `features` is a [`tf.Tensor`](../../tensor) or a dictionary of string feature name to `Tensor` and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn`. They should satisfy the expectation of `model_fn` from inputs.
|
| `hooks` | List of `tf.train.SessionRunHook` subclass instances. Used for callbacks inside the training loop. |
| `steps` | Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. |
| `max_steps` | Number of total steps for which to train model. If `None`, train forever or train until `input_fn` generates the `tf.errors.OutOfRange` error or `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iteration since first call did all 100 steps. |
| `saving_listeners` | list of `CheckpointSaverListener` objects. Used for callbacks that run immediately before or after checkpoint savings. |
| Returns |
| `self`, for chaining. |
| Raises |
| `ValueError` | If both `steps` and `max_steps` are not `None`. |
| `ValueError` | If either `steps` or `max_steps` is `<= 0`. |
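A sketch of the incremental semantics of `steps` versus `max_steps` described above, assuming `estimator` and `input_fn` are defined:
```
estimator.train(input_fn, steps=10)      # trains 10 steps; global step is 10
estimator.train(input_fn, steps=10)      # trains 10 more; global step is 20
estimator.train(input_fn, max_steps=20)  # no-op: global step already at 20
```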
eager compatibility
-------------------
Estimators are not compatible with eager execution.
tensorflow tf.estimator.experimental.stop_if_higher_hook tf.estimator.experimental.stop\_if\_higher\_hook
================================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/early_stopping.py#L99-L153) |
Creates hook to stop if the given metric is higher than the threshold.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.stop_if_higher_hook`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/stop_if_higher_hook)
```
tf.estimator.experimental.stop_if_higher_hook(
estimator,
metric_name,
threshold,
eval_dir=None,
min_steps=0,
run_every_secs=60,
run_every_steps=None
)
```
#### Usage example:
```
estimator = ...
# Hook to stop training if accuracy becomes higher than 0.9.
hook = tf.estimator.experimental.stop_if_higher_hook(estimator, "accuracy", 0.9)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
```
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in `train_and_evaluate` API and will be addressed in a future revision.
| Args |
| `estimator` | A [`tf.estimator.Estimator`](../estimator) instance. |
| `metric_name` | `str`, metric to track. "loss", "accuracy", etc. |
| `threshold` | Numeric threshold for the given metric. |
| `eval_dir` | If set, directory containing summary files with eval metrics. By default, `estimator.eval_dir()` will be used. |
| `min_steps` | `int`, stop is never requested if global step is less than this value. Defaults to 0. |
| `run_every_secs` | If specified, calls `should_stop_fn` at an interval of `run_every_secs` seconds. Defaults to 60 seconds. Either this or `run_every_steps` must be set. |
| `run_every_steps` | If specified, calls `should_stop_fn` every `run_every_steps` steps. Either this or `run_every_secs` must be set. |
| Returns |
| An early-stopping hook of type `SessionRunHook` that periodically checks if the given metric is higher than specified threshold and initiates early stopping if true. |
tensorflow tf.estimator.experimental.make_stop_at_checkpoint_step_hook tf.estimator.experimental.make\_stop\_at\_checkpoint\_step\_hook
================================================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/hooks/hooks.py#L269-L280) |
Creates a proper StopAtCheckpointStepHook based on chief status.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.experimental.make_stop_at_checkpoint_step_hook`](https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/make_stop_at_checkpoint_step_hook)
```
tf.estimator.experimental.make_stop_at_checkpoint_step_hook(
estimator, last_step, wait_after_file_check_secs=30
)
```
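A minimal usage sketch, assuming `estimator` and `input_fn_train` are defined elsewhere:
```
# Stop training once the latest checkpoint reaches step 10000.
hook = tf.estimator.experimental.make_stop_at_checkpoint_step_hook(
    estimator, last_step=10000)
estimator.train(input_fn_train, hooks=[hook])
```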
tensorflow tf.estimator.export.ExportOutput tf.estimator.export.ExportOutput
================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L28-L95) |
Represents an output of a model that can be served.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.ExportOutput`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/ExportOutput)
These typically correspond to model heads.
Methods
-------
### `as_signature_def`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L38-L49)
```
@abc.abstractmethod
as_signature_def(
receiver_tensors
)
```
Generate a SignatureDef proto for inclusion in a MetaGraphDef.
The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver\_tensors as inputs.
| Args |
| `receiver_tensors` | a `Tensor`, or a dict of string to `Tensor`, specifying input nodes that will be fed. |
tensorflow tf.estimator.export.TensorServingInputReceiver tf.estimator.export.TensorServingInputReceiver
==============================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/export/export.py#L166-L220) |
A return type for a serving\_input\_receiver\_fn.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.TensorServingInputReceiver`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/TensorServingInputReceiver)
```
tf.estimator.export.TensorServingInputReceiver(
features, receiver_tensors, receiver_tensors_alternatives=None
)
```
This is for use with models that expect a single `Tensor` or `SparseTensor` as an input feature, as opposed to a dict of features.
The normal `ServingInputReceiver` always returns a feature dict, even if it contains only one entry, and so can be used only with models that accept such a dict. For models that accept only a single raw feature, the `serving_input_receiver_fn` provided to [`Estimator.export_saved_model()`](../../compat/v1/estimator/estimator#export_saved_model) should return this `TensorServingInputReceiver` instead. See: <https://github.com/tensorflow/tensorflow/issues/11674>
Note that the receiver\_tensors and receiver\_tensors\_alternatives arguments will be automatically converted to the dict representation in either case, because the SavedModel format requires each input `Tensor` to have a name (provided by the dict key).
| Attributes |
| `features` | A single `Tensor` or `SparseTensor`, representing the feature to be passed to the model. |
| `receiver_tensors` | A `Tensor`, `SparseTensor`, or dict of string to `Tensor` or `SparseTensor`, specifying input nodes where this receiver expects to be fed by default. Typically, this is a single placeholder expecting serialized `tf.Example` protos. |
| `receiver_tensors_alternatives` | a dict of string to additional groups of receiver tensors, each of which may be a `Tensor`, `SparseTensor`, or dict of string to `Tensor` or`SparseTensor`. These named receiver tensor alternatives generate additional serving signatures, which may be used to feed inputs at different points within the input receiver subgraph. A typical usage is to allow feeding raw feature `Tensor`s *downstream* of the tf.parse\_example() op. Defaults to None. |
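A minimal sketch in graph mode, for a model that expects a single raw `Tensor`; the `'image'` name and shape are illustrative:
```
def serving_input_receiver_fn():
  # A single raw feature tensor, passed to the model as-is (not wrapped in a dict).
  image = tf.compat.v1.placeholder(tf.float32, shape=[None, 28, 28], name='image')
  return tf.estimator.export.TensorServingInputReceiver(
      features=image, receiver_tensors=image)
```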
tensorflow tf.estimator.export.build_parsing_serving_input_receiver_fn tf.estimator.export.build\_parsing\_serving\_input\_receiver\_fn
================================================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/export/export.py#L285-L314) |
Build a serving\_input\_receiver\_fn expecting fed tf.Examples.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.build_parsing_serving_input_receiver_fn`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/build_parsing_serving_input_receiver_fn)
```
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec, default_batch_size=None
)
```
Creates a serving\_input\_receiver\_fn that expects a serialized tf.Example fed into a string placeholder. The function parses the tf.Example according to the provided feature\_spec, and returns all parsed Tensors as features.
| Args |
| `feature_spec` | a dict of string to `VarLenFeature`/`FixedLenFeature`. |
| `default_batch_size` | the number of query examples expected per batch. Leave unset for variable batch size (recommended). |
| Returns |
| A serving\_input\_receiver\_fn suitable for use in serving. |
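A minimal sketch; the feature spec is illustrative:
```
feature_spec = {
    'age': tf.io.FixedLenFeature([1], tf.int64),
    'query': tf.io.VarLenFeature(tf.string),
}
# The returned fn builds a string placeholder and parses tf.Examples against it.
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
```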
tensorflow tf.estimator.export.EvalOutput tf.estimator.export.EvalOutput
==============================
Represents the output of a supervised eval process.
Inherits From: [`ExportOutput`](exportoutput)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.EvalOutput`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/EvalOutput)
```
tf.estimator.export.EvalOutput(
loss=None, predictions=None, metrics=None
)
```
This class generates the appropriate signature def for exporting eval output by type-checking and wrapping loss, predictions, and metrics values.
| Args |
| `loss` | dict of Tensors or single Tensor representing calculated loss. |
| `predictions` | dict of Tensors or single Tensor representing model predictions. |
| `metrics` | Dict of metric results keyed by name. The values of the dict can be one of the following: (1) instance of `Metric` class. (2) (metric\_value, update\_op) tuples, or a single tuple. metric\_value must be a Tensor, and update\_op must be a Tensor or Op. |
| Raises |
| `ValueError` | if any of the outputs' dict keys are not strings or tuples of strings or the values are not Tensors (or Operations in the case of update\_op). |
| Attributes |
| `loss` | |
| `metrics` | |
| `predictions` | |
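A minimal construction sketch; the values are illustrative:
```
output = tf.estimator.export.EvalOutput(
    loss=tf.constant(0.3),
    predictions={'y': tf.constant([1.0, 0.0])})
```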
Methods
-------
### `as_signature_def`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L397-L400)
```
as_signature_def(
receiver_tensors
)
```
Generate a SignatureDef proto for inclusion in a MetaGraphDef.
The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver\_tensors as inputs.
| Args |
| `receiver_tensors` | a `Tensor`, or a dict of string to `Tensor`, specifying input nodes that will be fed. |
| Class Variables |
| LOSS\_NAME | `'loss'` |
| METRICS\_NAME | `'metrics'` |
| METRIC\_UPDATE\_SUFFIX | `'update_op'` |
| METRIC\_VALUE\_SUFFIX | `'value'` |
| PREDICTIONS\_NAME | `'predictions'` |
tensorflow tf.estimator.export.RegressionOutput tf.estimator.export.RegressionOutput
====================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L177-L216) |
Represents the output of a regression head.
Inherits From: [`ExportOutput`](exportoutput)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.RegressionOutput`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/RegressionOutput)
```
tf.estimator.export.RegressionOutput(
value
)
```
| Args |
| `value` | a float `Tensor` giving the predicted values. Required. |
| Raises |
| `ValueError` | if the value is not a `Tensor` with dtype tf.float32. |
| Attributes |
| `value` | |
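A minimal construction sketch; the value must be a float32 `Tensor`:
```
predicted_values = tf.constant([[0.5], [1.5]])  # dtype float32
output = tf.estimator.export.RegressionOutput(value=predicted_values)
```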
Methods
-------
### `as_signature_def`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L198-L216)
```
as_signature_def(
receiver_tensors
)
```
Generate a SignatureDef proto for inclusion in a MetaGraphDef.
The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver\_tensors as inputs.
| Args |
| `receiver_tensors` | a `Tensor`, or a dict of string to `Tensor`, specifying input nodes that will be fed. |
tensorflow tf.estimator.export.PredictOutput tf.estimator.export.PredictOutput
=================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L219-L249) |
Represents the output of a generic prediction head.
Inherits From: [`ExportOutput`](exportoutput)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.PredictOutput`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/PredictOutput)
```
tf.estimator.export.PredictOutput(
outputs
)
```
A generic prediction need not be either a classification or a regression.
Named outputs must be provided as a dict from string to `Tensor`.
| Args |
| `outputs` | A `Tensor` or a dict of string to `Tensor` representing the predictions. |
| Raises |
| `ValueError` | if the outputs is not dict, or any of its keys are not strings, or any of its values are not `Tensor`s. |
| Attributes |
| `outputs` | |
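A minimal construction sketch; the `'probabilities'` output name is illustrative:
```
probabilities = tf.constant([[0.2, 0.8]])
output = tf.estimator.export.PredictOutput({'probabilities': probabilities})
```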
Methods
-------
### `as_signature_def`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L247-L249)
```
as_signature_def(
receiver_tensors
)
```
Generate a SignatureDef proto for inclusion in a MetaGraphDef.
The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver\_tensors as inputs.
| Args |
| `receiver_tensors` | a `Tensor`, or a dict of string to `Tensor`, specifying input nodes that will be fed. |
tensorflow tf.estimator.export.build_raw_serving_input_receiver_fn tf.estimator.export.build\_raw\_serving\_input\_receiver\_fn
============================================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/export/export.py#L355-L377) |
Build a serving\_input\_receiver\_fn expecting feature Tensors.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.build_raw_serving_input_receiver_fn`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/build_raw_serving_input_receiver_fn)
```
tf.estimator.export.build_raw_serving_input_receiver_fn(
features, default_batch_size=None
)
```
Creates a serving\_input\_receiver\_fn that expects all features to be fed directly.
| Args |
| `features` | a dict of string to `Tensor`. |
| `default_batch_size` | the number of query examples expected per batch. Leave unset for variable batch size (recommended). |
| Returns |
| A serving\_input\_receiver\_fn. |
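A minimal sketch in graph mode; the `'x'` feature name and shape are illustrative:
```
features = {'x': tf.compat.v1.placeholder(tf.float32, shape=[None, 3], name='x')}
# The returned fn feeds the given tensors directly, with no tf.Example parsing.
serving_input_receiver_fn = (
    tf.estimator.export.build_raw_serving_input_receiver_fn(features))
```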
tensorflow tf.estimator.export.ClassificationOutput tf.estimator.export.ClassificationOutput
========================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L98-L174) |
Represents the output of a classification head.
Inherits From: [`ExportOutput`](exportoutput)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.ClassificationOutput`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/ClassificationOutput)
```
tf.estimator.export.ClassificationOutput(
scores=None, classes=None
)
```
Either classes or scores or both must be set.
The classes `Tensor` must provide string labels, not integer class IDs.
If only classes is set, it is interpreted as providing top-k results in descending order.
If only scores is set, it is interpreted as providing a score for every class in order of class ID.
If both classes and scores are set, they are interpreted as zipped, so each score corresponds to the class at the same index. Clients should not depend on the order of the entries.
| Args |
| `scores` | A float `Tensor` giving scores (sometimes but not always interpretable as probabilities) for each class. May be `None`, but only if `classes` is set. Interpretation varies; see the class doc. |
| `classes` | A string `Tensor` giving predicted class labels. May be `None`, but only if `scores` is set. Interpretation varies; see the class doc. |
| Raises |
| `ValueError` | if neither classes nor scores is set, or one of them is not a `Tensor` with the correct dtype. |
| Attributes |
| `classes` | |
| `scores` | |
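A minimal construction sketch; the labels and scores are illustrative:
```
scores = tf.constant([[0.9, 0.1]])       # float scores per class
classes = tf.constant([['cat', 'dog']])  # string labels, not integer class IDs
output = tf.estimator.export.ClassificationOutput(scores=scores, classes=classes)
```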
Methods
-------
### `as_signature_def`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/saved_model/model_utils/export_output.py#L155-L174)
```
as_signature_def(
receiver_tensors
)
```
Generate a SignatureDef proto for inclusion in a MetaGraphDef.
The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver\_tensors as inputs.
| Args |
| `receiver_tensors` | a `Tensor`, or a dict of string to `Tensor`, specifying input nodes that will be fed. |
tensorflow tf.estimator.export.ServingInputReceiver tf.estimator.export.ServingInputReceiver
========================================
[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/export/export.py#L109-L162) |
A return type for a serving\_input\_receiver\_fn.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.estimator.export.ServingInputReceiver`](https://www.tensorflow.org/api_docs/python/tf/estimator/export/ServingInputReceiver)
```
tf.estimator.export.ServingInputReceiver(
features, receiver_tensors, receiver_tensors_alternatives=None
)
```
| Attributes |
| `features` | A `Tensor`, `SparseTensor`, or dict of string or int to `Tensor` or `SparseTensor`, specifying the features to be passed to the model. Note: if `features` passed is not a dict, it will be wrapped in a dict with a single entry, using 'feature' as the key. Consequently, the model must accept a feature dict of the form {'feature': tensor}. You may use `TensorServingInputReceiver` if you want the tensor to be passed as is. |
| `receiver_tensors` | A `Tensor`, `SparseTensor`, or dict of string to `Tensor` or `SparseTensor`, specifying input nodes where this receiver expects to be fed by default. Typically, this is a single placeholder expecting serialized `tf.Example` protos. |
| `receiver_tensors_alternatives` | a dict of string to additional groups of receiver tensors, each of which may be a `Tensor`, `SparseTensor`, or dict of string to `Tensor` or`SparseTensor`. These named receiver tensor alternatives generate additional serving signatures, which may be used to feed inputs at different points within the input receiver subgraph. A typical usage is to allow feeding raw feature `Tensor`s *downstream* of the tf.parse\_example() op. Defaults to None. |
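A minimal sketch in graph mode, parsing serialized `tf.Example` protos; the feature spec is illustrative:
```
def serving_input_receiver_fn():
  # A placeholder fed with serialized tf.Example protos at serving time.
  serialized = tf.compat.v1.placeholder(tf.string, shape=[None], name='examples')
  features = tf.io.parse_example(
      serialized, {'x': tf.io.FixedLenFeature([3], tf.float32)})
  return tf.estimator.export.ServingInputReceiver(
      features, receiver_tensors={'examples': serialized})
```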
tensorflow tf.io.RaggedFeature tf.io.RaggedFeature
===================
Configuration for passing a RaggedTensor input feature.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.RaggedFeature`](https://www.tensorflow.org/api_docs/python/tf/io/RaggedFeature)
```
tf.io.RaggedFeature(
dtype,
value_key=None,
partitions=(),
row_splits_dtype=tf.dtypes.int32,
validate=False
)
```
`value_key` specifies the feature key for a variable-length list of values; and `partitions` specifies zero or more feature keys for partitioning those values into higher dimensions. Each element of `partitions` must be one of the following:
* `tf.io.RaggedFeature.RowSplits(key: string)`
* `tf.io.RaggedFeature.RowLengths(key: string)`
* `tf.io.RaggedFeature.RowStarts(key: string)`
* `tf.io.RaggedFeature.RowLimits(key: string)`
* `tf.io.RaggedFeature.ValueRowIds(key: string)`
* `tf.io.RaggedFeature.UniformRowLength(length: int)`.
Where `key` is a feature key whose values are used to partition the values. Partitions are listed from outermost to innermost.
* If `len(partitions) == 0` (the default), then:
+ A feature from a single `tf.Example` is parsed into a 1D [`tf.Tensor`](../tensor).
+ A feature from a batch of `tf.Example`s is parsed into a 2D [`tf.RaggedTensor`](../raggedtensor), where the outer dimension is the batch dimension, and the inner (ragged) dimension is the feature length in each example.
* If `len(partitions) == 1`, then:
+ A feature from a single `tf.Example` is parsed into a 2D [`tf.RaggedTensor`](../raggedtensor), where the values taken from the `value_key` are separated into rows using the partition key.
+ A feature from a batch of `tf.Example`s is parsed into a 3D [`tf.RaggedTensor`](../raggedtensor), where the outer dimension is the batch dimension, the two inner dimensions are formed by separating the `value_key` values from each example into rows using that example's partition key.
* If `len(partitions) > 1`, then:
+ A feature from a single `tf.Example` is parsed into a [`tf.RaggedTensor`](../raggedtensor) whose rank is `len(partitions)+1`, and whose ragged\_rank is `len(partitions)`.
+ A feature from a batch of `tf.Example`s is parsed into a [`tf.RaggedTensor`](../raggedtensor) whose rank is `len(partitions)+2` and whose ragged\_rank is `len(partitions)+1`, where the outer dimension is the batch dimension.
There is one exception: if the final (i.e., innermost) element(s) of `partitions` are `UniformRowLength`s, then the values are simply reshaped (as a higher-dimensional [`tf.Tensor`](../tensor)), rather than being wrapped in a [`tf.RaggedTensor`](../raggedtensor).
#### Examples
```
import google.protobuf.text_format as pbtext
example_batch = [
pbtext.Merge(r'''
features {
feature {key: "v" value {int64_list {value: [3, 1, 4, 1, 5, 9]} } }
feature {key: "s1" value {int64_list {value: [0, 2, 3, 3, 6]} } }
feature {key: "s2" value {int64_list {value: [0, 2, 3, 4]} } }
}''', tf.train.Example()).SerializeToString(),
pbtext.Merge(r'''
features {
feature {key: "v" value {int64_list {value: [2, 7, 1, 8, 2, 8, 1]} } }
feature {key: "s1" value {int64_list {value: [0, 3, 4, 5, 7]} } }
feature {key: "s2" value {int64_list {value: [0, 1, 1, 4]} } }
}''', tf.train.Example()).SerializeToString()]
```
```
features = {
# Zero partitions: returns 1D tf.Tensor for each Example.
'f1': tf.io.RaggedFeature(value_key="v", dtype=tf.int64),
# One partition: returns 2D tf.RaggedTensor for each Example.
'f2': tf.io.RaggedFeature(value_key="v", dtype=tf.int64, partitions=[
tf.io.RaggedFeature.RowSplits("s1")]),
# Two partitions: returns 3D tf.RaggedTensor for each Example.
'f3': tf.io.RaggedFeature(value_key="v", dtype=tf.int64, partitions=[
tf.io.RaggedFeature.RowSplits("s2"),
tf.io.RaggedFeature.RowSplits("s1")])
}
```
```
feature_dict = tf.io.parse_single_example(example_batch[0], features)
for (name, val) in sorted(feature_dict.items()):
print('%s: %s' % (name, val))
f1: tf.Tensor([3 1 4 1 5 9], shape=(6,), dtype=int64)
f2: <tf.RaggedTensor [[3, 1], [4], [], [1, 5, 9]]>
f3: <tf.RaggedTensor [[[3, 1], [4]], [[]], [[1, 5, 9]]]>
```
```
feature_dict = tf.io.parse_example(example_batch, features)
for (name, val) in sorted(feature_dict.items()):
print('%s: %s' % (name, val))
f1: <tf.RaggedTensor [[3, 1, 4, 1, 5, 9],
[2, 7, 1, 8, 2, 8, 1]]>
f2: <tf.RaggedTensor [[[3, 1], [4], [], [1, 5, 9]],
[[2, 7, 1], [8], [2], [8, 1]]]>
f3: <tf.RaggedTensor [[[[3, 1], [4]], [[]], [[1, 5, 9]]],
[[[2, 7, 1]], [], [[8], [2], [8, 1]]]]>
```
#### Fields:
* **`dtype`**: Data type of the `RaggedTensor`. Must be one of: [`tf.dtypes.int64`](../dtypes#int64), [`tf.dtypes.float32`](../dtypes#float32), [`tf.dtypes.string`](../dtypes#string).
* **`value_key`**: (Optional.) Key for a `Feature` in the input `Example`, whose parsed `Tensor` will be the resulting [`RaggedTensor.flat_values`](../raggedtensor#flat_values). If not specified, then it defaults to the key for this `RaggedFeature`.
* **`partitions`**: (Optional.) A list of objects specifying the row-partitioning tensors (from outermost to innermost). Each entry in this list must be one of:
+ `tf.io.RaggedFeature.RowSplits(key: string)`
+ `tf.io.RaggedFeature.RowLengths(key: string)`
+ `tf.io.RaggedFeature.RowStarts(key: string)`
+ `tf.io.RaggedFeature.RowLimits(key: string)`
+ `tf.io.RaggedFeature.ValueRowIds(key: string)`
+ `tf.io.RaggedFeature.UniformRowLength(length: int)`. Where `key` is a key for a `Feature` in the input `Example`, whose parsed `Tensor` will be the resulting row-partitioning tensor.
* **`row_splits_dtype`**: (Optional.) Data type for the row-partitioning tensor(s). One of `int32` or `int64`. Defaults to `int32`.
* **`validate`**: (Optional.) Boolean indicating whether or not to validate that the input values form a valid RaggedTensor. Defaults to `False`.
| Attributes |
| `dtype` | A `namedtuple` alias for field number 0 |
| `value_key` | A `namedtuple` alias for field number 1 |
| `partitions` | A `namedtuple` alias for field number 2 |
| `row_splits_dtype` | A `namedtuple` alias for field number 3 |
| `validate` | A `namedtuple` alias for field number 4 |
Child Classes
-------------
[`class RowLengths`](raggedfeature/rowlengths)
[`class RowLimits`](raggedfeature/rowlimits)
[`class RowSplits`](raggedfeature/rowsplits)
[`class RowStarts`](raggedfeature/rowstarts)
[`class UniformRowLength`](raggedfeature/uniformrowlength)
[`class ValueRowIds`](raggedfeature/valuerowids)
tensorflow tf.io.encode_jpeg tf.io.encode\_jpeg
==================
JPEG-encode an image.
#### View aliases
**Main aliases**
[`tf.image.encode_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/encode_jpeg)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.encode_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/encode_jpeg), [`tf.compat.v1.io.encode_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/encode_jpeg)
```
tf.io.encode_jpeg(
image,
format='',
quality=95,
progressive=False,
optimize_size=False,
chroma_downsampling=True,
density_unit='in',
x_density=300,
y_density=300,
xmp_metadata='',
name=None
)
```
`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.
The attr `format` can be used to override the color format of the encoded output. Values can be:
* `''`: Use a default format based on the number of channels in the image.
* `grayscale`: Output a grayscale JPEG image. The `channels` dimension of `image` must be 1.
* `rgb`: Output an RGB JPEG image. The `channels` dimension of `image` must be 3.
If `format` is not specified or is the empty string, a default format is picked in function of the number of channels in `image`:
* 1: Output a grayscale image.
* 3: Output an RGB image.
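For illustration, a minimal sketch of encoding a placeholder RGB image and writing it out (the path is hypothetical):

```
img = tf.zeros([64, 64, 3], dtype=tf.uint8)    # placeholder RGB image
jpeg_bytes = tf.io.encode_jpeg(img, quality=90)
tf.io.write_file('/tmp/demo.jpg', jpeg_bytes)  # hypothetical path
```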
| Args |
| `image` | A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`. |
| `format` | An optional `string` from: `"", "grayscale", "rgb"`. Defaults to `""`. Per pixel image format. |
| `quality` | An optional `int`. Defaults to `95`. Quality of the compression from 0 to 100 (higher is better and slower). |
| `progressive` | An optional `bool`. Defaults to `False`. If True, create a JPEG that loads progressively (coarse to fine). |
| `optimize_size` | An optional `bool`. Defaults to `False`. If True, spend CPU/RAM to reduce size with no quality change. |
| `chroma_downsampling` | An optional `bool`. Defaults to `True`. See <http://en.wikipedia.org/wiki/Chroma_subsampling> |
| `density_unit` | An optional `string` from: `"in", "cm"`. Defaults to `"in"`. Unit used to specify `x_density` and `y_density`: pixels per inch (`'in'`) or centimeter (`'cm'`). |
| `x_density` | An optional `int`. Defaults to `300`. Horizontal pixels per density unit. |
| `y_density` | An optional `int`. Defaults to `300`. Vertical pixels per density unit. |
| `xmp_metadata` | An optional `string`. Defaults to `""`. If not empty, embed this XMP metadata in the image header. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.io.VarLenFeature tf.io.VarLenFeature
===================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_config.py#L44-L50) |
Configuration for parsing a variable-length input feature.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.VarLenFeature`](https://www.tensorflow.org/api_docs/python/tf/io/VarLenFeature), [`tf.compat.v1.io.VarLenFeature`](https://www.tensorflow.org/api_docs/python/tf/io/VarLenFeature)
```
tf.io.VarLenFeature(
dtype
)
```
#### Fields:
* **`dtype`**: Data type of input.
| Attributes |
| `dtype` | A `namedtuple` alias for field number 0 |
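A minimal sketch of how a `VarLenFeature` parses into a `SparseTensor` (the feature key `'v'` is illustrative):

```
example = tf.train.Example(features=tf.train.Features(feature={
    'v': tf.train.Feature(int64_list=tf.train.Int64List(value=[3, 1, 4]))}))
parsed = tf.io.parse_single_example(
    example.SerializeToString(), {'v': tf.io.VarLenFeature(tf.int64)})
parsed['v']  # SparseTensor with values [3, 1, 4]
```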
tensorflow tf.io.decode_proto tf.io.decode\_proto
===================
The op extracts fields from a serialized protocol buffers message into tensors.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.decode_proto`](https://www.tensorflow.org/api_docs/python/tf/io/decode_proto)
```
tf.io.decode_proto(
bytes,
message_type,
field_names,
output_types,
descriptor_source='local://',
message_format='binary',
sanitize=False,
name=None
)
```
>
> **Note:** This API is designed for orthogonality rather than human-friendliness. It can be used to parse input protos by hand, but it is intended for use in generated code.
>
The `decode_proto` op extracts fields from a serialized protocol buffers message into tensors. The fields in `field_names` are decoded and converted to the corresponding `output_types` if possible.
A `message_type` name must be provided to give context for the field names. The actual message descriptor can be looked up either in the linked-in descriptor pool or a filename provided by the caller using the `descriptor_source` attribute.
Each output tensor is a dense tensor. This means that it is padded to hold the largest number of repeated elements seen in the input minibatch. (The shape is also padded by one to prevent zero-sized dimensions). The actual repeat counts for each example in the minibatch can be found in the `sizes` output. In many cases the output of `decode_proto` is fed immediately into tf.squeeze if missing values are not a concern. When using tf.squeeze, always pass the squeeze dimension explicitly to avoid surprises.
For the most part, the mapping between Proto field types and TensorFlow dtypes is straightforward. However, there are a few special cases:
* A proto field that contains a submessage or group can only be converted to `DT_STRING` (the serialized submessage). This is to reduce the complexity of the API. The resulting string can be used as input to another instance of the decode\_proto op.
* TensorFlow lacks support for unsigned integers. The ops represent uint64 types as a `DT_INT64` with the same twos-complement bit pattern (the obvious way). Unsigned int32 values can be represented exactly by specifying type `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in the `output_types` attribute.
* `map` fields are not directly decoded. They are treated as `repeated` fields, of the appropriate entry type. The proto-compiler defines entry types for each map field. The type-name is the field name, converted to "CamelCase" with "Entry" appended. The [`tf.train.Features.FeatureEntry`](../train/features/featureentry) message is an example of one of these implicit `Entry` types.
* `enum` fields should be read as int32.
Both binary and text proto serializations are supported, and can be chosen using the `format` attribute.
The `descriptor_source` attribute selects the source of protocol descriptors to consult when looking up `message_type`. This may be:
* An empty string or "local://", in which case protocol descriptors are created for C++ (not Python) proto definitions linked to the binary.
* A file, in which case protocol descriptors are created from the file, which is expected to contain a `FileDescriptorSet` serialized as a string. NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out` and `--include_imports` options to the protocol compiler `protoc`.
* A "bytes://", in which protocol descriptors are created from `<bytes>`, which is expected to be a `FileDescriptorSet` serialized as a string.
#### Here is an example:
The internal `Summary.Value` proto contains a `oneof {float simple_value; Image image; ...}`.
```
from google.protobuf import text_format
# A Summary.Value contains: oneof {float simple_value; Image image}
values = [
"simple_value: 2.2",
"simple_value: 1.2",
"image { height: 128 width: 512 }",
"image { height: 256 width: 256 }",]
values = [
text_format.Parse(v, tf.compat.v1.Summary.Value()).SerializeToString()
for v in values]
```
The following can decode both fields from the serialized strings:
```
sizes, [simple_value, image] = tf.io.decode_proto(
values,
tf.compat.v1.Summary.Value.DESCRIPTOR.full_name,
field_names=['simple_value', 'image'],
output_types=[tf.float32, tf.string])
```
The `sizes` output has the same shape as the input, with an additional axis across the fields that were decoded. Here the first column of `sizes` is the size of the decoded `simple_value` field:
```
print(sizes)
tf.Tensor(
[[1 0]
[1 0]
[0 1]
[0 1]], shape=(4, 2), dtype=int32)
```
The result tensors each have one more index than the input byte-strings. The valid elements of each result tensor are indicated by the appropriate column of `sizes`. The invalid elements are padded with a default value:
```
print(simple_value)
tf.Tensor(
[[2.2]
[1.2]
[0. ]
[0. ]], shape=(4, 1), dtype=float32)
```
Nested protos are extracted as string tensors:
```
print(image.dtype)
<dtype: 'string'>
print(image.shape.as_list())
[4, 1]
```
To convert to a [`tf.RaggedTensor`](../raggedtensor) representation use:
```
tf.RaggedTensor.from_tensor(simple_value, lengths=sizes[:, 0]).to_list()
[[2.2], [1.2], [], []]
```
| Args |
| `bytes` | A `Tensor` of type `string`. Tensor of serialized protos with shape `batch_shape`. |
| `message_type` | A `string`. Name of the proto message type to decode. |
| `field_names` | A list of `strings`. List of strings containing proto field names. An extension field can be decoded by using its full name, e.g. EXT\_PACKAGE.EXT\_FIELD\_NAME. |
| `output_types` | A list of `tf.DTypes`. List of TF types to use for the respective field in field\_names. |
| `descriptor_source` | An optional `string`. Defaults to `"local://"`. Either the special value `local://` or a path to a file containing a serialized `FileDescriptorSet`. |
| `message_format` | An optional `string`. Defaults to `"binary"`. Either `binary` or [`text`](https://www.tensorflow.org/text/api_docs/python/text). |
| `sanitize` | An optional `bool`. Defaults to `False`. Whether to sanitize the result or not. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (sizes, values). |
| `sizes` | A `Tensor` of type `int32`. |
| `values` | A list of `Tensor` objects of type `output_types`. |
tensorflow tf.io.decode_png tf.io.decode\_png
=================
Decode a PNG-encoded image to a uint8 or uint16 tensor.
#### View aliases
**Main aliases**
[`tf.image.decode_png`](https://www.tensorflow.org/api_docs/python/tf/io/decode_png)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.decode_png`](https://www.tensorflow.org/api_docs/python/tf/io/decode_png), [`tf.compat.v1.io.decode_png`](https://www.tensorflow.org/api_docs/python/tf/io/decode_png)
```
tf.io.decode_png(
contents,
channels=0,
dtype=tf.dtypes.uint8,
name=None
)
```
The attr `channels` indicates the desired number of color channels for the decoded image.
#### Accepted values are:
* 0: Use the number of channels in the PNG-encoded image.
* 1: output a grayscale image.
* 3: output an RGB image.
* 4: output an RGBA image.
If needed, the PNG-encoded image is transformed to match the requested number of color channels.
This op also supports decoding JPEGs and non-animated GIFs since the interface is the same, though it is cleaner to use [`tf.io.decode_image`](decode_image).
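For illustration (the path is hypothetical):

```
raw = tf.io.read_file('/tmp/image.png')  # hypothetical path
img = tf.io.decode_png(raw, channels=3)  # force an RGB output
```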
| Args |
| `contents` | A `Tensor` of type `string`. 0-D. The PNG-encoded image. |
| `channels` | An optional `int`. Defaults to `0`. Number of color channels for the decoded image. |
| `dtype` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.uint8, tf.uint16`. Defaults to [`tf.uint8`](../../tf#uint8). |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `dtype`. |
tensorflow tf.io.match_filenames_once tf.io.match\_filenames\_once
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/input.py#L53-L73) |
Save the list of files matching pattern, so it is only computed once.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.match_filenames_once`](https://www.tensorflow.org/api_docs/python/tf/io/match_filenames_once), [`tf.compat.v1.train.match_filenames_once`](https://www.tensorflow.org/api_docs/python/tf/io/match_filenames_once)
```
tf.io.match_filenames_once(
pattern, name=None
)
```
>
> **Note:** The order of the files returned is deterministic.
>
| Args |
| `pattern` | A file pattern (glob), or 1D tensor of file patterns. |
| `name` | A name for the operations (optional). |
| Returns |
| A variable that is initialized to the list of files matching the pattern(s). |
tensorflow tf.io.matching_files tf.io.matching\_files
=====================
Returns the set of files matching one or more glob patterns.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.matching_files`](https://www.tensorflow.org/api_docs/python/tf/io/matching_files), [`tf.compat.v1.matching_files`](https://www.tensorflow.org/api_docs/python/tf/io/matching_files)
```
tf.io.matching_files(
pattern, name=None
)
```
Note that this routine only supports wildcard characters in the basename portion of the pattern, not in the directory portion. Note also that the order of filenames returned is deterministic.
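A minimal sketch (the pattern is hypothetical):

```
tf.io.matching_files('/tmp/*.txt')  # 1-D string tensor of matching file paths
```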
| Args |
| `pattern` | A `Tensor` of type `string`. Shell wildcard pattern(s). Scalar or vector of type string. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.io.encode_proto tf.io.encode\_proto
===================
The op serializes protobuf messages provided in the input tensors.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.encode_proto`](https://www.tensorflow.org/api_docs/python/tf/io/encode_proto)
```
tf.io.encode_proto(
sizes,
values,
field_names,
message_type,
descriptor_source='local://',
name=None
)
```
The types of the tensors in `values` must match the schema for the fields specified in `field_names`. All the tensors in `values` must have a common shape prefix, *batch\_shape*.
The `sizes` tensor specifies repeat counts for each field. The repeat count (last dimension) of each tensor in `values` must be greater than or equal to the corresponding repeat count in `sizes`.
A `message_type` name must be provided to give context for the field names. The actual message descriptor can be looked up either in the linked-in descriptor pool or a filename provided by the caller using the `descriptor_source` attribute.
For the most part, the mapping between Proto field types and TensorFlow dtypes is straightforward. However, there are a few special cases:
* A proto field that contains a submessage or group can only be converted to `DT_STRING` (the serialized submessage). This is to reduce the complexity of the API. The resulting string can be used as input to another instance of the decode\_proto op.
* TensorFlow lacks support for unsigned integers. The ops represent uint64 types as a `DT_INT64` with the same twos-complement bit pattern (the obvious way). Unsigned int32 values can be represented exactly by specifying type `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in the `output_types` attribute.
The `descriptor_source` attribute selects the source of protocol descriptors to consult when looking up `message_type`. This may be:
* An empty string or "local://", in which case protocol descriptors are created for C++ (not Python) proto definitions linked to the binary.
* A file, in which case protocol descriptors are created from the file, which is expected to contain a `FileDescriptorSet` serialized as a string. NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out` and `--include_imports` options to the protocol compiler `protoc`.
* A "bytes://", in which protocol descriptors are created from `<bytes>`, which is expected to be a `FileDescriptorSet` serialized as a string.
| Args |
| `sizes` | A `Tensor` of type `int32`. Tensor of int32 with shape `[batch_shape, len(field_names)]`. |
| `values` | A list of `Tensor` objects. List of tensors containing values for the corresponding field. |
| `field_names` | A list of `strings`. List of strings containing proto field names. |
| `message_type` | A `string`. Name of the proto message type to decode. |
| `descriptor_source` | An optional `string`. Defaults to `"local://"`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.io.serialize_sparse tf.io.serialize\_sparse
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2179-L2203) |
Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object.
```
tf.io.serialize_sparse(
sp_input,
out_type=tf.dtypes.string,
name=None
)
```
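For illustration:

```
sp = tf.sparse.SparseTensor(indices=[[0], [2]], values=[1, 2], dense_shape=[3])
tf.io.serialize_sparse(sp)  # 1-D string tensor holding indices, values, shape
```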
| Args |
| `sp_input` | The input `SparseTensor`. |
| `out_type` | The `dtype` to use for serialization. |
| `name` | A name prefix for the returned tensors (optional). |
| Returns |
| A 3-vector (1-D `Tensor`), with each column representing the serialized `SparseTensor`'s indices, values, and shape (respectively). |
| Raises |
| `TypeError` | If `sp_input` is not a `SparseTensor`. |
tensorflow tf.io.decode_raw tf.io.decode\_raw
=================
Convert raw bytes from input tensor into numeric tensors.
```
tf.io.decode_raw(
input_bytes, out_type, little_endian=True, fixed_length=None, name=None
)
```
Every component of the input tensor is interpreted as a sequence of bytes. These bytes are then decoded as numbers in the format specified by `out_type`.
```
tf.io.decode_raw(tf.constant("1"), tf.uint8)
<tf.Tensor: shape=(1,), dtype=uint8, numpy=array([49], dtype=uint8)>
tf.io.decode_raw(tf.constant("1,2"), tf.uint8)
<tf.Tensor: shape=(3,), dtype=uint8, numpy=array([49, 44, 50], dtype=uint8)>
```
Note that the rank of the output tensor is always one more than the input one:
```
tf.io.decode_raw(tf.constant(["1","2"]), tf.uint8).shape
TensorShape([2, 1])
tf.io.decode_raw(tf.constant([["1"],["2"]]), tf.uint8).shape
TensorShape([2, 1, 1])
```
This is because each byte in the input is converted to a new value in the output (if the output type is `uint8` or `int8`; otherwise, chunks of the input are converted to a new value):
```
tf.io.decode_raw(tf.constant("123"), tf.uint8)
<tf.Tensor: shape=(3,), dtype=uint8, numpy=array([49, 50, 51], dtype=uint8)>
tf.io.decode_raw(tf.constant("1234"), tf.uint8)
<tf.Tensor: shape=(4,), dtype=uint8, numpy=array([49, 50, 51, 52], ...
# chunked output
tf.io.decode_raw(tf.constant("12"), tf.uint16)
<tf.Tensor: shape=(1,), dtype=uint16, numpy=array([12849], dtype=uint16)>
tf.io.decode_raw(tf.constant("1234"), tf.uint16)
<tf.Tensor: shape=(2,), dtype=uint16, numpy=array([12849, 13363], ...
# int64 output
tf.io.decode_raw(tf.constant("12345678"), tf.int64)
<tf.Tensor: ... numpy=array([4050765991979987505])>
tf.io.decode_raw(tf.constant("1234567887654321"), tf.int64)
<tf.Tensor: ... numpy=array([4050765991979987505, 3544952156018063160])>
```
The operation allows specifying endianness via the `little_endian` parameter.
```
tf.io.decode_raw(tf.constant("\x0a\x0b"), tf.int16)
<tf.Tensor: shape=(1,), dtype=int16, numpy=array([2826], dtype=int16)>
hex(2826)
'0xb0a'
tf.io.decode_raw(tf.constant("\x0a\x0b"), tf.int16, little_endian=False)
<tf.Tensor: shape=(1,), dtype=int16, numpy=array([2571], dtype=int16)>
hex(2571)
'0xa0b'
```
If the elements of `input_bytes` are of different length, you must specify `fixed_length`:
```
tf.io.decode_raw(tf.constant([["1"],["23"]]), tf.uint8, fixed_length=4)
<tf.Tensor: shape=(2, 1, 4), dtype=uint8, numpy=
array([[[49, 0, 0, 0]],
[[50, 51, 0, 0]]], dtype=uint8)>
```
If the `fixed_length` value is larger than the size of the `out_type` dtype, multiple values are generated:
```
tf.io.decode_raw(tf.constant(["1212"]), tf.uint16, fixed_length=4)
<tf.Tensor: shape=(1, 2), dtype=uint16, numpy=array([[12849, 12849]], ...
```
If the input value is larger than `fixed_length`, it is truncated:
```
x=''.join([chr(1), chr(2), chr(3), chr(4)])
tf.io.decode_raw(x, tf.uint16, fixed_length=2)
<tf.Tensor: shape=(1,), dtype=uint16, numpy=array([513], dtype=uint16)>
hex(513)
'0x201'
```
If `little_endian` and `fixed_length` are specified, truncation to the fixed length occurs before endianness conversion:
```
x=''.join([chr(1), chr(2), chr(3), chr(4)])
tf.io.decode_raw(x, tf.uint16, fixed_length=2, little_endian=False)
<tf.Tensor: shape=(1,), dtype=uint16, numpy=array([258], dtype=uint16)>
hex(258)
'0x102'
```
If input values all have the same length, then specifying `fixed_length` equal to the size of the strings should not change output:
```
x = ["12345678", "87654321"]
tf.io.decode_raw(x, tf.int16)
<tf.Tensor: shape=(2, 4), dtype=int16, numpy=
array([[12849, 13363, 13877, 14391],
[14136, 13622, 13108, 12594]], dtype=int16)>
tf.io.decode_raw(x, tf.int16, fixed_length=len(x[0]))
<tf.Tensor: shape=(2, 4), dtype=int16, numpy=
array([[12849, 13363, 13877, 14391],
[14136, 13622, 13108, 12594]], dtype=int16)>
```
| Args |
| `input_bytes` | Each element of the input Tensor is converted to an array of bytes. Currently, this must be a tensor of strings (bytes), although semantically the operation should support any input. |
| `out_type` | `DType` of the output. Acceptable types are `half`, `float`, `double`, `int32`, `uint16`, `uint8`, `int16`, `int8`, `int64`. |
| `little_endian` | Whether the `input_bytes` data is in little-endian format. Data will be converted into host byte order if necessary. |
| `fixed_length` | If set, the first `fixed_length` bytes of each element will be converted. Data will be zero-padded or truncated to the specified length. `fixed_length` must be a multiple of the size of `out_type`. `fixed_length` must be specified if the elements of `input_bytes` are of variable length. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` object storing the decoded bytes. |
tensorflow Module: tf.io.gfile Module: tf.io.gfile
===================
Public API for tf.io.gfile namespace.
Classes
-------
[`class GFile`](gfile/gfile): File I/O wrappers without thread locking.
Functions
---------
[`copy(...)`](gfile/copy): Copies data from `src` to `dst`.
[`exists(...)`](gfile/exists): Determines whether a path exists or not.
[`get_registered_schemes(...)`](gfile/get_registered_schemes): Returns the currently registered filesystem schemes.
[`glob(...)`](gfile/glob): Returns a list of files that match the given pattern(s).
[`isdir(...)`](gfile/isdir): Returns whether the path is a directory or not.
[`join(...)`](gfile/join): Join one or more path components intelligently.
[`listdir(...)`](gfile/listdir): Returns a list of entries contained within a directory.
[`makedirs(...)`](gfile/makedirs): Creates a directory and all parent/intermediate directories.
[`mkdir(...)`](gfile/mkdir): Creates a directory with the name given by `path`.
[`remove(...)`](gfile/remove): Deletes the path located at 'path'.
[`rename(...)`](gfile/rename): Rename or move a file / directory.
[`rmtree(...)`](gfile/rmtree): Deletes everything under path recursively.
[`stat(...)`](gfile/stat): Returns file statistics for a given path.
[`walk(...)`](gfile/walk): Recursive directory tree generator for directories.
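A minimal usage sketch (paths are hypothetical):

```
tf.io.gfile.makedirs('/tmp/gfile_demo')             # hypothetical directory
with tf.io.gfile.GFile('/tmp/gfile_demo/a.txt', 'w') as f:
  f.write('hello')
tf.io.gfile.exists('/tmp/gfile_demo/a.txt')         # True
tf.io.gfile.glob('/tmp/gfile_demo/*.txt')           # ['/tmp/gfile_demo/a.txt']
```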
tensorflow tf.io.parse_single_sequence_example tf.io.parse\_single\_sequence\_example
======================================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_ops.py#L695-L806) |
Parses a single `SequenceExample` proto.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.parse_single_sequence_example`](https://www.tensorflow.org/api_docs/python/tf/io/parse_single_sequence_example), [`tf.compat.v1.parse_single_sequence_example`](https://www.tensorflow.org/api_docs/python/tf/io/parse_single_sequence_example)
```
tf.io.parse_single_sequence_example(
serialized,
context_features=None,
sequence_features=None,
example_name=None,
name=None
)
```
Parses a single serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) proto given in `serialized`.
This op parses a serialized sequence example into a tuple of dictionaries, each mapping keys to `Tensor` and `SparseTensor` objects. The first dictionary contains mappings for keys appearing in `context_features`, and the second dictionary contains mappings for keys appearing in `sequence_features`.
At least one of `context_features` and `sequence_features` must be provided and non-empty.
The `context_features` keys are associated with a `SequenceExample` as a whole, independent of time / frame. In contrast, the `sequence_features` keys provide a way to access variable-length data within the `FeatureList` section of the `SequenceExample` proto. While the shapes of `context_features` values are fixed with respect to frame, the frame dimension (the first dimension) of `sequence_features` values may vary between `SequenceExample` protos, and even between `feature_list` keys within the same `SequenceExample`.
`context_features` contains `VarLenFeature`, `RaggedFeature`, and `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenFeature` is mapped to a `Tensor`, of the specified type, shape, and default value.
`sequence_features` contains `VarLenFeature`, `RaggedFeature`, and `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type. The shape will be `(T,) + df.dense_shape` for `FixedLenSequenceFeature` `df`, where `T` is the length of the associated `FeatureList` in the `SequenceExample`. For instance, `FixedLenSequenceFeature([])` yields a 1-D `Tensor` of scalars, with static shape `[None]` and dynamic shape `[T]`, while `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 2-D `Tensor` of static shape `[None, k]` and dynamic shape `[T, k]`.
Each `SparseTensor` corresponding to `sequence_features` represents a ragged vector. Its indices are `[time, index]`, where `time` is the `FeatureList` entry and `index` is the value's index in the list of values associated with that time.
`FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature` entries with `allow_missing=True` are optional; otherwise, we will fail if that `Feature` or `FeatureList` is missing from any example in `serialized`.
`example_name` may contain a descriptive name for the corresponding serialized proto. This may be useful for debugging purposes, but it has no effect on the output. If not `None`, `example_name` must be a scalar.
Note that the batch version of this function, [`tf.io.parse_sequence_example`](parse_sequence_example), is written for better memory efficiency and will be faster on large `SequenceExample`s.
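A minimal sketch of building and parsing a `SequenceExample` (the keys `'id'` and `'tokens'` are illustrative):

```
seq = tf.train.SequenceExample(
    context=tf.train.Features(feature={
        'id': tf.train.Feature(int64_list=tf.train.Int64List(value=[7]))}),
    feature_lists=tf.train.FeatureLists(feature_list={
        'tokens': tf.train.FeatureList(feature=[
            tf.train.Feature(int64_list=tf.train.Int64List(value=[1, 2])),
            tf.train.Feature(int64_list=tf.train.Int64List(value=[3]))])}))
context, sequences = tf.io.parse_single_sequence_example(
    seq.SerializeToString(),
    context_features={'id': tf.io.FixedLenFeature([], tf.int64)},
    sequence_features={'tokens': tf.io.VarLenFeature(tf.int64)})
# context['id'] is a scalar Tensor; sequences['tokens'] is a SparseTensor
# with indices [time, index].
```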
| Args |
| `serialized` | A scalar (0-D Tensor) of type string, a single binary serialized `SequenceExample` proto. |
| `context_features` | A `dict` mapping feature keys to `FixedLenFeature` or `VarLenFeature` or `RaggedFeature` values. These features are associated with a `SequenceExample` as a whole. |
| `sequence_features` | A `dict` mapping feature keys to `FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values. These features are associated with data within the `FeatureList` section of the `SequenceExample` proto. |
| `example_name` | A scalar (0-D Tensor) of strings (optional), the name of the serialized proto. |
| `name` | A name for this operation (optional). |
| Returns |
| A tuple of two `dict`s, each mapping keys to `Tensor`s, `SparseTensor`s, and `RaggedTensor`s. The first dict contains the context key/values, and the second dict contains the feature\_list key/values. |
| Raises |
| `ValueError` | if any feature is invalid. |
tensorflow tf.io.encode_base64 tf.io.encode\_base64
====================
Encode strings into web-safe base64 format.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.encode_base64`](https://www.tensorflow.org/api_docs/python/tf/io/encode_base64), [`tf.compat.v1.io.encode_base64`](https://www.tensorflow.org/api_docs/python/tf/io/encode_base64)
```
tf.io.encode_base64(
input, pad=False, name=None
)
```
Refer to [this article](https://en.wikipedia.org/wiki/Base64) for more information on base64 format. Base64 strings may have padding with '=' at the end so that the encoded has length multiple of 4. See Padding section of the link above.
Web-safe means that the encoder uses - and \_ instead of + and /.
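For illustration:

```
enc = tf.io.encode_base64(tf.constant('hello'))  # b'aGVsbG8' (no '=' padding by default)
tf.io.decode_base64(enc)                         # b'hello'
```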
| Args |
| `input` | A `Tensor` of type `string`. Strings to be encoded. |
| `pad` | An optional `bool`. Defaults to `False`. Bool whether padding is applied at the ends. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.io.read_file tf.io.read\_file
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/io_ops.py#L96-L133) |
Reads the contents of file.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.read_file`](https://www.tensorflow.org/api_docs/python/tf/io/read_file), [`tf.compat.v1.read_file`](https://www.tensorflow.org/api_docs/python/tf/io/read_file)
```
tf.io.read_file(
filename, name=None
)
```
This operation returns a tensor with the entire contents of the input filename. It does not do any parsing, it just returns the contents as they are. Usually, this is the first step in the input pipeline.
#### Example:
```
with open("/tmp/file.txt", "w") as f:
f.write("asdf")
4
tf.io.read_file("/tmp/file.txt")
<tf.Tensor: shape=(), dtype=string, numpy=b'asdf'>
```
Example of using the op in a function to read an image, decode it and reshape the tensor containing the pixel data:
```
@tf.function
def load_image(filename):
raw = tf.io.read_file(filename)
image = tf.image.decode_png(raw, channels=3)
# the `print` executes during tracing.
print("Initial shape: ", image.shape)
image.set_shape([28, 28, 3])
print("Final shape: ", image.shape)
return image
```
| Args |
| `filename` | string. filename to read from. |
| `name` | string. Optional name for the op. |
| Returns |
| A tensor of dtype "string", with the file contents. |
tensorflow tf.io.decode_json_example tf.io.decode\_json\_example
===========================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_ops.py#L1151-L1233) |
Convert JSON-encoded Example records to binary protocol buffer strings.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.decode_json_example`](https://www.tensorflow.org/api_docs/python/tf/io/decode_json_example), [`tf.compat.v1.io.decode_json_example`](https://www.tensorflow.org/api_docs/python/tf/io/decode_json_example)
```
tf.io.decode_json_example(
json_examples, name=None
)
```
>
> **Note:** This is **not** a general purpose JSON parsing op.
>
This op converts JSON-serialized [`tf.train.Example`](../train/example) (maybe created with `json_format.MessageToJson`, following the [standard JSON mapping](https://developers.google.com/protocol-buffers/docs/proto3#json)) to a binary-serialized [`tf.train.Example`](../train/example) (equivalent to [`Example.SerializeToString()`](../train/byteslist#SerializeToString)) suitable for conversion to tensors with [`tf.io.parse_example`](parse_example).
Here is a [`tf.train.Example`](../train/example) proto:
```
example = tf.train.Example(
features=tf.train.Features(
feature={
"a": tf.train.Feature(
int64_list=tf.train.Int64List(
value=[1, 1, 3]))}))
```
Here it is converted to JSON:
```
from google.protobuf import json_format
example_json = json_format.MessageToJson(example)
print(example_json)
{
"features": {
"feature": {
"a": {
"int64List": {
"value": [
"1",
"1",
"3"
]
}
}
}
}
}
```
This op converts the above json string to a binary proto:
```
example_binary = tf.io.decode_json_example(example_json)
example_binary.numpy()
b'\n\x0f\n\r\n\x01a\x12\x08\x1a\x06\x08\x01\x08\x01\x08\x03'
```
The op works on string tensors of any shape:
```
tf.io.decode_json_example([
[example_json, example_json],
[example_json, example_json]]).shape.as_list()
[2, 2]
```
This resulting binary-string is equivalent to [`Example.SerializeToString()`](../train/byteslist#SerializeToString), and can be converted to Tensors using [`tf.io.parse_example`](parse_example) and related functions:
```
tf.io.parse_example(
serialized=[example_binary.numpy(),
example.SerializeToString()],
features = {'a': tf.io.FixedLenFeature(shape=[3], dtype=tf.int64)})
{'a': <tf.Tensor: shape=(2, 3), dtype=int64, numpy=
array([[1, 1, 3],
[1, 1, 3]])>}
```
| Args |
| `json_examples` | A string tensor containing json-serialized `tf.Example` protos. |
| `name` | A name for the op. |
| Returns |
| A string Tensor containing the binary-serialized `tf.Example` protos. |
| Raises |
| [`tf.errors.InvalidArgumentError`](../errors/invalidargumenterror) | If the JSON could not be converted to a `tf.Example`. |
tensorflow tf.io.parse_sequence_example tf.io.parse\_sequence\_example
==============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_ops.py#L451-L570) |
Parses a batch of `SequenceExample` protos.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.parse_sequence_example`](https://www.tensorflow.org/api_docs/python/tf/io/parse_sequence_example)
```
tf.io.parse_sequence_example(
serialized,
context_features=None,
sequence_features=None,
example_names=None,
name=None
)
```
Parses a vector of serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) protos given in `serialized`.
This op parses serialized sequence examples into a tuple of dictionaries, each mapping keys to `Tensor` and `SparseTensor` objects. The first dictionary contains mappings for keys appearing in `context_features`, and the second dictionary contains mappings for keys appearing in `sequence_features`.
At least one of `context_features` and `sequence_features` must be provided and non-empty.
The `context_features` keys are associated with a `SequenceExample` as a whole, independent of time / frame. In contrast, the `sequence_features` keys provide a way to access variable-length data within the `FeatureList` section of the `SequenceExample` proto. While the shapes of `context_features` values are fixed with respect to frame, the frame dimension (the first dimension) of `sequence_features` values may vary between `SequenceExample` protos, and even between `feature_list` keys within the same `SequenceExample`.
`context_features` contains `VarLenFeature`, `RaggedFeature`, and `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenFeature` is mapped to a `Tensor`, of the specified type, shape, and default value.
`sequence_features` contains `VarLenFeature`, `RaggedFeature`, and `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type. The shape will be `(B,T,) + df.dense_shape` for `FixedLenSequenceFeature` `df`, where `B` is the batch size, and `T` is the length of the associated `FeatureList` in the `SequenceExample`. For instance, `FixedLenSequenceFeature([])` yields a 2-D `Tensor` of scalars, with static shape `[None, None]` and dynamic shape `[B, T]`, while `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 3-D `Tensor` of static shape `[None, None, k]` and dynamic shape `[B, T, k]`.
Like the input, the resulting output tensors have a batch dimension. This means that the original per-example shapes of `VarLenFeature`s and `FixedLenSequenceFeature`s can be lost. To handle that situation, this op also provides dicts of shape tensors as part of the output. There is one dict for the context features, and one for the feature\_list features. Context features of type `FixedLenFeature`s will not be present, since their shapes are already known by the caller. In situations where the input `FixedLenSequenceFeature`s are of different sequence lengths across examples, the shorter examples will be padded with default datatype values: 0 for numeric types, and the empty string for string types.
Each `SparseTensor` corresponding to `sequence_features` represents a ragged vector. Its indices are `[time, index]`, where `time` is the `FeatureList` entry and `index` is the value's index in the list of values associated with that time.
`FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature` entries with `allow_missing=True` are optional; otherwise, we will fail if that `Feature` or `FeatureList` is missing from any example in `serialized`.
`example_name` may contain a descriptive name for the corresponding serialized proto. This may be useful for debugging purposes, but it has no effect on the output. If not `None`, `example_name` must be a scalar.
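A minimal batch sketch (the key `'tokens'` is illustrative):

```
seq = tf.train.SequenceExample(
    feature_lists=tf.train.FeatureLists(feature_list={
        'tokens': tf.train.FeatureList(feature=[
            tf.train.Feature(int64_list=tf.train.Int64List(value=[1, 2])),
            tf.train.Feature(int64_list=tf.train.Int64List(value=[3]))])}))
context, sequences, lengths = tf.io.parse_sequence_example(
    [seq.SerializeToString(), seq.SerializeToString()],
    sequence_features={'tokens': tf.io.VarLenFeature(tf.int64)})
# sequences['tokens'] is a SparseTensor with a leading batch dimension;
# `lengths` holds lengths only for dense (FixedLenSequenceFeature) keys.
```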
| Args |
| `serialized` | A vector (1-D Tensor) of type string containing binary serialized `SequenceExample` protos. |
| `context_features` | A `dict` mapping feature keys to `FixedLenFeature` or `VarLenFeature` or `RaggedFeature` values. These features are associated with a `SequenceExample` as a whole. |
| `sequence_features` | A `dict` mapping feature keys to `FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values. These features are associated with data within the `FeatureList` section of the `SequenceExample` proto. |
| `example_names` | A vector (1-D Tensor) of strings (optional), the name of the serialized protos. |
| `name` | A name for this operation (optional). |
| Returns |
| A tuple of three `dict`s, each mapping keys to `Tensor`s, `SparseTensor`s, and `RaggedTensor`s. The first dict contains the context key/values, the second dict contains the feature\_list key/values, and the final dict contains the lengths of any dense feature\_list features. |
| Raises |
| `ValueError` | if any feature is invalid. |
tensorflow tf.io.decode_csv tf.io.decode\_csv
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_ops.py#L1072-L1129) |
Convert CSV records to tensors. Each column maps to one tensor.
```
tf.io.decode_csv(
records,
record_defaults,
field_delim=',',
use_quote_delim=True,
na_value='',
select_cols=None,
name=None
)
```
RFC 4180 format is expected for the CSV records (<https://tools.ietf.org/html/rfc4180>). Note that leading and trailing spaces are allowed in int and float fields.
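A minimal sketch (the column layout is illustrative):

```
records = tf.constant(['1,2.5,foo', '4,5.0,bar'])
ids, scores, labels = tf.io.decode_csv(
    records, record_defaults=[[0], [0.0], ['']])
# ids:    [1 4]            (int32)
# scores: [2.5 5.]         (float32)
# labels: [b'foo' b'bar']  (string)
```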
| Args |
| `records` | A `Tensor` of type `string`. Each string is a record/row in the csv and all records should have the same format. |
| `record_defaults` | A list of `Tensor` objects with specific types. Acceptable types are `float32`, `float64`, `int32`, `int64`, `string`. One tensor per column of the input record, with either a scalar default value for that column or an empty vector if the column is required. |
| `field_delim` | An optional `string`. Defaults to `","`. char delimiter to separate fields in a record. |
| `use_quote_delim` | An optional `bool`. Defaults to `True`. If false, treats double quotation marks as regular characters inside of the string fields (ignoring RFC 4180, Section 2, Bullet 5). |
| `na_value` | Additional string to recognize as NA/NaN. |
| `select_cols` | Optional sorted list of column indices to select. If specified, only this subset of columns will be parsed and returned. |
| `name` | A name for the operation (optional). |
| Returns |
| A list of `Tensor` objects. Has the same type as `record_defaults`. Each tensor will have the same shape as records. |
| Raises |
| `ValueError` | If any of the arguments is malformed. |
tensorflow tf.io.extract_jpeg_shape tf.io.extract\_jpeg\_shape
==========================
Extract the shape information of a JPEG-encoded image.
#### View aliases
**Main aliases**
[`tf.image.extract_jpeg_shape`](https://www.tensorflow.org/api_docs/python/tf/io/extract_jpeg_shape)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.extract_jpeg_shape`](https://www.tensorflow.org/api_docs/python/tf/io/extract_jpeg_shape), [`tf.compat.v1.io.extract_jpeg_shape`](https://www.tensorflow.org/api_docs/python/tf/io/extract_jpeg_shape)
```
tf.io.extract_jpeg_shape(
contents,
output_type=tf.dtypes.int32,
name=None
)
```
This op only parses the image header, so it is much faster than DecodeJpeg.
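For illustration (the path is hypothetical):

```
raw = tf.io.read_file('/tmp/photo.jpg')  # hypothetical path
tf.io.extract_jpeg_shape(raw)            # [height, width, channels]
```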
| Args |
| `contents` | A `Tensor` of type `string`. 0-D. The JPEG-encoded image. |
| `output_type` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.int32, tf.int64`. Defaults to [`tf.int32`](../../tf#int32). (Optional) The output type of the operation (int32 or int64). Defaults to int32. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `output_type`. |
tensorflow tf.io.serialize_many_sparse tf.io.serialize\_many\_sparse
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2236-L2269) |
Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`.
```
tf.io.serialize_many_sparse(
sp_input,
out_type=tf.dtypes.string,
name=None
)
```
The `SparseTensor` must have rank `R` greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the `SparseTensor` must be sorted in increasing order of this first dimension. The serialized `SparseTensor` objects going into each row of the output `Tensor` will have rank `R-1`.
The minibatch size `N` is extracted from `sparse_shape[0]`.
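For illustration:

```
sp = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[2, 4])
tf.io.serialize_many_sparse(sp)  # [2, 3] string matrix, one row per batch entry
```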
| Args |
| `sp_input` | The input rank `R` `SparseTensor`. |
| `out_type` | The `dtype` to use for serialization. |
| `name` | A name prefix for the returned tensors (optional). |
| Returns |
| A matrix (2-D `Tensor`) with `N` rows and `3` columns. Each column represents serialized `SparseTensor`'s indices, values, and shape (respectively). |
| Raises |
| `TypeError` | If `sp_input` is not a `SparseTensor`. |
tensorflow tf.io.decode_base64 tf.io.decode\_base64
====================
Decode web-safe base64-encoded strings.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.decode_base64`](https://www.tensorflow.org/api_docs/python/tf/io/decode_base64), [`tf.compat.v1.io.decode_base64`](https://www.tensorflow.org/api_docs/python/tf/io/decode_base64)
```
tf.io.decode_base64(
input, name=None
)
```
Input may or may not have padding at the end. See [EncodeBase64](https://www.tensorflow.org/api_docs/python/tf/io/encode_base64) for padding. Web-safe means that input must use - and \_ instead of + and /.
| Args |
| `input` | A `Tensor` of type `string`. Base64 strings to decode. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.io.deserialize_many_sparse tf.io.deserialize\_many\_sparse
===============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2338-L2409) |
Deserialize and concatenate `SparseTensors` from a serialized minibatch.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.deserialize_many_sparse`](https://www.tensorflow.org/api_docs/python/tf/io/deserialize_many_sparse), [`tf.compat.v1.io.deserialize_many_sparse`](https://www.tensorflow.org/api_docs/python/tf/io/deserialize_many_sparse)
```
tf.io.deserialize_many_sparse(
serialized_sparse, dtype, rank=None, name=None
)
```
The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where `N` is the minibatch size and the rows correspond to packed outputs of `serialize_sparse`. The ranks of the original `SparseTensor` objects must all match. When the final `SparseTensor` is created, it has rank one higher than the ranks of the incoming `SparseTensor` objects (they have been concatenated along a new row dimension).
The output `SparseTensor` object's shape values for all dimensions but the first are the max across the input `SparseTensor` objects' shape values for the corresponding dimensions. Its first shape value is `N`, the minibatch size.
The input `SparseTensor` objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run [`sparse.reorder`](../sparse/reorder) to restore index ordering.
For example, if the serialized input is a `[2, 3]` matrix representing two original `SparseTensor` objects:
```
index = [ 0]
[10]
[20]
values = [1, 2, 3]
shape = [50]
```
and
```
index = [ 2]
[10]
values = [4, 5]
shape = [30]
```
then the final deserialized `SparseTensor` will be:
```
index = [0 0]
[0 10]
[0 20]
[1 2]
[1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]
```
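A minimal round-trip sketch with [`tf.io.serialize_many_sparse`](serialize_many_sparse):

```
sp = tf.sparse.SparseTensor(
    indices=[[0, 1], [1, 0]], values=[10, 20], dense_shape=[2, 3])
serialized = tf.io.serialize_many_sparse(sp)  # [2, 3] string matrix
# Each serialized row has rank 1, so pass rank=1 here.
tf.io.deserialize_many_sparse(serialized, dtype=tf.int32, rank=1)
```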
| Args |
| `serialized_sparse` | 2-D `Tensor` of type `string` of shape `[N, 3]`. The serialized and packed `SparseTensor` objects. |
| `dtype` | The `dtype` of the serialized `SparseTensor` objects. |
| `rank` | (optional) Python int, the rank of the `SparseTensor` objects. |
| `name` | A name prefix for the returned tensors (optional) |
| Returns |
| A `SparseTensor` representing the deserialized `SparseTensor`s, concatenated along the `SparseTensor`s' first dimension. All of the serialized `SparseTensor`s must have had the same rank and type. |
tensorflow tf.io.FixedLenSequenceFeature tf.io.FixedLenSequenceFeature
=============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_config.py#L319-L349) |
Configuration for parsing a variable-length input feature into a `Tensor`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.FixedLenSequenceFeature`](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenSequenceFeature), [`tf.compat.v1.io.FixedLenSequenceFeature`](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenSequenceFeature)
```
tf.io.FixedLenSequenceFeature(
shape, dtype, allow_missing=False, default_value=None
)
```
The resulting `Tensor` of parsing a single `SequenceExample` or `Example` has a static `shape` of `[None] + shape` and the specified `dtype`. The resulting `Tensor` of parsing a `batch_size` many `Example`s has a static `shape` of `[batch_size, None] + shape` and the specified `dtype`. The entries in the `batch` from different `Examples` will be padded with `default_value` to the maximum length present in the `batch`.
To treat a sparse input as dense, provide `allow_missing=True`; otherwise, the parse functions will fail on any examples missing this feature.
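A minimal sketch of parsing a batch with `allow_missing=True` (the key `'v'` is illustrative):

```
example = tf.train.Example(features=tf.train.Features(feature={
    'v': tf.train.Feature(int64_list=tf.train.Int64List(value=[1, 2, 3]))}))
parsed = tf.io.parse_example(
    [example.SerializeToString()],
    {'v': tf.io.FixedLenSequenceFeature([], tf.int64, allow_missing=True)})
parsed['v']  # dense Tensor of shape [1, 3]
```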
#### Fields:
* **`shape`**: Shape of input data for dimension 2 and higher. First dimension is of variable length `None`.
* **`dtype`**: Data type of input.
* **`allow_missing`**: Whether to allow this feature to be missing from a feature list item. Is available only for parsing `SequenceExample` not for parsing `Examples`.
* **`default_value`**: Scalar value to be used to pad multiple `Example`s to their maximum length. Irrelevant for parsing a single `Example` or `SequenceExample`. Defaults to "" for dtype string and 0 otherwise (optional).
| Attributes |
| `shape` | A `namedtuple` alias for field number 0 |
| `dtype` | A `namedtuple` alias for field number 1 |
| `allow_missing` | A `namedtuple` alias for field number 2 |
| `default_value` | A `namedtuple` alias for field number 3 |
tensorflow tf.io.parse_tensor tf.io.parse\_tensor
===================
Transforms a serialized tensorflow.TensorProto proto into a Tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.parse_tensor`](https://www.tensorflow.org/api_docs/python/tf/io/parse_tensor), [`tf.compat.v1.parse_tensor`](https://www.tensorflow.org/api_docs/python/tf/io/parse_tensor)
```
tf.io.parse_tensor(
serialized, out_type, name=None
)
```
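A minimal round-trip sketch with [`tf.io.serialize_tensor`](serialize_tensor):

```
t = tf.constant([1, 2, 3])
serialized = tf.io.serialize_tensor(t)
tf.io.parse_tensor(serialized, out_type=tf.int32)  # [1 2 3]
```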
| Args |
| `serialized` | A `Tensor` of type `string`. A scalar string containing a serialized TensorProto proto. |
| `out_type` | A [`tf.DType`](../dtypes/dtype). The type of the serialized tensor. The provided type must match the type of the serialized tensor and no implicit conversion will take place. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `out_type`. |
tensorflow tf.io.decode_jpeg tf.io.decode\_jpeg
==================
Decode a JPEG-encoded image to a uint8 tensor.
#### View aliases
**Main aliases**
[`tf.image.decode_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/decode_jpeg)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.decode_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/decode_jpeg), [`tf.compat.v1.io.decode_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/decode_jpeg)
```
tf.io.decode_jpeg(
contents,
channels=0,
ratio=1,
fancy_upscaling=True,
try_recover_truncated=False,
acceptable_fraction=1,
dct_method='',
name=None
)
```
The attr `channels` indicates the desired number of color channels for the decoded image.
#### Accepted values are:
* 0: Use the number of channels in the JPEG-encoded image.
* 1: output a grayscale image.
* 3: output an RGB image.
If needed, the JPEG-encoded image is transformed to match the requested number of color channels.
The attr `ratio` allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.
This op also supports decoding PNGs and non-animated GIFs since the interface is the same, though it is cleaner to use [`tf.io.decode_image`](decode_image).
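For illustration (the path is hypothetical):

```
raw = tf.io.read_file('/tmp/photo.jpg')            # hypothetical path
img = tf.io.decode_jpeg(raw, channels=3, ratio=2)  # RGB, downscaled 2x while decoding
```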
| Args |
| `contents` | A `Tensor` of type `string`. 0-D. The JPEG-encoded image. |
| `channels` | An optional `int`. Defaults to `0`. Number of color channels for the decoded image. |
| `ratio` | An optional `int`. Defaults to `1`. Downscaling ratio. |
| `fancy_upscaling` | An optional `bool`. Defaults to `True`. If true use a slower but nicer upscaling of the chroma planes (yuv420/422 only). |
| `try_recover_truncated` | An optional `bool`. Defaults to `False`. If true try to recover an image from truncated input. |
| `acceptable_fraction` | An optional `float`. Defaults to `1`. The minimum required fraction of lines before a truncated input is accepted. |
| `dct_method` | An optional `string`. Defaults to `""`. string specifying a hint about the algorithm used for decompression. Defaults to "" which maps to a system-specific default. Currently valid values are ["INTEGER\_FAST", "INTEGER\_ACCURATE"]. The hint may be ignored (e.g., the internal jpeg library changes to a version that does not have that specific option.) |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `uint8`. |
tensorflow tf.io.encode_png tf.io.encode\_png
=================
PNG-encode an image.
#### View aliases
**Main aliases**
[`tf.image.encode_png`](https://www.tensorflow.org/api_docs/python/tf/io/encode_png)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.encode_png`](https://www.tensorflow.org/api_docs/python/tf/io/encode_png), [`tf.compat.v1.io.encode_png`](https://www.tensorflow.org/api_docs/python/tf/io/encode_png)
```
tf.io.encode_png(
image, compression=-1, name=None
)
```
`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` where `channels` is:
* 1: for grayscale.
* 2: for grayscale + alpha.
* 3: for RGB.
* 4: for RGBA.
The ZLIB compression level, `compression`, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower.
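For illustration:

```
img = tf.zeros([8, 8, 3], dtype=tf.uint8)         # placeholder RGB image
png_bytes = tf.io.encode_png(img, compression=9)  # smallest output, slowest
```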
| Args |
| `image` | A `Tensor`. Must be one of the following types: `uint8`, `uint16`. 3-D with shape `[height, width, channels]`. |
| `compression` | An optional `int`. Defaults to `-1`. Compression level. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.io.write_graph tf.io.write\_graph
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/graph_io.py#L26-L72) |
Writes a graph proto to a file.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.write_graph`](https://www.tensorflow.org/api_docs/python/tf/io/write_graph), [`tf.compat.v1.train.write_graph`](https://www.tensorflow.org/api_docs/python/tf/io/write_graph)
```
tf.io.write_graph(
graph_or_graph_def, logdir, name, as_text=True
)
```
The graph is written as a text proto unless `as_text` is `False`.
```
v = tf.Variable(0, name='my_variable')
sess = tf.compat.v1.Session()
tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')
```
or
```
v = tf.Variable(0, name='my_variable')
sess = tf.compat.v1.Session()
tf.io.write_graph(sess.graph, '/tmp/my-model', 'train.pbtxt')
```
| Args |
| `graph_or_graph_def` | A `Graph` or a `GraphDef` protocol buffer. |
| `logdir` | Directory where to write the graph. This can refer to remote filesystems, such as Google Cloud Storage (GCS). |
| `name` | Filename for the graph. |
| `as_text` | If `True`, writes the graph as an ASCII proto. |
| Returns |
| The path of the output proto file. |
tensorflow tf.io.is_jpeg tf.io.is\_jpeg
==============
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3125-L3142) |
Convenience function to check if the 'contents' encodes a JPEG image.
#### View aliases
**Main aliases**
[`tf.image.is_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/is_jpeg)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.is_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/is_jpeg), [`tf.compat.v1.io.is_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/is_jpeg)
```
tf.io.is_jpeg(
contents, name=None
)
```
| Args |
| `contents` | 0-D `string`. The encoded image bytes. |
| `name` | A name for the operation (optional) |
| Returns |
| A scalar boolean tensor indicating if 'contents' may be a JPEG image. `is_jpeg` is susceptible to false positives. |
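A minimal sketch (assuming eager execution and a hypothetical input file) that falls back to the generic decoder when the bytes are not a JPEG:
```
import tensorflow as tf

contents = tf.io.read_file("image.bin")  # hypothetical input file
if tf.io.is_jpeg(contents):  # scalar bool Tensor; usable directly in eager mode
  image = tf.io.decode_jpeg(contents, channels=3)
else:
  image = tf.io.decode_image(contents, channels=3, expand_animations=False)
```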
tensorflow tf.io.write_file tf.io.write\_file
=================
Writes `contents` to the file at input `filename`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.write_file`](https://www.tensorflow.org/api_docs/python/tf/io/write_file), [`tf.compat.v1.write_file`](https://www.tensorflow.org/api_docs/python/tf/io/write_file)
```
tf.io.write_file(
filename, contents, name=None
)
```
Creates the file and recursively creates the containing directory if it does not exist.
| Args |
| `filename` | A `Tensor` of type `string`. scalar. The name of the file to which we write the contents. |
| `contents` | A `Tensor` of type `string`. scalar. The content to be written to the output file. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
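For example (hypothetical path; the parent directories are created as needed):
```
import tensorflow as tf

tf.io.write_file("/tmp/some/new/dir/hello.txt", tf.constant("hello world"))
```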
tensorflow tf.io.FixedLenFeature tf.io.FixedLenFeature
=====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_config.py#L298-L314) |
Configuration for parsing a fixed-length input feature.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.FixedLenFeature`](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenFeature), [`tf.compat.v1.io.FixedLenFeature`](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenFeature)
```
tf.io.FixedLenFeature(
shape, dtype, default_value=None
)
```
To treat sparse input as dense, provide a `default_value`; otherwise, the parse functions will fail on any examples missing this feature.
#### Fields:
* **`shape`**: Shape of input data.
* **`dtype`**: Data type of input.
* **`default_value`**: Value to be used if an example is missing this feature. It must be compatible with `dtype` and of the specified `shape`.
| Attributes |
| `shape` | A `namedtuple` alias for field number 0 |
| `dtype` | A `namedtuple` alias for field number 1 |
| `default_value` | A `namedtuple` alias for field number 2 |
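A minimal sketch of how these fields are used when parsing; the feature names are hypothetical:
```
import tensorflow as tf

example = tf.train.Example(features=tf.train.Features(feature={
    "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[29])),
})).SerializeToString()

parsed = tf.io.parse_single_example(example, {
    "age": tf.io.FixedLenFeature([], tf.int64, default_value=-1),
    # "height" is missing from the example above, so the default is used.
    "height": tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
})
# parsed["age"] == 29, parsed["height"] == 0.0
```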
tensorflow tf.io.decode_image tf.io.decode\_image
===================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3230-L3296) |
Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.
#### View aliases
**Main aliases**
[`tf.image.decode_image`](https://www.tensorflow.org/api_docs/python/tf/io/decode_image)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.decode_image`](https://www.tensorflow.org/api_docs/python/tf/io/decode_image), [`tf.compat.v1.io.decode_image`](https://www.tensorflow.org/api_docs/python/tf/io/decode_image)
```
tf.io.decode_image(
contents,
channels=None,
dtype=tf.dtypes.uint8,
name=None,
expand_animations=True
)
```
Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the appropriate operation to convert the input bytes `string` into a `Tensor` of type `dtype`.
>
> **Note:** `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D arrays `[height, width, num_channels]`. Make sure to take this into account when constructing your graph if you are intermixing GIF files with BMP, JPEG, and/or PNG files. Alternately, set the `expand_animations` argument of this function to `False`, in which case the op will return 3-dimensional tensors and will truncate animated GIF files to the first frame.
>
>
> **Note:** If the first frame of an animated GIF does not occupy the entire canvas (maximum frame width x maximum frame height), then it fills the unoccupied areas (in the first frame) with zeros (black). For frames after the first that do not occupy the entire canvas, it uses the previous frame to fill the unoccupied areas.
>
| Args |
| `contents` | A `Tensor` of type `string`. 0-D. The encoded image bytes. |
| `channels` | An optional `int`. Defaults to `None`. Number of color channels for the decoded image. |
| `dtype` | The desired DType of the returned `Tensor`. |
| `name` | A name for the operation (optional) |
| `expand_animations` | An optional `bool`. Defaults to `True`. Controls the shape of the returned op's output. If `True`, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP files; and a 4-D tensor for all GIFs, whether animated or not. If `False`, the returned op will produce a 3-D tensor for all file types and will truncate animated GIFs to the first frame. |
| Returns |
| `Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on the file type and the value of the `expand_animations` parameter. |
| Raises |
| `ValueError` | On incorrect number of channels. |
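A minimal sketch, assuming a hypothetical GIF file, that forces a 3-D result regardless of file type:
```
import tensorflow as tf

contents = tf.io.read_file("animation.gif")  # hypothetical input file
# Always returns a 3-D tensor; animated GIFs are truncated to the first frame.
image = tf.io.decode_image(contents, channels=3, expand_animations=False)
```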
tensorflow tf.io.decode_bmp tf.io.decode\_bmp
=================
Decode the first frame of a BMP-encoded image to a uint8 tensor.
#### View aliases
**Main aliases**
[`tf.image.decode_bmp`](https://www.tensorflow.org/api_docs/python/tf/io/decode_bmp)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.decode_bmp`](https://www.tensorflow.org/api_docs/python/tf/io/decode_bmp), [`tf.compat.v1.io.decode_bmp`](https://www.tensorflow.org/api_docs/python/tf/io/decode_bmp)
```
tf.io.decode_bmp(
contents, channels=0, name=None
)
```
The attr `channels` indicates the desired number of color channels for the decoded image.
#### Accepted values are:
* 0: Use the number of channels in the BMP-encoded image.
* 3: output an RGB image.
* 4: output an RGBA image.
| Args |
| `contents` | A `Tensor` of type `string`. 0-D. The BMP-encoded image. |
| `channels` | An optional `int`. Defaults to `0`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `uint8`. |
tensorflow tf.io.TFRecordOptions tf.io.TFRecordOptions
=====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/tf_record.py#L39-L145) |
Options used for manipulating TFRecord files.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.TFRecordOptions`](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordOptions), [`tf.compat.v1.python_io.TFRecordOptions`](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordOptions)
```
tf.io.TFRecordOptions(
compression_type=None,
flush_mode=None,
input_buffer_size=None,
output_buffer_size=None,
window_bits=None,
compression_level=None,
compression_method=None,
mem_level=None,
compression_strategy=None
)
```
| Args |
| `compression_type` | `"GZIP"`, `"ZLIB"`, or `""` (no compression). |
| `flush_mode` | flush mode or `None`, Default: Z\_NO\_FLUSH. |
| `input_buffer_size` | int or `None`. |
| `output_buffer_size` | int or `None`. |
| `window_bits` | int or `None`. |
| `compression_level` | 0 to 9, or `None`. |
| `compression_method` | compression method or `None`. |
| `mem_level` | 1 to 9, or `None`. |
| `compression_strategy` | strategy or `None`. Default: Z\_DEFAULT\_STRATEGY. |
| Raises |
| `ValueError` | If compression\_type is invalid. |
Methods
-------
### `get_compression_type_string`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/tf_record.py#L97-L121)
```
@classmethod
get_compression_type_string(
options
)
```
Convert various option types to a unified string.
| Args |
| `options` | `TFRecordOption`, `TFRecordCompressionType`, or string. |
| Returns |
| Compression type as string (e.g. `'ZLIB'`, `'GZIP'`, or `''`). |
| Raises |
| `ValueError` | If compression\_type is invalid. |
| Class Variables |
| compression\_type\_map |
```
{
0: '',
1: 'ZLIB',
2: 'GZIP'
}
```
|
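A minimal sketch combining options with `TFRecordWriter` (the path is hypothetical):
```
import tensorflow as tf

options = tf.io.TFRecordOptions(compression_type="GZIP", compression_level=9)
with tf.io.TFRecordWriter("/tmp/data.tfrecord.gz", options=options) as writer:
  writer.write(b"a serialized example")
# The classmethod normalizes option objects back to a plain string:
assert tf.io.TFRecordOptions.get_compression_type_string(options) == "GZIP"
```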
tensorflow tf.io.decode_and_crop_jpeg tf.io.decode\_and\_crop\_jpeg
=============================
Decode and Crop a JPEG-encoded image to a uint8 tensor.
#### View aliases
**Main aliases**
[`tf.image.decode_and_crop_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.decode_and_crop_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg), [`tf.compat.v1.io.decode_and_crop_jpeg`](https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg)
```
tf.io.decode_and_crop_jpeg(
contents,
crop_window,
channels=0,
ratio=1,
fancy_upscaling=True,
try_recover_truncated=False,
acceptable_fraction=1,
dct_method='',
name=None
)
```
The attr `channels` indicates the desired number of color channels for the decoded image.
#### Accepted values are:
* 0: Use the number of channels in the JPEG-encoded image.
* 1: output a grayscale image.
* 3: output an RGB image.
If needed, the JPEG-encoded image is transformed to match the requested number of color channels.
The attr `ratio` allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.
It is equivalent to a combination of decode and crop, but much faster: only the part of the JPEG image within the crop window is decoded.
| Args |
| `contents` | A `Tensor` of type `string`. 0-D. The JPEG-encoded image. |
| `crop_window` | A `Tensor` of type `int32`. 1-D. The crop window: [crop\_y, crop\_x, crop\_height, crop\_width]. |
| `channels` | An optional `int`. Defaults to `0`. Number of color channels for the decoded image. |
| `ratio` | An optional `int`. Defaults to `1`. Downscaling ratio. |
| `fancy_upscaling` | An optional `bool`. Defaults to `True`. If true use a slower but nicer upscaling of the chroma planes (yuv420/422 only). |
| `try_recover_truncated` | An optional `bool`. Defaults to `False`. If true try to recover an image from truncated input. |
| `acceptable_fraction` | An optional `float`. Defaults to `1`. The minimum required fraction of lines before a truncated input is accepted. |
| `dct_method` | An optional `string`. Defaults to `""`, which maps to a system-specific default. A string specifying a hint about the algorithm used for decompression. Currently valid values are ["INTEGER\_FAST", "INTEGER\_ACCURATE"]. The hint may be ignored (e.g., if the internal JPEG library changes to a version that does not have that specific option). |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `uint8`. |
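For example, a minimal sketch (hypothetical file) decoding only a 100x100 patch whose top-left corner is at row 10, column 20:
```
import tensorflow as tf

contents = tf.io.read_file("image.jpg")  # hypothetical input file
# crop_window = [crop_y, crop_x, crop_height, crop_width]
patch = tf.io.decode_and_crop_jpeg(contents, [10, 20, 100, 100], channels=3)
# patch has shape [100, 100, 3].
```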
tensorflow tf.io.parse_example tf.io.parse\_example
====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_ops.py#L76-L311) |
Parses `Example` protos into a `dict` of tensors.
```
tf.io.parse_example(
serialized, features, example_names=None, name=None
)
```
Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) protos given in `serialized`. We refer to `serialized` as a batch with `batch_size` many entries of individual `Example` protos.
`example_names` may contain descriptive names for the corresponding serialized protos. These may be useful for debugging purposes, but they have no effect on the output. If not `None`, `example_names` must be the same length as `serialized`.
This op parses serialized examples into a dictionary mapping keys to `Tensor`, `SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to `VarLenFeature`, `SparseFeature`, `RaggedFeature`, and `FixedLenFeature` objects. Each `VarLenFeature` and `SparseFeature` is mapped to a `SparseTensor`; each `FixedLenFeature` is mapped to a `Tensor`; and each `RaggedFeature` is mapped to a `RaggedTensor`.
Each `VarLenFeature` maps to a `SparseTensor` of the specified type representing a ragged matrix. Its indices are `[batch, index]` where `batch` identifies the example in `serialized`, and `index` is the value's index in the list of values associated with that feature and example.
Each `SparseFeature` maps to a `SparseTensor` of the specified type representing a Tensor of `dense_shape` `[batch_size] + SparseFeature.size`. Its `values` come from the feature in the examples with key `value_key`. The value `values[i]` comes from position `k` in the feature of the example at batch entry `batch`. This positional information is recorded in `indices[i]` as `[batch, index_0, index_1, ...]` where `index_j` is the `k`-th value of the feature with key [`SparseFeature.index_key[j]`](sparsefeature#index_key) in that example. In other words, we split the indices (except the first index indicating the batch entry) of a `SparseTensor` by dimension into different features of the `Example`. Due to its complexity a `VarLenFeature` should be preferred over a `SparseFeature` whenever possible.
Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or [`tf.float32`](../../tf#float32) if not specified) and shape `(serialized.size(),) + df.shape`.
`FixedLenFeature` entries with a `default_value` are optional. With no default value, we will fail if that `Feature` is missing from any example in `serialized`.
Each `FixedLenSequenceFeature` `df` maps to a `Tensor` of the specified type (or [`tf.float32`](../../tf#float32) if not specified) and shape `(serialized.size(), None) + df.shape`. All examples in `serialized` will be padded with `default_value` along the second dimension.
Each `RaggedFeature` maps to a `RaggedTensor` of the specified type. It is formed by stacking the `RaggedTensor` for each example, where the `RaggedTensor` for each individual example is constructed using the tensors specified by `RaggedTensor.values_key` and [`RaggedTensor.partition`](https://www.tensorflow.org/tfx/tf_metadata/api_docs/python/tfmd/proto/schema_pb2/TensorRepresentation/RaggedTensor#partition). See the [`tf.io.RaggedFeature`](raggedfeature) documentation for details and examples.
#### Examples:
For example, if one expects a [`tf.float32`](../../tf#float32) `VarLenFeature` `ft` and three serialized `Example`s are provided:
```
serialized = [
  features
    { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
  features
    { feature {} },
  features
    { feature { key: "ft" value { float_list { value: [3.0] } } } }
]
```
then the output will look like:
```
{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
values=[1.0, 2.0, 3.0],
dense_shape=(3, 2)) }
```
If instead a `FixedLenSequenceFeature` with `default_value = -1.0` and `shape=[]` is used then the output will look like:
```
{"ft": [[1.0, 2.0], [3.0, -1.0]]}
```
Given two `Example` input protos in `serialized`:
```
[
features {
feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
feature { key: "gps" value { float_list { value: [] } } }
},
features {
feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
feature { key: "dank" value { int64_list { value: [ 42 ] } } }
feature { key: "gps" value { } }
}
]
```
And arguments
```
example_names: ["input0", "input1"],
features: {
"kw": VarLenFeature(tf.string),
"dank": VarLenFeature(tf.int64),
"gps": VarLenFeature(tf.float32),
}
```
Then the output is a dictionary:
```
{
"kw": SparseTensor(
indices=[[0, 0], [0, 1], [1, 0]],
values=["knit", "big", "emmy"]
dense_shape=[2, 2]),
"dank": SparseTensor(
indices=[[1, 0]],
values=[42],
dense_shape=[2, 1]),
"gps": SparseTensor(
indices=[],
values=[],
dense_shape=[2, 0]),
}
```
For dense results in two serialized `Example`s:
```
[
features {
feature { key: "age" value { int64_list { value: [ 0 ] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
},
features {
feature { key: "age" value { int64_list { value: [] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
}
]
```
#### We can use arguments:
```
example_names: ["input0", "input1"],
features: {
"age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
"gender": FixedLenFeature([], dtype=tf.string),
}
```
And the expected output is:
```
{
"age": [[0], [-1]],
"gender": [["f"], ["f"]],
}
```
An alternative to `VarLenFeature` to obtain a `SparseTensor` is `SparseFeature`. For example, given two `Example` input protos in `serialized`:
```
[
features {
feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } }
},
features {
feature { key: "val" value { float_list { value: [ 0.0 ] } } }
feature { key: "ix" value { int64_list { value: [ 42 ] } } }
}
]
```
And arguments
```
example_names: ["input0", "input1"],
features: {
"sparse": SparseFeature(
index_key="ix", value_key="val", dtype=tf.float32, size=100),
}
```
Then the output is a dictionary:
```
{
"sparse": SparseTensor(
indices=[[0, 3], [0, 20], [1, 42]],
values=[0.5, -1.0, 0.0],
dense_shape=[2, 100]),
}
```
See the [`tf.io.RaggedFeature`](raggedfeature) documentation for examples showing how `RaggedFeature` can be used to obtain `RaggedTensor`s.
| Args |
| `serialized` | A vector (1-D Tensor) of strings, a batch of binary serialized `Example` protos. |
| `features` | A `dict` mapping feature keys to `FixedLenFeature`, `VarLenFeature`, `SparseFeature`, and `RaggedFeature` values. |
| `example_names` | A vector (1-D Tensor) of strings (optional), the names of the serialized protos in the batch. |
| `name` | A name for this operation (optional). |
| Returns |
| A `dict` mapping feature keys to `Tensor`, `SparseTensor`, and `RaggedTensor` values. |
| Raises |
| `ValueError` | if any feature is invalid. |
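The proto snippets above are notation, not Python; a runnable sketch of the first `VarLenFeature` example looks roughly like this:
```
import tensorflow as tf

def make_example(values):
  return tf.train.Example(features=tf.train.Features(feature={
      "ft": tf.train.Feature(float_list=tf.train.FloatList(value=values)),
  })).SerializeToString()

serialized = tf.constant(
    [make_example([1.0, 2.0]), make_example([]), make_example([3.0])])
parsed = tf.io.parse_example(
    serialized, {"ft": tf.io.VarLenFeature(tf.float32)})
# parsed["ft"] is a SparseTensor with indices [[0, 0], [0, 1], [2, 0]],
# values [1.0, 2.0, 3.0], and dense_shape [3, 2].
```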
tensorflow tf.io.serialize_tensor tf.io.serialize\_tensor
=======================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/io_ops.py#L136-L213) |
Transforms a Tensor into a serialized TensorProto proto.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.serialize_tensor`](https://www.tensorflow.org/api_docs/python/tf/io/serialize_tensor), [`tf.compat.v1.serialize_tensor`](https://www.tensorflow.org/api_docs/python/tf/io/serialize_tensor)
```
tf.io.serialize_tensor(
tensor, name=None
)
```
This operation transforms data in a [`tf.Tensor`](../tensor) into a [`tf.Tensor`](../tensor) of type [`tf.string`](../../tf#string) containing the data in a binary string format. This operation can transform scalar data and linear arrays, but it is most useful in converting multidimensional arrays into a format accepted by binary storage formats such as a `TFRecord` or [`tf.train.Example`](../train/example).
#### See also:
* [`tf.io.parse_tensor`](parse_tensor): inverse operation of [`tf.io.serialize_tensor`](serialize_tensor) that transforms a scalar string containing a serialized Tensor into a Tensor of a specified type.
* [`tf.ensure_shape`](../ensure_shape): `parse_tensor` cannot statically determine the shape of the parsed tensor. Use [`tf.ensure_shape`](../ensure_shape) to set the static shape when running under a [`tf.function`](../function)
* `.SerializeToString`: serializes a proto to a binary string
Example of serializing scalar data:
```
t = tf.constant(1)
tf.io.serialize_tensor(t)
<tf.Tensor: shape=(), dtype=string, numpy=b'\x08...\x00'>
```
Example of storing non-scalar data into a [`tf.train.Example`](../train/example):
```
t1 = [[1, 2]]
t2 = [[7, 8]]
nonscalar = tf.concat([t1, t2], 0)
nonscalar
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[1, 2],
[7, 8]], dtype=int32)>
```
Serialize the data using [`tf.io.serialize_tensor`](serialize_tensor).
```
serialized_nonscalar = tf.io.serialize_tensor(nonscalar)
serialized_nonscalar
<tf.Tensor: shape=(), dtype=string, numpy=b'\x08...\x00'>
```
Store the data in a [`tf.train.Feature`](../train/feature).
```
feature_of_bytes = tf.train.Feature(
bytes_list=tf.train.BytesList(value=[serialized_nonscalar.numpy()]))
feature_of_bytes
bytes_list {
value: "\010...\000"
}
```
Put the [`tf.train.Feature`](../train/feature) message into a [`tf.train.Example`](../train/example).
```
features_for_example = {
'feature0': feature_of_bytes
}
example_proto = tf.train.Example(
features=tf.train.Features(feature=features_for_example))
example_proto
features {
feature {
key: "feature0"
value {
bytes_list {
value: "\010...\000"
}
}
}
}
```
| Args |
| `tensor` | A [`tf.Tensor`](../tensor). |
| `name` | string. Optional name for the op. |
| Returns |
| A Tensor of dtype string. |
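A round trip with [`tf.io.parse_tensor`](parse_tensor), sketched under the same assumptions:
```
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])
serialized = tf.io.serialize_tensor(t)
restored = tf.io.parse_tensor(serialized, out_type=tf.int32)
# restored is equal to t. Inside a tf.function, pin the static shape with:
# restored = tf.ensure_shape(restored, [2, 2])
```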
tensorflow tf.io.parse_single_example tf.io.parse\_single\_example
============================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_ops.py#L409-L448) |
Parses a single `Example` proto.
```
tf.io.parse_single_example(
serialized, features, example_names=None, name=None
)
```
Similar to `parse_example`, except:
For dense tensors, the returned `Tensor` is identical to the output of `parse_example`, except that there is no batch dimension: the output shape is the same as the shape given in `dense_shape`.
For `SparseTensor`s, the first (batch) column of the indices matrix is removed (the indices matrix is a column vector), the values vector is unchanged, and the first (`batch_size`) entry of the shape vector is removed (it is now a single element vector).
One might see performance advantages by batching `Example` protos with `parse_example` instead of using this function directly.
| Args |
| `serialized` | A scalar string Tensor, a single serialized Example. |
| `features` | A `dict` mapping feature keys to `FixedLenFeature` or `VarLenFeature` values. |
| `example_names` | (Optional) A scalar string Tensor, the associated name. |
| `name` | A name for this operation (optional). |
| Returns |
| A `dict` mapping feature keys to `Tensor` and `SparseTensor` values. |
| Raises |
| `ValueError` | if any feature is invalid. |
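A minimal sketch (hypothetical feature name) showing the missing batch dimension:
```
import tensorflow as tf

example = tf.train.Example(features=tf.train.Features(feature={
    "x": tf.train.Feature(float_list=tf.train.FloatList(value=[0.5])),
})).SerializeToString()

parsed = tf.io.parse_single_example(
    example, {"x": tf.io.FixedLenFeature([], tf.float32)})
# parsed["x"] is a scalar (shape []), not shape [1] as parse_example would give.
```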
tensorflow tf.io.SparseFeature tf.io.SparseFeature
===================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/parsing_config.py#L223-L294) |
Configuration for parsing a sparse input feature from an `Example`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.SparseFeature`](https://www.tensorflow.org/api_docs/python/tf/io/SparseFeature), [`tf.compat.v1.io.SparseFeature`](https://www.tensorflow.org/api_docs/python/tf/io/SparseFeature)
```
tf.io.SparseFeature(
index_key, value_key, dtype, size, already_sorted=False
)
```
Note: prefer `VarLenFeature` (possibly in combination with a `SequenceExample`) over `SparseFeature` for parsing out `SparseTensor`s, due to its simplicity.
Closely mimicking the `SparseTensor` that will be obtained by parsing an `Example` with a `SparseFeature` config, a `SparseFeature` contains:
* `value_key`: The name of key for a `Feature` in the `Example` whose parsed `Tensor` will be the resulting [`SparseTensor.values`](../sparse/sparsetensor#values).
* `index_key`: A list of names, one for each dimension in the resulting `SparseTensor`. `indices[i][dim]`, the position of the `i`-th value in dimension `dim`, is equal to the `i`-th value in the feature with key `index_key[dim]` in the `Example`.
* `size`: A list of ints for the resulting [`SparseTensor.dense_shape`](../sparse/sparsetensor#dense_shape).
For example, we can represent the following 2D `SparseTensor`
```
SparseTensor(indices=[[3, 1], [20, 0]],
values=[0.5, -1.0],
dense_shape=[100, 3])
```
with an `Example` input proto
```
features {
feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
feature { key: "ix0" value { int64_list { value: [ 3, 20 ] } } }
feature { key: "ix1" value { int64_list { value: [ 1, 0 ] } } }
}
```
and `SparseFeature` config with 2 `index_key`s
```
SparseFeature(index_key=["ix0", "ix1"],
value_key="val",
dtype=tf.float32,
size=[100, 3])
```
#### Fields:
* **`index_key`**: A single string name or a list of string names of index features. For each key the underlying feature's type must be `int64` and its length must always match that of the `value_key` feature. To represent `SparseTensor`s with a `dense_shape` of `rank` higher than 1 a list of length `rank` should be used.
* **`value_key`**: Name of value feature. The underlying feature's type must be `dtype` and its length must always match that of all the `index_key`s' features.
* **`dtype`**: Data type of the `value_key` feature.
* **`size`**: A Python int or list thereof specifying the dense shape. Should be a list if and only if `index_key` is a list, in which case it must have the same length as `index_key`. For each entry `i`, all values in the `index_key[i]` feature must be in `[0, size[i])`.
* **`already_sorted`**: A Python boolean specifying whether the values in `value_key` are already sorted by their index position. If so, sorting is skipped. Defaults to `False` (optional).
| Attributes |
| `index_key` | A `namedtuple` alias for field number 0 |
| `value_key` | A `namedtuple` alias for field number 1 |
| `dtype` | A `namedtuple` alias for field number 2 |
| `size` | A `namedtuple` alias for field number 3 |
| `already_sorted` | A `namedtuple` alias for field number 4 |
tensorflow tf.io.TFRecordWriter tf.io.TFRecordWriter
====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/tf_record.py#L214-L317) |
A class to write records to a TFRecords file.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.TFRecordWriter`](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter), [`tf.compat.v1.python_io.TFRecordWriter`](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter)
```
tf.io.TFRecordWriter(
path, options=None
)
```
[TFRecords tutorial](https://www.tensorflow.org/tutorials/load_data/tfrecord)
TFRecords is a binary format which is optimized for high throughput data retrieval, generally in conjunction with [`tf.data`](../data). `TFRecordWriter` is used to write serialized examples to a file for later consumption. The key steps are:
Ahead of time:
* [Convert data into a serialized format](https://www.tensorflow.org/tutorials/load_data/tfrecord#tfexample)
* [Write the serialized data to one or more files](https://www.tensorflow.org/tutorials/load_data/tfrecord#tfrecord_files_in_python)
During training or evaluation:
* [Read serialized examples into memory](https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file)
* [Parse (deserialize) examples](https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file)
A minimal example is given below:
```
import tempfile
example_path = os.path.join(tempfile.gettempdir(), "example.tfrecords")
np.random.seed(0)
```
```
# Write the records to a file.
with tf.io.TFRecordWriter(example_path) as file_writer:
for _ in range(4):
x, y = np.random.random(), np.random.random()
record_bytes = tf.train.Example(features=tf.train.Features(feature={
"x": tf.train.Feature(float_list=tf.train.FloatList(value=[x])),
"y": tf.train.Feature(float_list=tf.train.FloatList(value=[y])),
})).SerializeToString()
file_writer.write(record_bytes)
```
```
# Read the data back out.
def decode_fn(record_bytes):
return tf.io.parse_single_example(
# Data
record_bytes,
# Schema
{"x": tf.io.FixedLenFeature([], dtype=tf.float32),
"y": tf.io.FixedLenFeature([], dtype=tf.float32)}
)
```
```
for batch in tf.data.TFRecordDataset([example_path]).map(decode_fn):
print("x = {x:.4f}, y = {y:.4f}".format(**batch))
x = 0.5488, y = 0.7152
x = 0.6028, y = 0.5449
x = 0.4237, y = 0.6459
x = 0.4376, y = 0.8918
```
This class implements `__enter__` and `__exit__`, and can be used in `with` blocks like a normal file. (See the usage example above.)
| Args |
| `path` | The path to the TFRecords file. |
| `options` | (optional) String specifying compression type, `TFRecordCompressionType`, or `TFRecordOptions` object. |
| Raises |
| `IOError` | If `path` cannot be opened for writing. |
| `ValueError` | If valid compression\_type can't be determined from `options`. |
Methods
-------
### `close`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/tf_record.py#L315-L317)
```
close()
```
Close the file.
### `flush`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/tf_record.py#L311-L313)
```
flush()
```
Flush the file.
### `write`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/tf_record.py#L303-L309)
```
write(
record
)
```
Write a string record to the file.
| Args |
| `record` | str |
### `__enter__`
```
__enter__()
```
`__enter__(self: object) -> object`
### `__exit__`
```
__exit__()
```
`__exit__(self: tensorflow.python.lib.io._pywrap_record_io.RecordWriter, *args) -> None`
tensorflow tf.io.decode_gif tf.io.decode\_gif
=================
Decode the frame(s) of a GIF-encoded image to a uint8 tensor.
#### View aliases
**Main aliases**
[`tf.image.decode_gif`](https://www.tensorflow.org/api_docs/python/tf/io/decode_gif)
**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.decode_gif`](https://www.tensorflow.org/api_docs/python/tf/io/decode_gif), [`tf.compat.v1.io.decode_gif`](https://www.tensorflow.org/api_docs/python/tf/io/decode_gif)
```
tf.io.decode_gif(
contents, name=None
)
```
GIF images with frame or transparency compression are not supported. On Linux and MacOS systems, convert animated GIFs from compressed to uncompressed by running:
```
convert $src.gif -coalesce $dst.gif
```
This op also supports decoding JPEGs and PNGs, though it is cleaner to use [`tf.io.decode_image`](decode_image).
| Args |
| `contents` | A `Tensor` of type `string`. 0-D. The GIF-encoded image. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `uint8`. |
tensorflow tf.io.decode_compressed tf.io.decode\_compressed
========================
Decompress strings.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.decode_compressed`](https://www.tensorflow.org/api_docs/python/tf/io/decode_compressed), [`tf.compat.v1.io.decode_compressed`](https://www.tensorflow.org/api_docs/python/tf/io/decode_compressed)
```
tf.io.decode_compressed(
bytes, compression_type='', name=None
)
```
This op decompresses each element of the `bytes` input `Tensor`, which is assumed to be compressed using the given `compression_type`.
The `output` is a string `Tensor` of the same shape as `bytes`, each element containing the decompressed data from the corresponding element in `bytes`.
| Args |
| `bytes` | A `Tensor` of type `string`. A Tensor of string which is compressed. |
| `compression_type` | An optional `string`. Defaults to `""`. A scalar containing either (i) the empty string (no compression), (ii) "ZLIB", or (iii) "GZIP". |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
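A minimal sketch, pairing the op with Python's standard `zlib` module to produce ZLIB-format input:
```
import zlib
import tensorflow as tf

payload = zlib.compress(b"hello world")  # zlib (ZLIB) format bytes
decoded = tf.io.decode_compressed(
    tf.constant(payload), compression_type="ZLIB")
# decoded is a scalar string Tensor holding b'hello world'.
```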
tensorflow tf.io.gfile.join tf.io.gfile.join
================
Join one or more path components intelligently.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.join`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/join)
```
tf.io.gfile.join(
path, *paths
)
```
TensorFlow-specific filesystems will be joined like a URL (using "/" as the path separator) on all platforms:
On Windows or Linux/Unix-like:
```
tf.io.gfile.join("gcs://folder", "file.py")
'gcs://folder/file.py'
```
```
tf.io.gfile.join("ram://folder", "file.py")
'ram://folder/file.py'
```
But the native filesystem is handled just like os.path.join:
```
path = tf.io.gfile.join("folder", "file.py")
if os.name == "nt":
expected = "folder\\file.py" # Windows
else:
expected = "folder/file.py" # Linux/Unix-like
path == expected
True
```
| Args |
| `path` | string, path to a directory |
| `paths` | string, additional paths to concatenate |
| Returns |
| `path` | the joined path. |
tensorflow tf.io.gfile.exists tf.io.gfile.exists
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L247-L291) |
Determines whether a path exists or not.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.exists`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/exists)
```
tf.io.gfile.exists(
path
)
```
```
with open("/tmp/x", "w") as f:
f.write("asdf")
4
tf.io.gfile.exists("/tmp/x")
True
```
You can also specify the URI scheme for selecting a different filesystem:
```
# for a GCS filesystem path:
# tf.io.gfile.exists("gs://bucket/file")
# for a local filesystem:
with open("/tmp/x", "w") as f:
f.write("asdf")
4
tf.io.gfile.exists("file:///tmp/x")
True
```
This currently returns `True` for existing directories but don't rely on this behavior, especially if you are using cloud filesystems (e.g., GCS, S3, Hadoop):
```
tf.io.gfile.exists("/tmp")
True
```
| Args |
| `path` | string, a path |
| Returns |
| True if the path exists, whether it's a file or a directory. False if the path does not exist and there are no filesystem errors. |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Propagates any errors reported by the FileSystem API. |
tensorflow tf.io.gfile.rmtree tf.io.gfile.rmtree
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L664-L674) |
Deletes everything under path recursively.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.rmtree`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/rmtree)
```
tf.io.gfile.rmtree(
path
)
```
| Args |
| `path` | string, a path |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | If the operation fails. |
tensorflow tf.io.gfile.rename tf.io.gfile.rename
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L607-L621) |
Rename or move a file / directory.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.rename`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/rename)
```
tf.io.gfile.rename(
src, dst, overwrite=False
)
```
| Args |
| `src` | string, pathname for a file |
| `dst` | string, pathname to which the file needs to be moved |
| `overwrite` | boolean, if false it's an error for `dst` to be occupied by an existing file. |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | If the operation fails. |
tensorflow tf.io.gfile.glob tf.io.gfile.glob
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L383-L449) |
Returns a list of files that match the given pattern(s).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.glob`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/glob)
```
tf.io.gfile.glob(
pattern
)
```
The patterns are defined as strings; the supported syntax is described below. Note that the pattern can be a Python iterable of string patterns.
The format definition of the pattern is:
**pattern**: `{ term }`
**term**:
* `'*'`: matches any sequence of non-'/' characters
* `'?'`: matches a single non-'/' character
* `'[' [ '^' ] { match-list } ']'`: matches any single character (not) on the list
* `c`: matches character `c` where `c != '*', '?', '\\', '['`
* `'\\' c`: matches character `c`
**character range**:
* `c`: matches character `c` while `c != '\\', '-', ']'`
* `'\\' c`: matches character `c`
* `lo '-' hi`: matches character `c` for `lo <= c <= hi`
#### Examples:
```
tf.io.gfile.glob("*.py")
# For example, ['__init__.py']
```
```
tf.io.gfile.glob("__init__.??")
# As above
```
```
files = {"*.py"}
the_iterator = iter(files)
tf.io.gfile.glob(the_iterator)
# As above
```
See the C++ function `GetMatchingPaths` in [`core/platform/file_system.h`](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/platform/file_system.h) for implementation details.
| Args |
| `pattern` | string or iterable of strings. The glob pattern(s). |
| Returns |
| A list of strings containing filenames that match the given pattern(s). |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | If there are filesystem / directory listing errors. |
| [`errors.NotFoundError`](https://www.tensorflow.org/api_docs/python/tf/errors/NotFoundError) | If pattern to be matched is an invalid directory. |
tensorflow tf.io.gfile.GFile tf.io.gfile.GFile
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/gfile.py#L37-L114) |
File I/O wrappers without thread locking.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.gfile.GFile`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/GFile), [`tf.compat.v1.gfile.Open`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/GFile), [`tf.compat.v1.io.gfile.GFile`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/GFile)
```
tf.io.gfile.GFile(
name, mode='r'
)
```
The main roles of the [`tf.io.gfile`](../gfile) module are:
1. To provide an API that is close to Python's file I/O objects, and
2. To provide an implementation based on TensorFlow's C++ FileSystem API.
The C++ FileSystem API supports multiple file system implementations, including local files, Google Cloud Storage (using a `gs://` prefix), and HDFS (using an `hdfs://` prefix). TensorFlow exports these as `tf.io.gfile`, so that you can use these implementations for saving and loading checkpoints, writing to TensorBoard logs, and accessing training data (among other uses). However, if all your files are local, you can use the regular Python file API without any problem.
>
> **Note:** though similar to Python's I/O implementation, there are semantic differences to make [`tf.io.gfile`](../gfile) more efficient for backing filesystems. For example, a write mode file will not be opened until the first write call to minimize RPC invocations in network filesystems.
>
Once you obtain a `GFile` object, you can use it in most of the ways you would use any other Python file object:
```
with open("/tmp/x", "w") as f:
f.write("asdf")
4
with tf.io.gfile.GFile("/tmp/x") as f:
f.read()
'asdf'
```
The difference is that you can specify URI schemes to use other filesystems (e.g., `gs://` for GCS, `s3://` for S3, etc.), if they are supported. Using `file://` as an example, we have:
```
with tf.io.gfile.GFile("file:///tmp/x", "w") as f:
f.write("qwert")
f.write("asdf")
tf.io.gfile.GFile("file:///tmp/x").read()
'qwertasdf'
```
You can also read all lines of a file directly:
```
with tf.io.gfile.GFile("file:///tmp/x", "w") as f:
f.write("asdf\n")
f.write("qwer\n")
tf.io.gfile.GFile("/tmp/x").readlines()
['asdf\n', 'qwer\n']
```
You can iterate over the lines:
```
with tf.io.gfile.GFile("file:///tmp/x", "w") as f:
f.write("asdf\n")
f.write("qwer\n")
for line in tf.io.gfile.GFile("/tmp/x"):
print(line[:-1]) # removes the end of line character
asdf
qwer
```
Random access read is possible if the underlying filesystem supports it:
```
with open("/tmp/x", "w") as f:
f.write("asdfqwer")
f = tf.io.gfile.GFile("/tmp/x")
f.read(3)
'asd'
f.seek(4)
f.tell()
4
f.read(3)
'qwe'
f.tell()
7
f.close()
```
| Attributes |
| `mode` | Returns the mode in which the file was opened. |
| `name` | Returns the file name. |
Methods
-------
### `close`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L221-L240)
```
close()
```
Closes the file.
Should be called for the WritableFile to be flushed.
In general, if you use the context manager pattern, you don't need to call this directly.
```
with tf.io.gfile.GFile("/tmp/x", "w") as f:
f.write("asdf\n")
f.write("qwer\n")
# implicit f.close() at the end of the block
```
For cloud filesystems, forgetting to call `close()` might result in data loss as last write might not have been replicated.
### `flush`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L211-L219)
```
flush()
```
Flushes the Writable file.
This only ensures that the data has made its way out of the process without any guarantees on whether it's written to disk. This means that the data would survive an application crash but not necessarily an OS crash.
### `next`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L208-L209)
```
next()
```
### `read`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L102-L119)
```
read(
n=-1
)
```
Returns the contents of a file as a string.
Starts reading from current position in file.
| Args |
| `n` | Read `n` bytes if `n != -1`. If `n = -1`, reads to end of file. |
| Returns |
| `n` bytes of the file (or whole file) in bytes mode or `n` bytes of the string if in string (regular) mode. |
### `readline`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L165-L168)
```
readline()
```
Reads the next line, keeping \n. At EOF, returns ''.
### `readlines`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L170-L179)
```
readlines()
```
Returns all lines from the file in a list.
### `seek`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L121-L163)
```
seek(
offset=None, whence=0, position=None
)
```
Seeks to the offset in the file. (deprecated arguments)
| Args |
| `offset` | The byte count relative to the whence argument. |
| `whence` | Valid values are: 0, start of the file (default); 1, relative to the current position of the file; 2, relative to the end of the file (`offset` is usually negative). |
### `seekable`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L242-L244)
```
seekable()
```
Returns True, as FileIO supports the random access ops seek() and tell().
### `size`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L93-L95)
```
size()
```
Returns the size of the file.
### `tell`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L181-L189)
```
tell()
```
Returns the current position in the file.
### `write`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L97-L100)
```
write(
file_content
)
```
Writes file\_content to the file. Appends to the end of the file.
### `__enter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L191-L193)
```
__enter__()
```
Make usable with "with" statement.
### `__exit__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L195-L197)
```
__exit__(
unused_type, unused_value, unused_traceback
)
```
Make usable with "with" statement.
### `__iter__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L199-L200)
```
__iter__()
```
tensorflow tf.io.gfile.remove tf.io.gfile.remove
==================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L316-L327) |
Deletes the path located at 'path'.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.remove`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/remove)
```
tf.io.gfile.remove(
path
)
```
| Args |
| `path` | string, a path |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Propagates any errors reported by the FileSystem API. E.g., `NotFoundError` if the path does not exist. |
tensorflow tf.io.gfile.stat tf.io.gfile.stat
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L909-L922) |
Returns file statistics for a given path.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.stat`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/stat)
```
tf.io.gfile.stat(
path
)
```
| Args |
| `path` | string, path to a file |
| Returns |
| FileStatistics struct that contains information about the path |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | If the operation fails. |
tensorflow tf.io.gfile.makedirs tf.io.gfile.makedirs
====================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L499-L511) |
Creates a directory and all parent/intermediate directories.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.makedirs`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/makedirs)
```
tf.io.gfile.makedirs(
path
)
```
It succeeds if path already exists and is writable.
| Args |
| `path` | string, name of the directory to be created |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | If the operation fails. |
tensorflow tf.io.gfile.get_registered_schemes tf.io.gfile.get\_registered\_schemes
====================================
Returns the currently registered filesystem schemes.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.get_registered_schemes`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/get_registered_schemes)
```
tf.io.gfile.get_registered_schemes()
```
The [`tf.io.gfile`](../gfile) APIs, in addition to accepting traditional filesystem paths, also accept file URIs that begin with a scheme. For example, the local filesystem path `/tmp/tf` can also be addressed as `file:///tmp/tf`. In this case, the scheme is `file`, followed by `://` and then the path, according to [URI syntax](https://datatracker.ietf.org/doc/html/rfc3986#section-3).
This function returns the currently registered schemes that will be recognized by [`tf.io.gfile`](../gfile) APIs. This includes both built-in schemes and those registered by other TensorFlow filesystem implementations, for example those provided by [TensorFlow I/O](https://github.com/tensorflow/io).
The empty string is always included, and represents the "scheme" for regular local filesystem paths.
| Returns |
| List of string schemes, e.g. `['', 'file', 'ram']`, in arbitrary order. |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | If the operation fails. |
tensorflow tf.io.gfile.mkdir tf.io.gfile.mkdir
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L468-L481) |
Creates a directory with the name given by `path`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.mkdir`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/mkdir)
```
tf.io.gfile.mkdir(
path
)
```
| Args |
| `path` | string, name of the directory to be created |
Notes: The parent directories need to exist. Use [`tf.io.gfile.makedirs`](makedirs) instead if there is the possibility that the parent dirs don't exist.
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | If the operation fails. |
tensorflow tf.io.gfile.listdir tf.io.gfile.listdir
===================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L749-L776) |
Returns a list of entries contained within a directory.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.listdir`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/listdir)
```
tf.io.gfile.listdir(
path
)
```
The list is in arbitrary order. It does not contain the special entries "." and "..".
| Args |
| `path` | string, path to a directory |
| Returns |
| [filename1, filename2, ... filenameN] as strings |
| Raises |
| [`errors.NotFoundError`](https://www.tensorflow.org/api_docs/python/tf/errors/NotFoundError) | If the directory doesn't exist. |
tensorflow tf.io.gfile.isdir tf.io.gfile.isdir
=================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L690-L703) |
Returns whether the path is a directory or not.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.isdir`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/isdir)
```
tf.io.gfile.isdir(
path
)
```
| Args |
| `path` | string, path to a potential directory |
| Returns |
| True, if the path is a directory; False otherwise |
tensorflow tf.io.gfile.copy tf.io.gfile.copy
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L514-L580) |
Copies data from `src` to `dst`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.copy`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/copy)
```
tf.io.gfile.copy(
src, dst, overwrite=False
)
```
```
with open("/tmp/x", "w") as f:
f.write("asdf")
4
tf.io.gfile.exists("/tmp/x")
True
tf.io.gfile.copy("/tmp/x", "/tmp/y")
tf.io.gfile.exists("/tmp/y")
True
tf.io.gfile.remove("/tmp/y")
```
You can also specify the URI scheme for selecting a different filesystem:
```
with open("/tmp/x", "w") as f:
f.write("asdf")
4
tf.io.gfile.copy("/tmp/x", "file:///tmp/y")
tf.io.gfile.exists("/tmp/y")
True
tf.io.gfile.remove("/tmp/y")
```
Note that you always need to specify a file name, even if moving into a new directory. This is because some cloud filesystems don't have the concept of a directory.
```
with open("/tmp/x", "w") as f:
f.write("asdf")
4
tf.io.gfile.mkdir("/tmp/new_dir")
tf.io.gfile.copy("/tmp/x", "/tmp/new_dir/y")
tf.io.gfile.exists("/tmp/new_dir/y")
True
tf.io.gfile.rmtree("/tmp/new_dir")
```
If you want to prevent errors if the path already exists, you can use `overwrite` argument:
```
with open("/tmp/x", "w") as f:
f.write("asdf")
4
tf.io.gfile.copy("/tmp/x", "file:///tmp/y")
tf.io.gfile.copy("/tmp/x", "file:///tmp/y", overwrite=True)
tf.io.gfile.remove("/tmp/y")
```
Note that the above will still result in an error if you try to overwrite a directory with a file.
Note that you cannot copy a directory; only file arguments are supported.
| Args |
| `src` | string, name of the file whose contents need to be copied |
| `dst` | string, name of the file to which to copy to |
| `overwrite` | boolean, if false it's an error for `dst` to be occupied by an existing file. |
| Raises |
| [`errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | If the operation fails. |
tensorflow tf.io.gfile.walk tf.io.gfile.walk
================
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/lib/io/file_io.py#L835-L890) |
Recursive directory tree generator for directories.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.gfile.walk`](https://www.tensorflow.org/api_docs/python/tf/io/gfile/walk)
```
tf.io.gfile.walk(
top, topdown=True, onerror=None
)
```
| Args |
| `top` | string, a Directory name |
| `topdown` | bool, Traverse pre order if True, post order if False. |
| `onerror` | optional handler for errors. Should be a function, it will be called with the error as argument. Rethrowing the error aborts the walk. Errors that happen while listing directories are ignored. |
| Yields |
| Each yield is a 3-tuple: the pathname of a directory, followed by lists of all its subdirectories and leaf files. That is, each yield looks like: `(dirname, [subdirname, subdirname, ...], [filename, filename, ...])`. Each item is a string. |
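For example, a minimal sketch printing every file under a hypothetical directory tree:
```
import tensorflow as tf

for dirname, subdirs, filenames in tf.io.gfile.walk("/tmp/my_data"):
  for filename in filenames:
    print(tf.io.gfile.join(dirname, filename))
```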
tensorflow tf.io.RaggedFeature.UniformRowLength tf.io.RaggedFeature.UniformRowLength
====================================
UniformRowLength(length,)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.RaggedFeature.UniformRowLength`](https://www.tensorflow.org/api_docs/python/tf/io/RaggedFeature/UniformRowLength)
```
tf.io.RaggedFeature.UniformRowLength(
length
)
```
| Attributes |
| `length` | A `namedtuple` alias for field number 0 |
tensorflow tf.io.RaggedFeature.RowStarts tf.io.RaggedFeature.RowStarts
=============================
RowStarts(key,)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.RaggedFeature.RowStarts`](https://www.tensorflow.org/api_docs/python/tf/io/RaggedFeature/RowStarts)
```
tf.io.RaggedFeature.RowStarts(
key
)
```
| Attributes |
| `key` | A `namedtuple` alias for field number 0 |
tensorflow tf.io.RaggedFeature.RowLengths tf.io.RaggedFeature.RowLengths
==============================
RowLengths(key,)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.RaggedFeature.RowLengths`](https://www.tensorflow.org/api_docs/python/tf/io/RaggedFeature/RowLengths)
```
tf.io.RaggedFeature.RowLengths(
key
)
```
| Attributes |
| `key` | A `namedtuple` alias for field number 0 |
tensorflow tf.io.RaggedFeature.ValueRowIds tf.io.RaggedFeature.ValueRowIds
===============================
ValueRowIds(key,)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.RaggedFeature.ValueRowIds`](https://www.tensorflow.org/api_docs/python/tf/io/RaggedFeature/ValueRowIds)
```
tf.io.RaggedFeature.ValueRowIds(
key
)
```
| Attributes |
| `key` | A `namedtuple` alias for field number 0 |
tensorflow tf.io.RaggedFeature.RowSplits tf.io.RaggedFeature.RowSplits
=============================
RowSplits(key,)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.RaggedFeature.RowSplits`](https://www.tensorflow.org/api_docs/python/tf/io/RaggedFeature/RowSplits)
```
tf.io.RaggedFeature.RowSplits(
key
)
```
| Attributes |
| `key` | A `namedtuple` alias for field number 0 |
tensorflow tf.io.RaggedFeature.RowLimits tf.io.RaggedFeature.RowLimits
=============================
RowLimits(key,)
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.io.RaggedFeature.RowLimits`](https://www.tensorflow.org/api_docs/python/tf/io/RaggedFeature/RowLimits)
```
tf.io.RaggedFeature.RowLimits(
key
)
```
| Attributes |
| `key` | A `namedtuple` alias for field number 0 |
tensorflow tf.raw_ops.TridiagonalSolve tf.raw\_ops.TridiagonalSolve
============================
Solves tridiagonal systems of equations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.TridiagonalSolve`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/TridiagonalSolve)
```
tf.raw_ops.TridiagonalSolve(
diagonals, rhs, partial_pivoting=True, perturb_singular=False, name=None
)
```
Solves tridiagonal systems of equations. Supports batch dimensions and multiple right-hand sides per each left-hand side. On CPU, solution is computed via Gaussian elimination with or without partial pivoting, depending on `partial_pivoting` attribute. On GPU, Nvidia's cuSPARSE library is used: <https://docs.nvidia.com/cuda/cusparse/index.html#gtsv> Partial pivoting is not yet supported by XLA backends.
| Args |
| `diagonals` | A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`. Tensor of shape `[..., 3, M]` whose innermost 2 dimensions represent the tridiagonal matrices, with the three rows being the superdiagonal, diagonals, and subdiagonals, in order. The last element of the superdiagonal and the first element of the subdiagonal are ignored. |
| `rhs` | A `Tensor`. Must have the same type as `diagonals`. Tensor of shape `[..., M, K]`, representing K right-hand sides per each left-hand side. |
| `partial_pivoting` | An optional `bool`. Defaults to `True`. Whether to apply partial pivoting. Partial pivoting makes the procedure more stable, but slower. |
| `perturb_singular` | An optional `bool`. Defaults to `False`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `diagonals`. |
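For illustration, a minimal eager-mode sketch solving a single 3x3 system (values chosen by hand):
```
# Matrix [[2, 1, 0],
#         [1, 2, 1],
#         [0, 1, 2]] in the compact [..., 3, M] form.
diagonals = tf.constant([[1., 1., 0.],   # superdiagonal (last entry ignored)
                         [2., 2., 2.],   # main diagonal
                         [0., 1., 1.]])  # subdiagonal (first entry ignored)
rhs = tf.constant([[1.], [1.], [1.]])    # one right-hand side (K = 1)
tf.raw_ops.TridiagonalSolve(diagonals=diagonals, rhs=rhs)
# expected solution: approximately [[0.5], [0.], [0.5]]
```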
tensorflow tf.raw_ops.ReaderRestoreStateV2 tf.raw\_ops.ReaderRestoreStateV2
================================
Restore a reader to a previously saved state.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ReaderRestoreStateV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ReaderRestoreStateV2)
```
tf.raw_ops.ReaderRestoreStateV2(
reader_handle, state, name=None
)
```
Not all Readers support being restored, so this can produce an Unimplemented error.
| Args |
| `reader_handle` | A `Tensor` of type `resource`. Handle to a Reader. |
| `state` | A `Tensor` of type `string`. Result of a ReaderSerializeState of a Reader with type matching reader\_handle. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
tensorflow tf.raw_ops.RGBToHSV tf.raw\_ops.RGBToHSV
====================
Converts one or more images from RGB to HSV.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.RGBToHSV`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/RGBToHSV)
```
tf.raw_ops.RGBToHSV(
images, name=None
)
```
Outputs a tensor of the same shape as the `images` tensor, containing the HSV value of the pixels. The output is only well defined if the value in `images` are in `[0,1]`.
`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and `output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.
#### Usage Example:
```
blue_image = tf.stack([
tf.zeros([5,5]),
tf.zeros([5,5]),
tf.ones([5,5])],
axis=-1)
blue_hsv_image = tf.image.rgb_to_hsv(blue_image)
blue_hsv_image[0,0].numpy()
array([0.6666667, 1. , 1. ], dtype=float32)
```
| Args |
| `images` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 1-D or higher rank. RGB data to convert. Last dimension must be size 3. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `images`. |
tensorflow tf.raw_ops.ExtractVolumePatches tf.raw\_ops.ExtractVolumePatches
================================
Extract `patches` from `input` and put them in the `"depth"` output dimension. 3D extension of `extract_image_patches`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ExtractVolumePatches`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ExtractVolumePatches)
```
tf.raw_ops.ExtractVolumePatches(
input, ksizes, strides, padding, name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. 5-D Tensor with shape `[batch, in_planes, in_rows, in_cols, depth]`. |
| `ksizes` | A list of `ints` that has length `>= 5`. The size of the sliding window for each dimension of `input`. |
| `strides` | A list of `ints` that has length `>= 5`. 1-D of length 5. How far the centers of two consecutive patches are in `input`. Must be: `[1, stride_planes, stride_rows, stride_cols, 1]`. |
| `padding` | A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use. The size-related attributes are specified as follows:
```
ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1]
strides = [1, stride_planes, stride_rows, stride_cols, 1]
```
|
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
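As a small eager-mode sketch, extracting one non-overlapping `2x2x2` patch from a tiny volume flattens the patch values into the depth dimension:
```
x = tf.reshape(tf.range(8, dtype=tf.float32), [1, 2, 2, 2, 1])
patches = tf.raw_ops.ExtractVolumePatches(
    input=x,
    ksizes=[1, 2, 2, 2, 1],
    strides=[1, 2, 2, 2, 1],
    padding="VALID")
print(patches.shape)  # (1, 1, 1, 1, 8): the 2*2*2*1 patch lands in depth
```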
tensorflow tf.raw_ops.PlaceholderWithDefault tf.raw\_ops.PlaceholderWithDefault
==================================
A placeholder op that passes through `input` when its output is not fed.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.PlaceholderWithDefault`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/PlaceholderWithDefault)
```
tf.raw_ops.PlaceholderWithDefault(
input, shape, name=None
)
```
| Args |
| `input` | A `Tensor`. The default value to produce when `output` is not fed. |
| `shape` | A [`tf.TensorShape`](../tensorshape) or list of `ints`. The (possibly partial) shape of the tensor. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
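Feeding only applies in graph mode, so a sketch of this op naturally uses `tf.compat.v1` (assuming a fresh program, since this disables eager execution):
```
tf.compat.v1.disable_eager_execution()
x = tf.raw_ops.PlaceholderWithDefault(
    input=tf.constant([1, 2, 3]), shape=[3])
with tf.compat.v1.Session() as sess:
  print(sess.run(x))                            # [1 2 3] -- the default
  print(sess.run(x, feed_dict={x: [4, 5, 6]}))  # [4 5 6] -- the fed value
```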
tensorflow tf.raw_ops.CollectiveAllToAllV3 tf.raw\_ops.CollectiveAllToAllV3
================================
Mutually exchanges multiple tensors of identical type and shape.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.CollectiveAllToAllV3`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/CollectiveAllToAllV3)
```
tf.raw_ops.CollectiveAllToAllV3(
input, communicator, group_assignment, timeout_seconds=0, name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `bfloat16`, `float32`, `half`, `float64`, `int32`, `int64`. |
| `communicator` | A `Tensor` of type `resource`. |
| `group_assignment` | A `Tensor` of type `int32`. |
| `timeout_seconds` | An optional `float`. Defaults to `0`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.raw_ops.UnbatchDataset tf.raw\_ops.UnbatchDataset
==========================
A dataset that splits the elements of its input into multiple elements.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.UnbatchDataset`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/UnbatchDataset)
```
tf.raw_ops.UnbatchDataset(
input_dataset,
output_types,
output_shapes,
metadata='',
name=None
)
```
| Args |
| `input_dataset` | A `Tensor` of type `variant`. |
| `output_types` | A list of `tf.DTypes` that has length `>= 1`. |
| `output_shapes` | A list of shapes (each a [`tf.TensorShape`](../tensorshape) or list of `ints`) that has length `>= 1`. |
| `metadata` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `variant`. |
tensorflow tf.raw_ops.AnonymousMutableHashTable tf.raw\_ops.AnonymousMutableHashTable
=====================================
Creates an empty anonymous mutable hash table.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.AnonymousMutableHashTable`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/AnonymousMutableHashTable)
```
tf.raw_ops.AnonymousMutableHashTable(
key_dtype, value_dtype, name=None
)
```
This op creates a new anonymous mutable hash table (as a resource) every time it is executed, with the specified dtype of its keys and values, returning the resource handle. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation. The table is anonymous in the sense that it can only be accessed by the returned resource handle (e.g. it cannot be looked up by a name in a resource manager). The table will be automatically deleted when all resource handles pointing to it are gone.
| Args |
| `key_dtype` | A [`tf.DType`](../dtypes/dtype). Type of the table keys. |
| `value_dtype` | A [`tf.DType`](../dtypes/dtype). Type of the table values. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `resource`. |
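A minimal eager-mode sketch, pairing this op with the related raw insert/find ops:
```
handle = tf.raw_ops.AnonymousMutableHashTable(
    key_dtype=tf.string, value_dtype=tf.int64)
tf.raw_ops.LookupTableInsertV2(
    table_handle=handle,
    keys=tf.constant(["a", "b"]),
    values=tf.constant([1, 2], tf.int64))
tf.raw_ops.LookupTableFindV2(
    table_handle=handle,
    keys=tf.constant(["a", "c"]),
    default_value=tf.constant(-1, tf.int64))
# => [1, -1]; "c" is absent, so the default value is returned
```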
tensorflow tf.raw_ops.QuantizeAndDequantize tf.raw\_ops.QuantizeAndDequantize
=================================
Use QuantizeAndDequantizeV2 instead.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.QuantizeAndDequantize`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/QuantizeAndDequantize)
```
tf.raw_ops.QuantizeAndDequantize(
input,
signed_input=True,
num_bits=8,
range_given=False,
input_min=0,
input_max=0,
name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. |
| `signed_input` | An optional `bool`. Defaults to `True`. |
| `num_bits` | An optional `int`. Defaults to `8`. |
| `range_given` | An optional `bool`. Defaults to `False`. |
| `input_min` | An optional `float`. Defaults to `0`. |
| `input_max` | An optional `float`. Defaults to `0`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.raw_ops.CudnnRNNCanonicalToParamsV2 tf.raw\_ops.CudnnRNNCanonicalToParamsV2
=======================================
Converts CudnnRNN params from canonical form to usable form. It supports the projection in LSTM.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.CudnnRNNCanonicalToParamsV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/CudnnRNNCanonicalToParamsV2)
```
tf.raw_ops.CudnnRNNCanonicalToParamsV2(
num_layers,
num_units,
input_size,
weights,
biases,
rnn_mode='lstm',
input_mode='linear_input',
direction='unidirectional',
dropout=0,
seed=0,
seed2=0,
num_proj=0,
name=None
)
```
Writes a set of weights into the opaque params buffer so they can be used in upcoming training or inferences.
Note that the params buffer may not be compatible across different GPUs. So any save and restoration should be converted to and from the canonical weights and biases.
* `num_layers`: Specifies the number of layers in the RNN model.
* `num_units`: Specifies the size of the hidden state.
* `input_size`: Specifies the size of the input state.
* `weights`: the canonical form of weights that can be used for saving and restoration. They are more likely to be compatible across different generations.
* `biases`: the canonical form of biases that can be used for saving and restoration. They are more likely to be compatible across different generations.
* `num_params_weights`: number of weight parameter matrices for all layers.
* `num_params_biases`: number of bias parameter vectors for all layers.
* `rnn_mode`: Indicates the type of the RNN model.
* `input_mode`: Indicates whether there is a linear projection between the input and the actual computation before the first layer. 'skip_input' is only allowed when input_size == num_units; 'auto_select' implies 'skip_input' when input_size == num_units; otherwise, it implies 'linear_input'.
* `direction`: Indicates whether a bidirectional model will be used. dir = (direction == bidirectional) ? 2 : 1
* `dropout`: dropout probability. When set to 0., dropout is disabled.
* `seed`: the 1st part of a seed to initialize dropout.
* `seed2`: the 2nd part of a seed to initialize dropout.
* `num_proj`: The output dimensionality for the projection matrices. If None or 0, no projection is performed.
| Args |
| `num_layers` | A `Tensor` of type `int32`. |
| `num_units` | A `Tensor` of type `int32`. |
| `input_size` | A `Tensor` of type `int32`. |
| `weights` | A list of at least 1 `Tensor` objects with the same type in: `half`, `float32`, `float64`. |
| `biases` | A list of at least 1 `Tensor` objects with the same type as `weights`. |
| `rnn_mode` | An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. |
| `input_mode` | An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. |
| `direction` | An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. |
| `dropout` | An optional `float`. Defaults to `0`. |
| `seed` | An optional `int`. Defaults to `0`. |
| `seed2` | An optional `int`. Defaults to `0`. |
| `num_proj` | An optional `int`. Defaults to `0`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `weights`. |
tensorflow tf.raw_ops.FakeQuantWithMinMaxVars tf.raw\_ops.FakeQuantWithMinMaxVars
===================================
Fake-quantize the 'inputs' tensor of type float via global float scalars
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.FakeQuantWithMinMaxVars`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVars)
```
tf.raw_ops.FakeQuantWithMinMaxVars(
inputs, min, max, num_bits=8, narrow_range=False, name=None
)
```
Fake-quantize the `inputs` tensor of type float via global float scalars `min` and `max` to `outputs` tensor of same shape as `inputs`.
Attributes
* `[min; max]` define the clamping range for the `inputs` data.
* `inputs` values are quantized into the quantization range ( `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval.
* `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.
Before quantization, `min` and `max` values are adjusted with the following logic. It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, the behavior can be unexpected:
* If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.
* If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.
* If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1)`, `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.
This operation has a gradient and thus allows for training `min` and `max` values.
| Args |
| `inputs` | A `Tensor` of type `float32`. |
| `min` | A `Tensor` of type `float32`. |
| `max` | A `Tensor` of type `float32`. |
| `num_bits` | An optional `int`. Defaults to `8`. |
| `narrow_range` | An optional `bool`. Defaults to `False`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `float32`. |
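For example, a short sketch using the public wrapper `tf.quantization.fake_quant_with_min_max_vars`, which exposes this op:
```
x = tf.constant([-0.3, 0.0, 0.4, 1.5])
tf.quantization.fake_quant_with_min_max_vars(
    x, min=0.0, max=1.0, num_bits=8)
# inputs are clamped to [0, 1] and snapped to the nearest step of the
# 8-bit grid, so -0.3 -> 0.0 and 1.5 -> 1.0
```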
tensorflow tf.raw_ops.LookupTableSize tf.raw\_ops.LookupTableSize
===========================
Computes the number of elements in the given table.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.LookupTableSize`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/LookupTableSize)
```
tf.raw_ops.LookupTableSize(
table_handle, name=None
)
```
| Args |
| `table_handle` | A `Tensor` of type mutable `string`. Handle to the table. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `int64`. |
tensorflow tf.raw_ops.LookupTableExport tf.raw\_ops.LookupTableExport
=============================
Outputs all keys and values in the table.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.LookupTableExport`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/LookupTableExport)
```
tf.raw_ops.LookupTableExport(
table_handle, Tkeys, Tvalues, name=None
)
```
| Args |
| `table_handle` | A `Tensor` of type mutable `string`. Handle to the table. |
| `Tkeys` | A [`tf.DType`](../dtypes/dtype). |
| `Tvalues` | A [`tf.DType`](../dtypes/dtype). |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (keys, values). |
| `keys` | A `Tensor` of type `Tkeys`. |
| `values` | A `Tensor` of type `Tvalues`. |
tensorflow tf.raw_ops.Mean tf.raw\_ops.Mean
================
Computes the mean of elements across dimensions of a tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Mean`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Mean)
```
tf.raw_ops.Mean(
input, axis, keep_dims=False, name=None
)
```
Reduces `input` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. The tensor to reduce. |
| `axis` | A `Tensor`. Must be one of the following types: `int32`, `int64`. The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`. |
| `keep_dims` | An optional `bool`. Defaults to `False`. If true, retain reduced dimensions with length 1. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
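For example, in eager mode:
```
x = tf.constant([[1., 2.], [3., 4.]])
tf.raw_ops.Mean(input=x, axis=[0])                     # => [2., 3.]
tf.raw_ops.Mean(input=x, axis=[0, 1], keep_dims=True)  # => [[2.5]], rank kept
```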
tensorflow tf.raw_ops.All tf.raw\_ops.All
===============
Computes the "logical and" of elements across dimensions of a tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.All`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/All)
```
tf.raw_ops.All(
input, axis, keep_dims=False, name=None
)
```
Reduces `input` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
| Args |
| `input` | A `Tensor` of type `bool`. The tensor to reduce. |
| `axis` | A `Tensor`. Must be one of the following types: `int32`, `int64`. The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`. |
| `keep_dims` | An optional `bool`. Defaults to `False`. If true, retain reduced dimensions with length 1. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `bool`. |
tensorflow tf.raw_ops.LogSoftmax tf.raw\_ops.LogSoftmax
======================
Computes log softmax activations.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.LogSoftmax`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/LogSoftmax)
```
tf.raw_ops.LogSoftmax(
logits, name=None
)
```
For each batch `i` and class `j` we have
```
logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))
```
| Args |
| `logits` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 2-D with shape `[batch_size, num_classes]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `logits`. |
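A quick numeric sketch, checking the op against the formula above:
```
logits = tf.constant([[1., 2., 3.]])
tf.raw_ops.LogSoftmax(logits=logits)
# matches logits - tf.math.log(tf.reduce_sum(tf.exp(logits), axis=-1, keepdims=True))
# => approximately [[-2.4076, -1.4076, -0.4076]]
```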
tensorflow tf.raw_ops.IsTPUEmbeddingInitialized tf.raw\_ops.IsTPUEmbeddingInitialized
=====================================
Whether TPU Embedding is initialized in a distributed TPU system.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.IsTPUEmbeddingInitialized`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/IsTPUEmbeddingInitialized)
```
tf.raw_ops.IsTPUEmbeddingInitialized(
config='', name=None
)
```
| Args |
| `config` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `bool`. |
tensorflow tf.raw_ops.RandomPoisson tf.raw\_ops.RandomPoisson
=========================
Use RandomPoissonV2 instead.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.RandomPoisson`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/RandomPoisson)
```
tf.raw_ops.RandomPoisson(
shape, rate, seed=0, seed2=0, name=None
)
```
| Args |
| `shape` | A `Tensor`. Must be one of the following types: `int32`, `int64`. |
| `rate` | A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. |
| `seed` | An optional `int`. Defaults to `0`. |
| `seed2` | An optional `int`. Defaults to `0`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `rate`. |
tensorflow tf.raw_ops.EnqueueTPUEmbeddingRaggedTensorBatch tf.raw\_ops.EnqueueTPUEmbeddingRaggedTensorBatch
================================================
Eases the porting of code that uses tf.nn.embedding\_lookup().
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.EnqueueTPUEmbeddingRaggedTensorBatch`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingRaggedTensorBatch)
```
tf.raw_ops.EnqueueTPUEmbeddingRaggedTensorBatch(
sample_splits,
embedding_indices,
aggregation_weights,
mode_override,
table_ids,
device_ordinal=-1,
combiners=[],
max_sequence_lengths=[],
num_features=[],
name=None
)
```
sample\_splits[i], embedding\_indices[i] and aggregation\_weights[i] correspond to the ith feature. table\_ids[i] indicates which embedding table to use for looking up the ith feature.
The tensors at corresponding positions in two of the input lists, embedding\_indices and aggregation\_weights, must have the same shape, i.e. rank 1 with dim\_size() equal to the total number of lookups into the table described by the corresponding feature.
| Args |
| `sample_splits` | A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`. A list of rank 1 Tensors specifying the break points for splitting embedding\_indices and aggregation\_weights into rows. It corresponds to ids.row\_splits in embedding\_lookup(), when ids is a RaggedTensor. |
| `embedding_indices` | A list with the same length as `sample_splits` of `Tensor` objects with the same type in: `int32`, `int64`. A list of rank 1 Tensors, indices into the embedding tables. It corresponds to ids.values in embedding\_lookup(), when ids is a RaggedTensor. |
| `aggregation_weights` | A list with the same length as `sample_splits` of `Tensor` objects with the same type in: `float32`, `float64`. A list of rank 1 Tensors containing per training example aggregation weights. It corresponds to the values field of a RaggedTensor with the same row\_splits as ids in embedding\_lookup(), when ids is a RaggedTensor. |
| `mode_override` | A `Tensor` of type `string`. A string input that overrides the mode specified in the TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference', 'training', 'backward\_pass\_only'}. When set to 'unspecified', the mode set in TPUEmbeddingConfiguration is used, otherwise mode\_override is used. |
| `table_ids` | A list of `ints`. A list of integers specifying the identifier of the embedding table (offset of TableDescriptor in the TPUEmbeddingConfiguration) to look up the corresponding input. The ith input is looked up using table\_ids[i]. The size of the table\_ids list must be equal to that of sample\_splits, embedding\_indices and aggregation\_weights. |
| `device_ordinal` | An optional `int`. Defaults to `-1`. The TPU device to use. Should be >= 0 and less than the number of TPU cores in the task on which the node is placed. |
| `combiners` | An optional list of `strings`. Defaults to `[]`. A list of string scalars, one for each embedding table that specify how to normalize the embedding activations after weighted summation. Supported combiners are 'mean', 'sum', or 'sqrtn'. It is invalid to have the sum of the weights be 0 for 'mean' or the sum of the squared weights be 0 for 'sqrtn'. If combiners isn't passed, the default is to use 'sum' for all tables. |
| `max_sequence_lengths` | An optional list of `ints`. Defaults to `[]`. |
| `num_features` | An optional list of `ints`. Defaults to `[]`. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
tensorflow tf.raw_ops.StatelessRandomNormal tf.raw\_ops.StatelessRandomNormal
=================================
Outputs deterministic pseudorandom values from a normal distribution.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.StatelessRandomNormal`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/StatelessRandomNormal)
```
tf.raw_ops.StatelessRandomNormal(
shape,
seed,
dtype=tf.dtypes.float32,
name=None
)
```
The generated values will have mean 0 and standard deviation 1.
The outputs are a deterministic function of `shape` and `seed`.
| Args |
| `shape` | A `Tensor`. Must be one of the following types: `int32`, `int64`. The shape of the output tensor. |
| `seed` | A `Tensor`. Must be one of the following types: `int32`, `int64`. 2 seeds (shape [2]). |
| `dtype` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to [`tf.float32`](../../tf#float32). The type of the output. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `dtype`. |
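Determinism can be checked directly in eager mode (the seed values here are arbitrary):
```
seed = tf.constant([7, 17], tf.int32)
a = tf.raw_ops.StatelessRandomNormal(shape=[2, 3], seed=seed)
b = tf.raw_ops.StatelessRandomNormal(shape=[2, 3], seed=seed)
# a and b are element-wise identical: same shape and seed give the same values
```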
tensorflow tf.raw_ops.IsotonicRegression tf.raw\_ops.IsotonicRegression
==============================
Solves a batch of isotonic regression problems.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.IsotonicRegression`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/IsotonicRegression)
```
tf.raw_ops.IsotonicRegression(
input,
output_dtype=tf.dtypes.float32,
name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. A (batch\_size, dim)-tensor holding a batch of inputs. |
| `output_dtype` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to [`tf.float32`](../../tf#float32). Dtype of output. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (output, segments). |
| `output` | A `Tensor` of type `output_dtype`. |
| `segments` | A `Tensor` of type `int32`. |
tensorflow tf.raw_ops.ResourceAccumulatorTakeGradient tf.raw\_ops.ResourceAccumulatorTakeGradient
===========================================
Extracts the average gradient in the given ConditionalAccumulator.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ResourceAccumulatorTakeGradient`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ResourceAccumulatorTakeGradient)
```
tf.raw_ops.ResourceAccumulatorTakeGradient(
handle, num_required, dtype, name=None
)
```
The op blocks until sufficient (i.e., more than num\_required) gradients have been accumulated. If the accumulator has already aggregated more than num\_required gradients, it returns the average of the accumulated gradients. Also automatically increments the recorded global\_step in the accumulator by 1, and resets the aggregate to 0.
| Args |
| `handle` | A `Tensor` of type `resource`. The handle to an accumulator. |
| `num_required` | A `Tensor` of type `int32`. Number of gradients required before we return an aggregate. |
| `dtype` | A [`tf.DType`](../dtypes/dtype) from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. The data type of accumulated gradients. Needs to correspond to the type of the accumulator. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `dtype`. |
tensorflow tf.raw_ops.ShardedFilespec tf.raw\_ops.ShardedFilespec
===========================
Generate a glob pattern matching all sharded file names.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ShardedFilespec`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ShardedFilespec)
```
tf.raw_ops.ShardedFilespec(
basename, num_shards, name=None
)
```
| Args |
| `basename` | A `Tensor` of type `string`. |
| `num_shards` | A `Tensor` of type `int32`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.raw_ops.BoostedTreesPredict tf.raw\_ops.BoostedTreesPredict
===============================
Runs multiple additive regression ensemble predictors on input instances and
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.BoostedTreesPredict`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/BoostedTreesPredict)
```
tf.raw_ops.BoostedTreesPredict(
tree_ensemble_handle, bucketized_features, logits_dimension, name=None
)
```
computes the logits. It is designed to be used during prediction. It traverses all the trees and calculates the final score for each instance.
| Args |
| `tree_ensemble_handle` | A `Tensor` of type `resource`. |
| `bucketized_features` | A list of at least 1 `Tensor` objects with type `int32`. A list of rank 1 Tensors containing bucket id for each feature. |
| `logits_dimension` | An `int`. scalar, dimension of the logits, to be used for partial logits shape. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `float32`. |
tensorflow tf.raw_ops.RFFT2D tf.raw\_ops.RFFT2D
==================
2D real-valued fast Fourier transform.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.RFFT2D`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/RFFT2D)
```
tf.raw_ops.RFFT2D(
input,
fft_length,
Tcomplex=tf.dtypes.complex64,
name=None
)
```
Computes the 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of `input`.
Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension of `output`: the zero-frequency term, followed by the `fft_length / 2` positive-frequency terms.
Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`. A float32 tensor. |
| `fft_length` | A `Tensor` of type `int32`. An int32 tensor of shape [2]. The FFT length for each dimension. |
| `Tcomplex` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.complex64, tf.complex128`. Defaults to [`tf.complex64`](../../tf#complex64). |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `Tcomplex`. |
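A small sketch of the output shape, assuming eager mode:
```
x = tf.random.normal([4, 4])
y = tf.raw_ops.RFFT2D(input=x, fft_length=[4, 4])
print(y.shape, y.dtype)  # (4, 3) tf.complex64
# fft_length[1] // 2 + 1 = 3 unique components along the inner-most dimension
```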
tensorflow tf.raw_ops.Select tf.raw\_ops.Select
==================
Selects elements from `x` or `y`, depending on `condition`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Select`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Select)
```
tf.raw_ops.Select(
condition, x, y, name=None
)
```
The `x` and `y` tensors must all have the same shape, and the output will also have that shape.
The `condition` tensor must be a scalar if `x` and `y` are scalars. If `x` and `y` are vectors or higher rank, then `condition` must be either a scalar, a vector with size matching the first dimension of `x`, or must have the same shape as `x`.
The `condition` tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from `x` (if true) or `y` (if false).
If `condition` is a vector and `x` and `y` are higher rank matrices, then it chooses which row (outer dimension) to copy from `x` and `y`. If `condition` has the same shape as `x` and `y`, then it chooses which element to copy from `x` and `y`.
#### For example:
```
# 'condition' tensor is [[True, False]
# [False, True]]
# 't' is [[1, 2],
# [3, 4]]
# 'e' is [[5, 6],
# [7, 8]]
select(condition, t, e) # => [[1, 6], [7, 4]]
# 'condition' tensor is [True, False]
# 't' is [[1, 2],
# [3, 4]]
# 'e' is [[5, 6],
# [7, 8]]
select(condition, t, e) ==> [[1, 2],
[7, 8]]
```
| Args |
| `condition` | A `Tensor` of type `bool`. |
| `x` | A `Tensor` which may have the same shape as `condition`. If `condition` is rank 1, `x` may have higher rank, but its first dimension must match the size of `condition`. |
| `y` | A `Tensor` with the same type and shape as `x`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
tensorflow tf.raw_ops.QueueClose tf.raw\_ops.QueueClose
======================
Closes the given queue.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.QueueClose`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/QueueClose)
```
tf.raw_ops.QueueClose(
handle, cancel_pending_enqueues=False, name=None
)
```
This operation signals that no more elements will be enqueued in the given queue. Subsequent Enqueue(Many) operations will fail. Subsequent Dequeue(Many) operations will continue to succeed if sufficient elements remain in the queue. Subsequent Dequeue(Many) operations that would block will fail immediately.
| Args |
| `handle` | A `Tensor` of type mutable `string`. The handle to a queue. |
| `cancel_pending_enqueues` | An optional `bool`. Defaults to `False`. If true, all pending enqueue requests that are blocked on the given queue will be canceled. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
tensorflow tf.raw_ops.Conv3DBackpropInput tf.raw\_ops.Conv3DBackpropInput
===============================
Computes the gradients of 3-D convolution with respect to the input.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Conv3DBackpropInput`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Conv3DBackpropInput)
```
tf.raw_ops.Conv3DBackpropInput(
input,
filter,
out_backprop,
strides,
padding,
dilations=[1, 1, 1, 1, 1],
name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. Shape `[batch, depth, rows, cols, in_channels]`. |
| `filter` | A `Tensor`. Must have the same type as `input`. Shape `[depth, rows, cols, in_channels, out_channels]`. `in_channels` must match between `input` and `filter`. |
| `out_backprop` | A `Tensor`. Must have the same type as `input`. Backprop signal of shape `[batch, out_depth, out_rows, out_cols, out_channels]`. |
| `strides` | A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`. |
| `padding` | A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use. |
| `dilations` | An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.raw_ops.TensorArrayClose tf.raw\_ops.TensorArrayClose
============================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.TensorArrayClose`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/TensorArrayClose)
```
tf.raw_ops.TensorArrayClose(
handle, name=None
)
```
| Args |
| `handle` | A `Tensor` of type mutable `string`. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
tensorflow tf.raw_ops.BesselK1 tf.raw\_ops.BesselK1
====================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.BesselK1`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/BesselK1)
```
tf.raw_ops.BesselK1(
x, name=None
)
```
| Args |
| `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
tensorflow tf.raw_ops.AnonymousMemoryCache tf.raw\_ops.AnonymousMemoryCache
================================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.AnonymousMemoryCache`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/AnonymousMemoryCache)
```
tf.raw_ops.AnonymousMemoryCache(
name=None
)
```
| Args |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (handle, deleter). |
| `handle` | A `Tensor` of type `resource`. |
| `deleter` | A `Tensor` of type `variant`. |
tensorflow tf.raw_ops.AccumulatorTakeGradient tf.raw\_ops.AccumulatorTakeGradient
===================================
Extracts the average gradient in the given ConditionalAccumulator.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.AccumulatorTakeGradient`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/AccumulatorTakeGradient)
```
tf.raw_ops.AccumulatorTakeGradient(
handle, num_required, dtype, name=None
)
```
The op blocks until sufficient (i.e., more than num\_required) gradients have been accumulated. If the accumulator has already aggregated more than num\_required gradients, it returns the average of the accumulated gradients. Also automatically increments the recorded global\_step in the accumulator by 1, and resets the aggregate to 0.
| Args |
| `handle` | A `Tensor` of type mutable `string`. The handle to an accumulator. |
| `num_required` | A `Tensor` of type `int32`. Number of gradients required before we return an aggregate. |
| `dtype` | A [`tf.DType`](../dtypes/dtype) from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. The data type of accumulated gradients. Needs to correspond to the type of the accumulator. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `dtype`. |
tensorflow tf.raw_ops.ResourceSparseApplyAdagradDA tf.raw\_ops.ResourceSparseApplyAdagradDA
========================================
Update entries in '*var' and '*accum' according to the proximal adagrad scheme.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ResourceSparseApplyAdagradDA`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagradDA)
```
tf.raw_ops.ResourceSparseApplyAdagradDA(
var,
gradient_accumulator,
gradient_squared_accumulator,
grad,
indices,
lr,
l1,
l2,
global_step,
use_locking=False,
name=None
)
```
| Args |
| `var` | A `Tensor` of type `resource`. Should be from a Variable(). |
| `gradient_accumulator` | A `Tensor` of type `resource`. Should be from a Variable(). |
| `gradient_squared_accumulator` | A `Tensor` of type `resource`. Should be from a Variable(). |
| `grad` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. The gradient. |
| `indices` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A vector of indices into the first dimension of var and accum. |
| `lr` | A `Tensor`. Must have the same type as `grad`. Learning rate. Must be a scalar. |
| `l1` | A `Tensor`. Must have the same type as `grad`. L1 regularization. Must be a scalar. |
| `l2` | A `Tensor`. Must have the same type as `grad`. L2 regularization. Must be a scalar. |
| `global_step` | A `Tensor` of type `int64`. Training step number. Must be a scalar. |
| `use_locking` | An optional `bool`. Defaults to `False`. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
tensorflow tf.raw_ops.AssignAdd tf.raw\_ops.AssignAdd
=====================
Update 'ref' by adding 'value' to it.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.AssignAdd`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/AssignAdd)
```
tf.raw_ops.AssignAdd(
ref, value, use_locking=False, name=None
)
```
This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value.
| Args |
| `ref` | A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. Should be from a `Variable` node. |
| `value` | A `Tensor`. Must have the same type as `ref`. The value to be added to the variable. |
| `use_locking` | An optional `bool`. Defaults to `False`. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |
| Returns |
| A mutable `Tensor`. Has the same type as `ref`. |
tensorflow tf.raw_ops.ConjugateTranspose tf.raw\_ops.ConjugateTranspose
==============================
Shuffle dimensions of x according to a permutation and conjugate the result.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ConjugateTranspose`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ConjugateTranspose)
```
tf.raw_ops.ConjugateTranspose(
x, perm, name=None
)
```
The output `y` has the same rank as `x`. The shapes of `x` and `y` satisfy: `y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]` `y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])`
| Args |
| `x` | A `Tensor`. |
| `perm` | A `Tensor`. Must be one of the following types: `int32`, `int64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
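For example, in eager mode:
```
x = tf.constant([[1 + 1j, 2 + 2j],
                 [3 + 3j, 4 + 4j]])
tf.raw_ops.ConjugateTranspose(x=x, perm=[1, 0])
# => [[1-1j, 3-3j],
#     [2-2j, 4-4j]]   (transpose plus complex conjugation)
```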
tensorflow tf.raw_ops.TensorArrayPack tf.raw\_ops.TensorArrayPack
===========================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.TensorArrayPack`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/TensorArrayPack)
```
tf.raw_ops.TensorArrayPack(
handle, flow_in, dtype, element_shape=None, name=None
)
```
| Args |
| `handle` | A `Tensor` of type mutable `string`. |
| `flow_in` | A `Tensor` of type `float32`. |
| `dtype` | A [`tf.DType`](../dtypes/dtype). |
| `element_shape` | An optional [`tf.TensorShape`](../tensorshape) or list of `ints`. Defaults to `None`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `dtype`. |
tensorflow tf.raw_ops.InplaceUpdate tf.raw\_ops.InplaceUpdate
=========================
Updates specified rows 'i' with values 'v'.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.InplaceUpdate`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/InplaceUpdate)
```
tf.raw_ops.InplaceUpdate(
x, i, v, name=None
)
```
Computes `x[i, :] = v; return x`.
Originally this function was mutative; however, for compilation we make this operation create and operate on a copy of `x`.
| Args |
| `x` | A `Tensor`. A tensor of type `T`. |
| `i` | A `Tensor` of type `int32`. A vector. Indices into the left-most dimension of `x`. |
| `v` | A `Tensor`. Must have the same type as `x`. A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
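For example, updating a single row in eager mode:
```
x = tf.zeros([3, 2])
v = tf.constant([[1., 2.]])
tf.raw_ops.InplaceUpdate(x=x, i=[1], v=v)
# => [[0., 0.],
#     [1., 2.],
#     [0., 0.]]   (returned as a copy of x with row 1 replaced)
```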
tensorflow tf.raw_ops.SparseFillEmptyRowsGrad tf.raw\_ops.SparseFillEmptyRowsGrad
===================================
The gradient of SparseFillEmptyRows.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.SparseFillEmptyRowsGrad`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/SparseFillEmptyRowsGrad)
```
tf.raw_ops.SparseFillEmptyRowsGrad(
reverse_index_map, grad_values, name=None
)
```
Takes vectors reverse\_index\_map, shaped `[N]`, and grad\_values, shaped `[N_full]`, where `N_full >= N` and copies data into either `d_values` or `d_default_value`. Here `d_values` is shaped `[N]` and `d_default_value` is a scalar.
```
d_values[j] = grad_values[reverse_index_map[j]]
d_default_value = sum_{k : 0 .. N_full - 1} (grad_values[k] * 1{k not in reverse_index_map})
```
| Args |
| `reverse_index_map` | A `Tensor` of type `int64`. 1-D. The reverse index map from SparseFillEmptyRows. |
| `grad_values` | A `Tensor`. 1-D. The gradients from backprop. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (d\_values, d\_default\_value). |
| `d_values` | A `Tensor`. Has the same type as `grad_values`. |
| `d_default_value` | A `Tensor`. Has the same type as `grad_values`. |
tensorflow tf.raw_ops.SerializeManySparse tf.raw\_ops.SerializeManySparse
===============================
Serialize an `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor` object.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.SerializeManySparse`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/SerializeManySparse)
```
tf.raw_ops.SerializeManySparse(
sparse_indices,
sparse_values,
sparse_shape,
out_type=tf.dtypes.string,
name=None
)
```
The `SparseTensor` must have rank `R` greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the `SparseTensor` must be sorted in increasing order of this first dimension. The serialized `SparseTensor` objects going into each row of `serialized_sparse` will have rank `R-1`.
The minibatch size `N` is extracted from `sparse_shape[0]`.
| Args |
| `sparse_indices` | A `Tensor` of type `int64`. 2-D. The `indices` of the minibatch `SparseTensor`. |
| `sparse_values` | A `Tensor`. 1-D. The `values` of the minibatch `SparseTensor`. |
| `sparse_shape` | A `Tensor` of type `int64`. 1-D. The `shape` of the minibatch `SparseTensor`. |
| `out_type` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.string, tf.variant`. Defaults to [`tf.string`](../../tf#string). The `dtype` to use for serialization; the supported types are `string` (default) and `variant`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `out_type`. |
tensorflow tf.raw_ops.SaveV2 tf.raw\_ops.SaveV2
==================
Saves tensors in V2 checkpoint format.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.SaveV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/SaveV2)
```
tf.raw_ops.SaveV2(
prefix, tensor_names, shape_and_slices, tensors, name=None
)
```
By default, saves the named tensors in full. If the caller wishes to save specific slices of full tensors, "shape\_and\_slices" should be non-empty strings and correspondingly well-formed.
| Args |
| `prefix` | A `Tensor` of type `string`. Must have a single element. The prefix of the V2 checkpoint to which we write the tensors. |
| `tensor_names` | A `Tensor` of type `string`. shape {N}. The names of the tensors to be saved. |
| `shape_and_slices` | A `Tensor` of type `string`. shape {N}. The slice specs of the tensors to be saved. Empty strings indicate that they are non-partitioned tensors. |
| `tensors` | A list of `Tensor` objects. `N` tensors to save. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
tensorflow tf.raw_ops.FresnelSin tf.raw\_ops.FresnelSin
======================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.FresnelSin`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/FresnelSin)
```
tf.raw_ops.FresnelSin(
x, name=None
)
```
| Args |
| `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
tensorflow tf.raw_ops.UncompressElement tf.raw\_ops.UncompressElement
=============================
Uncompresses a compressed dataset element.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.UncompressElement`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/UncompressElement)
```
tf.raw_ops.UncompressElement(
compressed, output_types, output_shapes, name=None
)
```
| Args |
| `compressed` | A `Tensor` of type `variant`. |
| `output_types` | A list of `tf.DTypes` that has length `>= 1`. |
| `output_shapes` | A list of shapes (each a [`tf.TensorShape`](../tensorshape) or list of `ints`) that has length `>= 1`. |
| `name` | A name for the operation (optional). |
| Returns |
| A list of `Tensor` objects of type `output_types`. |
tensorflow tf.raw_ops.Case tf.raw\_ops.Case
================
An n-way switch statement which calls a single branch function.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Case`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Case)
```
tf.raw_ops.Case(
branch_index, input, Tout, branches, output_shapes=[], name=None
)
```
An n-way switch statement, implementing the following:
```
switch (branch_index) {
  case 0:
    output = branches[0](input);
    break;
  case 1:
    output = branches[1](input);
    break;
  ...
  case [[nbranches-1]]:
  default:
    output = branches[nbranches-1](input);
    break;
}
```
| Args |
| `branch_index` | A `Tensor` of type `int32`. The branch selector, an int32 Tensor. |
| `input` | A list of `Tensor` objects. A list of input tensors passed to the branch function. |
| `Tout` | A list of `tf.DTypes`. A list of output types. |
| `branches` | A list of functions decorated with @Defun that has length `>= 1`. A list of functions each of which takes 'inputs' and returns a list of tensors, whose types are the same as what every other branch returns. |
| `output_shapes` | An optional list of shapes (each a [`tf.TensorShape`](../tensorshape) or list of `ints`). Defaults to `[]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A list of `Tensor` objects of type `Tout`. |
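The public wrapper `tf.switch_case` provides this branching behavior; a minimal sketch:
```
def f0(): return tf.constant(10)
def f1(): return tf.constant(20)
tf.switch_case(tf.constant(1), branch_fns=[f0, f1])  # => 20
```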
tensorflow tf.raw_ops.ImageProjectiveTransformV3 tf.raw\_ops.ImageProjectiveTransformV3
======================================
Applies the given transform to each of the images.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ImageProjectiveTransformV3`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ImageProjectiveTransformV3)
```
tf.raw_ops.ImageProjectiveTransformV3(
images,
transforms,
output_shape,
fill_value,
interpolation,
fill_mode='CONSTANT',
name=None
)
```
If one row of `transforms` is `[a0, a1, a2, b0, b1, b2, c0, c1]`, then it maps the *output* point `(x, y)` to a transformed *input* point `(x', y') = ((a0 x + a1 y + a2) / k, (b0 x + b1 y + b2) / k)`, where `k = c0 x + c1 y + 1`. If the transformed point lies outside of the input image, the output pixel is set to fill\_value.
| Args |
| `images` | A `Tensor`. Must be one of the following types: `uint8`, `int32`, `int64`, `half`, `float32`, `float64`. 4-D with shape `[batch, height, width, channels]`. |
| `transforms` | A `Tensor` of type `float32`. 2-D Tensor, `[batch, 8]` or `[1, 8]` matrix, where each row corresponds to a 3 x 3 projective transformation matrix, with the last entry assumed to be 1. If there is one row, the same transformation will be applied to all images. |
| `output_shape` | A `Tensor` of type `int32`. 1-D Tensor [new\_height, new\_width]. |
| `fill_value` | A `Tensor` of type `float32`. float, the value to be filled when fill\_mode is "CONSTANT". |
| `interpolation` | A `string`. Interpolation method, "NEAREST" or "BILINEAR". |
| `fill_mode` | An optional `string`. Defaults to `"CONSTANT"`. Fill mode, "REFLECT", "WRAP", "CONSTANT", or "NEAREST". |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `images`. |
tensorflow tf.raw_ops.Unpack tf.raw\_ops.Unpack
==================
Unpacks a given dimension of a rank-`R` tensor into `num` rank-`(R-1)` tensors.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Unpack`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Unpack)
```
tf.raw_ops.Unpack(
value, num, axis=0, name=None
)
```
Unpacks `num` tensors from `value` by chipping it along the `axis` dimension. For example, given a tensor of shape `(A, B, C, D)`:
If `axis == 0` then the i'th tensor in `output` is the slice `value[i, :, :, :]` and each tensor in `output` will have shape `(B, C, D)`. (Note that the dimension unpacked along is gone, unlike `split`).
If `axis == 1` then the i'th tensor in `output` is the slice `value[:, i, :, :]` and each tensor in `output` will have shape `(A, C, D)`. Etc.
This is the opposite of `pack`.
| Args |
| `value` | A `Tensor`. 1-D or higher, with `axis` dimension size equal to `num`. |
| `num` | An `int` that is `>= 0`. |
| `axis` | An optional `int`. Defaults to `0`. Dimension along which to unpack. Negative values wrap around, so the valid range is `[-R, R)`. |
| `name` | A name for the operation (optional). |
| Returns |
| A list of `num` `Tensor` objects with the same type as `value`. |
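A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: unpacking a (2, 3) tensor along axis 0 yields two
# tensors of shape (3,); the unpacked dimension is gone.
value = tf.constant([[1, 2, 3], [4, 5, 6]])
parts = tf.raw_ops.Unpack(value=value, num=2, axis=0)
# parts[0] == [1, 2, 3], parts[1] == [4, 5, 6]
```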
tensorflow tf.raw_ops.ReaderNumWorkUnitsCompletedV2 tf.raw\_ops.ReaderNumWorkUnitsCompletedV2
=========================================
Returns the number of work units this Reader has finished processing.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ReaderNumWorkUnitsCompletedV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ReaderNumWorkUnitsCompletedV2)
```
tf.raw_ops.ReaderNumWorkUnitsCompletedV2(
reader_handle, name=None
)
```
| Args |
| `reader_handle` | A `Tensor` of type `resource`. Handle to a Reader. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `int64`. |
tensorflow tf.raw_ops.ConfigureTPUEmbedding tf.raw\_ops.ConfigureTPUEmbedding
=================================
Sets up TPUEmbedding in a distributed TPU system.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ConfigureTPUEmbedding`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ConfigureTPUEmbedding)
```
tf.raw_ops.ConfigureTPUEmbedding(
config, name=None
)
```
| Args |
| `config` | A `string`. Serialized tensorflow.tpu.TPUEmbeddingConfiguration that describes the embedding lookups of the program. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
tensorflow tf.raw_ops.Size tf.raw\_ops.Size
================
Returns the size of a tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Size`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Size)
```
tf.raw_ops.Size(
input,
out_type=tf.dtypes.int32,
name=None
)
```
This operation returns an integer representing the number of elements in `input`.
#### For example:
```
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12
```
| Args |
| `input` | A `Tensor`. |
| `out_type` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.int32, tf.int64`. Defaults to [`tf.int32`](../../tf#int32). |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `out_type`. |
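A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: a 2x2x3 tensor has 12 elements.
t = tf.ones([2, 2, 3])
print(tf.raw_ops.Size(input=t).numpy())  # 12
```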
tensorflow tf.raw_ops.ResourceScatterMin tf.raw\_ops.ResourceScatterMin
==============================
Reduces sparse updates into the variable referenced by `resource` using the `min` operation.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ResourceScatterMin`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ResourceScatterMin)
```
tf.raw_ops.ResourceScatterMin(
resource, indices, updates, name=None
)
```
This operation computes
```
# Scalar indices
ref[indices, ...] = min(ref[indices, ...], updates[...])
# Vector indices (for each i)
ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...])
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
```
Duplicate entries are handled correctly: if multiple `indices` reference the same location, their contributions are combined.
Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.
| Args |
| `resource` | A `Tensor` of type `resource`. Should be from a `Variable` node. |
| `indices` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor of indices into the first dimension of `ref`. |
| `updates` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. A tensor of values to combine element-wise with `ref` using the `min` operation. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
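A minimal sketch, assuming eager execution:
```
import tensorflow as tf

# A minimal sketch: take element-wise minima on rows 0 and 2 of a
# resource variable; updates.shape = indices.shape + ref.shape[1:].
v = tf.Variable([[5.0, 5.0], [5.0, 5.0], [5.0, 5.0]])
tf.raw_ops.ResourceScatterMin(
    resource=v.handle,
    indices=tf.constant([0, 2]),
    updates=tf.constant([[1.0, 9.0], [9.0, 1.0]]))
print(v.numpy())  # [[1., 5.], [5., 5.], [5., 1.]]
```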
tensorflow tf.raw_ops.IdentityReaderV2 tf.raw\_ops.IdentityReaderV2
============================
A Reader that outputs the queued work as both the key and value.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.IdentityReaderV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/IdentityReaderV2)
```
tf.raw_ops.IdentityReaderV2(
container='', shared_name='', name=None
)
```
To use, enqueue strings in a Queue. ReaderRead will take the front work string and output (work, work).
| Args |
| `container` | An optional `string`. Defaults to `""`. If non-empty, this reader is placed in the given container. Otherwise, a default container is used. |
| `shared_name` | An optional `string`. Defaults to `""`. If non-empty, this reader is named in the given bucket with this shared\_name. Otherwise, the node name is used instead. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `resource`. |
tensorflow tf.raw_ops.ResourceSparseApplyRMSProp tf.raw\_ops.ResourceSparseApplyRMSProp
======================================
Update '\*var' according to the RMSProp algorithm.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ResourceSparseApplyRMSProp`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ResourceSparseApplyRMSProp)
```
tf.raw_ops.ResourceSparseApplyRMSProp(
var,
ms,
mom,
lr,
rho,
momentum,
epsilon,
grad,
indices,
use_locking=False,
name=None
)
```
Note that in the dense implementation of this algorithm, `ms` and `mom` will update even if the `grad` is zero, but in this sparse implementation, `ms` and `mom` will not update in iterations during which the `grad` is zero.
```
mean_square = decay * mean_square + (1 - decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
```
| Args |
| `var` | A `Tensor` of type `resource`. Should be from a Variable(). |
| `ms` | A `Tensor` of type `resource`. Should be from a Variable(). |
| `mom` | A `Tensor` of type `resource`. Should be from a Variable(). |
| `lr` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. Scaling factor. Must be a scalar. |
| `rho` | A `Tensor`. Must have the same type as `lr`. Decay rate. Must be a scalar. |
| `momentum` | A `Tensor`. Must have the same type as `lr`. |
| `epsilon` | A `Tensor`. Must have the same type as `lr`. Ridge term. Must be a scalar. |
| `grad` | A `Tensor`. Must have the same type as `lr`. The gradient. |
| `indices` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A vector of indices into the first dimension of var, ms and mom. |
| `use_locking` | An optional `bool`. Defaults to `False`. If `True`, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
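A minimal sketch, assuming eager execution and arbitrary illustrative hyperparameters:
```
import tensorflow as tf

# A minimal sketch: one sparse RMSProp step on rows 0 and 2 of a variable,
# with zero-initialized ms and mom slots.
var = tf.Variable(tf.ones([3, 2]))
ms = tf.Variable(tf.zeros([3, 2]))
mom = tf.Variable(tf.zeros([3, 2]))
tf.raw_ops.ResourceSparseApplyRMSProp(
    var=var.handle, ms=ms.handle, mom=mom.handle,
    lr=tf.constant(0.1), rho=tf.constant(0.9),
    momentum=tf.constant(0.0), epsilon=tf.constant(1e-7),
    grad=tf.constant([[0.5, 0.5], [0.5, 0.5]]),
    indices=tf.constant([0, 2]))
# Rows 0 and 2 of var, ms, and mom are updated; row 1 is untouched.
```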
tensorflow tf.raw_ops.CudnnRNNBackpropV2 tf.raw\_ops.CudnnRNNBackpropV2
==============================
Backprop step of CudnnRNN.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.CudnnRNNBackpropV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/CudnnRNNBackpropV2)
```
tf.raw_ops.CudnnRNNBackpropV2(
input,
input_h,
input_c,
params,
output,
output_h,
output_c,
output_backprop,
output_h_backprop,
output_c_backprop,
reserve_space,
host_reserved,
rnn_mode='lstm',
input_mode='linear_input',
direction='unidirectional',
dropout=0,
seed=0,
seed2=0,
name=None
)
```
Compute the backprop of both data and weights in an RNN. Takes an extra "host\_reserved" input compared to CudnnRNNBackprop, which is used to determine RNN cudnnRNNAlgo\_t and cudnnMathType\_t.
rnn\_mode: Indicates the type of the RNN model.
input\_mode: Indicates whether there is a linear projection between the input and the actual computation before the first layer. 'skip\_input' is only allowed when input\_size == num\_units; 'auto\_select' implies 'skip\_input' when input\_size == num\_units; otherwise, it implies 'linear\_input'.
direction: Indicates whether a bidirectional model will be used. Should be "unidirectional" or "bidirectional".
dropout: Dropout probability. When set to 0., dropout is disabled.
seed: The 1st part of a seed to initialize dropout.
seed2: The 2nd part of a seed to initialize dropout.
input: A 3-D tensor with the shape of [seq\_length, batch\_size, input\_size].
input\_h: A 3-D tensor with the shape of [num\_layer \* dir, batch\_size, num\_units].
input\_c: For LSTM, a 3-D tensor with the shape of [num\_layer \* dir, batch, num\_units]. For other models, it is ignored.
params: A 1-D tensor that contains the weights and biases in an opaque layout. The size must be created through CudnnRNNParamsSize, and initialized separately. Note that they might not be compatible across different generations, so it is a good idea to save and restore them.
output: A 3-D tensor with the shape of [seq\_length, batch\_size, dir \* num\_units].
output\_h: The same shape as input\_h.
output\_c: The same shape as input\_c for LSTM. An empty tensor for other models.
output\_backprop: A 3-D tensor with the same shape as output in the forward pass.
output\_h\_backprop: A 3-D tensor with the same shape as output\_h in the forward pass.
output\_c\_backprop: A 3-D tensor with the same shape as output\_c in the forward pass.
reserve\_space: The same reserve\_space produced in the forward operation.
host\_reserved: The same host\_reserved produced in the forward operation.
input\_backprop: The backprop to input in the forward pass. Has the same shape as input.
input\_h\_backprop: The backprop to input\_h in the forward pass. Has the same shape as input\_h.
input\_c\_backprop: The backprop to input\_c in the forward pass. Has the same shape as input\_c.
params\_backprop: The backprop to the params buffer in the forward pass. Has the same shape as params.
| Args |
| `input` | A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. |
| `input_h` | A `Tensor`. Must have the same type as `input`. |
| `input_c` | A `Tensor`. Must have the same type as `input`. |
| `params` | A `Tensor`. Must have the same type as `input`. |
| `output` | A `Tensor`. Must have the same type as `input`. |
| `output_h` | A `Tensor`. Must have the same type as `input`. |
| `output_c` | A `Tensor`. Must have the same type as `input`. |
| `output_backprop` | A `Tensor`. Must have the same type as `input`. |
| `output_h_backprop` | A `Tensor`. Must have the same type as `input`. |
| `output_c_backprop` | A `Tensor`. Must have the same type as `input`. |
| `reserve_space` | A `Tensor`. Must have the same type as `input`. |
| `host_reserved` | A `Tensor` of type `int8`. |
| `rnn_mode` | An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. |
| `input_mode` | An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. |
| `direction` | An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. |
| `dropout` | An optional `float`. Defaults to `0`. |
| `seed` | An optional `int`. Defaults to `0`. |
| `seed2` | An optional `int`. Defaults to `0`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (input\_backprop, input\_h\_backprop, input\_c\_backprop, params\_backprop). |
| `input_backprop` | A `Tensor`. Has the same type as `input`. |
| `input_h_backprop` | A `Tensor`. Has the same type as `input`. |
| `input_c_backprop` | A `Tensor`. Has the same type as `input`. |
| `params_backprop` | A `Tensor`. Has the same type as `input`. |
tensorflow tf.raw_ops.Betainc tf.raw\_ops.Betainc
===================
Compute the regularized incomplete beta integral \(I\_x(a, b)\).
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Betainc`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Betainc)
```
tf.raw_ops.Betainc(
a, b, x, name=None
)
```
The regularized incomplete beta integral is defined as:
\(I\_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\)
where
\(B(x; a, b) = \int\_0^x t^{a-1} (1 - t)^{b-1} dt\)
is the incomplete beta function and \(B(a, b)\) is the *complete* beta function.
| Args |
| `a` | A `Tensor`. Must be one of the following types: `float32`, `float64`. |
| `b` | A `Tensor`. Must have the same type as `a`. |
| `x` | A `Tensor`. Must have the same type as `a`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `a`. |
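A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: by symmetry of the Beta(2, 2) distribution,
# I_{0.5}(2, 2) = 0.5.
print(tf.raw_ops.Betainc(
    a=tf.constant(2.0), b=tf.constant(2.0), x=tf.constant(0.5)).numpy())
# 0.5
```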
tensorflow tf.raw_ops.TensorArrayConcatV3 tf.raw\_ops.TensorArrayConcatV3
===============================
Concatenates the elements from the TensorArray into a single tensor `value`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.TensorArrayConcatV3`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/TensorArrayConcatV3)
```
tf.raw_ops.TensorArrayConcatV3(
handle, flow_in, dtype, element_shape_except0=None, name=None
)
```
Takes `T` elements of shapes
```
(n0 x d0 x d1 x ...), (n1 x d0 x d1 x ...), ..., (n(T-1) x d0 x d1 x ...)
```
and concatenates them into a Tensor of shape:
```
(n0 + n1 + ... + n(T-1)) x d0 x d1 x ...
```
All elements must have the same shape, except for the first dimension.
| Args |
| `handle` | A `Tensor` of type `resource`. The handle to a TensorArray. |
| `flow_in` | A `Tensor` of type `float32`. A float scalar that enforces proper chaining of operations. |
| `dtype` | A `tf.DType`. The type of the elem that is returned. |
| `element_shape_except0` | An optional `tf.TensorShape` or list of `ints`. Defaults to `None`. The expected shape of an element, if known, excluding the first dimension. Used to validate the shapes of TensorArray elements. If this shape is not fully specified, concatenating zero-size TensorArrays is an error. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (value, lengths). |
| `value` | A `Tensor` of type `dtype`. |
| `lengths` | A `Tensor` of type `int64`. |
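A minimal sketch via the public `tf.TensorArray` wrapper (which is backed by the V3 TensorArray ops in graph mode):
```
import tensorflow as tf

# A minimal sketch: concat joins TensorArray elements along dimension 0.
ta = tf.TensorArray(tf.float32, size=2, element_shape=[2])
ta = ta.write(0, [1.0, 2.0])
ta = ta.write(1, [3.0, 4.0])
print(ta.concat().numpy())  # [1. 2. 3. 4.]
```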
tensorflow tf.raw_ops.InplaceSub tf.raw\_ops.InplaceSub
======================
Subtracts `v` into specified rows of `x`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.InplaceSub`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/InplaceSub)
```
tf.raw_ops.InplaceSub(
x, i, v, name=None
)
```
Computes y = x; y[i, :] -= v; return y.
| Args |
| `x` | A `Tensor`. A `Tensor` of type T. |
| `i` | A `Tensor` of type `int32`. A vector. Indices into the left-most dimension of `x`. |
| `v` | A `Tensor`. Must have the same type as `x`. A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
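A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: subtract v from rows 0 and 2 of x.
x = tf.constant([[5, 5], [5, 5], [5, 5]])
i = tf.constant([0, 2], dtype=tf.int32)
v = tf.constant([[1, 2], [3, 4]])
print(tf.raw_ops.InplaceSub(x=x, i=i, v=v).numpy())
# [[4 3]
#  [5 5]
#  [2 1]]
```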
tensorflow tf.raw_ops.TensorArrayV2 tf.raw\_ops.TensorArrayV2
=========================
Deprecated. Use TensorArrayV3 instead.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.TensorArrayV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/TensorArrayV2)
```
tf.raw_ops.TensorArrayV2(
size,
dtype,
element_shape=None,
dynamic_size=False,
clear_after_read=True,
tensor_array_name='',
name=None
)
```
| Args |
| `size` | A `Tensor` of type `int32`. |
| `dtype` | A [`tf.DType`](../dtypes/dtype). |
| `element_shape` | An optional [`tf.TensorShape`](../tensorshape) or list of `ints`. Defaults to `None`. |
| `dynamic_size` | An optional `bool`. Defaults to `False`. |
| `clear_after_read` | An optional `bool`. Defaults to `True`. |
| `tensor_array_name` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.raw_ops.SparseMatrixZeros tf.raw\_ops.SparseMatrixZeros
=============================
Creates an all-zeros CSRSparseMatrix with shape `dense_shape`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.SparseMatrixZeros`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/SparseMatrixZeros)
```
tf.raw_ops.SparseMatrixZeros(
dense_shape, type, name=None
)
```
| Args |
| `dense_shape` | A `Tensor` of type `int64`. The desired matrix shape. |
| `type` | A [`tf.DType`](../dtypes/dtype) from: `tf.float32, tf.float64, tf.complex64, tf.complex128`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `variant`. |
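A minimal sketch; the variant handle can be materialized back to dense form with the companion CSR ops:
```
import tensorflow as tf

# A minimal sketch: build an all-zeros 3x4 CSR matrix (a variant tensor)
# and convert it back to a dense tensor.
sm = tf.raw_ops.SparseMatrixZeros(
    dense_shape=tf.constant([3, 4], dtype=tf.int64), type=tf.float32)
dense = tf.raw_ops.CSRSparseMatrixToDense(sparse_input=sm, type=tf.float32)
print(dense.numpy())  # a 3x4 array of zeros
```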
tensorflow tf.raw_ops.ResizeBicubic tf.raw\_ops.ResizeBicubic
=========================
Resize `images` to `size` using bicubic interpolation.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ResizeBicubic`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ResizeBicubic)
```
tf.raw_ops.ResizeBicubic(
images, size, align_corners=False, half_pixel_centers=False, name=None
)
```
Input images can be of different types but output images are always float.
| Args |
| `images` | A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`, `bfloat16`. 4-D with shape `[batch, height, width, channels]`. |
| `size` | A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images. |
| `align_corners` | An optional `bool`. Defaults to `False`. If true, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. |
| `half_pixel_centers` | An optional `bool`. Defaults to `False`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `float32`. |
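A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: upscale a 1x2x2x1 image to 4x4; the output is float32
# regardless of the input dtype.
img = tf.constant([[[[1.0], [2.0]], [[3.0], [4.0]]]])
out = tf.raw_ops.ResizeBicubic(images=img, size=tf.constant([4, 4], tf.int32))
print(out.shape)  # (1, 4, 4, 1)
```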
tensorflow tf.raw_ops.SdcaOptimizerV2 tf.raw\_ops.SdcaOptimizerV2
===========================
Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.SdcaOptimizerV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/SdcaOptimizerV2)
```
tf.raw_ops.SdcaOptimizerV2(
sparse_example_indices,
sparse_feature_indices,
sparse_feature_values,
dense_features,
example_weights,
example_labels,
sparse_indices,
sparse_weights,
dense_weights,
example_state_data,
loss_type,
l1,
l2,
num_loss_partitions,
num_inner_iterations,
adaptive=True,
name=None
)
```
The global optimization objective is strongly-convex, so the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly; the optimizer is learning-rate free and enjoys a linear convergence rate.
[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf).
Shai Shalev-Shwartz, Tong Zhang. 2012
\(\text{Loss Objective} = \sum\_{i} f\_{i}(w x\_{i}) + \frac{l2}{2} |w|^2 + l1 |w|\)
[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508).
Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015
[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053).
Dominik Csiba, Zheng Qu, Peter Richtarik. 2015
| Args |
| `sparse_example_indices` | A list of `Tensor` objects with type `int64`. a list of vectors which contain example indices. |
| `sparse_feature_indices` | A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors which contain feature indices. |
| `sparse_feature_values` | A list of `Tensor` objects with type `float32`. a list of vectors which contains feature value associated with each feature group. |
| `dense_features` | A list of `Tensor` objects with type `float32`. a list of matrices which contains the dense feature values. |
| `example_weights` | A `Tensor` of type `float32`. a vector which contains the weight associated with each example. |
| `example_labels` | A `Tensor` of type `float32`. a vector which contains the label/target associated with each example. |
| `sparse_indices` | A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. a list of vectors where each value is the index which has a corresponding weight in sparse\_weights. This field may be omitted for the dense approach. |
| `sparse_weights` | A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. a list of vectors where each value is the weight associated with a sparse feature group. |
| `dense_weights` | A list with the same length as `dense_features` of `Tensor` objects with type `float32`. a list of vectors where the values are the weights associated with a dense feature group. |
| `example_state_data` | A `Tensor` of type `float32`. a list of vectors containing the example state data. |
| `loss_type` | A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses. |
| `l1` | A `float`. Symmetric l1 regularization strength. |
| `l2` | A `float`. Symmetric l2 regularization strength. |
| `num_loss_partitions` | An `int` that is `>= 1`. Number of partitions of the global loss function. |
| `num_inner_iterations` | An `int` that is `>= 1`. Number of iterations per mini-batch. |
| `adaptive` | An optional `bool`. Defaults to `True`. Whether to use Adaptive SDCA for the inner loop. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (out\_example\_state\_data, out\_delta\_sparse\_weights, out\_delta\_dense\_weights). |
| `out_example_state_data` | A `Tensor` of type `float32`. |
| `out_delta_sparse_weights` | A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. |
| `out_delta_dense_weights` | A list with the same length as `dense_features` of `Tensor` objects with type `float32`. |
tensorflow tf.raw_ops.InterleaveDataset tf.raw\_ops.InterleaveDataset
=============================
Creates a dataset that applies `f` to the outputs of `input_dataset`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.InterleaveDataset`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/InterleaveDataset)
```
tf.raw_ops.InterleaveDataset(
input_dataset,
other_arguments,
cycle_length,
block_length,
f,
output_types,
output_shapes,
metadata='',
name=None
)
```
Unlike MapDataset, the `f` in InterleaveDataset is expected to return a Dataset variant, and InterleaveDataset will flatten successive results into a single Dataset. Unlike FlatMapDataset, InterleaveDataset will interleave sequences of up to `block_length` consecutive elements from `cycle_length` input elements.
| Args |
| `input_dataset` | A `Tensor` of type `variant`. |
| `other_arguments` | A list of `Tensor` objects. |
| `cycle_length` | A `Tensor` of type `int64`. |
| `block_length` | A `Tensor` of type `int64`. |
| `f` | A function decorated with @Defun. A function mapping elements of `input_dataset`, concatenated with `other_arguments`, to a Dataset variant that contains elements matching `output_types` and `output_shapes`. |
| `output_types` | A list of `tf.DTypes` that has length `>= 1`. |
| `output_shapes` | A list of shapes (each a [`tf.TensorShape`](../tensorshape) or list of `ints`) that has length `>= 1`. |
| `metadata` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `variant`. |
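A minimal sketch via the public `Dataset.interleave` wrapper, which this op backs:
```
import tensorflow as tf

# A minimal sketch: each element maps to a 2-element dataset, and single
# elements (block_length=1) are drawn from 2 cycling datasets at a time.
ds = tf.data.Dataset.range(3).interleave(
    lambda x: tf.data.Dataset.from_tensors(x).repeat(2),
    cycle_length=2, block_length=1)
print(list(ds.as_numpy_iterator()))  # [0, 1, 0, 1, 2, 2]
```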
tensorflow tf.raw_ops.ResourceApplyPowerSign tf.raw\_ops.ResourceApplyPowerSign
==================================
Update '\*var' according to the PowerSign update.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ResourceApplyPowerSign`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ResourceApplyPowerSign)
```
tf.raw_ops.ResourceApplyPowerSign(
var, m, lr, logbase, sign_decay, beta, grad, use_locking=False, name=None
)
```
```
m_t <- beta1 * m_{t-1} + (1 - beta1) * g
update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g
variable <- variable - lr_t * update
```
| Args |
| `var` | A `Tensor` of type `resource`. Should be from a Variable(). |
| `m` | A `Tensor` of type `resource`. Should be from a Variable(). |
| `lr` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. Scaling factor. Must be a scalar. |
| `logbase` | A `Tensor`. Must have the same type as `lr`. Must be a scalar. |
| `sign_decay` | A `Tensor`. Must have the same type as `lr`. Must be a scalar. |
| `beta` | A `Tensor`. Must have the same type as `lr`. Must be a scalar. |
| `grad` | A `Tensor`. Must have the same type as `lr`. The gradient. |
| `use_locking` | An optional `bool`. Defaults to `False`. If `True`, updating of the var and m tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
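A minimal sketch, assuming eager execution and arbitrary illustrative hyperparameters:
```
import tensorflow as tf

# A minimal sketch: one PowerSign step on a variable and its m slot.
var = tf.Variable([1.0, 2.0])
m = tf.Variable([0.0, 0.0])
tf.raw_ops.ResourceApplyPowerSign(
    var=var.handle, m=m.handle,
    lr=tf.constant(0.1),
    logbase=tf.math.log(10.0),
    sign_decay=tf.constant(1.0),
    beta=tf.constant(0.9),
    grad=tf.constant([0.1, -0.1]))
```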
tensorflow tf.raw_ops.Unbatch tf.raw\_ops.Unbatch
===================
Reverses the operation of Batch for a single output Tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Unbatch`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Unbatch)
```
tf.raw_ops.Unbatch(
batched_tensor,
batch_index,
id,
timeout_micros,
container='',
shared_name='',
name=None
)
```
An instance of Unbatch either receives an empty batched\_tensor, in which case it asynchronously waits until the values become available from a concurrently running instance of Unbatch with the same container and shared\_name, or receives a non-empty batched\_tensor in which case it finalizes all other concurrently running instances and outputs its own element from the batch.
batched\_tensor: The possibly transformed output of Batch. The size of the first dimension should remain unchanged by the transformations for the operation to work.
batch\_index: The matching batch\_index obtained from Batch.
id: The id scalar emitted by Batch.
unbatched\_tensor: The Tensor corresponding to this execution.
timeout\_micros: Maximum amount of time (in microseconds) to wait to receive the batched input tensor associated with a given invocation of the op.
container: Container to control resource sharing.
shared\_name: Instances of Unbatch with the same container and shared\_name are assumed to possibly belong to the same batch. If left empty, the op name will be used as the shared name.
| Args |
| `batched_tensor` | A `Tensor`. |
| `batch_index` | A `Tensor` of type `int64`. |
| `id` | A `Tensor` of type `int64`. |
| `timeout_micros` | An `int`. |
| `container` | An optional `string`. Defaults to `""`. |
| `shared_name` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `batched_tensor`. |
tensorflow tf.raw_ops.Ndtri tf.raw\_ops.Ndtri
=================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Ndtri`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Ndtri)
```
tf.raw_ops.Ndtri(
x, name=None
)
```
| Args |
| `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
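As a usage note (the op itself carries no summary): Ndtri computes the inverse CDF, i.e. the quantile function, of the standard normal distribution. A minimal sketch:
```
import tensorflow as tf

# A minimal sketch: Ndtri is the inverse of the standard normal CDF,
# so Ndtri(0.5) = 0 and Ndtri(0.975) is approximately 1.96.
print(tf.raw_ops.Ndtri(x=tf.constant([0.5, 0.975])).numpy())
# approx [0.    1.96]
```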
tensorflow tf.raw_ops.UnsortedSegmentMax tf.raw\_ops.UnsortedSegmentMax
==============================
Computes the maximum along segments of a tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.UnsortedSegmentMax`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/UnsortedSegmentMax)
```
tf.raw_ops.UnsortedSegmentMax(
data, segment_ids, num_segments, name=None
)
```
Read [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) for an explanation of segments.
This operator is similar to [`tf.math.unsorted_segment_sum`](../math/unsorted_segment_sum). Instead of computing the sum over segments, it computes the maximum, such that:
\(output\_i = \max\_{j...} data[j...]\) where max is over tuples `j...` such that `segment_ids[j...] == i`.
If the maximum is empty for a given segment ID `i`, it outputs the smallest possible value for the specific numeric type, `output[i] = numeric_limits<T>::lowest()`.
If the given segment ID `i` is negative, then the corresponding value is dropped, and will not be included in the result.
#### For example:
```
c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.math.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2).numpy()
array([[4, 3, 3, 4],
[5, 6, 7, 8]], dtype=int32)
```
| Args |
| `data` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. |
| `segment_ids` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor whose shape is a prefix of `data.shape`. The values must be less than `num_segments`. |
| `num_segments` | A `Tensor`. Must be one of the following types: `int32`, `int64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `data`. |
tensorflow tf.raw_ops.TextLineReader tf.raw\_ops.TextLineReader
==========================
A Reader that outputs the lines of a file delimited by '\n'.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.TextLineReader`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/TextLineReader)
```
tf.raw_ops.TextLineReader(
skip_header_lines=0,
container='',
shared_name='',
name=None
)
```
| Args |
| `skip_header_lines` | An optional `int`. Defaults to `0`. Number of lines to skip from the beginning of every file. |
| `container` | An optional `string`. Defaults to `""`. If non-empty, this reader is placed in the given container. Otherwise, a default container is used. |
| `shared_name` | An optional `string`. Defaults to `""`. If non-empty, this reader is named in the given bucket with this shared\_name. Otherwise, the node name is used instead. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type mutable `string`. |
tensorflow tf.raw_ops.LMDBReader tf.raw\_ops.LMDBReader
======================
A Reader that outputs the records from an LMDB file.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.LMDBReader`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/LMDBReader)
```
tf.raw_ops.LMDBReader(
container='', shared_name='', name=None
)
```
| Args |
| `container` | An optional `string`. Defaults to `""`. If non-empty, this reader is placed in the given container. Otherwise, a default container is used. |
| `shared_name` | An optional `string`. Defaults to `""`. If non-empty, this reader is named in the given bucket with this shared\_name. Otherwise, the node name is used instead. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type mutable `string`. |
tensorflow tf.raw_ops.ExperimentalIteratorGetDevice tf.raw\_ops.ExperimentalIteratorGetDevice
=========================================
Returns the name of the device on which `resource` has been placed.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ExperimentalIteratorGetDevice`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ExperimentalIteratorGetDevice)
```
tf.raw_ops.ExperimentalIteratorGetDevice(
resource, name=None
)
```
| Args |
| `resource` | A `Tensor` of type `resource`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters tf.raw\_ops.RetrieveTPUEmbeddingProximalAdagradParameters
=========================================================
Retrieve proximal Adagrad embedding parameters.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParameters)
```
tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters(
num_shards,
shard_id,
table_id=-1,
table_name='',
config='',
name=None
)
```
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
| Args |
| `num_shards` | An `int`. |
| `shard_id` | An `int`. |
| `table_id` | An optional `int`. Defaults to `-1`. |
| `table_name` | An optional `string`. Defaults to `""`. |
| `config` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (parameters, accumulators). |
| `parameters` | A `Tensor` of type `float32`. |
| `accumulators` | A `Tensor` of type `float32`. |
tensorflow tf.raw_ops.QuantizedInstanceNorm tf.raw\_ops.QuantizedInstanceNorm
=================================
Quantized Instance normalization.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.QuantizedInstanceNorm`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/QuantizedInstanceNorm)
```
tf.raw_ops.QuantizedInstanceNorm(
x,
x_min,
x_max,
output_range_given=False,
given_y_min=0,
given_y_max=0,
variance_epsilon=1e-05,
min_separation=0.001,
name=None
)
```
| Args |
| `x` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. A 4D input Tensor. |
| `x_min` | A `Tensor` of type `float32`. The value represented by the lowest quantized input. |
| `x_max` | A `Tensor` of type `float32`. The value represented by the highest quantized input. |
| `output_range_given` | An optional `bool`. Defaults to `False`. If True, `given_y_min` and `given_y_max` are used as the output range. Otherwise, the implementation computes the output range. |
| `given_y_min` | An optional `float`. Defaults to `0`. Output in `y_min` if `output_range_given` is True. |
| `given_y_max` | An optional `float`. Defaults to `0`. Output in `y_max` if `output_range_given` is True. |
| `variance_epsilon` | An optional `float`. Defaults to `1e-05`. A small float number to avoid dividing by 0. |
| `min_separation` | An optional `float`. Defaults to `0.001`. Minimum value of `y_max - y_min`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (y, y\_min, y\_max). |
| `y` | A `Tensor`. Has the same type as `x`. |
| `y_min` | A `Tensor` of type `float32`. |
| `y_max` | A `Tensor` of type `float32`. |
tensorflow tf.raw_ops.PyFunc tf.raw\_ops.PyFunc
==================
Invokes a Python function to compute `func(input)->output`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.PyFunc`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/PyFunc)
```
tf.raw_ops.PyFunc(
input, token, Tout, name=None
)
```
This operation is considered stateful. For a stateless version, see PyFuncStateless.
| Args |
| `input` | A list of `Tensor` objects. List of Tensors that will provide input to the Op. |
| `token` | A `string`. A token representing a registered python function in this address space. |
| `Tout` | A list of `tf.DTypes`. Data types of the outputs from the op. The length of the list specifies the number of outputs. |
| `name` | A name for the operation (optional). |
| Returns |
| A list of `Tensor` objects of type `Tout`. |
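A minimal sketch via the public `tf.compat.v1.py_func` wrapper, which registers the Python callable and emits a PyFunc op carrying the matching token:
```
import numpy as np
import tensorflow as tf

# A minimal sketch: wrap a NumPy function as a graph op.
def my_func(x):
    return np.sinh(x)

with tf.Graph().as_default():
    x = tf.compat.v1.placeholder(tf.float32)
    y = tf.compat.v1.py_func(my_func, [x], tf.float32)
```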
tensorflow tf.raw_ops.RandomPoissonV2 tf.raw\_ops.RandomPoissonV2
===========================
Outputs random values from the Poisson distribution(s) described by rate.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.RandomPoissonV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/RandomPoissonV2)
```
tf.raw_ops.RandomPoissonV2(
shape,
rate,
seed=0,
seed2=0,
dtype=tf.dtypes.int64,
name=None
)
```
This op uses two algorithms, depending on rate. If rate >= 10, then the algorithm by Hormann is used to acquire samples via transformation-rejection. See <http://www.sciencedirect.com/science/article/pii/0167668793909974>
Otherwise, Knuth's algorithm is used to acquire samples via multiplying uniform random variables. See Donald E. Knuth (1969). Seminumerical Algorithms. The Art of Computer Programming, Volume 2. Addison Wesley
| Args |
| `shape` | A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D integer tensor. Shape of independent samples to draw from each distribution described by the shape parameters given in rate. |
| `rate` | A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. A tensor in which each scalar is a "rate" parameter describing the associated poisson distribution. |
| `seed` | An optional `int`. Defaults to `0`. If either `seed` or `seed2` are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed. |
| `seed2` | An optional `int`. Defaults to `0`. A second seed to avoid seed collision. |
| `dtype` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to [`tf.int64`](../../tf#int64). |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `dtype`. |
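A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: draw 3 independent samples from each of two Poisson
# distributions; the output shape is shape + rate.shape = [3, 2].
samples = tf.raw_ops.RandomPoissonV2(
    shape=tf.constant([3]), rate=tf.constant([1.0, 5.0]), seed=42)
print(samples.shape)  # (3, 2), dtype int64 by default
```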
tensorflow tf.raw_ops.ConditionalAccumulator tf.raw\_ops.ConditionalAccumulator
==================================
A conditional accumulator for aggregating gradients.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ConditionalAccumulator`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ConditionalAccumulator)
```
tf.raw_ops.ConditionalAccumulator(
dtype,
shape,
container='',
shared_name='',
reduction_type='MEAN',
name=None
)
```
The accumulator accepts gradients marked with local\_step greater or equal to the most recent global\_step known to the accumulator. The average can be extracted from the accumulator, provided sufficient gradients have been accumulated. Extracting the average automatically resets the aggregate to 0, and increments the global\_step recorded by the accumulator.
| Args |
| `dtype` | A [`tf.DType`](../dtypes/dtype) from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. The type of the value being accumulated. |
| `shape` | A [`tf.TensorShape`](../tensorshape) or list of `ints`. The shape of the values, can be [], in which case shape is unknown. |
| `container` | An optional `string`. Defaults to `""`. If non-empty, this accumulator is placed in the given container. Otherwise, a default container is used. |
| `shared_name` | An optional `string`. Defaults to `""`. If non-empty, this accumulator will be shared under the given name across multiple sessions. |
| `reduction_type` | An optional `string` from: `"MEAN", "SUM"`. Defaults to `"MEAN"`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type mutable `string`. |
tensorflow tf.raw_ops.RngSkip tf.raw\_ops.RngSkip
===================
Advance the counter of a counter-based RNG.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.RngSkip`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/RngSkip)
```
tf.raw_ops.RngSkip(
resource, algorithm, delta, name=None
)
```
The state of the RNG after `rng_skip(n)` will be the same as that after `stateful_uniform([n])` (or any other distribution). The actual increment added to the counter is an unspecified implementation detail.
| Args |
| `resource` | A `Tensor` of type `resource`. The handle of the resource variable that stores the state of the RNG. |
| `algorithm` | A `Tensor` of type `int64`. The RNG algorithm. |
| `delta` | A `Tensor` of type `int64`. The amount of advancement. |
| `name` | A name for the operation (optional). |
| Returns |
| The created Operation. |
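A minimal sketch via the public `tf.random.Generator` wrapper, whose `.skip()` method advances the counter in the same way as this op:
```
import tensorflow as tf

# A minimal sketch: advance a stateful RNG's counter without drawing values.
g = tf.random.Generator.from_seed(1)
g.skip(10)  # subsequent draws behave as if 10 values had been generated
```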
tensorflow tf.raw_ops.Selu tf.raw\_ops.Selu
================
Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Selu`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Selu)
```
tf.raw_ops.Selu(
features, name=None
)
```
if `features < 0`, and `scale * features` otherwise.
To be used together with `initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`. For correct dropout, use `tf.contrib.nn.alpha_dropout`.
See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
| Args |
| `features` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `features`. |
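A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: with the fixed SELU constants (scale ~ 1.0507,
# alpha ~ 1.6733), selu(1.0) = scale * 1.0 ~ 1.0507.
print(tf.raw_ops.Selu(features=tf.constant([-1.0, 0.0, 1.0])).numpy())
# approx [-1.1113  0.      1.0507]
```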
tensorflow tf.raw_ops.LeftShift tf.raw\_ops.LeftShift
=====================
Elementwise computes the bitwise left-shift of `x` and `y`.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.LeftShift`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/LeftShift)
```
tf.raw_ops.LeftShift(
x, y, name=None
)
```
If `y` is negative, or greater than or equal to the width of `x` in bits the result is implementation defined.
#### Example:
```
import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]
for dtype in dtype_list:
lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
left_shift_result = bitwise_ops.left_shift(lhs, rhs)
print(left_shift_result)
# This will print:
# tf.Tensor([ -32 -5 -128 0], shape=(4,), dtype=int8)
# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int16)
# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int32)
# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int64)
lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.left_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2, 64, 101, 32], dtype=int8)>
```
| Args |
| `x` | A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. |
| `y` | A `Tensor`. Must have the same type as `x`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
tensorflow tf.raw_ops.GetSessionHandle tf.raw\_ops.GetSessionHandle
============================
Store the input tensor in the state of the current session.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.GetSessionHandle`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/GetSessionHandle)
```
tf.raw_ops.GetSessionHandle(
value, name=None
)
```
| Args |
| `value` | A `Tensor`. The tensor to be stored. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `string`. |
tensorflow tf.raw_ops.Dilation2DBackpropFilter tf.raw\_ops.Dilation2DBackpropFilter
====================================
Computes the gradient of morphological 2-D dilation with respect to the filter.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Dilation2DBackpropFilter`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Dilation2DBackpropFilter)
```
tf.raw_ops.Dilation2DBackpropFilter(
input, filter, out_backprop, strides, rates, padding, name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. 4-D with shape `[batch, in_height, in_width, depth]`. |
| `filter` | A `Tensor`. Must have the same type as `input`. 3-D with shape `[filter_height, filter_width, depth]`. |
| `out_backprop` | A `Tensor`. Must have the same type as `input`. 4-D with shape `[batch, out_height, out_width, depth]`. |
| `strides` | A list of `ints` that has length `>= 4`. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: `[1, stride_height, stride_width, 1]`. |
| `rates` | A list of `ints` that has length `>= 4`. 1-D of length 4. The input stride for atrous morphological dilation. Must be: `[1, rate_height, rate_width, 1]`. |
| `padding` | A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.raw_ops.EnsureShape tf.raw\_ops.EnsureShape
=======================
Ensures that the tensor's shape matches the expected shape.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.EnsureShape`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/EnsureShape)
```
tf.raw_ops.EnsureShape(
input, shape, name=None
)
```
Raises an error if the input tensor's shape does not match the specified shape. Returns the input tensor otherwise.
| Args |
| `input` | A `Tensor`. A tensor, whose shape is to be validated. |
| `shape` | A [`tf.TensorShape`](../tensorshape) or list of `ints`. The expected (possibly partially specified) shape of the input tensor. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
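A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: validation passes when the runtime shape matches.
x = tf.zeros([2, 3])
y = tf.raw_ops.EnsureShape(input=x, shape=tf.TensorShape([2, 3]))
# shape=[2, 4] would instead raise an InvalidArgumentError.
```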
tensorflow tf.raw_ops.QuantizedConv2DAndRequantize tf.raw\_ops.QuantizedConv2DAndRequantize
========================================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.QuantizedConv2DAndRequantize`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/QuantizedConv2DAndRequantize)
```
tf.raw_ops.QuantizedConv2DAndRequantize(
input,
filter,
min_input,
max_input,
min_filter,
max_filter,
min_freezed_output,
max_freezed_output,
strides,
padding,
out_type=tf.dtypes.qint8,
dilations=[1, 1, 1, 1],
padding_list=[],
name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. |
| `filter` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. |
| `min_input` | A `Tensor` of type `float32`. |
| `max_input` | A `Tensor` of type `float32`. |
| `min_filter` | A `Tensor` of type `float32`. |
| `max_filter` | A `Tensor` of type `float32`. |
| `min_freezed_output` | A `Tensor` of type `float32`. |
| `max_freezed_output` | A `Tensor` of type `float32`. |
| `strides` | A list of `ints`. |
| `padding` | A `string` from: `"SAME", "VALID"`. |
| `out_type` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to [`tf.qint8`](../../tf#qint8). |
| `dilations` | An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. |
| `padding_list` | An optional list of `ints`. Defaults to `[]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (output, min\_output, max\_output). |
| `output` | A `Tensor` of type `out_type`. |
| `min_output` | A `Tensor` of type `float32`. |
| `max_output` | A `Tensor` of type `float32`. |
tensorflow tf.raw_ops.DecodeRaw tf.raw\_ops.DecodeRaw
=====================
Reinterpret the bytes of a string as a vector of numbers.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.DecodeRaw`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/DecodeRaw)
```
tf.raw_ops.DecodeRaw(
bytes, out_type, little_endian=True, name=None
)
```
| Args |
| `bytes` | A `Tensor` of type `string`. All the elements must have the same length. |
| `out_type` | A [`tf.DType`](../dtypes/dtype) from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint16, tf.uint8, tf.int16, tf.int8, tf.int64, tf.complex64, tf.complex128, tf.bool, tf.bfloat16`. |
| `little_endian` | An optional `bool`. Defaults to `True`. Whether the input `bytes` are in little-endian order. Ignored for `out_type` values that are stored in a single byte like `uint8`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `out_type`. |
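A minimal runnable sketch:
```
import tensorflow as tf

# A minimal sketch: decode 4 little-endian bytes as a single int32.
print(tf.raw_ops.DecodeRaw(
    bytes=tf.constant([b"\x01\x00\x00\x00"]), out_type=tf.int32).numpy())
# [[1]]
```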
tensorflow tf.raw_ops.BesselI1e tf.raw\_ops.BesselI1e
=====================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.BesselI1e`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/BesselI1e)
```
tf.raw_ops.BesselI1e(
x, name=None
)
```
| Args |
| `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `x`. |
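As a usage note (the op itself carries no summary): BesselI1e computes the exponentially scaled modified Bessel function of the first kind of order 1, `i1e(x) = exp(-|x|) * i1(x)`. A minimal sketch:
```
import tensorflow as tf

# A minimal sketch: i1e(0) = 0 and i1e(1) = exp(-1) * i1(1) ~ 0.2079.
print(tf.raw_ops.BesselI1e(x=tf.constant([0.0, 1.0])).numpy())
# approx [0.     0.2079]
```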
tensorflow tf.raw_ops.Fill tf.raw\_ops.Fill
================
Creates a tensor filled with a scalar value.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.Fill`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Fill)
```
tf.raw_ops.Fill(
dims, value, name=None
)
```
This operation creates a tensor of shape `dims` and fills it with `value`.
#### For example:
```
# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
[9, 9, 9]]
```
[`tf.fill`](../fill) differs from [`tf.constant`](../constant) in a few ways:
* [`tf.fill`](../fill) only supports scalar contents, whereas [`tf.constant`](../constant) supports Tensor values.
* [`tf.fill`](../fill) creates an Op in the computation graph that constructs the actual Tensor value at runtime. This is in contrast to [`tf.constant`](../constant) which embeds the entire Tensor into the graph with a `Const` node.
* Because [`tf.fill`](../fill) evaluates at graph runtime, it supports dynamic shapes based on other runtime Tensors, unlike [`tf.constant`](../constant).
| Args |
| `dims` | A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D. Represents the shape of the output tensor. |
| `value` | A `Tensor`. 0-D (scalar). Value to fill the returned tensor. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `value`. |
numpy compatibility
-------------------
Equivalent to np.full
tensorflow tf.raw_ops.StatelessWhile tf.raw\_ops.StatelessWhile
==========================
output = input; While (Cond(output)) { output = Body(output) }
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.StatelessWhile`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/StatelessWhile)
```
tf.raw_ops.StatelessWhile(
input, cond, body, output_shapes=[], parallel_iterations=10, name=None
)
```
| Args |
| `input` | A list of `Tensor` objects. A list of input tensors whose types are T. |
| `cond` | A function decorated with @Defun. A function that takes 'input' and returns a tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, it is interpreted as True if non-empty and False otherwise. This should only be used when the while condition and body functions do not have stateful ops. |
| `body` | A function decorated with @Defun. A function that takes a list of tensors and returns another list of tensors. Both lists have the same types as specified by T. |
| `output_shapes` | An optional list of shapes (each a [`tf.TensorShape`](../tensorshape) or list of `ints`). Defaults to `[]`. |
| `parallel_iterations` | An optional `int`. Defaults to `10`. |
| `name` | A name for the operation (optional). |
| Returns |
| A list of `Tensor` objects. Has the same type as `input`. |
tensorflow tf.raw_ops.AudioSpectrogram tf.raw\_ops.AudioSpectrogram
============================
Produces a visualization of audio data over time.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.AudioSpectrogram`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/AudioSpectrogram)
```
tf.raw_ops.AudioSpectrogram(
input, window_size, stride, magnitude_squared=False, name=None
)
```
Spectrograms are a standard way of representing audio information as a series of slices of frequency information, one slice for each window of time. By joining these together into a sequence, they form a distinctive fingerprint of the sound over time.
This op expects to receive audio data as an input, stored as floats in the range -1 to 1, together with a window width in samples, and a stride specifying how far to move the window between slices. From this it generates a three dimensional output. The first dimension is for the channels in the input, so a stereo audio input would have two here for example. The second dimension is time, with successive frequency slices. The third dimension has an amplitude value for each frequency during that time slice.
This means the layout when converted and saved as an image is rotated 90 degrees clockwise from a typical spectrogram. Time is descending down the Y axis, and the frequency decreases from left to right.
Each value in the result is the magnitude of the FFT over the current window of samples, that is, the square root of the sum of the squared real and imaginary parts. In this way, the lowest dimension represents the power of each frequency in the current window, and adjacent windows are concatenated in the next dimension.
To get a more intuitive and visual look at what this operation does, you can run tensorflow/examples/wav\_to\_spectrogram to read in an audio file and save out the resulting spectrogram as a PNG image.
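A minimal sketch of a direct call (the sine signal here is purely illustrative):
```
import tensorflow as tf

# A hypothetical mono signal: 1024 sine samples in the range -1 to 1.
samples = tf.sin(tf.linspace(0.0, 100.0, 1024))
audio = tf.reshape(samples, [1024, 1])  # [samples, channels]
spectrogram = tf.raw_ops.AudioSpectrogram(
    input=audio, window_size=256, stride=128, magnitude_squared=False)
# spectrogram shape: [channels, time_slices, window_size / 2 + 1]
```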
| Args |
| `input` | A `Tensor` of type `float32`. Float representation of audio data. |
| `window_size` | An `int`. How wide the input window is in samples. For the highest efficiency this should be a power of two, but other values are accepted. |
| `stride` | An `int`. How widely apart the center of adjacent sample windows should be. |
| `magnitude_squared` | An optional `bool`. Defaults to `False`. Whether to return the squared magnitude or just the magnitude. Using squared magnitude can avoid extra calculations. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `float32`. |
tensorflow tf.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize tf.raw\_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize
==========================================================
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSumAndReluAndRequantize)
```
tf.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize(
input,
filter,
bias,
min_input,
max_input,
min_filter,
max_filter,
min_freezed_output,
max_freezed_output,
summand,
min_summand,
max_summand,
strides,
padding,
out_type=tf.dtypes.quint8,
dilations=[1, 1, 1, 1],
padding_list=[],
name=None
)
```
| Args |
| `input` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. |
| `filter` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. |
| `bias` | A `Tensor`. Must be one of the following types: `float32`, `qint32`. |
| `min_input` | A `Tensor` of type `float32`. |
| `max_input` | A `Tensor` of type `float32`. |
| `min_filter` | A `Tensor` of type `float32`. |
| `max_filter` | A `Tensor` of type `float32`. |
| `min_freezed_output` | A `Tensor` of type `float32`. |
| `max_freezed_output` | A `Tensor` of type `float32`. |
| `summand` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. |
| `min_summand` | A `Tensor` of type `float32`. |
| `max_summand` | A `Tensor` of type `float32`. |
| `strides` | A list of `ints`. |
| `padding` | A `string` from: `"SAME", "VALID"`. |
| `out_type` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to [`tf.quint8`](../../tf#quint8). |
| `dilations` | An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. |
| `padding_list` | An optional list of `ints`. Defaults to `[]`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (output, min\_output, max\_output). |
| `output` | A `Tensor` of type `out_type`. |
| `min_output` | A `Tensor` of type `float32`. |
| `max_output` | A `Tensor` of type `float32`. |
tensorflow tf.raw_ops.CTCGreedyDecoder tf.raw\_ops.CTCGreedyDecoder
============================
Performs greedy decoding on the logits given in inputs.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.CTCGreedyDecoder`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/CTCGreedyDecoder)
```
tf.raw_ops.CTCGreedyDecoder(
inputs, sequence_length, merge_repeated=False, blank_index=-1, name=None
)
```
A note about the attribute merge\_repeated: if enabled, when consecutive logits' maximum indices are the same, only the first of these is emitted. Labeling the blank '\*', the sequence "A B B \* B B" becomes "A B B" if merge\_repeated = True and "A B B B B" if merge\_repeated = False.
Regardless of the value of merge\_repeated, if the maximum index of a given time and batch corresponds to the blank, index `(num_classes - 1)`, no new element is emitted.
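A minimal sketch with hypothetical logits (`max_time = 4`, `batch_size = 1`, `num_classes = 3`, so class 2 is the blank by default):
```
import tensorflow as tf

logits = tf.math.log(tf.constant([
    [[0.1, 0.8, 0.1]],  # t=0: class 1
    [[0.1, 0.8, 0.1]],  # t=1: class 1 (repeated)
    [[0.1, 0.1, 0.8]],  # t=2: blank (num_classes - 1 = 2)
    [[0.8, 0.1, 0.1]],  # t=3: class 0
]))
seq_len = tf.constant([4], dtype=tf.int32)
indices, values, shape, log_prob = tf.raw_ops.CTCGreedyDecoder(
    inputs=logits, sequence_length=seq_len, merge_repeated=True)
# values == [1, 0]: the repeated 1s collapse and the blank emits nothing.
```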
| Args |
| `inputs` | A `Tensor`. Must be one of the following types: `float32`, `float64`. 3-D, shape: `(max_time x batch_size x num_classes)`, the logits. |
| `sequence_length` | A `Tensor` of type `int32`. A vector containing sequence lengths, size `(batch_size)`. |
| `merge_repeated` | An optional `bool`. Defaults to `False`. If True, merge repeated classes in output. |
| `blank_index` | An optional `int`. Defaults to `-1`. |
| `name` | A name for the operation (optional). |
| Returns |
| A tuple of `Tensor` objects (decoded\_indices, decoded\_values, decoded\_shape, log\_probability). |
| `decoded_indices` | A `Tensor` of type `int64`. |
| `decoded_values` | A `Tensor` of type `int64`. |
| `decoded_shape` | A `Tensor` of type `int64`. |
| `log_probability` | A `Tensor`. Has the same type as `inputs`. |
tensorflow tf.raw_ops.ExtractGlimpse tf.raw\_ops.ExtractGlimpse
==========================
Extracts a glimpse from the input tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ExtractGlimpse`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ExtractGlimpse)
```
tf.raw_ops.ExtractGlimpse(
input,
size,
offsets,
centered=True,
normalized=True,
uniform_noise=True,
noise='uniform',
name=None
)
```
Returns a set of windows, called glimpses, extracted at the locations `offsets` from the input tensor. If a window only partially overlaps the input, the non-overlapping areas are filled with random noise.
The result is a 4-D tensor of shape `[batch_size, glimpse_height, glimpse_width, channels]`. The channels and batch dimensions are the same as that of the input tensor. The height and width of the output windows are specified in the `size` parameter.
The arguments `normalized` and `centered` control how the windows are built (see the sketch after this list):
* If the coordinates are normalized but not centered, 0.0 and 1.0 correspond to the minimum and maximum of each height and width dimension.
* If the coordinates are both normalized and centered, they range from -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper left corner, the lower right corner is located at (1.0, 1.0) and the center is at (0, 0).
* If the coordinates are not normalized they are interpreted as numbers of pixels.
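A minimal sketch with a hypothetical 8x8 single-channel image batch:
```
import tensorflow as tf

images = tf.reshape(tf.range(64, dtype=tf.float32), [1, 8, 8, 1])
glimpse = tf.raw_ops.ExtractGlimpse(
    input=images,
    size=tf.constant([3, 3], dtype=tf.int32),  # height first, then width
    offsets=tf.constant([[0.0, 0.0]]),  # centered + normalized: image center
    centered=True, normalized=True,
    uniform_noise=True, noise='uniform')
# glimpse shape: [1, 3, 3, 1], a 3x3 window around the image center.
```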
| Args |
| `input` | A `Tensor` of type `float32`. A 4-D float tensor of shape `[batch_size, height, width, channels]`. |
| `size` | A `Tensor` of type `int32`. A 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, followed by the glimpse width. |
| `offsets` | A `Tensor` of type `float32`. A 2-D tensor of shape `[batch_size, 2]` containing the y, x locations of the center of each window. |
| `centered` | An optional `bool`. Defaults to `True`. Indicates if the offset coordinates are centered relative to the image, in which case the (0, 0) offset is relative to the center of the input images. If false, the (0, 0) offset corresponds to the upper left corner of the input images. |
| `normalized` | An optional `bool`. Defaults to `True`. Indicates if the offset coordinates are normalized. |
| `uniform_noise` | An optional `bool`. Defaults to `True`. Indicates if the noise should be generated using a uniform distribution or a Gaussian distribution. |
| `noise` | An optional `string`. Defaults to `"uniform"`. Indicates if the noise should be `uniform`, `gaussian`, or `zero`. The default is `uniform`, which means the noise type will be decided by `uniform_noise`. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor` of type `float32`. |
tensorflow tf.raw_ops.QuantizeAndDequantizeV2 tf.raw\_ops.QuantizeAndDequantizeV2
===================================
Quantizes then dequantizes a tensor.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.QuantizeAndDequantizeV2`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/QuantizeAndDequantizeV2)
```
tf.raw_ops.QuantizeAndDequantizeV2(
input,
input_min,
input_max,
signed_input=True,
num_bits=8,
range_given=False,
round_mode='HALF_TO_EVEN',
narrow_range=False,
axis=-1,
name=None
)
```
This op simulates the precision loss from the quantized forward pass by:
1. Quantizing the tensor to fixed point numbers, which should match the target quantization method when it is used in inference.
2. Dequantizing it back to floating point numbers for the following ops, most likely matmul.
There are different ways to quantize. This version uses only scaling, so 0.0 maps to 0.
From the specified 'num\_bits' in the quantized output type, it determines the minimum and maximum representable quantized values.
e.g.
* [-128, 127] for signed, num\_bits = 8, or
* [0, 255] for unsigned, num\_bits = 8.
If range\_given == False, the initial input\_min, input\_max will be determined automatically as the minimum and maximum values in the input tensor, otherwise the specified values of input\_min, input\_max are used.
>
> **Note:** If the input\_min, input\_max are specified, they do not need to equal the actual minimum and maximum values in the tensor. e.g. in some cases it may be beneficial to specify these values such that the low probability extremes of the input distribution are clipped.
>
This op determines the maximum scale\_factor that would map the initial [input\_min, input\_max] range to a range that lies within the representable quantized range.
It determines the scale from one of input\_min and input\_max, then updates the other one to maximize the representable range.
e.g.
* if the output is signed, num\_bits = 8, [input\_min, input\_max] = [-10.0, 5.0]: it would use a scale\_factor of -128 / -10.0 = 12.8. In this case, it would update input\_max to be 127 / 12.8 = 9.921875
* if the output is signed, num\_bits = 8, [input\_min, input\_max] = [-10.0, 10.0]: it would use a scale\_factor of 127 / 10.0 = 12.7. In this case, it would update input\_min to be -128.0 / 12.7 = -10.07874
* if the output is unsigned, input\_min is forced to be 0, and only the specified input\_max is used.
After determining the scale\_factor and updating the input range, it applies the following to each value in the 'input' tensor.
output = round(clamp(value, input\_min, input\_max) \* scale\_factor) / scale\_factor.
The above round function rounds the value based on the given round\_mode.
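A minimal sketch matching the first worked example above (signed output, num\_bits = 8, given range [-10.0, 5.0]; the input values are illustrative):
```
import tensorflow as tf

x = tf.constant([-10.0, -3.7, 0.0, 5.0])
y = tf.raw_ops.QuantizeAndDequantizeV2(
    input=x,
    input_min=tf.constant(-10.0),
    input_max=tf.constant(5.0),
    signed_input=True, num_bits=8, range_given=True,
    round_mode='HALF_TO_EVEN')
# scale_factor = -128 / -10.0 = 12.8, so each value snaps to a multiple of
# 1 / 12.8 = 0.078125; e.g. -3.7 -> round(-3.7 * 12.8) / 12.8 = -3.671875.
```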
| Args |
| `input` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. Tensor to quantize and then dequantize. |
| `input_min` | A `Tensor`. Must have the same type as `input`. If `range_given == True`, this specifies the minimum input value that needs to be represented, otherwise it is determined from the min value of the `input` tensor. |
| `input_max` | A `Tensor`. Must have the same type as `input`. If `range_given == True`, this specifies the maximum input value that needs to be represented, otherwise it is determined from the max value of the `input` tensor. |
| `signed_input` | An optional `bool`. Defaults to `True`. Whether the quantization is signed or unsigned. (actually this parameter should have been called **`signed_output`**) |
| `num_bits` | An optional `int`. Defaults to `8`. The bitwidth of the quantization. |
| `range_given` | An optional `bool`. Defaults to `False`. Whether the range is given or should be determined from the `input` tensor. |
| `round_mode` | An optional `string` from: `"HALF_TO_EVEN", "HALF_UP"`. Defaults to `"HALF_TO_EVEN"`. Controls which rounding tie-breaking algorithm is used when rounding float values to their quantized equivalents. Currently supported: HALF\_TO\_EVEN (the default) rounds ties to the nearest even value; HALF\_UP rounds ties towards positive infinity, so 7.5 rounds up to 8 and -7.5 rounds up to -7. |
| `narrow_range` | An optional `bool`. Defaults to `False`. If True, then the absolute value of the quantized minimum value is the same as the quantized maximum value, instead of 1 greater. i.e. for 8 bit quantization, the minimum value is -127 instead of -128. |
| `axis` | An optional `int`. Defaults to `-1`. If specified, this axis is treated as a channel or slice axis, and a separate quantization range is used for each channel or slice along this axis. |
| `name` | A name for the operation (optional). |
| Returns |
| A `Tensor`. Has the same type as `input`. |
tensorflow tf.raw_ops.ScatterNdSub tf.raw\_ops.ScatterNdSub
========================
Applies sparse subtraction to individual values or slices in a Variable.
#### View aliases
**Compat aliases for migration**
See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.raw_ops.ScatterNdSub`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/ScatterNdSub)
```
tf.raw_ops.ScatterNdSub(
ref, indices, updates, use_locking=False, name=None
)
```
Subtraction is applied sparsely to individual values or slices within a given variable according to `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor containing indices into `ref`. It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
```
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this:
```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.compat.v1.scatter_nd_sub(ref, indices, updates)
# Running under a Session requires graph mode,
# e.g. tf.compat.v1.disable_eager_execution().
with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.global_variables_initializer())
  print(sess.run(sub))
```
The resulting update to ref would look like this:
```
[1, -9, 3, -6, -4, 6, 7, -4]
```
See [`tf.scatter_nd`](../scatter_nd) for more details about how to make updates to slices.
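In TF2, the equivalent operation on an ordinary (immutable) tensor is [`tf.tensor_scatter_nd_sub`](../tensor_scatter_nd_sub); a minimal sketch reproducing the example above:
```
import tensorflow as tf

tensor = tf.constant([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
print(tf.tensor_scatter_nd_sub(tensor, indices, updates))
# => [ 1 -9  3 -6 -4  6  7 -4]
```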
| Args |
| `ref` | A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. Should be from a Variable node. |
| `indices` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor of indices into `ref`. |
| `updates` | A `Tensor`. Must have the same type as `ref`. A tensor of updated values to subtract from `ref`. |
| `use_locking` | An optional `bool`. Defaults to `False`. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |
| Returns |
| A mutable `Tensor`. Has the same type as `ref`. |