tensorflow tf.math.top_k tf.math.top\_k ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_ops.py#L5706-L5759) | Finds values and indices of the `k` largest entries for the last dimension. #### View aliases **Main aliases** [`tf.nn.top_k`](https://www.tensorflow.org/api_docs/python/tf/math/top_k) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.top_k`](https://www.tensorflow.org/api_docs/python/tf/math/top_k), [`tf.compat.v1.nn.top_k`](https://www.tensorflow.org/api_docs/python/tf/math/top_k) ``` tf.math.top_k( input, k=1, sorted=True, name=None ) ``` If the input is a vector (rank=1), finds the `k` largest entries in the vector and outputs their values and indices as vectors. Thus `values[j]` is the `j`-th largest entry in `input`, and its index is `indices[j]`. ``` result = tf.math.top_k([1, 2, 98, 1, 1, 99, 3, 1, 3, 96, 4, 1], k=3) result.values.numpy() array([99, 98, 96], dtype=int32) result.indices.numpy() array([5, 2, 9], dtype=int32) ``` For matrices (resp. higher rank input), computes the top `k` entries in each row (resp. vector along the last dimension). Thus, ``` input = tf.random.normal(shape=(3,4,5,6)) k = 2 values, indices = tf.math.top_k(input, k=k) values.shape.as_list() [3, 4, 5, 2] values.shape == indices.shape == input.shape[:-1] + [k] True ``` The indices can be used to `gather` from a tensor whose shape matches `input`. ``` gathered_values = tf.gather(input, indices, batch_dims=-1) assert tf.reduce_all(gathered_values == values) ``` If two elements are equal, the lower-index element appears first. ``` result = tf.math.top_k([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0], k=3) result.indices.numpy() array([0, 1, 3], dtype=int32) ``` | Args | | `input` | 1-D or higher `Tensor` with last dimension at least `k`. | | `k` | 0-D `int32` `Tensor`. Number of top elements to look for along the last dimension (along each row for matrices). | | `sorted` | If true, the resulting `k` elements will be sorted by the values in descending order. | | `name` | Optional name for the operation. | | Returns | | A tuple with two named fields: | | `values` | The `k` largest elements along each last dimensional slice. | | `indices` | The indices of `values` within the last dimension of `input`. | tensorflow tf.math.ceil tf.math.ceil ============ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L5351-L5376) | Return the ceiling of the input, element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.ceil`](https://www.tensorflow.org/api_docs/python/tf/math/ceil), [`tf.compat.v1.math.ceil`](https://www.tensorflow.org/api_docs/python/tf/math/ceil) ``` tf.math.ceil( x, name=None ) ``` #### For example: ``` tf.math.ceil([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) <tf.Tensor: shape=(7,), dtype=float32, numpy=array([-1., -1., -0., 1., 2., 2., 2.], dtype=float32)> ``` | Args | | `x` | A [`tf.Tensor`](../tensor). Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`. | | `name` | A name for the operation (optional). | | Returns | | A [`tf.Tensor`](../tensor). Has the same type as `x`. 
| numpy compatibility ------------------- Equivalent to np.ceil tensorflow tf.math.reduce_variance tf.math.reduce\_variance ======================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L2644-L2704) | Computes the variance of elements across dimensions of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.reduce_variance`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_variance) ``` tf.math.reduce_variance( input_tensor, axis=None, keepdims=False, name=None ) ``` Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each of the entries in `axis`, which must be unique. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned. #### For example: ``` x = tf.constant([[1., 2.], [3., 4.]]) tf.math.reduce_variance(x) <tf.Tensor: shape=(), dtype=float32, numpy=1.25> tf.math.reduce_variance(x, 0) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 1.], ...)> tf.math.reduce_variance(x, 1) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([0.25, 0.25], ...)> ``` | Args | | `input_tensor` | The tensor to reduce. Should have real or complex type. | | `axis` | The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`. | | `keepdims` | If true, retains reduced dimensions with length 1. | | `name` | A name scope for the associated operations (optional). | | Returns | | The reduced tensor, of the same dtype as the input\_tensor. Note, for `complex64` or `complex128` input, the returned `Tensor` will be of type `float32` or `float64`, respectively. | numpy compatibility ------------------- Equivalent to np.var Please note `np.var` has a `dtype` parameter that can be used to specify the output type. By default this is `dtype=float64`. On the other hand, [`tf.math.reduce_variance`](reduce_variance) has aggressive type inference from `input_tensor`. tensorflow tf.math.divide_no_nan tf.math.divide\_no\_nan ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1624-L1655) | Computes a safe divide which returns 0 if `y` (denominator) is zero. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.div_no_nan`](https://www.tensorflow.org/api_docs/python/tf/math/divide_no_nan), [`tf.compat.v1.math.divide_no_nan`](https://www.tensorflow.org/api_docs/python/tf/math/divide_no_nan) ``` tf.math.divide_no_nan( x, y, name=None ) ``` #### For example: ``` tf.constant(3.0) / 0.0 <tf.Tensor: shape=(), dtype=float32, numpy=inf> tf.math.divide_no_nan(3.0, 0.0) <tf.Tensor: shape=(), dtype=float32, numpy=0.0> ``` Note that 0 is returned if `y` is 0 even if `x` is nonfinite: ``` tf.math.divide_no_nan(np.nan, 0.0) <tf.Tensor: shape=(), dtype=float32, numpy=0.0> ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `float32`, `float64`. | | `y` | A `Tensor` whose dtype is compatible with `x`. | | `name` | A name for the operation (optional). | | Returns | | The element-wise value of `x` divided by `y`. 
| tensorflow tf.math.unsorted_segment_sum tf.math.unsorted\_segment\_sum ============================== Computes the sum along segments of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.unsorted_segment_sum`](https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_sum), [`tf.compat.v1.unsorted_segment_sum`](https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_sum) ``` tf.math.unsorted_segment_sum( data, segment_ids, num_segments, name=None ) ``` Read [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) for an explanation of segments. Computes a tensor such that \(output[i] = \sum\_{j...} data[j...]\) where the sum is over tuples `j...` such that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids` need not be sorted and need not cover all values in the full range of valid values. If the sum is empty for a given segment ID `i`, `output[i] = 0`. If the given segment ID `i` is negative, the value is dropped and will not be added to the sum of the segment. `num_segments` should equal the number of distinct segment IDs. ``` c = [[1,2,3,4], [5,6,7,8], [4,3,2,1]] tf.math.unsorted_segment_sum(c, [0, 1, 0], num_segments=2).numpy() array([[5, 5, 5, 5], [5, 6, 7, 8]], dtype=int32) ``` | Args | | `data` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. | | `segment_ids` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor whose shape is a prefix of `data.shape`. The values must be less than `num_segments`. | | `num_segments` | A `Tensor`. Must be one of the following types: `int32`, `int64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `data`. | tensorflow tf.math.real tf.math.real ============ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L793-L825) | Returns the real part of a complex (or real) tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.real`](https://www.tensorflow.org/api_docs/python/tf/math/real), [`tf.compat.v1.real`](https://www.tensorflow.org/api_docs/python/tf/math/real) ``` tf.math.real( input, name=None ) ``` Given a tensor `input`, this operation returns a tensor of type `float` that is the real part of each element in `input` considered as a complex number. #### For example: ``` x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j]) tf.math.real(x) # [-2.25, 3.25] ``` If `input` is already real, it is returned unchanged. | Args | | `input` | A `Tensor`. Must have numeric type. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `float32` or `float64`. | tensorflow tf.math.truediv tf.math.truediv =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1558-L1589) | Divides x / y elementwise (using Python 3 division operator semantics). #### View aliases **Main aliases** [`tf.truediv`](https://www.tensorflow.org/api_docs/python/tf/math/truediv) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.math.truediv`](https://www.tensorflow.org/api_docs/python/tf/math/truediv), [`tf.compat.v1.truediv`](https://www.tensorflow.org/api_docs/python/tf/math/truediv) ``` tf.math.truediv( x, y, name=None ) ``` > > **Note:** Prefer using the Tensor operator or tf.divide which obey Python division operator semantics. > This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal `x / y` division in Python 3 and in Python 2.7 with `from __future__ import division`. If you want integer division that rounds down, use `x // y` or `tf.math.floordiv`. `x` and `y` must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32` and `int64` (matching the behavior of Numpy). | Args | | `x` | `Tensor` numerator of numeric type. | | `y` | `Tensor` denominator of numeric type. | | `name` | A name for the operation (optional). | | Returns | | `x / y` evaluated in floating point. | | Raises | | `TypeError` | If `x` and `y` have different dtypes. | tensorflow tf.math.segment_min tf.math.segment\_min ==================== Computes the minimum along segments of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.segment_min`](https://www.tensorflow.org/api_docs/python/tf/math/segment_min), [`tf.compat.v1.segment_min`](https://www.tensorflow.org/api_docs/python/tf/math/segment_min) ``` tf.math.segment_min( data, segment_ids, name=None ) ``` Read [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) for an explanation of segments. Computes a tensor such that \(output\_i = \min\_j(data\_j)\) where `min` is over `j` such that `segment_ids[j] == i`. If the min is empty for a given segment ID `i`, `output[i] = 0`. #### For example: ``` c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.math.segment_min(c, tf.constant([0, 0, 1])).numpy() array([[1, 2, 2, 1], [5, 6, 7, 8]], dtype=int32) ``` | Args | | `data` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. | | `segment_ids` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose size is equal to the size of `data`'s first dimension. Values should be sorted and can be repeated. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `data`. | tensorflow tf.math.rsqrt tf.math.rsqrt ============= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L5496-L5518) | Computes reciprocal of square root of x element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.rsqrt`](https://www.tensorflow.org/api_docs/python/tf/math/rsqrt), [`tf.compat.v1.rsqrt`](https://www.tensorflow.org/api_docs/python/tf/math/rsqrt) ``` tf.math.rsqrt( x, name=None ) ``` #### For example: ``` x = tf.constant([2., 0., -2.]) tf.math.rsqrt(x) <tf.Tensor: shape=(3,), dtype=float32, numpy=array([0.707, inf, nan], dtype=float32)> ``` | Args | | `x` | A [`tf.Tensor`](../tensor). Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. 
| | `name` | A name for the operation (optional). | | Returns | | A [`tf.Tensor`](../tensor). Has the same type as `x`. | tensorflow tf.math.l2_normalize tf.math.l2\_normalize ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L644-L700) | Normalizes along dimension `axis` using an L2 norm. (deprecated arguments) #### View aliases **Main aliases** [`tf.linalg.l2_normalize`](https://www.tensorflow.org/api_docs/python/tf/math/l2_normalize), [`tf.nn.l2_normalize`](https://www.tensorflow.org/api_docs/python/tf/math/l2_normalize) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.linalg.l2_normalize`](https://www.tensorflow.org/api_docs/python/tf/math/l2_normalize), [`tf.compat.v1.math.l2_normalize`](https://www.tensorflow.org/api_docs/python/tf/math/l2_normalize), [`tf.compat.v1.nn.l2_normalize`](https://www.tensorflow.org/api_docs/python/tf/math/l2_normalize) ``` tf.math.l2_normalize( x, axis=None, epsilon=1e-12, name=None, dim=None ) ``` For a 1-D tensor with `axis = 0`, computes ``` output = x / sqrt(max(sum(x**2), epsilon)) ``` For `x` with more dimensions, independently normalizes each 1-D slice along dimension `axis`. 1-D tensor example: ``` x = tf.constant([3.0, 4.0]) tf.math.l2_normalize(x).numpy() array([0.6, 0.8], dtype=float32) ``` 2-D tensor example: ``` x = tf.constant([[3.0], [4.0]]) tf.math.l2_normalize(x, 0).numpy() array([[0.6], [0.8]], dtype=float32) ``` ``` x = tf.constant([[3.0], [4.0]]) tf.math.l2_normalize(x, 1).numpy() array([[1.], [1.]], dtype=float32) ``` | Args | | `x` | A `Tensor`. | | `axis` | Dimension along which to normalize. A scalar or a vector of integers. | | `epsilon` | A lower bound value for the norm. Will use `sqrt(epsilon)` as the divisor if `norm < sqrt(epsilon)`. | | `name` | A name for this operation (optional). | | `dim` | Deprecated, do not use. | | Returns | | A `Tensor` with the same shape as `x`. | tensorflow tf.math.cumulative_logsumexp tf.math.cumulative\_logsumexp ============================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L4366-L4419) | Compute the cumulative log-sum-exp of the tensor `x` along `axis`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.cumulative_logsumexp`](https://www.tensorflow.org/api_docs/python/tf/math/cumulative_logsumexp) ``` tf.math.cumulative_logsumexp( x, axis=0, exclusive=False, reverse=False, name=None ) ``` By default, this op performs an inclusive cumulative log-sum-exp, which means that the first element of the input is identical to the first element of the output. This operation is significantly more numerically stable than the equivalent tensorflow operation `tf.math.log(tf.math.cumsum(tf.math.exp(x)))`, although it computes the same result given infinite numerical precision. However, note that in some cases, it may be less stable than [`tf.math.reduce_logsumexp`](reduce_logsumexp) for a given element, as it applies the "log-sum-exp trick" in a different way. More precisely, where [`tf.math.reduce_logsumexp`](reduce_logsumexp) uses the following trick: ``` log(sum(exp(x))) == log(sum(exp(x - max(x)))) + max(x) ``` it cannot be directly used here as there is no fast way of applying it to each prefix `x[:i]`. 
Instead, this function implements a prefix scan using pairwise log-add-exp, which is a commutative and associative (up to floating point precision) operator: ``` log_add_exp(x, y) = log(exp(x) + exp(y)) = log(1 + exp(min(x, y) - max(x, y))) + max(x, y) ``` However, reducing using the above operator leads to a different computation tree (logs are taken repeatedly instead of only at the end), and the maximum is only computed pairwise instead of over the entire prefix. In general, this leads to a different and slightly less precise computation. | Args | | `x` | A `Tensor`. Must be one of the following types: `float16`, `float32`, `float64`. | | `axis` | A `Tensor` of type `int32` or `int64` (default: 0). Must be in the range `[-rank(x), rank(x))`. | | `exclusive` | If `True`, perform exclusive cumulative log-sum-exp. | | `reverse` | If `True`, performs the cumulative log-sum-exp in the reverse direction. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same shape and type as `x`. | tensorflow tf.math.reduce_euclidean_norm tf.math.reduce\_euclidean\_norm =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L2326-L2368) | Computes the Euclidean norm of elements across dimensions of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.reduce_euclidean_norm`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_euclidean_norm) ``` tf.math.reduce_euclidean_norm( input_tensor, axis=None, keepdims=False, name=None ) ``` Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each of the entries in `axis`, which must be unique. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned. #### For example: ``` x = tf.constant([[1, 2, 3], [1, 1, 1]]) # x.dtype is tf.int32 tf.math.reduce_euclidean_norm(x) # returns 4 as dtype is tf.int32 y = tf.constant([[1, 2, 3], [1, 1, 1]], dtype = tf.float32) tf.math.reduce_euclidean_norm(y) # returns 4.1231055 which is sqrt(17) tf.math.reduce_euclidean_norm(y, 0) # [sqrt(2), sqrt(5), sqrt(10)] tf.math.reduce_euclidean_norm(y, 1) # [sqrt(14), sqrt(3)] tf.math.reduce_euclidean_norm(y, 1, keepdims=True) # [[sqrt(14)], [sqrt(3)]] tf.math.reduce_euclidean_norm(y, [0, 1]) # sqrt(17) ``` | Args | | `input_tensor` | The tensor to reduce. Should have numeric type. | | `axis` | The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`. | | `keepdims` | If true, retains reduced dimensions with length 1. | | `name` | A name for the operation (optional). | | Returns | | The reduced tensor, of the same dtype as the input\_tensor. |
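The stability claim in the `tf.math.cumulative_logsumexp` entry above is easy to check empirically. Below is a minimal sketch (assuming TensorFlow 2.x eager execution; the commented outputs are approximate float32 results), comparing the op against the naive `log(cumsum(exp(x)))` composition it replaces:

```
import tensorflow as tf

x = tf.constant([1000.0, 1000.0, 1000.0])  # exp(1000.) overflows float32 to inf

# Naive composition: exp overflows, so every prefix degenerates to log(inf) = inf.
tf.math.log(tf.math.cumsum(tf.math.exp(x)))  # [inf, inf, inf]

# Stable pairwise log-add-exp scan: x[0], x[0] + log(2), x[0] + log(3).
tf.math.cumulative_logsumexp(x)  # [1000., 1000.6931, 1001.0986]
```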
tensorflow tf.math.softplus tf.math.softplus ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L633-L656) | Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`. #### View aliases **Main aliases** [`tf.nn.softplus`](https://www.tensorflow.org/api_docs/python/tf/math/softplus) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.softplus`](https://www.tensorflow.org/api_docs/python/tf/math/softplus), [`tf.compat.v1.nn.softplus`](https://www.tensorflow.org/api_docs/python/tf/math/softplus) ``` tf.math.softplus( features, name=None ) ``` `softplus` is a smooth approximation of `relu`. Like `relu`, `softplus` always takes on positive values. #### Example: ``` import tensorflow as tf tf.math.softplus(tf.range(0, 2, dtype=tf.float32)).numpy() array([0.6931472, 1.3132616], dtype=float32) ``` | Args | | `features` | `Tensor` | | `name` | Optional: name to associate with this operation. | | Returns | | `Tensor` | tensorflow tf.math.subtract tf.math.subtract ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L544-L548) | Returns x - y element-wise. #### View aliases **Main aliases** [`tf.subtract`](https://www.tensorflow.org/api_docs/python/tf/math/subtract) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.subtract`](https://www.tensorflow.org/api_docs/python/tf/math/subtract), [`tf.compat.v1.subtract`](https://www.tensorflow.org/api_docs/python/tf/math/subtract) ``` tf.math.subtract( x, y, name=None ) ``` > > **Note:** [`tf.subtract`](subtract) supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) > Both input and output have a range `(-inf, inf)`. Example usages below. Subtract operation between an array and a scalar: ``` x = [1, 2, 3, 4, 5] y = 1 tf.subtract(x, y) <tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)> tf.subtract(y, x) <tf.Tensor: shape=(5,), dtype=int32, numpy=array([ 0, -1, -2, -3, -4], dtype=int32)> ``` Note that the binary `-` operator can be used instead: ``` x = tf.convert_to_tensor([1, 2, 3, 4, 5]) y = tf.convert_to_tensor(1) x - y <tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)> ``` Subtract operation between an array and a tensor of same shape: ``` x = [1, 2, 3, 4, 5] y = tf.constant([5, 4, 3, 2, 1]) tf.subtract(y, x) <tf.Tensor: shape=(5,), dtype=int32, numpy=array([ 4, 2, 0, -2, -4], dtype=int32)> ``` Note that if one of the inputs is a tensor and the other is a non-tensor (such as a Python list or scalar), the non-tensor input is cast to the tensor input's dtype, which can cause silent overflow or underflow. For example, here the Python values `2**8 + 1` and `2**8 + 2` wrap around when cast to `int8`: ``` x = tf.constant([1, 2], dtype=tf.int8) y = [2**8 + 1, 2**8 + 2] tf.subtract(x, y) <tf.Tensor: shape=(2,), dtype=int8, numpy=array([0, 0], dtype=int8)> ``` When subtracting two input values of different shapes, [`tf.subtract`](subtract) follows the [general broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules). The two input array shapes are compared element-wise. Starting with the trailing dimensions, the two dimensions either have to be equal or one of them needs to be `1`. 
For example, ``` x = np.ones(6).reshape(2, 3, 1) y = np.ones(6).reshape(2, 1, 3) tf.subtract(x, y) <tf.Tensor: shape=(2, 3, 3), dtype=float64, numpy= array([[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]])> ``` Example with inputs of different dimensions: ``` x = np.ones(6).reshape(2, 3, 1) y = np.ones(6).reshape(1, 6) tf.subtract(x, y) <tf.Tensor: shape=(2, 3, 6), dtype=float64, numpy= array([[[0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0.]], [[0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0.]]])> ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `uint32`, `uint64`. | | `y` | A `Tensor`. Must have the same type as `x`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.sinh tf.math.sinh ============ Computes hyperbolic sine of x element-wise. #### View aliases **Main aliases** [`tf.sinh`](https://www.tensorflow.org/api_docs/python/tf/math/sinh) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.sinh`](https://www.tensorflow.org/api_docs/python/tf/math/sinh), [`tf.compat.v1.sinh`](https://www.tensorflow.org/api_docs/python/tf/math/sinh) ``` tf.math.sinh( x, name=None ) ``` Given an input tensor, this function computes hyperbolic sine of every element in the tensor. Input range is `[-inf,inf]` and output range is `[-inf,inf]`. ``` x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")]) tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.is_inf tf.math.is\_inf =============== Returns which elements of x are Inf. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.debugging.is_inf`](https://www.tensorflow.org/api_docs/python/tf/math/is_inf), [`tf.compat.v1.is_inf`](https://www.tensorflow.org/api_docs/python/tf/math/is_inf), [`tf.compat.v1.math.is_inf`](https://www.tensorflow.org/api_docs/python/tf/math/is_inf) ``` tf.math.is_inf( x, name=None ) ``` #### Example: ``` x = tf.constant([5.0, np.inf, 6.8, np.inf]) tf.math.is_inf(x) ==> [False, True, False, True] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `bool`. | numpy compatibility ------------------- Equivalent to np.isinf tensorflow tf.math.floordiv tf.math.floordiv ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1691-L1718) | Divides `x / y` elementwise, rounding toward the most negative integer. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.floordiv`](https://www.tensorflow.org/api_docs/python/tf/math/floordiv), [`tf.compat.v1.math.floordiv`](https://www.tensorflow.org/api_docs/python/tf/math/floordiv) ``` tf.math.floordiv( x, y, name=None ) ``` Mathematically, this is equivalent to floor(x / y). For example: ``` floor(8.4 / 4.0) = floor(2.1) = 2.0 floor(-8.4 / 4.0) = floor(-2.1) = -3.0 ``` This is equivalent to the `//` operator in Python 3.0 and above. > > **Note:** `x` and `y` must have the same type, and the result will have the same type as well. > | Args | | `x` | `Tensor` numerator of real numeric type. | | `y` | `Tensor` denominator of real numeric type. | | `name` | A name for the operation (optional). | | Returns | | `x / y` rounded toward -infinity. | | Raises | | `TypeError` | If the inputs are complex. | tensorflow tf.math.lgamma tf.math.lgamma ============== Computes the log of the absolute value of `Gamma(x)` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.lgamma`](https://www.tensorflow.org/api_docs/python/tf/math/lgamma), [`tf.compat.v1.math.lgamma`](https://www.tensorflow.org/api_docs/python/tf/math/lgamma) ``` tf.math.lgamma( x, name=None ) ``` For positive numbers, this function computes log((input - 1)!) for every element in the tensor. `lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539` #### Example: ``` x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6]) tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow Module: tf.math.special Module: tf.math.special ======================= Public API for tf.math.special namespace. Functions --------- [`bessel_i0(...)`](bessel_i0): Computes the Bessel i0 function of `x` element-wise. [`bessel_i0e(...)`](bessel_i0e): Computes the Bessel i0e function of `x` element-wise. [`bessel_i1(...)`](bessel_i1): Computes the Bessel i1 function of `x` element-wise. [`bessel_i1e(...)`](bessel_i1e): Computes the Bessel i1e function of `x` element-wise. [`bessel_j0(...)`](special/bessel_j0): Computes the Bessel j0 function of `x` element-wise. [`bessel_j1(...)`](special/bessel_j1): Computes the Bessel j1 function of `x` element-wise. [`bessel_k0(...)`](special/bessel_k0): Computes the Bessel k0 function of `x` element-wise. [`bessel_k0e(...)`](special/bessel_k0e): Computes the Bessel k0e function of `x` element-wise. [`bessel_k1(...)`](special/bessel_k1): Computes the Bessel k1 function of `x` element-wise. [`bessel_k1e(...)`](special/bessel_k1e): Computes the Bessel k1e function of `x` element-wise. [`bessel_y0(...)`](special/bessel_y0): Computes the Bessel y0 function of `x` element-wise. [`bessel_y1(...)`](special/bessel_y1): Computes the Bessel y1 function of `x` element-wise. [`dawsn(...)`](special/dawsn): Computes Dawson's integral of `x` element-wise. [`expint(...)`](special/expint): Computes the Exponential integral of `x` element-wise. [`fresnel_cos(...)`](special/fresnel_cos): Computes Fresnel's cosine integral of `x` element-wise. [`fresnel_sin(...)`](special/fresnel_sin): Computes Fresnel's sine integral of `x` element-wise. [`spence(...)`](special/spence): Computes Spence's integral of `x` element-wise. 
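As a quick orientation to the `tf.math.special` module listed above, here is a small sketch calling two of its functions (TensorFlow 2.x assumed; the commented values are approximate and taken to match the corresponding `scipy.special` routines):

```
import tensorflow as tf

x = tf.constant([0.0, 1.0], dtype=tf.float32)

tf.math.special.bessel_j0(x)  # ~[1.0, 0.7651977], matching scipy.special.j0
tf.math.special.dawsn(x)      # ~[0.0, 0.5380795], matching scipy.special.dawsn
```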
tensorflow tf.math.not_equal tf.math.not\_equal ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1925-L1959) | Returns the truth value of (x != y) element-wise. #### View aliases **Main aliases** [`tf.not_equal`](https://www.tensorflow.org/api_docs/python/tf/math/not_equal) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.not_equal`](https://www.tensorflow.org/api_docs/python/tf/math/not_equal), [`tf.compat.v1.not_equal`](https://www.tensorflow.org/api_docs/python/tf/math/not_equal) ``` tf.math.not_equal( x, y, name=None ) ``` Performs a [broadcast](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the arguments and then an element-wise inequality comparison, returning a Tensor of boolean values. #### For example: ``` x = tf.constant([2, 4]) y = tf.constant(2) tf.math.not_equal(x, y) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([False, True])> ``` ``` x = tf.constant([2, 4]) y = tf.constant([2, 4]) tf.math.not_equal(x, y) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([False, False])> ``` | Args | | `x` | A [`tf.Tensor`](../tensor) or [`tf.sparse.SparseTensor`](../sparse/sparsetensor) or [`tf.IndexedSlices`](../indexedslices). | | `y` | A [`tf.Tensor`](../tensor) or [`tf.sparse.SparseTensor`](../sparse/sparsetensor) or [`tf.IndexedSlices`](../indexedslices). | | `name` | A name for the operation (optional). | | Returns | | A [`tf.Tensor`](../tensor) of type bool with the same size as that of x or y. | | Raises | | [`tf.errors.InvalidArgumentError`](../errors/invalidargumenterror): If shapes of arguments are incompatible | tensorflow tf.math.acosh tf.math.acosh ============= Computes inverse hyperbolic cosine of x element-wise. #### View aliases **Main aliases** [`tf.acosh`](https://www.tensorflow.org/api_docs/python/tf/math/acosh) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.acosh`](https://www.tensorflow.org/api_docs/python/tf/math/acosh), [`tf.compat.v1.math.acosh`](https://www.tensorflow.org/api_docs/python/tf/math/acosh) ``` tf.math.acosh( x, name=None ) ``` Given an input tensor, the function computes inverse hyperbolic cosine of every element. Input range is `[1, inf]`. It returns `nan` if the input lies outside the range. ``` x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")]) tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.greater tf.math.greater =============== Returns the truth value of (x > y) element-wise. #### View aliases **Main aliases** [`tf.greater`](https://www.tensorflow.org/api_docs/python/tf/math/greater) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.greater`](https://www.tensorflow.org/api_docs/python/tf/math/greater), [`tf.compat.v1.math.greater`](https://www.tensorflow.org/api_docs/python/tf/math/greater) ``` tf.math.greater( x, y, name=None ) ``` > > **Note:** [`math.greater`](greater) supports broadcasting. 
More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) > #### Example: ``` x = tf.constant([5, 4, 6]) y = tf.constant([5, 2, 5]) tf.math.greater(x, y) ==> [False, True, True] x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.greater(x, y) ==> [False, False, True] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. | | `y` | A `Tensor`. Must have the same type as `x`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `bool`. | tensorflow tf.math.xlog1py tf.math.xlog1py =============== Compute x \* log1p(y). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.xlog1py`](https://www.tensorflow.org/api_docs/python/tf/math/xlog1py) ``` tf.math.xlog1py( x, y, name=None ) ``` Given `x` and `y`, compute `x * log1p(y)`. This function safely returns zero when `x = 0`, no matter what the value of `y` is. #### Example: ``` tf.math.xlog1py(0., 1.) <tf.Tensor: shape=(), dtype=float32, numpy=0.> tf.math.xlog1py(1., 1.) <tf.Tensor: shape=(), dtype=float32, numpy=0.6931472> tf.math.xlog1py(2., 2.) <tf.Tensor: shape=(), dtype=float32, numpy=2.1972246> tf.math.xlog1py(0., -1.) <tf.Tensor: shape=(), dtype=float32, numpy=0.> ``` | Args | | `x` | A [`tf.Tensor`](../tensor) of type `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128` | | `y` | A [`tf.Tensor`](../tensor) of type `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128` | | `name` | A name for the operation (optional). | | Returns | | `x * log1p(y)`. | scipy compatibility ------------------- Equivalent to scipy.special.xlog1py tensorflow tf.math.reduce_all tf.math.reduce\_all =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L3177-L3222) | Computes [`tf.math.logical_and`](logical_and) of elements across dimensions of a tensor. #### View aliases **Main aliases** [`tf.reduce_all`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_all) ``` tf.math.reduce_all( input_tensor, axis=None, keepdims=False, name=None ) ``` This is the reduction operation for the elementwise [`tf.math.logical_and`](logical_and) op. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each of the entries in `axis`, which must be unique. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned. #### For example: ``` x = tf.constant([[True, True], [False, False]]) tf.math.reduce_all(x) <tf.Tensor: shape=(), dtype=bool, numpy=False> tf.math.reduce_all(x, 0) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([False, False])> tf.math.reduce_all(x, 1) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True, False])> ``` | Args | | `input_tensor` | The boolean tensor to reduce. | | `axis` | The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`. | | `keepdims` | If true, retains reduced dimensions with length 1. | | `name` | A name for the operation (optional). | | Returns | | The reduced tensor. 
| numpy compatibility ------------------- Equivalent to np.all tensorflow tf.math.logical_not tf.math.logical\_not ==================== Returns the truth value of `NOT x` element-wise. #### View aliases **Main aliases** [`tf.logical_not`](https://www.tensorflow.org/api_docs/python/tf/math/logical_not) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.logical_not`](https://www.tensorflow.org/api_docs/python/tf/math/logical_not), [`tf.compat.v1.math.logical_not`](https://www.tensorflow.org/api_docs/python/tf/math/logical_not) ``` tf.math.logical_not( x, name=None ) ``` #### Example: ``` tf.math.logical_not(tf.constant([True, False])) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([False, True])> ``` | Args | | `x` | A `Tensor` of type `bool`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `bool`. | tensorflow tf.math.segment_max tf.math.segment\_max ==================== Computes the maximum along segments of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.segment_max`](https://www.tensorflow.org/api_docs/python/tf/math/segment_max), [`tf.compat.v1.segment_max`](https://www.tensorflow.org/api_docs/python/tf/math/segment_max) ``` tf.math.segment_max( data, segment_ids, name=None ) ``` Read [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) for an explanation of segments. Computes a tensor such that \(output\_i = \max\_j(data\_j)\) where `max` is over `j` such that `segment_ids[j] == i`. If the max is empty for a given segment ID `i`, `output[i] = 0`. #### For example: ``` c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.math.segment_max(c, tf.constant([0, 0, 1])).numpy() array([[4, 3, 3, 4], [5, 6, 7, 8]], dtype=int32) ``` | Args | | `data` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. | | `segment_ids` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose size is equal to the size of `data`'s first dimension. Values should be sorted and can be repeated. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `data`. |
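Since `tf.math.segment_min` (documented earlier) and `tf.math.segment_max` share the same calling convention, a minimal side-by-side sketch may help; it reuses the example matrix from those entries and contrasts the sorted ops with `tf.math.unsorted_segment_sum` (TensorFlow 2.x eager execution assumed):

```
import tensorflow as tf

c = tf.constant([[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]])

# Sorted segment ops: segment_ids is 1-D, sorted, with one id per row of `c`.
tf.math.segment_max(c, tf.constant([0, 0, 1]))  # [[4, 3, 3, 4], [5, 6, 7, 8]]
tf.math.segment_min(c, tf.constant([0, 0, 1]))  # [[1, 2, 2, 1], [5, 6, 7, 8]]

# Unsorted variant: ids may appear in any order, but num_segments is required.
tf.math.unsorted_segment_sum(c, tf.constant([1, 0, 1]), num_segments=2)
# [[4, 3, 2, 1], [6, 8, 10, 12]]
```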
tensorflow tf.math.log tf.math.log =========== Computes natural logarithm of x element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.log`](https://www.tensorflow.org/api_docs/python/tf/math/log), [`tf.compat.v1.math.log`](https://www.tensorflow.org/api_docs/python/tf/math/log) ``` tf.math.log( x, name=None ) ``` I.e., \(y = \log\_e x\). #### Example: ``` x = tf.constant([0, 0.5, 1, 5]) tf.math.log(x) <tf.Tensor: shape=(4,), dtype=float32, numpy=array([ -inf, -0.6931472, 0. , 1.609438 ], dtype=float32)> ``` See: <https://en.wikipedia.org/wiki/Logarithm> | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.divide tf.math.divide ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L445-L477) | Computes Python style division of `x` by `y`. #### View aliases **Main aliases** [`tf.divide`](https://www.tensorflow.org/api_docs/python/tf/math/divide) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.divide`](https://www.tensorflow.org/api_docs/python/tf/math/divide), [`tf.compat.v1.math.divide`](https://www.tensorflow.org/api_docs/python/tf/math/divide) ``` tf.math.divide( x, y, name=None ) ``` #### For example: ``` x = tf.constant([16, 12, 11]) y = tf.constant([4, 6, 2]) tf.divide(x,y) <tf.Tensor: shape=(3,), dtype=float64, numpy=array([4. , 2. , 5.5])> ``` | Args | | `x` | A `Tensor` | | `y` | A `Tensor` | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` with same shape as input | tensorflow tf.math.expm1 tf.math.expm1 ============= Computes `exp(x) - 1` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.expm1`](https://www.tensorflow.org/api_docs/python/tf/math/expm1), [`tf.compat.v1.math.expm1`](https://www.tensorflow.org/api_docs/python/tf/math/expm1) ``` tf.math.expm1( x, name=None ) ``` i.e. `exp(x) - 1` or `e^(x) - 1`, where `x` is the input tensor. `e` denotes Euler's number and is approximately equal to 2.718281. ``` x = tf.constant(2.0) tf.math.expm1(x) ==> 6.389056 x = tf.constant([2.0, 8.0]) tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32) x = tf.constant(1 + 1j) tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j) ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.cumsum tf.math.cumsum ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L4240-L4309) | Compute the cumulative sum of the tensor `x` along `axis`. #### View aliases **Main aliases** [`tf.cumsum`](https://www.tensorflow.org/api_docs/python/tf/math/cumsum) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.cumsum`](https://www.tensorflow.org/api_docs/python/tf/math/cumsum), [`tf.compat.v1.math.cumsum`](https://www.tensorflow.org/api_docs/python/tf/math/cumsum) ``` tf.math.cumsum( x, axis=0, exclusive=False, reverse=False, name=None ) ``` By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output: For example: ``` # tf.cumsum([a, b, c]) # [a, a + b, a + b + c] x = tf.constant([2, 4, 6, 8]) tf.cumsum(x) <tf.Tensor: shape=(4,), dtype=int32, numpy=array([ 2, 6, 12, 20], dtype=int32)> ``` ``` # using varying `axis` values y = tf.constant([[2, 4, 6, 8], [1,3,5,7]]) tf.cumsum(y, axis=0) <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[ 2, 4, 6, 8], [ 3, 7, 11, 15]], dtype=int32)> tf.cumsum(y, axis=1) <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[ 2, 6, 12, 20], [ 1, 4, 9, 16]], dtype=int32)> ``` By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed instead: ``` # tf.cumsum([a, b, c], exclusive=True) => [0, a, a + b] x = tf.constant([2, 4, 6, 8]) tf.cumsum(x, exclusive=True) <tf.Tensor: shape=(4,), dtype=int32, numpy=array([ 0, 2, 6, 12], dtype=int32)> ``` By setting the `reverse` kwarg to `True`, the cumsum is performed in the opposite direction: ``` # tf.cumsum([a, b, c], reverse=True) # [a + b + c, b + c, c] x = tf.constant([2, 4, 6, 8]) tf.cumsum(x, reverse=True) <tf.Tensor: shape=(4,), dtype=int32, numpy=array([20, 18, 14, 8], dtype=int32)> ``` This is more efficient than using separate [`tf.reverse`](../reverse) ops. The `reverse` and `exclusive` kwargs can also be combined: ``` # tf.cumsum([a, b, c], exclusive=True, reverse=True) # [b + c, c, 0] x = tf.constant([2, 4, 6, 8]) tf.cumsum(x, exclusive=True, reverse=True) <tf.Tensor: shape=(4,), dtype=int32, numpy=array([18, 14, 8, 0], dtype=int32)> ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. | | `axis` | A `Tensor` of type `int32` (default: 0). Must be in the range `[-rank(x), rank(x))`. | | `exclusive` | If `True`, perform exclusive cumsum. | | `reverse` | A `bool` (default: False). | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.abs tf.math.abs =========== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L364-L408) | Computes the absolute value of a tensor. #### View aliases **Main aliases** [`tf.abs`](https://www.tensorflow.org/api_docs/python/tf/math/abs) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.abs`](https://www.tensorflow.org/api_docs/python/tf/math/abs), [`tf.compat.v1.math.abs`](https://www.tensorflow.org/api_docs/python/tf/math/abs) ``` tf.math.abs( x, name=None ) ``` Given a tensor of integer or floating-point values, this operation returns a tensor of the same type, where each element contains the absolute value of the corresponding element in the input. Given a tensor `x` of complex numbers, this operation returns a tensor of type `float32` or `float64` that is the absolute value of each element in `x`. For a complex number \(a + bj\), its absolute value is computed as \(\sqrt{a^2 + b^2}\). 
#### For example: ``` # real number x = tf.constant([-2.25, 3.25]) tf.abs(x) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([2.25, 3.25], dtype=float32)> ``` ``` # complex number x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]]) tf.abs(x) <tf.Tensor: shape=(2, 1), dtype=float64, numpy= array([[5.25594901], [6.60492241]])> ``` | Args | | `x` | A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, `complex64` or `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`, with absolute values. Note, for `complex64` or `complex128` input, the returned `Tensor` will be of type `float32` or `float64`, respectively. If `x` is a `SparseTensor`, returns `SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)` | tensorflow tf.math.reduce_any tf.math.reduce\_any =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L3283-L3328) | Computes [`tf.math.logical_or`](logical_or) of elements across dimensions of a tensor. #### View aliases **Main aliases** [`tf.reduce_any`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_any) ``` tf.math.reduce_any( input_tensor, axis=None, keepdims=False, name=None ) ``` This is the reduction operation for the elementwise [`tf.math.logical_or`](logical_or) op. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each of the entries in `axis`, which must be unique. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned. #### For example: ``` x = tf.constant([[True, True], [False, False]]) tf.reduce_any(x) <tf.Tensor: shape=(), dtype=bool, numpy=True> tf.reduce_any(x, 0) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True, True])> tf.reduce_any(x, 1) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True, False])> ``` | Args | | `input_tensor` | The boolean tensor to reduce. | | `axis` | The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`. | | `keepdims` | If true, retains reduced dimensions with length 1. | | `name` | A name for the operation (optional). | | Returns | | The reduced tensor. | numpy compatibility ------------------- Equivalent to np.any tensorflow tf.math.confusion_matrix tf.math.confusion\_matrix ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/confusion_matrix.py#L90-L194) | Computes the confusion matrix from predictions and labels. ``` tf.math.confusion_matrix( labels, predictions, num_classes=None, weights=None, dtype=tf.dtypes.int32, name=None ) ``` The matrix columns represent the prediction labels and the rows represent the real labels. The confusion matrix is always a 2-D array of shape `[n, n]`, where `n` is the number of valid labels for a given classification task. Both prediction and labels must be 1-D arrays of the same shape in order for this function to work. If `num_classes` is `None`, then `num_classes` will be set to one plus the maximum value in either predictions or labels. Class labels are expected to start at 0. For example, if `num_classes` is 3, then the possible labels would be `[0, 1, 2]`. 
If `weights` is not `None`, then each prediction contributes its corresponding weight to the total value of the confusion matrix cell. #### For example: ``` tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==> [[0 0 0 0 0] [0 0 1 0 0] [0 0 1 0 0] [0 0 0 0 0] [0 0 0 0 1]] ``` Note that the possible labels are assumed to be `[0, 1, 2, 3, 4]`, resulting in a 5x5 confusion matrix. | Args | | `labels` | 1-D `Tensor` of real labels for the classification task. | | `predictions` | 1-D `Tensor` of predictions for a given classification. | | `num_classes` | The possible number of labels the classification task can have. If this value is not provided, it will be calculated using both predictions and labels array. | | `weights` | An optional `Tensor` whose shape matches `predictions`. | | `dtype` | Data type of the confusion matrix. | | `name` | Scope name. | | Returns | | A `Tensor` of type `dtype` with shape `[n, n]` representing the confusion matrix, where `n` is the number of possible labels in the classification task. | | Raises | | `ValueError` | If both predictions and labels are not 1-D vectors and have mismatched shapes, or if `weights` is not `None` and its shape doesn't match `predictions`. | tensorflow tf.math.nextafter tf.math.nextafter ================= Returns the next representable value of `x1` in the direction of `x2`, element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.nextafter`](https://www.tensorflow.org/api_docs/python/tf/math/nextafter) ``` tf.math.nextafter( x1, x2, name=None ) ``` This operation returns the same result as the C++ std::nextafter function. It can also return a subnormal number. | Args | | `x1` | A `Tensor`. Must be one of the following types: `float64`, `float32`. | | `x2` | A `Tensor`. Must have the same type as `x1`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x1`. | cpp compatibility ----------------- Equivalent to C++ std::nextafter function. tensorflow tf.math.segment_prod tf.math.segment\_prod ===================== Computes the product along segments of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.segment_prod`](https://www.tensorflow.org/api_docs/python/tf/math/segment_prod), [`tf.compat.v1.segment_prod`](https://www.tensorflow.org/api_docs/python/tf/math/segment_prod) ``` tf.math.segment_prod( data, segment_ids, name=None ) ``` Read [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) for an explanation of segments. Computes a tensor such that \(output\_i = \prod\_j data\_j\) where the product is over `j` such that `segment_ids[j] == i`. If the product is empty for a given segment ID `i`, `output[i] = 1`. #### For example: ``` c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.math.segment_prod(c, tf.constant([0, 0, 1])).numpy() array([[4, 6, 6, 4], [5, 6, 7, 8]], dtype=int32) ``` | Args | | `data` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. | | `segment_ids` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose size is equal to the size of `data`'s first dimension. Values should be sorted and can be repeated. 
| | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `data`. | tensorflow tf.math.unsorted_segment_mean tf.math.unsorted\_segment\_mean =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L4545-L4597) | Computes the mean along segments of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.unsorted_segment_mean`](https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_mean), [`tf.compat.v1.unsorted_segment_mean`](https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_mean) ``` tf.math.unsorted_segment_mean( data, segment_ids, num_segments, name=None ) ``` Read [the section on segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) for an explanation of segments. This operator is similar to the [`tf.math.unsorted_segment_sum`](unsorted_segment_sum) operator. Instead of computing the sum over segments, it computes the mean of all entries belonging to a segment such that: \(output\_i = 1/N\_i \sum\_{j...} data[j...]\) where the sum is over tuples `j...` such that `segment_ids[j...] == i`, with \(N\_i\) being the number of occurrences of id \(i\). If there is no entry for a given segment ID `i`, it outputs 0. If the given segment ID `i` is negative, the value is dropped and will not be added to the sum of the segment. | Args | | `data` | A `Tensor` with floating point or complex dtype. | | `segment_ids` | An integer tensor whose shape is a prefix of `data.shape`. The values must be less than `num_segments`. The values are always validated to be in range on CPU, never validated on GPU. | | `num_segments` | An integer scalar `Tensor`. The number of distinct segment IDs. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has same shape as data, except for the first `segment_ids.rank` dimensions, which are replaced with a single dimension which has size `num_segments`. | tensorflow tf.math.logical_xor tf.math.logical\_xor ==================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1789-L1838) | Logical XOR function. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.logical_xor`](https://www.tensorflow.org/api_docs/python/tf/math/logical_xor), [`tf.compat.v1.math.logical_xor`](https://www.tensorflow.org/api_docs/python/tf/math/logical_xor) ``` tf.math.logical_xor( x, y, name='LogicalXor' ) ``` x ^ y = (x | y) & ~(x & y) Requires that `x` and `y` have the same shape or have [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) shapes. For example, `x` and `y` can be: * Two single elements of type `bool` * One [`tf.Tensor`](../tensor) of type `bool` and one single `bool`, where the result will be calculated by applying logical XOR with the single element to each element in the larger Tensor. * Two [`tf.Tensor`](../tensor) objects of type `bool` of the same shape. In this case, the result will be the element-wise logical XOR of the two input tensors. 
#### Usage: ``` a = tf.constant([True]) b = tf.constant([False]) tf.math.logical_xor(a, b) <tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])> ``` ``` c = tf.constant([True]) x = tf.constant([False, True, True, False]) tf.math.logical_xor(c, x) <tf.Tensor: shape=(4,), dtype=bool, numpy=array([ True, False, False, True])> ``` ``` y = tf.constant([False, False, True, True]) z = tf.constant([False, True, False, True]) tf.math.logical_xor(y, z) <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])> ``` | Args | | `x` | A [`tf.Tensor`](../tensor) of type bool. | | `y` | A [`tf.Tensor`](../tensor) of type bool. | | `name` | A name for the operation (optional). | | Returns | | A [`tf.Tensor`](../tensor) of type bool with the same size as that of x or y. | tensorflow tf.math.bessel_i1 tf.math.bessel\_i1 ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/special_math_ops.py#L308-L334) | Computes the Bessel i1 function of `x` element-wise. #### View aliases **Main aliases** [`tf.math.special.bessel_i1`](https://www.tensorflow.org/api_docs/python/tf/math/bessel_i1) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.bessel_i1`](https://www.tensorflow.org/api_docs/python/tf/math/bessel_i1), [`tf.compat.v1.math.special.bessel_i1`](https://www.tensorflow.org/api_docs/python/tf/math/bessel_i1) ``` tf.math.bessel_i1( x, name=None ) ``` Modified Bessel function of order 1. It is preferable to use the numerically stabler function `i1e(x)` instead. ``` tf.math.special.bessel_i1([-1., -0.5, 0.5, 1.]).numpy() array([-0.5651591 , -0.25789431, 0.25789431, 0.5651591 ], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.i1
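To see why `i1e(x)` is preferred, a minimal sketch assuming the standard scaling identity `i1e(x) = exp(-|x|) * i1(x)`; the scaled form avoids the exponential growth of `i1` for large arguments, and the identity can be checked numerically:

```
x = tf.constant([-1., -0.5, 0.5, 1.])
# Undo the exponential scaling of i1e and compare against i1.
rescaled = tf.exp(tf.abs(x)) * tf.math.special.bessel_i1e(x)
tf.reduce_all(tf.abs(rescaled - tf.math.bessel_i1(x)) < 1e-6)
<tf.Tensor: shape=(), dtype=bool, numpy=True>
```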
tensorflow tf.math.is_non_decreasing tf.math.is\_non\_decreasing =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/check_ops.py#L2068-L2107) | Returns `True` if `x` is non-decreasing. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.debugging.is_non_decreasing`](https://www.tensorflow.org/api_docs/python/tf/math/is_non_decreasing), [`tf.compat.v1.is_non_decreasing`](https://www.tensorflow.org/api_docs/python/tf/math/is_non_decreasing), [`tf.compat.v1.math.is_non_decreasing`](https://www.tensorflow.org/api_docs/python/tf/math/is_non_decreasing) ``` tf.math.is_non_decreasing( x, name=None ) ``` Elements of `x` are compared in row-major order. The tensor `[x[0],...]` is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`. If `x` has less than two elements, it is trivially non-decreasing. See also: `is_strictly_increasing` ``` x1 = tf.constant([1.0, 1.0, 3.0]) tf.math.is_non_decreasing(x1) <tf.Tensor: shape=(), dtype=bool, numpy=True> x2 = tf.constant([3.0, 1.0, 2.0]) tf.math.is_non_decreasing(x2) <tf.Tensor: shape=(), dtype=bool, numpy=False> ``` | Args | | `x` | Numeric `Tensor`. | | `name` | A name for this operation (optional). Defaults to "is\_non\_decreasing" | | Returns | | Boolean `Tensor`, equal to `True` iff `x` is non-decreasing. | | Raises | | `TypeError` | if `x` is not a numeric tensor. | tensorflow tf.math.reduce_prod tf.math.reduce\_prod ==================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L2758-L2804) | Computes [`tf.math.multiply`](multiply) of elements across dimensions of a tensor. #### View aliases **Main aliases** [`tf.reduce_prod`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_prod) ``` tf.math.reduce_prod( input_tensor, axis=None, keepdims=False, name=None ) ``` This is the reduction operation for the elementwise [`tf.math.multiply`](multiply) op. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned. #### For example: ``` x = tf.constant([[1., 2.], [3., 4.]]) tf.math.reduce_prod(x) <tf.Tensor: shape=(), dtype=float32, numpy=24.> tf.math.reduce_prod(x, 0) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([3., 8.], dtype=float32)> tf.math.reduce_prod(x, 1) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([2., 12.], dtype=float32)> ``` | Args | | `input_tensor` | The tensor to reduce. Should have numeric type. | | `axis` | The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`. | | `keepdims` | If true, retains reduced dimensions with length 1. | | `name` | A name for the operation (optional). | | Returns | | The reduced tensor. | numpy compatibility ------------------- Equivalent to np.prod tensorflow tf.math.invert_permutation tf.math.invert\_permutation =========================== Computes the inverse permutation of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.invert_permutation`](https://www.tensorflow.org/api_docs/python/tf/math/invert_permutation), [`tf.compat.v1.math.invert_permutation`](https://www.tensorflow.org/api_docs/python/tf/math/invert_permutation) ``` tf.math.invert_permutation( x, name=None ) ``` This operation computes the inverse of an index permutation. It takes a 1-D integer tensor `x`, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor `y` and an input tensor `x`, this operation computes the following: `y[x[i]] = i for i in [0, 1, ..., len(x) - 1]` The values must include 0. There can be no duplicate values or negative values. #### For example: ``` # tensor `x` is [3, 4, 0, 2, 1] invert_permutation(x) ==> [2, 4, 3, 0, 1] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.argmin tf.math.argmin ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L321-L356) | Returns the index with the smallest value across axes of a tensor. #### View aliases **Main aliases** [`tf.argmin`](https://www.tensorflow.org/api_docs/python/tf/math/argmin) ``` tf.math.argmin( input, axis=None, output_type=tf.dtypes.int64, name=None ) ``` Returns the smallest index in case of ties. | Args | | `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. | | `axis` | A `Tensor`. Must be one of the following types: `int32`, `int64`. int32 or int64, must be in the range `[-rank(input), rank(input))`. Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0. | | `output_type` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.int32, tf.int64`. Defaults to [`tf.int64`](../../tf#int64). | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `output_type`. | #### Usage: ``` import tensorflow as tf a = [1, 10, 26.9, 2.8, 166.32, 62.3] b = tf.math.argmin(input = a) c = tf.keras.backend.eval(b) # c = 0 # here a[0] = 1 which is the smallest element of a across axis 0 ``` tensorflow tf.math.negative tf.math.negative ================ Computes numerical negative value element-wise. #### View aliases **Main aliases** [`tf.negative`](https://www.tensorflow.org/api_docs/python/tf/math/negative) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.negative`](https://www.tensorflow.org/api_docs/python/tf/math/negative), [`tf.compat.v1.negative`](https://www.tensorflow.org/api_docs/python/tf/math/negative) ``` tf.math.negative( x, name=None ) ``` I.e., \(y = -x\). | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. If `x` is a `SparseTensor`, returns `SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)` | tensorflow tf.math.atan2 tf.math.atan2 ============= Computes arctangent of `y/x` element-wise, respecting signs of the arguments.
#### View aliases **Main aliases** [`tf.atan2`](https://www.tensorflow.org/api_docs/python/tf/math/atan2) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.atan2`](https://www.tensorflow.org/api_docs/python/tf/math/atan2), [`tf.compat.v1.math.atan2`](https://www.tensorflow.org/api_docs/python/tf/math/atan2) ``` tf.math.atan2( y, x, name=None ) ``` This is the angle \( \theta \in [-\pi, \pi] \) such that \[ x = r \cos(\theta) \] and \[ y = r \sin(\theta) \] where \(r = \sqrt{x^2 + y^2} \). #### For example: ``` x = [1., 1.] y = [1., -1.] print((tf.math.atan2(y,x) * (180 / np.pi)).numpy()) [ 45. -45.] ``` | Args | | `y` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. | | `x` | A `Tensor`. Must have the same type as `y`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `y`. | tensorflow tf.math.reduce_sum tf.math.reduce\_sum =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L2248-L2312) | Computes the sum of elements across dimensions of a tensor. #### View aliases **Main aliases** [`tf.reduce_sum`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) ``` tf.math.reduce_sum( input_tensor, axis=None, keepdims=False, name=None ) ``` This is the reduction operation for the elementwise [`tf.math.add`](add) op. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each of the entries in `axis`, which must be unique. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned. #### For example: ``` # x has a shape of (2, 3) (two rows and three columns): x = tf.constant([[1, 1, 1], [1, 1, 1]]) x.numpy() array([[1, 1, 1], [1, 1, 1]], dtype=int32) # sum all the elements # 1 + 1 + 1 + 1 + 1 + 1 = 6 tf.reduce_sum(x).numpy() 6 # reduce along the first dimension # the result is [1, 1, 1] + [1, 1, 1] = [2, 2, 2] tf.reduce_sum(x, 0).numpy() array([2, 2, 2], dtype=int32) # reduce along the second dimension # the result is [1, 1] + [1, 1] + [1, 1] = [3, 3] tf.reduce_sum(x, 1).numpy() array([3, 3], dtype=int32) # keep the original dimensions tf.reduce_sum(x, 1, keepdims=True).numpy() array([[3], [3]], dtype=int32) # reduce along both dimensions # the result is 1 + 1 + 1 + 1 + 1 + 1 = 6 # or, equivalently, reduce along rows, then reduce the resultant array # [1, 1, 1] + [1, 1, 1] = [2, 2, 2] # 2 + 2 + 2 = 6 tf.reduce_sum(x, [0, 1]).numpy() 6 ``` | Args | | `input_tensor` | The tensor to reduce. Should have numeric type. | | `axis` | The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`. | | `keepdims` | If true, retains reduced dimensions with length 1. | | `name` | A name for the operation (optional). | | Returns | | The reduced tensor, of the same dtype as the input\_tensor. | numpy compatibility ------------------- Equivalent to np.sum, apart from the fact that numpy upcasts uint8 and int32 to int64 while tensorflow returns the same dtype as the input. tensorflow tf.math.polyval tf.math.polyval =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L5142-L5211) | Computes the elementwise value of a polynomial.
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.polyval`](https://www.tensorflow.org/api_docs/python/tf/math/polyval) ``` tf.math.polyval( coeffs, x, name=None ) ``` If `x` is a tensor and `coeffs` is a list of n + 1 tensors, this function returns the value of the n-th order polynomial `p(x) = coeffs[n] + coeffs[n-1] * x + ... + coeffs[0] * x**n` evaluated using Horner's method, i.e. ``` p(x) = coeffs[n] + x * (coeffs[n-1] + ... + x * (coeffs[1] + x * coeffs[0])) ``` #### Usage Example: ``` coefficients = [1.0, 2.5, -4.2] x = 5.0 y = tf.math.polyval(coefficients, x) y <tf.Tensor: shape=(), dtype=float32, numpy=33.3> ``` #### Usage Example: ``` tf.math.polyval([2, 1, 0], 3) # evaluates 2 * (3**2) + 1 * (3**1) + 0 * (3**0) <tf.Tensor: shape=(), dtype=int32, numpy=21> ``` [`tf.math.polyval`](polyval) can also be used in polynomial regression. Taking advantage of this function can facilitate writing a polynomial equation as compared to explicitly writing it out, especially for higher degree polynomials. ``` x = tf.constant(3) theta1 = tf.Variable(2) theta2 = tf.Variable(1) theta3 = tf.Variable(0) tf.math.polyval([theta1, theta2, theta3], x) <tf.Tensor: shape=(), dtype=int32, numpy=21> ``` | Args | | `coeffs` | A list of `Tensor` representing the coefficients of the polynomial. | | `x` | A `Tensor` representing the variable of the polynomial. | | `name` | A name for the operation (optional). | | Returns | | A `tensor` of the same shape as the expression p(x) with usual broadcasting rules for element-wise addition and multiplication applied. | numpy compatibility ------------------- Equivalent to numpy.polyval. tensorflow tf.math.acos tf.math.acos ============ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L5521-L5550) | Computes acos of x element-wise. #### View aliases **Main aliases** [`tf.acos`](https://www.tensorflow.org/api_docs/python/tf/math/acos) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.acos`](https://www.tensorflow.org/api_docs/python/tf/math/acos), [`tf.compat.v1.math.acos`](https://www.tensorflow.org/api_docs/python/tf/math/acos) ``` tf.math.acos( x, name=None ) ``` Provided an input tensor, the [`tf.math.acos`](acos) operation returns the inverse cosine of each element of the tensor. If `y = tf.math.cos(x)` then `x = tf.math.acos(y)`. Input range is `[-1, 1]` and the output has a range of `[0, pi]`. #### For example: ``` x = tf.constant([1.0, -0.5, 3.4, 0.2, 0.0, -2], dtype = tf.float32) tf.math.acos(x) <tf.Tensor: shape=(6,), dtype=float32, numpy= array([0. , 2.0943952, nan, 1.3694383, 1.5707964, nan], dtype=float32)> ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as x. | tensorflow tf.math.erfcinv tf.math.erfcinv =============== Computes the inverse of the complementary error function. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.math.erfcinv`](https://www.tensorflow.org/api_docs/python/tf/math/erfcinv) ``` tf.math.erfcinv( x, name=None ) ``` Given `x`, compute the inverse complementary error function of `x`. This function is the inverse of [`tf.math.erfc`](erfc), and is defined on `[0, 2]`. ``` tf.math.erfcinv([0., 0.2, 1., 1.5, 2.]) <tf.Tensor: shape=(5,), dtype=float32, numpy= array([ inf, 0.9061935, -0. , -0.4769363, -inf], dtype=float32)> ``` | Args | | `x` | `Tensor` with type `float` or `double`. | | `name` | A name for the operation (optional). | | Returns | | Inverse complementary error function of `x`. | scipy compatibility ------------------- Equivalent to scipy.special.erfcinv tensorflow tf.math.greater_equal tf.math.greater\_equal ====================== Returns the truth value of (x >= y) element-wise. #### View aliases **Main aliases** [`tf.greater_equal`](https://www.tensorflow.org/api_docs/python/tf/math/greater_equal) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.greater_equal`](https://www.tensorflow.org/api_docs/python/tf/math/greater_equal), [`tf.compat.v1.math.greater_equal`](https://www.tensorflow.org/api_docs/python/tf/math/greater_equal) ``` tf.math.greater_equal( x, y, name=None ) ``` > > **Note:** [`math.greater_equal`](greater_equal) supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) > #### Example: ``` x = tf.constant([5, 4, 6, 7]) y = tf.constant([5, 2, 5, 10]) tf.math.greater_equal(x, y) ==> [True, True, True, False] x = tf.constant([5, 4, 6, 7]) y = tf.constant([5]) tf.math.greater_equal(x, y) ==> [True, False, True, True] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. | | `y` | A `Tensor`. Must have the same type as `x`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `bool`. | tensorflow tf.math.sobol_sample tf.math.sobol\_sample ===================== Generates points from the Sobol sequence. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.sobol_sample`](https://www.tensorflow.org/api_docs/python/tf/math/sobol_sample) ``` tf.math.sobol_sample( dim, num_results, skip=0, dtype=tf.dtypes.float32, name=None ) ``` Creates a Sobol sequence with `num_results` samples. Each sample has dimension `dim`. Skips the first `skip` samples. | Args | | `dim` | Positive scalar `Tensor` representing each sample's dimension. | | `num_results` | Positive scalar `Tensor` of dtype int32. The number of Sobol points to return in the output. | | `skip` | (Optional) Non-negative scalar `Tensor` of dtype int32. The number of initial points of the Sobol sequence to skip. Default value is 0. | | `dtype` | (Optional) The `tf.Dtype` of the sample. One of: [`tf.float32`](../../tf#float32) or [`tf.float64`](../../tf#float64). Defaults to [`tf.float32`](../../tf#float32). | | `name` | (Optional) Python `str` name prefixed to ops created by this function. | | Returns | | `Tensor` of samples from Sobol sequence with `shape` [num\_results, dim]. | tensorflow tf.math.floor tf.math.floor ============= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L5553-L5576) | Returns element-wise largest integer not greater than x.
#### View aliases **Main aliases** [`tf.floor`](https://www.tensorflow.org/api_docs/python/tf/math/floor) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.floor`](https://www.tensorflow.org/api_docs/python/tf/math/floor), [`tf.compat.v1.math.floor`](https://www.tensorflow.org/api_docs/python/tf/math/floor) ``` tf.math.floor( x, name=None ) ``` The input range is `(-inf, inf)` and the output range consists of all integer values. #### For example: ``` x = tf.constant([1.3324, -1.5, 5.555, -2.532, 0.99, float("inf")]) tf.floor(x).numpy() array([ 1., -2., 5., -3., 0., inf], dtype=float32) ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as x. | tensorflow tf.math.erfc tf.math.erfc ============ Computes the complementary error function of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.erfc`](https://www.tensorflow.org/api_docs/python/tf/math/erfc), [`tf.compat.v1.math.erfc`](https://www.tensorflow.org/api_docs/python/tf/math/erfc) ``` tf.math.erfc( x, name=None ) ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.zero_fraction tf.math.zero\_fraction ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/nn_impl.py#L724-L763) | Returns the fraction of zeros in `value`. #### View aliases **Main aliases** [`tf.nn.zero_fraction`](https://www.tensorflow.org/api_docs/python/tf/math/zero_fraction) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.zero_fraction`](https://www.tensorflow.org/api_docs/python/tf/math/zero_fraction), [`tf.compat.v1.nn.zero_fraction`](https://www.tensorflow.org/api_docs/python/tf/math/zero_fraction) ``` tf.math.zero_fraction( value, name=None ) ``` If `value` is empty, the result is `nan`. This is useful in summaries to measure and report sparsity. For example, ``` z = tf.nn.relu(...) summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z)) ``` | Args | | `value` | A tensor of numeric type. | | `name` | A name for the operation (optional). | | Returns | | The fraction of zeros in `value`, with type `float32`. |
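For instance, a minimal eager sketch (values chosen only for illustration):

```
z = tf.constant([0., 1., 0., 2.])
tf.math.zero_fraction(z)  # two of the four entries are zero
<tf.Tensor: shape=(), dtype=float32, numpy=0.5>
```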
tensorflow tf.math.equal tf.math.equal ============= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1888-L1922) | Returns the truth value of (x == y) element-wise. #### View aliases **Main aliases** [`tf.equal`](https://www.tensorflow.org/api_docs/python/tf/math/equal) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.equal`](https://www.tensorflow.org/api_docs/python/tf/math/equal), [`tf.compat.v1.math.equal`](https://www.tensorflow.org/api_docs/python/tf/math/equal) ``` tf.math.equal( x, y, name=None ) ``` Performs a [broadcast](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the arguments and then an element-wise equality comparison, returning a Tensor of boolean values. #### For example: ``` x = tf.constant([2, 4]) y = tf.constant(2) tf.math.equal(x, y) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True, False])> ``` ``` x = tf.constant([2, 4]) y = tf.constant([2, 4]) tf.math.equal(x, y) <tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True, True])> ``` | Args | | `x` | A [`tf.Tensor`](../tensor) or [`tf.sparse.SparseTensor`](../sparse/sparsetensor) or [`tf.IndexedSlices`](../indexedslices). | | `y` | A [`tf.Tensor`](../tensor) or [`tf.sparse.SparseTensor`](../sparse/sparsetensor) or [`tf.IndexedSlices`](../indexedslices). | | `name` | A name for the operation (optional). | | Returns | | A [`tf.Tensor`](../tensor) of type bool with the same size as that of x or y. | | Raises | | [`tf.errors.InvalidArgumentError`](../errors/invalidargumenterror): If shapes of arguments are incompatible | tensorflow tf.math.less_equal tf.math.less\_equal =================== Returns the truth value of (x <= y) element-wise. #### View aliases **Main aliases** [`tf.less_equal`](https://www.tensorflow.org/api_docs/python/tf/math/less_equal) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.less_equal`](https://www.tensorflow.org/api_docs/python/tf/math/less_equal), [`tf.compat.v1.math.less_equal`](https://www.tensorflow.org/api_docs/python/tf/math/less_equal) ``` tf.math.less_equal( x, y, name=None ) ``` > > **Note:** [`math.less_equal`](less_equal) supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) > #### Example: ``` x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.less_equal(x, y) ==> [True, True, False] x = tf.constant([5, 4, 6]) y = tf.constant([5, 6, 6]) tf.math.less_equal(x, y) ==> [True, True, True] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. | | `y` | A `Tensor`. Must have the same type as `x`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `bool`. | tensorflow tf.math.unsorted_segment_prod tf.math.unsorted\_segment\_prod =============================== Computes the product along segments of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.math.unsorted_segment_prod`](https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_prod), [`tf.compat.v1.unsorted_segment_prod`](https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_prod) ``` tf.math.unsorted_segment_prod( data, segment_ids, num_segments, name=None ) ``` Read [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) for an explanation of segments. This operator is similar to [`tf.math.unsorted_segment_sum`](unsorted_segment_sum). Instead of computing the sum over segments, it computes the product of all entries belonging to a segment such that: \(output\_i = \prod\_{j...} data[j...]\) where the product is over tuples `j...` such that `segment_ids[j...] == i`. #### For example: ``` c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]]) tf.math.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2).numpy() array([[4, 6, 6, 4], [5, 6, 7, 8]], dtype=int32) ``` If there is no entry for a given segment ID `i`, it outputs 1. If the given segment ID `i` is negative, then the corresponding value is dropped, and will not be included in the result. Caution: On CPU, values in `segment_ids` are always validated to be less than `num_segments`, and an error is thrown for out-of-bound indices. On GPU, no error is thrown for out-of-bound indices; instead, they result in safe but unspecified behavior, which may include ignoring out-of-bound indices or outputting a tensor with a 0 stored in the first dimension of its shape if `num_segments` is 0. | Args | | `data` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. | | `segment_ids` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor whose shape is a prefix of `data.shape`. The values must be less than `num_segments`. | | `num_segments` | A `Tensor`. Must be one of the following types: `int32`, `int64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `data`. | tensorflow tf.math.square tf.math.square ============== Computes square of x element-wise. #### View aliases **Main aliases** [`tf.square`](https://www.tensorflow.org/api_docs/python/tf/math/square) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.square`](https://www.tensorflow.org/api_docs/python/tf/math/square), [`tf.compat.v1.square`](https://www.tensorflow.org/api_docs/python/tf/math/square) ``` tf.math.square( x, name=None ) ``` I.e., \(y = x \cdot x = x^2\). ``` tf.math.square([-2., 0., 3.]) <tf.Tensor: shape=(3,), dtype=float32, numpy=array([4., 0., 9.], dtype=float32)> ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. If `x` is a `SparseTensor`, returns `SparseTensor(x.indices, tf.math.square(x.values, ...), x.dense_shape)` | tensorflow tf.math.segment_mean tf.math.segment\_mean ===================== Computes the mean along segments of a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.math.segment_mean`](https://www.tensorflow.org/api_docs/python/tf/math/segment_mean), [`tf.compat.v1.segment_mean`](https://www.tensorflow.org/api_docs/python/tf/math/segment_mean) ``` tf.math.segment_mean( data, segment_ids, name=None ) ``` Read [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) for an explanation of segments. Computes a tensor such that \(output\_i = \frac{\sum\_j data\_j}{N}\) where the mean is over `j` such that `segment_ids[j] == i` and `N` is the total number of values summed. If the mean is empty for a given segment ID `i`, `output[i] = 0`. #### For example: ``` c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.math.segment_mean(c, tf.constant([0, 0, 1])).numpy() array([[2.5, 2.5, 2.5, 2.5], [5., 6., 7., 8.]], dtype=float32) ``` | Args | | `data` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. | | `segment_ids` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose size is equal to the size of `data`'s first dimension. Values should be sorted and can be repeated. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `data`. | tensorflow tf.math.atanh tf.math.atanh ============= Computes inverse hyperbolic tangent of x element-wise. #### View aliases **Main aliases** [`tf.atanh`](https://www.tensorflow.org/api_docs/python/tf/math/atanh) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.atanh`](https://www.tensorflow.org/api_docs/python/tf/math/atanh), [`tf.compat.v1.math.atanh`](https://www.tensorflow.org/api_docs/python/tf/math/atanh) ``` tf.math.atanh( x, name=None ) ``` Given an input tensor, this function computes inverse hyperbolic tangent for every element in the tensor. Input range is `[-1,1]` and output range is `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the input is `1`, output will be `inf`. Values outside the range will have `nan` as output. ``` x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")]) tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.argmax tf.math.argmax ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L267-L301) | Returns the index with the largest value across axes of a tensor. #### View aliases **Main aliases** [`tf.argmax`](https://www.tensorflow.org/api_docs/python/tf/math/argmax) ``` tf.math.argmax( input, axis=None, output_type=tf.dtypes.int64, name=None ) ``` In case of ties, returns the smallest index.
#### For example: ``` A = tf.constant([2, 20, 30, 3, 6]) tf.math.argmax(A) # A[2] is maximum in tensor A <tf.Tensor: shape=(), dtype=int64, numpy=2> B = tf.constant([[2, 20, 30, 3, 6], [3, 11, 16, 1, 8], [14, 45, 23, 5, 27]]) tf.math.argmax(B, 0) <tf.Tensor: shape=(5,), dtype=int64, numpy=array([2, 2, 0, 2, 2])> tf.math.argmax(B, 1) <tf.Tensor: shape=(3,), dtype=int64, numpy=array([2, 2, 1])> C = tf.constant([0, 0, 0, 0]) tf.math.argmax(C) # Returns smallest index in case of ties <tf.Tensor: shape=(), dtype=int64, numpy=0> ``` | Args | | `input` | A `Tensor`. | | `axis` | An integer, the axis to reduce across. Defaults to 0. | | `output_type` | An optional output dtype ([`tf.int32`](../../tf#int32) or [`tf.int64`](../../tf#int64)). Defaults to [`tf.int64`](../../tf#int64). | | `name` | An optional name for the operation. | | Returns | | A `Tensor` of type `output_type`. | tensorflow tf.math.reduce_mean tf.math.reduce\_mean ==================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L2584-L2641) | Computes the mean of elements across dimensions of a tensor. #### View aliases **Main aliases** [`tf.reduce_mean`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean) ``` tf.math.reduce_mean( input_tensor, axis=None, keepdims=False, name=None ) ``` Reduces `input_tensor` along the dimensions given in `axis` by computing the mean of elements across the dimensions in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each of the entries in `axis`, which must be unique. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensions are reduced, and a tensor with a single element is returned. #### For example: ``` x = tf.constant([[1., 1.], [2., 2.]]) tf.reduce_mean(x) <tf.Tensor: shape=(), dtype=float32, numpy=1.5> tf.reduce_mean(x, 0) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.5, 1.5], dtype=float32)> tf.reduce_mean(x, 1) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 2.], dtype=float32)> ``` | Args | | `input_tensor` | The tensor to reduce. Should have numeric type. | | `axis` | The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`. | | `keepdims` | If true, retains reduced dimensions with length 1. | | `name` | A name for the operation (optional). | | Returns | | The reduced tensor. | numpy compatibility ------------------- Equivalent to np.mean Please note that `np.mean` has a `dtype` parameter that could be used to specify the output type. By default this is `dtype=float64`. On the other hand, [`tf.reduce_mean`](reduce_mean) has an aggressive type inference from `input_tensor`, for example: ``` x = tf.constant([1, 0, 1, 0]) tf.reduce_mean(x) <tf.Tensor: shape=(), dtype=int32, numpy=0> y = tf.constant([1., 0., 1., 0.]) tf.reduce_mean(y) <tf.Tensor: shape=(), dtype=float32, numpy=0.5> ``` tensorflow tf.math.sigmoid tf.math.sigmoid =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L4142-L4192) | Computes sigmoid of `x` element-wise. #### View aliases **Main aliases** [`tf.nn.sigmoid`](https://www.tensorflow.org/api_docs/python/tf/math/sigmoid), [`tf.sigmoid`](https://www.tensorflow.org/api_docs/python/tf/math/sigmoid) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.math.sigmoid`](https://www.tensorflow.org/api_docs/python/tf/math/sigmoid), [`tf.compat.v1.nn.sigmoid`](https://www.tensorflow.org/api_docs/python/tf/math/sigmoid), [`tf.compat.v1.sigmoid`](https://www.tensorflow.org/api_docs/python/tf/math/sigmoid) ``` tf.math.sigmoid( x, name=None ) ``` The sigmoid is computed as \(\mathrm{sigmoid}(x) = y = 1 / (1 + \exp(-x))\). For \(x \in (-\infty, \infty)\), \(\mathrm{sigmoid}(x) \in (0, 1)\). #### Example Usage: If a positive number is large, then its sigmoid will approach 1 since the formula will be `y = <large_num> / (1 + <large_num>)` ``` x = tf.constant([0.0, 1.0, 50.0, 100.0]) tf.math.sigmoid(x) <tf.Tensor: shape=(4,), dtype=float32, numpy=array([0.5, 0.7310586, 1.0, 1.0], dtype=float32)> ``` If a negative number is large, its sigmoid will approach 0 since the formula will be `y = 1 / (1 + <large_num>)` ``` x = tf.constant([-100.0, -50.0, -1.0, 0.0]) tf.math.sigmoid(x) <tf.Tensor: shape=(4,), dtype=float32, numpy= array([0.0000000e+00, 1.9287499e-22, 2.6894143e-01, 0.5], dtype=float32)> ``` | Args | | `x` | A Tensor with type `float16`, `float32`, `float64`, `complex64`, or `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A Tensor with the same type as `x`. | #### Usage Example: ``` x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32) tf.sigmoid(x) <tf.Tensor: shape=(3,), dtype=float32, numpy=array([0. , 0.5, 1. ], dtype=float32)> ``` scipy compatibility ------------------- Equivalent to scipy.special.expit tensorflow tf.math.sin tf.math.sin =========== Computes sine of x element-wise. #### View aliases **Main aliases** [`tf.sin`](https://www.tensorflow.org/api_docs/python/tf/math/sin) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.sin`](https://www.tensorflow.org/api_docs/python/tf/math/sin), [`tf.compat.v1.sin`](https://www.tensorflow.org/api_docs/python/tf/math/sin) ``` tf.math.sin( x, name=None ) ``` Given an input tensor, this function computes sine of every element in the tensor. Input range is `(-inf, inf)` and output range is `[-1,1]`. ``` x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10, float("inf")]) tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.tan tf.math.tan =========== Computes tan of x element-wise. #### View aliases **Main aliases** [`tf.tan`](https://www.tensorflow.org/api_docs/python/tf/math/tan) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.tan`](https://www.tensorflow.org/api_docs/python/tf/math/tan), [`tf.compat.v1.tan`](https://www.tensorflow.org/api_docs/python/tf/math/tan) ``` tf.math.tan( x, name=None ) ``` Given an input tensor, this function computes tangent of every element in the tensor. Input range is `(-inf, inf)` and output range is `(-inf, inf)`. If input lies outside the boundary, `nan` is returned. ``` x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")]) tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan] ``` | Args | | `x` | A `Tensor`.
Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.bessel_i0 tf.math.bessel\_i0 ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/special_math_ops.py#L252-L278) | Computes the Bessel i0 function of `x` element-wise. #### View aliases **Main aliases** [`tf.math.special.bessel_i0`](https://www.tensorflow.org/api_docs/python/tf/math/bessel_i0) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.bessel_i0`](https://www.tensorflow.org/api_docs/python/tf/math/bessel_i0), [`tf.compat.v1.math.special.bessel_i0`](https://www.tensorflow.org/api_docs/python/tf/math/bessel_i0) ``` tf.math.bessel_i0( x, name=None ) ``` Modified Bessel function of order 0. It is preferable to use the numerically stabler function `i0e(x)` instead. ``` tf.math.special.bessel_i0([-1., -0.5, 0.5, 1.]).numpy() array([1.26606588, 1.06348337, 1.06348337, 1.26606588], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.i0 tensorflow tf.math.logical_and tf.math.logical\_and ==================== Returns the truth value of x AND y element-wise. #### View aliases **Main aliases** [`tf.logical_and`](https://www.tensorflow.org/api_docs/python/tf/math/logical_and) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.logical_and`](https://www.tensorflow.org/api_docs/python/tf/math/logical_and), [`tf.compat.v1.math.logical_and`](https://www.tensorflow.org/api_docs/python/tf/math/logical_and) ``` tf.math.logical_and( x, y, name=None ) ``` Logical AND function. Requires that `x` and `y` have the same shape or have [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) shapes. For example, `x` and `y` can be: * Two single elements of type `bool`. * One [`tf.Tensor`](../tensor) of type `bool` and one single `bool`, where the result will be calculated by applying logical AND with the single element to each element in the larger Tensor. * Two [`tf.Tensor`](../tensor) objects of type `bool` of the same shape. In this case, the result will be the element-wise logical AND of the two input tensors. You can also use the `&` operator instead. 
#### Usage: ``` a = tf.constant([True]) b = tf.constant([False]) tf.math.logical_and(a, b) <tf.Tensor: shape=(1,), dtype=bool, numpy=array([False])> a & b <tf.Tensor: shape=(1,), dtype=bool, numpy=array([False])> ``` ``` c = tf.constant([True]) x = tf.constant([False, True, True, False]) tf.math.logical_and(c, x) <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])> c & x <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])> ``` ``` y = tf.constant([False, False, True, True]) z = tf.constant([False, True, False, True]) tf.math.logical_and(y, z) <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, False, False, True])> y & z <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, False, False, True])> ``` This op also supports broadcasting. ``` tf.logical_and([[True, False]], [[True], [False]]) <tf.Tensor: shape=(2, 2), dtype=bool, numpy= array([[ True, False], [False, False]])> ``` The reduction version of this elementwise operation is [`tf.math.reduce_all`](reduce_all). | Args | | `x` | A [`tf.Tensor`](../tensor) of type bool. | | `y` | A [`tf.Tensor`](../tensor) of type bool. | | `name` | A name for the operation (optional). | | Returns | | A [`tf.Tensor`](../tensor) of type bool with the shape that `x` and `y` broadcast to. |
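For instance, folding AND across all elements with [`tf.math.reduce_all`](reduce_all) gives `True` only if every element is `True`; a minimal sketch:

```
t = tf.constant([True, True, False])
tf.math.reduce_all(t)  # True AND True AND False
<tf.Tensor: shape=(), dtype=bool, numpy=False>
```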
tensorflow tf.math.logical_or tf.math.logical\_or =================== Returns the truth value of x OR y element-wise. #### View aliases **Main aliases** [`tf.logical_or`](https://www.tensorflow.org/api_docs/python/tf/math/logical_or) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.logical_or`](https://www.tensorflow.org/api_docs/python/tf/math/logical_or), [`tf.compat.v1.math.logical_or`](https://www.tensorflow.org/api_docs/python/tf/math/logical_or) ``` tf.math.logical_or( x, y, name=None ) ``` Logical OR function. Requires that `x` and `y` have the same shape or have [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) shapes. For example, `x` and `y` can be: * Two single elements of type `bool`. * One [`tf.Tensor`](../tensor) of type `bool` and one single `bool`, where the result will be calculated by applying logical OR with the single element to each element in the larger Tensor. * Two [`tf.Tensor`](../tensor) objects of type `bool` of the same shape. In this case, the result will be the element-wise logical OR of the two input tensors. You can also use the `|` operator instead. #### Usage: ``` a = tf.constant([True]) b = tf.constant([False]) tf.math.logical_or(a, b) <tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])> a | b <tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])> ``` ``` c = tf.constant([False]) x = tf.constant([False, True, True, False]) tf.math.logical_or(c, x) <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])> c | x <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])> ``` ``` y = tf.constant([False, False, True, True]) z = tf.constant([False, True, False, True]) tf.math.logical_or(y, z) <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, True])> y | z <tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, True])> ``` This op also supports broadcasting. ``` tf.logical_or([[True, False]], [[True], [False]]) <tf.Tensor: shape=(2, 2), dtype=bool, numpy= array([[ True, True], [ True, False]])> ``` The reduction version of this elementwise operation is [`tf.math.reduce_any`](reduce_any). | Args | | `x` | A [`tf.Tensor`](../tensor) of type bool. | | `y` | A [`tf.Tensor`](../tensor) of type bool. | | `name` | A name for the operation (optional). | | Returns | | A [`tf.Tensor`](../tensor) of type bool with the shape that `x` and `y` broadcast to. | tensorflow tf.math.is_finite tf.math.is\_finite ================== Returns which elements of x are finite. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.debugging.is_finite`](https://www.tensorflow.org/api_docs/python/tf/math/is_finite), [`tf.compat.v1.is_finite`](https://www.tensorflow.org/api_docs/python/tf/math/is_finite), [`tf.compat.v1.math.is_finite`](https://www.tensorflow.org/api_docs/python/tf/math/is_finite) ``` tf.math.is_finite( x, name=None ) ``` #### Example: ``` x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan]) tf.math.is_finite(x) ==> [True, True, True, False, False] ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. | | `name` | A name for the operation (optional).
| | Returns | | A `Tensor` of type `bool`. | numpy compatibility ------------------- Equivalent to np.isfinite tensorflow tf.math.xdivy tf.math.xdivy ============= Returns 0 if x == 0, and x / y otherwise, elementwise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.xdivy`](https://www.tensorflow.org/api_docs/python/tf/math/xdivy) ``` tf.math.xdivy( x, y, name=None ) ``` | Args | | `x` | A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. | | `y` | A `Tensor`. Must have the same type as `x`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `x`. | tensorflow tf.math.conj tf.math.conj ============ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L4422-L4478) | Returns the complex conjugate of a complex number. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.conj`](https://www.tensorflow.org/api_docs/python/tf/math/conj), [`tf.compat.v1.math.conj`](https://www.tensorflow.org/api_docs/python/tf/math/conj) ``` tf.math.conj( x, name=None ) ``` Given a tensor `x` of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in `x`. The complex numbers in `x` must be of the form \(a + bj\), where `a` is the real part and `b` is the imaginary part. The complex conjugate returned by this operation is of the form \(a - bj\). #### For example: ``` x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j]) tf.math.conj(x) <tf.Tensor: shape=(2,), dtype=complex128, numpy=array([-2.25-4.75j, 3.25-5.75j])> ``` If `x` is real, it is returned unchanged. #### For example: ``` x = tf.constant([-2.25, 3.25]) tf.math.conj(x) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([-2.25, 3.25], dtype=float32)> ``` | Args | | `x` | `Tensor` to conjugate. Must have numeric or variant type. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` that is the conjugate of `x` (with the same type). | | Raises | | `TypeError` | If `x` is not a numeric tensor. | numpy compatibility ------------------- Equivalent to numpy.conj. tensorflow tf.math.special.bessel_k0e tf.math.special.bessel\_k0e =========================== Computes the Bessel k0e function of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.bessel_k0e`](https://www.tensorflow.org/api_docs/python/tf/math/special/bessel_k0e) ``` tf.math.special.bessel_k0e( x, name=None ) ``` Exponentially scaled modified Bessel function of the second kind, of order 0. ``` tf.math.special.bessel_k0e([0.5, 1., 2., 4.]).numpy() array([1.52410939, 1.14446308, 0.84156822, 0.60929767], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.k0e tensorflow tf.math.special.fresnel_sin tf.math.special.fresnel\_sin ============================ Computes Fresnel's sine integral of `x` element-wise.
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.fresnel_sin`](https://www.tensorflow.org/api_docs/python/tf/math/special/fresnel_sin) ``` tf.math.special.fresnel_sin( x, name=None ) ``` The Fresnel sine integral is defined as the integral of `sin(t^2)` from `0` to `x`, with the domain of definition all real numbers. ``` tf.math.special.fresnel_sin([-1., -0.1, 0.1, 1.]).numpy() array([-0.43825912, -0.00052359, 0.00052359, 0.43825912], dtype=float32) ``` This implementation is based on the Cephes math library. | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.fresnel first output. tensorflow tf.math.special.bessel_y1 tf.math.special.bessel\_y1 ========================== Computes the Bessel y1 function of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.bessel_y1`](https://www.tensorflow.org/api_docs/python/tf/math/special/bessel_y1) ``` tf.math.special.bessel_y1( x, name=None ) ``` Bessel function of the second kind, of order 1. ``` tf.math.special.bessel_y1([0.5, 1., 2., 4.]).numpy() array([-1.47147239, -0.78121282, -0.10703243, 0.39792571], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.y1 tensorflow tf.math.special.bessel_k1e tf.math.special.bessel\_k1e =========================== Computes the Bessel k1e function of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.bessel_k1e`](https://www.tensorflow.org/api_docs/python/tf/math/special/bessel_k1e) ``` tf.math.special.bessel_k1e( x, name=None ) ``` Exponentially scaled modified Bessel function of the second kind, of order 1. ``` tf.math.special.bessel_k1e([0.5, 1., 2., 4.]).numpy() array([2.73100971, 1.63615349, 1.03347685, 0.68157595], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.k1e tensorflow tf.math.special.spence tf.math.special.spence ====================== Computes Spence's integral of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.spence`](https://www.tensorflow.org/api_docs/python/tf/math/special/spence) ``` tf.math.special.spence( x, name=None ) ``` Spence's integral is defined as the integral of `log(t) / (1 - t)` from `1` to `x`, with the domain of definition all non-negative real numbers. ``` tf.math.special.spence([0.5, 1., 2., 3.]).numpy() array([ 0.58224034, 0.
, -0.82246685, -1.4367464], dtype=float32) ``` This implementation is based on the Cephes math library. | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.spence tensorflow tf.math.special.dawsn tf.math.special.dawsn ===================== Computes Dawson's integral of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.dawsn`](https://www.tensorflow.org/api_docs/python/tf/math/special/dawsn) ``` tf.math.special.dawsn( x, name=None ) ``` Dawson's integral is defined as `exp(-x**2)` times the integral of `exp(t**2)` from `0` to `x`, with the domain of definition all real numbers. Dawson's function is odd. ``` tf.math.special.dawsn([-1., -0.5, 0.5, 1.]).numpy() array([-0.5380795, -0.4244364, 0.4244364, 0.5380795], dtype=float32) ``` This implementation is based on the Cephes math library. | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.dawsn tensorflow tf.math.special.bessel_y0 tf.math.special.bessel\_y0 ========================== Computes the Bessel y0 function of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.bessel_y0`](https://www.tensorflow.org/api_docs/python/tf/math/special/bessel_y0) ``` tf.math.special.bessel_y0( x, name=None ) ``` Bessel function of the second kind, of order 0. ``` tf.math.special.bessel_y0([0.5, 1., 2., 4.]).numpy() array([-0.44451873, 0.08825696, 0.51037567, -0.01694074], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.y0 tensorflow tf.math.special.bessel_j1 tf.math.special.bessel\_j1 ========================== Computes the Bessel j1 function of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.bessel_j1`](https://www.tensorflow.org/api_docs/python/tf/math/special/bessel_j1) ``` tf.math.special.bessel_j1( x, name=None ) ``` Bessel function of the first kind, of order 1. ``` tf.math.special.bessel_j1([0.5, 1., 2., 4.]).numpy() array([ 0.24226846, 0.44005059, 0.57672481, -0.06604333], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.j1 tensorflow tf.math.special.bessel_j0 tf.math.special.bessel\_j0 ========================== Computes the Bessel j0 function of `x` element-wise.
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.bessel_j0`](https://www.tensorflow.org/api_docs/python/tf/math/special/bessel_j0) ``` tf.math.special.bessel_j0( x, name=None ) ``` Bessel function of the first kind of order 0. ``` tf.math.special.bessel_j0([0.5, 1., 2., 4.]).numpy() array([ 0.93846981, 0.76519769, 0.22389078, -0.39714981], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.j0 tensorflow tf.math.special.bessel_k1 tf.math.special.bessel\_k1 ========================== Computes the Bessel k1 function of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.bessel_k1`](https://www.tensorflow.org/api_docs/python/tf/math/special/bessel_k1) ``` tf.math.special.bessel_k1( x, name=None ) ``` Modified Bessel function of the second kind of order 1. It is preferable to use the numerically more stable function `k1e(x)` instead. ``` tf.math.special.bessel_k1([0.5, 1., 2., 4.]).numpy() array([1.65644112, 0.60190723, 0.13986588, 0.0124835 ], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.k1 tensorflow tf.math.special.bessel_k0 tf.math.special.bessel\_k0 ========================== Computes the Bessel k0 function of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.bessel_k0`](https://www.tensorflow.org/api_docs/python/tf/math/special/bessel_k0) ``` tf.math.special.bessel_k0( x, name=None ) ``` Modified Bessel function of the second kind of order 0. It is preferable to use the numerically more stable function `k0e(x)` instead. ``` tf.math.special.bessel_k0([0.5, 1., 2., 4.]).numpy() array([0.92441907, 0.42102444, 0.11389387, 0.01115968], dtype=float32) ``` | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.k0 tensorflow tf.math.special.fresnel_cos tf.math.special.fresnel\_cos ============================ Computes Fresnel's cosine integral of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.fresnel_cos`](https://www.tensorflow.org/api_docs/python/tf/math/special/fresnel_cos) ``` tf.math.special.fresnel_cos( x, name=None ) ``` The Fresnel cosine integral is defined as the integral of `cos(t^2)` from `0` to `x`, with the domain of definition all real numbers. The Fresnel cosine integral is odd.
``` tf.math.special.fresnel_cos([-1., -0.1, 0.1, 1.]).numpy() array([-0.7798934 , -0.09999753, 0.09999753, 0.7798934 ], dtype=float32) ``` This implementation is based on the Cephes math library. | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to the second output of scipy.special.fresnel. tensorflow tf.math.special.expint tf.math.special.expint ====================== Computes the Exponential integral of `x` element-wise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.math.special.expint`](https://www.tensorflow.org/api_docs/python/tf/math/special/expint) ``` tf.math.special.expint( x, name=None ) ``` The Exponential integral is defined as the integral of `exp(t) / t` from `-inf` to `x`, with the domain of definition all positive real numbers. ``` tf.math.special.expint([1., 1.1, 2.1, 4.1]).numpy() array([ 1.8951179, 2.1673784, 5.3332353, 21.048464], dtype=float32) ``` This implementation is based on the Cephes math library. | Args | | `x` | A `Tensor` or `SparseTensor`. Must be one of the following types: `float32`, `float64`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. | scipy compatibility ------------------- Equivalent to scipy.special.expi
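Each of the special functions above documents a SciPy equivalent, so a quick sanity check is to compare outputs directly. A minimal sketch, assuming SciPy and NumPy are installed alongside TensorFlow:

```
import numpy as np
import scipy.special
import tensorflow as tf

x = np.array([0.5, 1.0, 2.0], dtype=np.float32)

# scipy.special.fresnel returns (S, C): fresnel_sin is the first output,
# fresnel_cos the second.
s, c = scipy.special.fresnel(x)
np.testing.assert_allclose(tf.math.special.fresnel_sin(x), s, rtol=1e-4)
np.testing.assert_allclose(tf.math.special.fresnel_cos(x), c, rtol=1e-4)

# Bessel functions of the first and second kind, and the exponentially
# scaled modified Bessel function: k1e(x) == exp(x) * k1(x).
np.testing.assert_allclose(tf.math.special.bessel_j0(x), scipy.special.j0(x), rtol=1e-4)
np.testing.assert_allclose(tf.math.special.bessel_y1(x), scipy.special.y1(x), rtol=1e-4)
np.testing.assert_allclose(tf.math.special.bessel_k1e(x),
                           np.exp(x) * scipy.special.k1(x), rtol=1e-4)

# The integral functions.
np.testing.assert_allclose(tf.math.special.expint(x), scipy.special.expi(x), rtol=1e-4)
np.testing.assert_allclose(tf.math.special.dawsn(x), scipy.special.dawsn(x), rtol=1e-4)
```

The loose `rtol` accounts for the `float32` inputs; SciPy computes in `float64`.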
tensorflow tf.image.random_saturation tf.image.random\_saturation =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2970-L3019) | Adjust the saturation of RGB images by a random factor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.random_saturation`](https://www.tensorflow.org/api_docs/python/tf/image/random_saturation) ``` tf.image.random_saturation( image, lower, upper, seed=None ) ``` Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly picked in the interval `[lower, upper)`. #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.random_saturation(x, 5, 10) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[ 0. , 1.5, 3. ], [ 0. , 3. , 6. ]], [[ 0. , 4.5, 9. ], [ 0. , 6. , 12. ]]], dtype=float32)> ``` For producing deterministic results given a `seed` value, use [`tf.image.stateless_random_saturation`](stateless_random_saturation). Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed). | Args | | `image` | RGB image or images. The size of the last dimension must be 3. | | `lower` | float. Lower bound for the random saturation factor. | | `upper` | float. Upper bound for the random saturation factor. | | `seed` | An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set\_random\_seed for its interaction with the graph-level random seed. | | Returns | | Adjusted image(s), same shape and DType as `image`. | | Raises | | `ValueError` | if `upper <= lower` or if `lower < 0`. | tensorflow tf.image.non_max_suppression tf.image.non\_max\_suppression ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3745-L3795) | Greedily selects a subset of bounding boxes in descending order of score. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.non_max_suppression`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression) ``` tf.image.non_max_suppression( boxes, scores, max_output_size, iou_threshold=0.5, score_threshold=float('-inf'), name=None ) ``` Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected by the algorithm. The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes.
The bounding box coordinates corresponding to the selected indices can then be obtained using the [`tf.gather`](../gather) operation. For example: ``` selected_indices = tf.image.non_max_suppression( boxes, scores, max_output_size, iou_threshold) selected_boxes = tf.gather(boxes, selected_indices) ``` | Args | | `boxes` | A 2-D float `Tensor` of shape `[num_boxes, 4]`. | | `scores` | A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes). | | `max_output_size` | A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression. | | `iou_threshold` | A 0-D float tensor representing the threshold for deciding whether boxes overlap too much with respect to IOU. | | `score_threshold` | A 0-D float tensor representing the threshold for deciding when to remove boxes based on score. | | `name` | A name for the operation (optional). | | Returns | | `selected_indices` | A 1-D integer `Tensor` of shape `[M]` representing the selected indices from the boxes tensor, where `M <= max_output_size`. | tensorflow tf.image.pad_to_bounding_box tf.image.pad\_to\_bounding\_box =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L990-L1054) | Pad `image` with zeros to the specified `height` and `width`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.pad_to_bounding_box`](https://www.tensorflow.org/api_docs/python/tf/image/pad_to_bounding_box) ``` tf.image.pad_to_bounding_box( image, offset_height, offset_width, target_height, target_width ) ``` Adds `offset_height` rows of zeros on top, `offset_width` columns of zeros on the left, and then pads the image on the bottom and right with zeros until it has dimensions `target_height`, `target_width`. This op does nothing if `offset_*` is zero and the image already has size `target_height` by `target_width`. #### Usage Example: ``` x = [[[1., 2., 3.], [4., 5., 6.]], [[7., 8., 9.], [10., 11., 12.]]] padded_image = tf.image.pad_to_bounding_box(x, 1, 1, 4, 4) padded_image <tf.Tensor: shape=(4, 4, 3), dtype=float32, numpy= array([[[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]], [[ 0., 0., 0.], [ 1., 2., 3.], [ 4., 5., 6.], [ 0., 0., 0.]], [[ 0., 0., 0.], [ 7., 8., 9.], [10., 11., 12.], [ 0., 0., 0.]], [[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]], dtype=float32)> ``` | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `offset_height` | Number of rows of zeros to add on top. | | `offset_width` | Number of columns of zeros to add on the left. | | `target_height` | Height of output image. | | `target_width` | Width of output image. | | Returns | | If `image` was 4-D, a 4-D float Tensor of shape `[batch, target_height, target_width, channels]` If `image` was 3-D, a 3-D float Tensor of shape `[target_height, target_width, channels]` | | Raises | | `ValueError` | If the shape of `image` is incompatible with the `offset_*` or `target_*` arguments, or either `offset_height` or `offset_width` is negative. | tensorflow tf.image.extract_patches tf.image.extract\_patches ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L6441-L6560) | Extract `patches` from `images`. 
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.extract_patches`](https://www.tensorflow.org/api_docs/python/tf/image/extract_patches) ``` tf.image.extract_patches( images, sizes, strides, rates, padding, name=None ) ``` This op collects patches from the input image, as if applying a convolution. All extracted patches are stacked in the depth (last) dimension of the output. Specifically, the op extracts patches of shape `sizes` which are `strides` apart in the input image. The output is subsampled using the `rates` argument, in the same manner as "atrous" or "dilated" convolutions. The result is a 4D tensor which is indexed by batch, row, and column. `output[i, x, y]` contains a flattened patch of size `sizes[1], sizes[2]` which is taken from the input starting at `images[i, x*strides[1], y*strides[2]]`. Each output patch can be reshaped to `sizes[1], sizes[2], depth`, where `depth` is `images.shape[3]`. The output elements are taken from the input at intervals given by the `rate` argument, as in dilated convolutions. The `padding` argument has no effect on the size of each patch; it determines how many patches are extracted. If `VALID`, only patches which are fully contained in the input image are included. If `SAME`, all patches whose starting point is inside the input are included, and areas outside the input default to zero. #### Example: ``` n = 10 # images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100 images = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]] # We generate two outputs as follows: # 1. 3x3 patches with stride length 5 # 2. Same as above, but the rate is increased to 2 tf.image.extract_patches(images=images, sizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 1, 1, 1], padding='VALID') # Yields: [[[[ 1 2 3 11 12 13 21 22 23] [ 6 7 8 16 17 18 26 27 28]] [[51 52 53 61 62 63 71 72 73] [56 57 58 66 67 68 76 77 78]]]] ``` If we mark the pixels in the input image which are taken for the output with `*`, we see the pattern: ``` * * * 4 5 * * * 9 10 * * * 14 15 * * * 19 20 * * * 24 25 * * * 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 * * * 54 55 * * * 59 60 * * * 64 65 * * * 69 70 * * * 74 75 * * * 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 ``` ``` tf.image.extract_patches(images=images, sizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 2, 2, 1], padding='VALID') # Yields: [[[[ 1 3 5 21 23 25 41 43 45] [ 6 8 10 26 28 30 46 48 50]] [[ 51 53 55 71 73 75 91 93 95] [ 56 58 60 76 78 80 96 98 100]]]] ``` We can again draw the effect, this time using the symbols `*`, `x`, `+` and `o` to distinguish the patches: ``` * 2 * 4 * x 7 x 9 x 11 12 13 14 15 16 17 18 19 20 * 22 * 24 * x 27 x 29 x 31 32 33 34 35 36 37 38 39 40 * 42 * 44 * x 47 x 49 x + 52 + 54 + o 57 o 59 o 61 62 63 64 65 66 67 68 69 70 + 72 + 74 + o 77 o 79 o 81 82 83 84 85 86 87 88 89 90 + 92 + 94 + o 97 o 99 o ``` | Args | | `images` | A 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`. | | `sizes` | The size of the extracted patches. Must be `[1, size_rows, size_cols, 1]`. | | `strides` | A 1-D Tensor of length 4. How far the centers of two consecutive patches are in the images. Must be: `[1, stride_rows, stride_cols, 1]`. | | `rates` | A 1-D Tensor of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the input stride, specifying how far two consecutive patch samples are in the input.
Equivalent to extracting patches with `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling them spatially by a factor of `rates`. This is equivalent to `rate` in dilated (a.k.a. Atrous) convolutions. | | `padding` | The type of padding algorithm to use. | | `name` | A name for the operation (optional). | | Returns | | A 4-D Tensor of the same type as the input. |
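To make the flattened-patch layout concrete, here is a minimal sketch extracting non-overlapping 2x2 patches from a 4x4 single-channel image; each output element holds one patch flattened into the depth axis:

```
import tensorflow as tf

# A 1 x 4 x 4 x 1 image containing the values 1..16 in row-major order.
image = tf.reshape(tf.range(1, 17), [1, 4, 4, 1])

patches = tf.image.extract_patches(images=image,
                                   sizes=[1, 2, 2, 1],
                                   strides=[1, 2, 2, 1],
                                   rates=[1, 1, 1, 1],
                                   padding='VALID')
print(patches.shape)             # (1, 2, 2, 4): 2x2 grid of flattened patches
print(patches[0, 0, 0].numpy())  # [1 2 5 6] -- the top-left 2x2 block

# Each flattened patch can be restored to sizes[1] x sizes[2] x depth:
first_patch = tf.reshape(patches[0, 0, 0], [2, 2, 1])
```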
tensorflow tf.image.stateless_random_flip_up_down tf.image.stateless\_random\_flip\_up\_down ========================================== Randomly flip an image vertically (upside down) deterministically. ``` tf.image.stateless_random_flip_up_down( image, seed ) ``` Guarantees the same results given the same `seed` independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)). #### Example usage: ``` image = np.array([[[1], [2]], [[3], [4]]]) seed = (2, 3) tf.image.stateless_random_flip_up_down(image, seed).numpy().tolist() [[[3], [4]], [[1], [2]]] ``` | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | Returns | | A tensor of the same type and shape as `image`. | tensorflow tf.image.yuv_to_rgb tf.image.yuv\_to\_rgb ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4035-L4081) | Converts one or more images from YUV to RGB. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.yuv_to_rgb`](https://www.tensorflow.org/api_docs/python/tf/image/yuv_to_rgb) ``` tf.image.yuv_to_rgb( images ) ``` Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the Y values in `images` are in [0, 1] and the U and V values are in [-0.5, 0.5]. As per the above description, you need to scale your YUV images if their pixel values are not in the required range. The example below illustrates preprocessing each channel of the images before feeding them to `yuv_to_rgb`. ``` yuv_images = tf.random.uniform(shape=[100, 64, 64, 3], maxval=255) last_dimension_axis = len(yuv_images.shape) - 1 yuv_tensor_images = tf.truediv( tf.subtract( yuv_images, tf.reduce_min(yuv_images) ), tf.subtract( tf.reduce_max(yuv_images), tf.reduce_min(yuv_images) ) ) y, u, v = tf.split(yuv_tensor_images, 3, axis=last_dimension_axis) target_uv_min, target_uv_max = -0.5, 0.5 u = u * (target_uv_max - target_uv_min) + target_uv_min v = v * (target_uv_max - target_uv_min) + target_uv_min preprocessed_yuv_images = tf.concat([y, u, v], axis=last_dimension_axis) rgb_tensor_images = tf.image.yuv_to_rgb(preprocessed_yuv_images) ``` | Args | | `images` | 2-D or higher rank. Image data to convert. Last dimension must be size 3. | | Returns | | `images` | tensor with the same shape as `images`. | tensorflow tf.image.non_max_suppression_with_scores tf.image.non\_max\_suppression\_with\_scores ============================================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3798-L3885) | Greedily selects a subset of bounding boxes in descending order of score. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.non_max_suppression_with_scores`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression_with_scores) ``` tf.image.non_max_suppression_with_scores( boxes, scores, max_output_size, iou_threshold=0.5, score_threshold=float('-inf'), soft_nms_sigma=0.0, name=None ) ``` Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected by the algorithm. The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the [`tf.gather`](../gather) operation. For example: ``` selected_indices, selected_scores = tf.image.non_max_suppression_with_scores( boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1, soft_nms_sigma=0.5) selected_boxes = tf.gather(boxes, selected_indices) ``` This function generalizes the [`tf.image.non_max_suppression`](non_max_suppression) op by also supporting a Soft-NMS (with Gaussian weighting) mode (c.f. Bodla et al, https://arxiv.org/abs/1704.04503) where boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. Consequently, in contrast to [`tf.image.non_max_suppression`](non_max_suppression), [`tf.image.non_max_suppression_with_scores`](non_max_suppression_with_scores) returns the new scores of each input box in the second output, `selected_scores`. To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be larger than 0. When `soft_nms_sigma` equals 0, the behavior of [`tf.image.non_max_suppression_with_scores`](non_max_suppression_with_scores) is identical to that of [`tf.image.non_max_suppression`](non_max_suppression) (except for the extra output) both in function and in running time. Note that when `soft_nms_sigma` > 0, Soft-NMS is performed and `iou_threshold` is ignored. `iou_threshold` is only used for standard NMS. | Args | | `boxes` | A 2-D float `Tensor` of shape `[num_boxes, 4]`. | | `scores` | A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes). | | `max_output_size` | A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression. | | `iou_threshold` | A 0-D float tensor representing the threshold for deciding whether boxes overlap too much with respect to IOU. | | `score_threshold` | A 0-D float tensor representing the threshold for deciding when to remove boxes based on score. | | `soft_nms_sigma` | A 0-D float tensor representing the sigma parameter for Soft NMS; see Bodla et al (c.f. <https://arxiv.org/abs/1704.04503>). When `soft_nms_sigma=0.0` (which is default), we fall back to standard (hard) NMS. | | `name` | A name for the operation (optional).
| | Returns | | `selected_indices` | A 1-D integer `Tensor` of shape `[M]` representing the selected indices from the boxes tensor, where `M <= max_output_size`. | | `selected_scores` | A 1-D float tensor of shape `[M]` representing the corresponding scores for each selected box, where `M <= max_output_size`. Scores only differ from corresponding input scores when using Soft NMS (i.e. when `soft_nms_sigma>0`). | tensorflow tf.image.psnr tf.image.psnr ============= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4129-L4181) | Returns the Peak Signal-to-Noise Ratio between a and b. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.psnr`](https://www.tensorflow.org/api_docs/python/tf/image/psnr) ``` tf.image.psnr( a, b, max_val, name=None ) ``` This is intended to be used on signals (or images). Produces a PSNR value for each image in batch. The last three dimensions of input are expected to be [height, width, depth]. #### Example: ``` # Read images from file. im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png')) im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png')) # Compute PSNR over tf.uint8 Tensors. psnr1 = tf.image.psnr(im1, im2, max_val=255) # Compute PSNR over tf.float32 Tensors. im1 = tf.image.convert_image_dtype(im1, tf.float32) im2 = tf.image.convert_image_dtype(im2, tf.float32) psnr2 = tf.image.psnr(im1, im2, max_val=1.0) # psnr1 and psnr2 both have type tf.float32 and are almost equal. ``` | Args | | `a` | First set of images. | | `b` | Second set of images. | | `max_val` | The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values). | | `name` | Namespace to embed the computation in. | | Returns | | The scalar PSNR between a and b. The returned tensor has type [`tf.float32`](../../tf#float32) and shape [batch\_size, 1]. |
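PSNR is `10 * log10(max_val**2 / MSE)`, with the MSE taken over the last three dimensions. A minimal sketch comparing the direct formula against `tf.image.psnr`, using a synthetic image pair since the file paths in the example above are placeholders:

```
import tensorflow as tf

im1 = tf.random.uniform([1, 32, 32, 3])       # float images in [0, 1)
im2 = tf.clip_by_value(im1 + 0.05, 0.0, 1.0)  # a slightly perturbed copy

# MSE over [height, width, depth]; with max_val=1.0, PSNR = 10*log10(1/MSE).
mse = tf.reduce_mean(tf.square(im1 - im2), axis=[-3, -2, -1])
manual_psnr = 10.0 * tf.math.log(1.0 / mse) / tf.math.log(10.0)

print(manual_psnr.numpy())                           # roughly 26 dB here
print(tf.image.psnr(im1, im2, max_val=1.0).numpy())  # matches the formula
```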
tensorflow tf.image.non_max_suppression_padded tf.image.non\_max\_suppression\_padded ====================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L5321-L5411) | Greedily selects a subset of bounding boxes in descending order of score. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.non_max_suppression_padded`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression_padded) ``` tf.image.non_max_suppression_padded( boxes, scores, max_output_size, iou_threshold=0.5, score_threshold=float('-inf'), pad_to_max_output_size=False, name=None, sorted_input=False, canonicalized_coordinates=False, tile_size=512 ) ``` Performs an operation algorithmically equivalent to tf.image.non\_max\_suppression, with the addition of an optional parameter which zero-pads the output to be of size `max_output_size`. The output of this operation is a tuple containing the set of integers indexing into the input collection of bounding boxes representing the selected boxes and the number of valid indices in the index set. The bounding box coordinates corresponding to the selected indices can then be obtained using the [`tf.slice`](../slice) and [`tf.gather`](../gather) operations. For example: ``` selected_indices_padded, num_valid = tf.image.non_max_suppression_padded( boxes, scores, max_output_size, iou_threshold, score_threshold, pad_to_max_output_size=True) selected_indices = tf.slice( selected_indices_padded, tf.constant([0]), num_valid) selected_boxes = tf.gather(boxes, selected_indices) ``` | Args | | `boxes` | a tensor of rank 2 or higher with a shape of [..., num\_boxes, 4]. Dimensions except the last two are batch dimensions. | | `scores` | a tensor of rank 1 or higher with a shape of [..., num\_boxes]. | | `max_output_size` | a scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression. Note that setting this value to a large number may result in an OOM error depending on the system workload. | | `iou_threshold` | a float representing the threshold for deciding whether boxes overlap too much with respect to IoU (intersection over union). | | `score_threshold` | a float representing the threshold for box scores. Boxes with a score that is not larger than this threshold will be suppressed. | | `pad_to_max_output_size` | whether to pad the output idx to max\_output\_size. Must be set to True when the input is a batch of images. | | `name` | name of operation. | | `sorted_input` | a boolean indicating whether the input boxes and scores are sorted in descending order by the score. | | `canonicalized_coordinates` | if box coordinates are given as `[y_min, x_min, y_max, x_max]`, setting this to True eliminates redundant computation to canonicalize box coordinates. | | `tile_size` | an integer representing the number of boxes in a tile, i.e., the maximum number of boxes per image that can be used to suppress other boxes in parallel; larger tile\_size means larger parallelism and potentially more redundant work. | | Returns | | idx: a tensor with a shape of [..., num\_boxes] representing the indices selected by non-max suppression. The leading dimensions are the batch dimensions of the input boxes. All numbers are within [0, num\_boxes). For each image (i.e., idx[i]), only the first num\_valid[i] indices (i.e., idx[i][:num\_valid[i]]) are valid.
num\_valid: a tensor of rank 0 or higher with a shape of [...] representing the number of valid indices in idx. Its dimensions are the batch dimensions of the input boxes. | | `Raises` | ValueError: When `pad_to_max_output_size` is set to False for batched input. | tensorflow tf.image.adjust_jpeg_quality tf.image.adjust\_jpeg\_quality ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2892-L2967) | Adjust jpeg encoding quality of an image. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.adjust_jpeg_quality`](https://www.tensorflow.org/api_docs/python/tf/image/adjust_jpeg_quality) ``` tf.image.adjust_jpeg_quality( image, jpeg_quality, name=None ) ``` This is a convenience method that converts an image to uint8 representation, encodes it to jpeg with `jpeg_quality`, decodes it, and then converts back to the original data type. `jpeg_quality` must be in the interval `[0, 100]`. #### Usage Examples: ``` x = [[[0.01, 0.02, 0.03], [0.04, 0.05, 0.06]], [[0.07, 0.08, 0.09], [0.10, 0.11, 0.12]]] x_jpeg = tf.image.adjust_jpeg_quality(x, 75) x_jpeg.numpy() array([[[0.00392157, 0.01960784, 0.03137255], [0.02745098, 0.04313726, 0.05490196]], [[0.05882353, 0.07450981, 0.08627451], [0.08235294, 0.09803922, 0.10980393]]], dtype=float32) ``` Note that floating point values are expected to have values in the range [0,1) and values outside this range are clipped. ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.adjust_jpeg_quality(x, 75) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[1., 1., 1.], [1., 1., 1.]], [[1., 1., 1.], [1., 1., 1.]]], dtype=float32)> ``` Note that `jpeg_quality` 100 is still lossy compression. ``` x = tf.constant([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], dtype=tf.uint8) tf.image.adjust_jpeg_quality(x, 100) <tf.Tensor: shape=(2, 2, 3), dtype=uint8, numpy= array([[[ 0, 1, 3], [ 3, 4, 6]], [[ 6, 7, 9], [ 9, 10, 12]]], dtype=uint8)> ``` | Args | | `image` | 3D image. The size of the last dimension must be None, 1 or 3. | | `jpeg_quality` | Python int or Tensor of type int32. jpeg encoding quality. | | `name` | A name for this operation (optional). | | Returns | | Adjusted image, same shape and DType as `image`. | | Raises | | `InvalidArgumentError` | quality must be in [0,100] | | `InvalidArgumentError` | image must have 1 or 3 channels | tensorflow tf.image.stateless_random_crop tf.image.stateless\_random\_crop ================================ Randomly crops a tensor to a given size in a deterministic manner. ``` tf.image.stateless_random_crop( value, size, seed, name=None ) ``` Slices a shape `size` portion out of `value` at a uniformly chosen offset. Requires `value.shape >= size`. If a dimension should not be cropped, pass the full size of that dimension. For example, RGB images can be cropped with `size = [crop_height, crop_width, 3]`. Guarantees the same results given the same `seed` independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)).
#### Usage Example: ``` image = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]] seed = (1, 2) tf.image.stateless_random_crop(value=image, size=(1, 2, 3), seed=seed) <tf.Tensor: shape=(1, 2, 3), dtype=int32, numpy= array([[[1, 2, 3], [4, 5, 6]]], dtype=int32)> ``` | Args | | `value` | Input tensor to crop. | | `size` | 1-D tensor with size the rank of `value`. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | `name` | A name for this operation (optional). | | Returns | | A cropped tensor of the same rank as `value` and shape `size`. | tensorflow tf.image.adjust_brightness tf.image.adjust\_brightness =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2174-L2224) | Adjust the brightness of RGB or Grayscale images. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.adjust_brightness`](https://www.tensorflow.org/api_docs/python/tf/image/adjust_brightness) ``` tf.image.adjust_brightness( image, delta ) ``` This is a convenience method that converts RGB images to float representation, adjusts their brightness, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions. The value `delta` is added to all components of the tensor `image`. `image` is converted to `float` and scaled appropriately if it is in fixed-point representation, and `delta` is converted to the same data type. For regular images, `delta` should be in the range `(-1,1)`, as it is added to the image in floating point representation, where pixel values are in the `[0,1)` range. #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.adjust_brightness(x, delta=0.1) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[ 1.1, 2.1, 3.1], [ 4.1, 5.1, 6.1]], [[ 7.1, 8.1, 9.1], [10.1, 11.1, 12.1]]], dtype=float32)> ``` | Args | | `image` | RGB image or images to adjust. | | `delta` | A scalar. Amount to add to the pixel values. | | Returns | | A brightness-adjusted tensor of the same shape and type as `image`. | tensorflow tf.image.hsv_to_rgb tf.image.hsv\_to\_rgb ===================== Convert one or more images from HSV to RGB. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.hsv_to_rgb`](https://www.tensorflow.org/api_docs/python/tf/image/hsv_to_rgb) ``` tf.image.hsv_to_rgb( images, name=None ) ``` Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`. See `rgb_to_hsv` for a description of the HSV encoding. | Args | | `images` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 1-D or higher rank. HSV data to convert. Last dimension must be size 3. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `images`. | tensorflow tf.image.central_crop tf.image.central\_crop ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L845-L987) | Crop the central region of the image(s).
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.central_crop`](https://www.tensorflow.org/api_docs/python/tf/image/central_crop) ``` tf.image.central_crop( image, central_fraction ) ``` Remove the outer parts of an image but retain the central region of the image along each dimension. If we specify central\_fraction = 0.5, this function returns the region marked with "X" in the diagram below. ``` -------- | | | XXXX | | XXXX | | | where "X" is the central 50% of the image. -------- ``` This function works on either a single image (`image` is a 3-D Tensor), or a batch of images (`image` is a 4-D Tensor). #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [10.0, 11.0, 12.0]], [[13.0, 14.0, 15.0], [16.0, 17.0, 18.0], [19.0, 20.0, 21.0], [22.0, 23.0, 24.0]], [[25.0, 26.0, 27.0], [28.0, 29.0, 30.0], [31.0, 32.0, 33.0], [34.0, 35.0, 36.0]], [[37.0, 38.0, 39.0], [40.0, 41.0, 42.0], [43.0, 44.0, 45.0], [46.0, 47.0, 48.0]]] tf.image.central_crop(x, 0.5) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[16., 17., 18.], [19., 20., 21.]], [[28., 29., 30.], [31., 32., 33.]]], dtype=float32)> ``` | Args | | `image` | Either a 3-D float Tensor of shape [height, width, depth], or a 4-D Tensor of shape [batch\_size, height, width, depth]. | | `central_fraction` | float (0, 1], fraction of size to crop. | | Raises | | `ValueError` | if `central_fraction` is not within (0, 1]. | | Returns | | 3-D / 4-D float Tensor, as per the input. | tensorflow tf.image.random_jpeg_quality tf.image.random\_jpeg\_quality ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2782-L2833) | Randomly changes jpeg encoding quality for inducing jpeg noise. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.random_jpeg_quality`](https://www.tensorflow.org/api_docs/python/tf/image/random_jpeg_quality) ``` tf.image.random_jpeg_quality( image, min_jpeg_quality, max_jpeg_quality, seed=None ) ``` `min_jpeg_quality` must be in the interval `[0, 100]` and less than `max_jpeg_quality`. `max_jpeg_quality` must be in the interval `[0, 100]`. #### Usage Example: ``` x = tf.constant([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], dtype=tf.uint8) tf.image.random_jpeg_quality(x, 75, 95) <tf.Tensor: shape=(2, 2, 3), dtype=uint8, numpy=...> ``` For producing deterministic results given a `seed` value, use [`tf.image.stateless_random_jpeg_quality`](stateless_random_jpeg_quality). Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed). | Args | | `image` | 3D image. Size of the last dimension must be 1 or 3. | | `min_jpeg_quality` | Minimum jpeg encoding quality to use. | | `max_jpeg_quality` | Maximum jpeg encoding quality to use. | | `seed` | An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set\_random\_seed for its interaction with the graph-level random seed. | | Returns | | Adjusted image(s), same shape and DType as `image`.
| | Raises | | `ValueError` | if `min_jpeg_quality` or `max_jpeg_quality` is invalid. | tensorflow tf.image.random_contrast tf.image.random\_contrast ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2081-L2124) | Adjust the contrast of an image or images by a random factor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.random_contrast`](https://www.tensorflow.org/api_docs/python/tf/image/random_contrast) ``` tf.image.random_contrast( image, lower, upper, seed=None ) ``` Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly picked in the interval `[lower, upper)`. For producing deterministic results given a `seed` value, use [`tf.image.stateless_random_contrast`](stateless_random_contrast). Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed). | Args | | `image` | An image tensor with 3 or more dimensions. | | `lower` | float. Lower bound for the random contrast factor. | | `upper` | float. Upper bound for the random contrast factor. | | `seed` | A Python integer. Used to create a random seed. See [`tf.compat.v1.set_random_seed`](../compat/v1/set_random_seed) for behavior. | #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.random_contrast(x, 0.2, 0.5) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=...> ``` | Returns | | The contrast-adjusted image(s). | | Raises | | `ValueError` | if `upper <= lower` or if `lower < 0`. | tensorflow tf.image.yiq_to_rgb tf.image.yiq\_to\_rgb ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3976-L3997) | Converts one or more images from YIQ to RGB. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.yiq_to_rgb`](https://www.tensorflow.org/api_docs/python/tf/image/yiq_to_rgb) ``` tf.image.yiq_to_rgb( images ) ``` Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the Y values in `images` are in [0, 1], the I values are in [-0.5957, 0.5957], and the Q values are in [-0.5226, 0.5226]. | Args | | `images` | 2-D or higher rank. Image data to convert. Last dimension must be size 3. | | Returns | | `images` | tensor with the same shape as `images`. | tensorflow tf.image.non_max_suppression_overlaps tf.image.non\_max\_suppression\_overlaps ======================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3888-L3933) | Greedily selects a subset of bounding boxes in descending order of score. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.image.non_max_suppression_overlaps`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression_overlaps) ``` tf.image.non_max_suppression_overlaps( overlaps, scores, max_output_size, overlap_threshold=0.5, score_threshold=float('-inf'), name=None ) ``` Prunes away boxes that have high overlap with previously selected boxes. N-by-n overlap values are supplied as a square matrix. The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the [`tf.gather`](../gather) operation. For example: ``` selected_indices = tf.image.non_max_suppression_overlaps( overlaps, scores, max_output_size, overlap_threshold) selected_boxes = tf.gather(boxes, selected_indices) ``` | Args | | `overlaps` | A 2-D float `Tensor` of shape `[num_boxes, num_boxes]` representing the n-by-n box overlap values. | | `scores` | A 1-D float `Tensor` of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes). | | `max_output_size` | A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression. | | `overlap_threshold` | A 0-D float tensor representing the threshold for deciding whether boxes overlap too much with respect to the provided overlap values. | | `score_threshold` | A 0-D float tensor representing the threshold for deciding when to remove boxes based on score. | | `name` | A name for the operation (optional). | | Returns | | `selected_indices` | A 1-D integer `Tensor` of shape `[M]` representing the selected indices from the overlaps tensor, where `M <= max_output_size`. | tensorflow tf.image.stateless_random_flip_left_right tf.image.stateless\_random\_flip\_left\_right ============================================= Randomly flip an image horizontally (left to right) deterministically. ``` tf.image.stateless_random_flip_left_right( image, seed ) ``` Guarantees the same results given the same `seed` independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)). #### Example usage: ``` image = np.array([[[1], [2]], [[3], [4]]]) seed = (2, 3) tf.image.stateless_random_flip_left_right(image, seed).numpy().tolist() [[[2], [1]], [[4], [3]]] ``` | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | Returns | | A tensor of the same type and shape as `image`. |
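The determinism contract of the stateless flip ops can be checked directly. A minimal sketch, run eagerly, contrasting a stateless flip with its stateful counterpart `tf.image.random_flip_left_right`:

```
import tensorflow as tf

image = tf.reshape(tf.range(12, dtype=tf.float32), [2, 2, 3])
seed = (42, 7)

a = tf.image.stateless_random_flip_left_right(image, seed)
tf.random.set_seed(123)  # the global seed has no effect on stateless ops
b = tf.image.stateless_random_flip_left_right(image, seed)
assert tf.reduce_all(a == b)  # same seed -> same flip decision, always

# The stateful op draws from global/op-level seed state, so repeated calls
# may flip or not flip independently of each other:
c = tf.image.random_flip_left_right(image)
```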
tensorflow tf.image.rgb_to_yiq tf.image.rgb\_to\_yiq ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3942-L3969) | Converts one or more images from RGB to YIQ. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.rgb_to_yiq`](https://www.tensorflow.org/api_docs/python/tf/image/rgb_to_yiq) ``` tf.image.rgb_to_yiq( images ) ``` Outputs a tensor of the same shape as the `images` tensor, containing the YIQ value of the pixels. The output is only well defined if the values in `images` are in [0, 1]. #### Usage Example: ``` x = tf.constant([[[1.0, 2.0, 3.0]]]) tf.image.rgb_to_yiq(x) <tf.Tensor: shape=(1, 1, 3), dtype=float32, numpy=array([[[ 1.815 , -0.91724455, 0.09962624]]], dtype=float32)> ``` | Args | | `images` | 2-D or higher rank. Image data to convert. Last dimension must be size 3. | | Returns | | `images` | tensor with the same shape as `images`. | tensorflow tf.image.rgb_to_yuv tf.image.rgb\_to\_yuv ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4005-L4028) | Converts one or more images from RGB to YUV. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.rgb_to_yuv`](https://www.tensorflow.org/api_docs/python/tf/image/rgb_to_yuv) ``` tf.image.rgb_to_yuv( images ) ``` Outputs a tensor of the same shape as the `images` tensor, containing the YUV value of the pixels. The output is only well defined if the values in `images` are in [0, 1]. There are two common ways of representing an image: with pixel values in the [0, 255] range, or in the [0, 1] range (as floats). Users need to convert the input image into the float [0, 1] range. | Args | | `images` | 2-D or higher rank. Image data to convert. Last dimension must be size 3. | | Returns | | `images` | tensor with the same shape as `images`. | tensorflow tf.image.adjust_gamma tf.image.adjust\_gamma ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2284-L2347) | Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.adjust_gamma`](https://www.tensorflow.org/api_docs/python/tf/image/adjust_gamma) ``` tf.image.adjust_gamma( image, gamma=1, gain=1 ) ``` Performs gamma correction on the input image, also known as the Power Law Transform. This function first converts the input images to float representation, then transforms them pixelwise according to the equation `Out = gain * In**gamma`, and then converts them back to the original data type. #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.adjust_gamma(x, 0.2) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[1. , 1.1486983, 1.2457309], [1.319508 , 1.3797297, 1.4309691]], [[1.4757731, 1.5157166, 1.5518456], [1.5848932, 1.6153942, 1.6437519]]], dtype=float32)> ``` | Args | | `image` | RGB image or images to adjust. | | `gamma` | A scalar or tensor. Non-negative real number. | | `gain` | A scalar or tensor. The constant multiplier. | | Returns | | A Tensor. A Gamma-adjusted tensor of the same shape and type as `image`. | | Raises | | `ValueError` | If gamma is negative. | #### Notes: For gamma greater than 1, the histogram will shift towards the left and the output image will be darker than the input image. For gamma less than 1, the histogram will shift towards the right and the output image will be brighter than the input image. #### References: [Wikipedia](http://en.wikipedia.org/wiki/Gamma_correction)
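For float images already in `[0, 1]`, the transform above reduces to an element-wise power. A minimal sketch checking `adjust_gamma` against `gain * In**gamma` computed directly:

```
import tensorflow as tf

x = tf.constant([[[0.25, 0.5, 1.0]]])
gamma, gain = 0.5, 1.0

out = tf.image.adjust_gamma(x, gamma=gamma, gain=gain)
manual = gain * tf.pow(x, gamma)
print(out.numpy())     # [[[0.5, 0.70710677, 1.0]]]
print(manual.numpy())  # same values: sqrt of each pixel

# gamma < 1 brightens (values move toward 1), matching the Notes above.
```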
tensorflow tf.image.ResizeMethod tf.image.ResizeMethod ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L1418-L1427) | See [`tf.image.resize`](resize) for details. | Class Variables | | AREA | `'area'` | | BICUBIC | `'bicubic'` | | BILINEAR | `'bilinear'` | | GAUSSIAN | `'gaussian'` | | LANCZOS3 | `'lanczos3'` | | LANCZOS5 | `'lanczos5'` | | MITCHELLCUBIC | `'mitchellcubic'` | | NEAREST\_NEIGHBOR | `'nearest'` | tensorflow tf.image.stateless_random_saturation tf.image.stateless\_random\_saturation ====================================== Adjust the saturation of RGB images by a random factor deterministically. ``` tf.image.stateless_random_saturation( image, lower, upper, seed=None ) ``` Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly picked in the interval `[lower, upper)`. Guarantees the same results given the same `seed` independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)). #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] seed = (1, 2) tf.image.stateless_random_saturation(x, 0.5, 1.0, seed) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[ 1.1559395, 2.0779698, 3. ], [ 4.1559396, 5.07797 , 6. ]], [[ 7.1559396, 8.07797 , 9. ], [10.155939 , 11.07797 , 12. ]]], dtype=float32)> ``` | Args | | `image` | RGB image or images. The size of the last dimension must be 3. | | `lower` | float. Lower bound for the random saturation factor. | | `upper` | float. Upper bound for the random saturation factor. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | Returns | | Adjusted image(s), same shape and DType as `image`. | | Raises | | `ValueError` | if `upper <= lower` or if `lower < 0`. | tensorflow tf.image.stateless_random_contrast tf.image.stateless\_random\_contrast ==================================== Adjust the contrast of images by a random factor deterministically. ``` tf.image.stateless_random_contrast( image, lower, upper, seed ) ``` Guarantees the same results given the same `seed` independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)). | Args | | `image` | An image tensor with 3 or more dimensions. | | `lower` | float. Lower bound for the random contrast factor. | | `upper` | float. Upper bound for the random contrast factor. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
| #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] seed = (1, 2) tf.image.stateless_random_contrast(x, 0.2, 0.5, seed) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[3.4605184, 4.4605184, 5.4605184], [4.820173 , 5.820173 , 6.820173 ]], [[6.179827 , 7.179827 , 8.179828 ], [7.5394816, 8.539482 , 9.539482 ]]], dtype=float32)> ``` | Returns | | The contrast-adjusted image(s). | | Raises | | `ValueError` | if `upper <= lower` or if `lower < 0`. | tensorflow tf.image.adjust_contrast tf.image.adjust\_contrast ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2227-L2281) | Adjust contrast of RGB or grayscale images. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.adjust_contrast`](https://www.tensorflow.org/api_docs/python/tf/image/adjust_contrast) ``` tf.image.adjust_contrast( images, contrast_factor ) ``` This is a convenience method that converts RGB images to float representation, adjusts their contrast, and then converts them back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions. `images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels].` Contrast is adjusted independently for each channel of each image. For each channel, this Op computes the mean of the image pixels in the channel and then adjusts each component `x` of each pixel to `(x - mean) * contrast_factor + mean`. #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.adjust_contrast(x, 2.) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[-3.5, -2.5, -1.5], [ 2.5, 3.5, 4.5]], [[ 8.5, 9.5, 10.5], [14.5, 15.5, 16.5]]], dtype=float32)> ``` | Args | | `images` | Images to adjust. At least 3-D. | | `contrast_factor` | A float multiplier for adjusting contrast. | | Returns | | The contrast-adjusted image or images. | tensorflow tf.image.ssim tf.image.ssim ============= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4341-L4426) | Computes SSIM index between img1 and img2. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.ssim`](https://www.tensorflow.org/api_docs/python/tf/image/ssim) ``` tf.image.ssim( img1, img2, max_val, filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03 ) ``` This function is based on the standard SSIM implementation from: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing. > > **Note:** The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If the input is already YUV, then it will compute YUV SSIM average.) > #### Details: * 11x11 Gaussian filter of width 1.5 is used. * k1 = 0.01, k2 = 0.03 as in the original paper. The image sizes must be at least 11x11 because of the filter size. #### Example: ``` # Read images (of size 255 x 255) from file. 
im1 = tf.image.decode_image(tf.io.read_file('path/to/im1.png')) im2 = tf.image.decode_image(tf.io.read_file('path/to/im2.png')) tf.shape(im1) # `im1.png` has 3 channels; shape is `(255, 255, 3)` tf.shape(im2) # `im2.png` has 3 channels; shape is `(255, 255, 3)` # Add an outer batch for each image. im1 = tf.expand_dims(im1, axis=0) im2 = tf.expand_dims(im2, axis=0) # Compute SSIM over tf.uint8 Tensors. ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03) # Compute SSIM over tf.float32 Tensors. im1 = tf.image.convert_image_dtype(im1, tf.float32) im2 = tf.image.convert_image_dtype(im2, tf.float32) ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03) # ssim1 and ssim2 both have type tf.float32 and are almost equal. ``` | Args | | `img1` | First image batch. 4-D Tensor of shape `[batch, height, width, channels]` with only Positive Pixel Values. | | `img2` | Second image batch. 4-D Tensor of shape `[batch, height, width, channels]` with only Positive Pixel Values. | | `max_val` | The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values). | | `filter_size` | Default value 11 (size of gaussian filter). | | `filter_sigma` | Default value 1.5 (width of gaussian filter). | | `k1` | Default value 0.01 | | `k2` | Default value 0.03 (SSIM is less sensitive to `k2` for lower values, so it is best kept in the range 0 < k2 < 0.4). | | Returns | | A tensor containing an SSIM value for each image in batch. Returned SSIM values are in range (-1, 1], when pixel values are non-negative. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]). | tensorflow tf.image.rot90 tf.image.rot90 ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L651-L709) | Rotate image(s) counter-clockwise by 90 degrees. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.rot90`](https://www.tensorflow.org/api_docs/python/tf/image/rot90) ``` tf.image.rot90( image, k=1, name=None ) ``` #### For example: ``` a=tf.constant([[[1],[2]], [[3],[4]]]) # rotating `a` counter clockwise by 90 degrees a_rot=tf.image.rot90(a) print(a_rot[...,0].numpy()) [[2 4] [1 3]] # rotating `a` counter clockwise by 270 degrees a_rot=tf.image.rot90(a, k=3) print(a_rot[...,0].numpy()) [[3 1] [4 2]] ``` | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `k` | A scalar integer tensor. The number of times the image(s) are rotated by 90 degrees. | | `name` | A name for this operation (optional). | | Returns | | A rotated tensor of the same type and shape as `image`. | | Raises | | `ValueError` | if the shape of `image` is not supported. | tensorflow tf.image.stateless_random_jpeg_quality tf.image.stateless\_random\_jpeg\_quality ========================================= Deterministically randomizes jpeg encoding quality for inducing jpeg noise. ``` tf.image.stateless_random_jpeg_quality( image, min_jpeg_quality, max_jpeg_quality, seed ) ``` Guarantees the same results given the same `seed` independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)). `min_jpeg_quality` must be in the interval `[0, 100]` and less than `max_jpeg_quality`.
`max_jpeg_quality` must be in the interval `[0, 100]`. #### Usage Example: ``` x = tf.constant([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], dtype=tf.uint8) seed = (1, 2) tf.image.stateless_random_jpeg_quality(x, 75, 95, seed) <tf.Tensor: shape=(2, 2, 3), dtype=uint8, numpy= array([[[ 0, 4, 5], [ 1, 5, 6]], [[ 5, 9, 10], [ 5, 9, 10]]], dtype=uint8)> ``` | Args | | `image` | 3D image. Size of the last dimension must be 1 or 3. | | `min_jpeg_quality` | Minimum jpeg encoding quality to use. | | `max_jpeg_quality` | Maximum jpeg encoding quality to use. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | Returns | | Adjusted image(s), same shape and DType as `image`. | | Raises | | `ValueError` | if `min_jpeg_quality` or `max_jpeg_quality` is invalid. | tensorflow tf.image.resize tf.image.resize =============== Resize `images` to `size` using the specified `method`. ``` tf.image.resize( images, size, method=ResizeMethod.BILINEAR, preserve_aspect_ratio=False, antialias=False, name=None ) ``` Resized images will be distorted if their original aspect ratio is not the same as `size`. To avoid distortions see [`tf.image.resize_with_pad`](resize_with_pad). ``` image = tf.constant([ [1,0,0,0,0], [0,1,0,0,0], [0,0,1,0,0], [0,0,0,1,0], [0,0,0,0,1], ]) # Add "batch" and "channels" dimensions image = image[tf.newaxis, ..., tf.newaxis] image.shape.as_list() # [batch, height, width, channels] [1, 5, 5, 1] tf.image.resize(image, [3,5])[0,...,0].numpy() array([[0.6666667, 0.3333333, 0. , 0. , 0. ], [0. , 0. , 1. , 0. , 0. ], [0. , 0. , 0. , 0.3333335, 0.6666665]], dtype=float32) ``` It works equally well with a single image instead of a batch of images: ``` tf.image.resize(image[0], [3,5]).shape.as_list() [3, 5, 1] ``` When `antialias` is true, the sampling filter will anti-alias the input image as well as interpolate. When downsampling an image with [anti-aliasing](https://en.wikipedia.org/wiki/Spatial_anti-aliasing) the sampling filter kernel is scaled in order to properly anti-alias the input image signal. `antialias` has no effect when upsampling an image: ``` a = tf.image.resize(image, [5,10]) b = tf.image.resize(image, [5,10], antialias=True) tf.reduce_max(abs(a - b)).numpy() 0.0 ``` The `method` argument expects an item from the [`image.ResizeMethod`](resizemethod) enum, or the string equivalent. The options are: * **`bilinear`**: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation) If `antialias` is true, becomes a hat/tent filter function with radius 1 when downsampling. * **`lanczos3`**: [Lanczos kernel](https://en.wikipedia.org/wiki/Lanczos_resampling) with radius 3. High-quality practical filter but may have some ringing, especially on synthetic images. * **`lanczos5`**: [Lanczos kernel](https://en.wikipedia.org/wiki/Lanczos_resampling) with radius 5. Very-high-quality filter but may have stronger ringing. * **`bicubic`**: [Cubic interpolant](https://en.wikipedia.org/wiki/Bicubic_interpolation) of Keys. Equivalent to Catmull-Rom kernel. Reasonably good quality and faster than Lanczos3Kernel, particularly when upsampling. * **`gaussian`**: [Gaussian kernel](https://en.wikipedia.org/wiki/Gaussian_filter) with radius 3, sigma = 1.5 / 3.0. * **`nearest`**: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation) `antialias` has no effect when used with nearest neighbor interpolation. 
* **`area`**: Anti-aliased resampling with area interpolation. `antialias` has no effect when used with area interpolation; it always anti-aliases. * **`mitchellcubic`**: Mitchell-Netravali Cubic non-interpolating filter. For synthetic images (especially those lacking proper prefiltering), less ringing than Keys cubic kernel but less sharp. > > **Note:** Near image edges the filtering kernel may be partially outside the image boundaries. For these pixels, only input pixels inside the image will be included in the filter sum, and the output value will be appropriately normalized. > The return value has type `float32`, unless the `method` is [`ResizeMethod.NEAREST_NEIGHBOR`](resizemethod#NEAREST_NEIGHBOR), then the return dtype is the dtype of `images`: ``` nn = tf.image.resize(image, [5,7], method='nearest') nn[0,...,0].numpy() array([[1, 0, 0, 0, 0, 0, 0], [0, 1, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 1, 0], [0, 0, 0, 0, 0, 0, 1]], dtype=int32) ``` With `preserve_aspect_ratio=True`, the aspect ratio is preserved, so `size` is the maximum for each dimension: ``` max_10_20 = tf.image.resize(image, [10,20], preserve_aspect_ratio=True) max_10_20.shape.as_list() [1, 10, 10, 1] ``` | Args | | `images` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `size` | A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images. | | `method` | An [`image.ResizeMethod`](resizemethod), or string equivalent. Defaults to `bilinear`. | | `preserve_aspect_ratio` | Whether to preserve the aspect ratio. If this is set, then `images` will be resized to a size that fits in `size` while preserving the aspect ratio of the original image. Scales up the image if `size` is bigger than the current size of the `image`. Defaults to False. | | `antialias` | Whether to use an anti-aliasing filter when downsampling an image. | | `name` | A name for this operation (optional). | | Raises | | `ValueError` | if the shape of `images` is incompatible with the shape arguments to this function | | `ValueError` | if `size` has an invalid shape or type. | | `ValueError` | if an unsupported resize method is specified. | | Returns | | If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`. |
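The string names and the [`image.ResizeMethod`](resizemethod) enum members listed above are interchangeable. A minimal sketch (not part of the original reference; the gradient image is chosen only for illustration) showing this, and the downsampling-only effect of `antialias`:

```
import tensorflow as tf

# A small 1-image batch with a simple gradient pattern.
image = tf.reshape(tf.range(64, dtype=tf.float32), [1, 8, 8, 1])

# The string name and the enum member select the same kernel.
by_string = tf.image.resize(image, [4, 4], method='lanczos3')
by_enum = tf.image.resize(image, [4, 4], method=tf.image.ResizeMethod.LANCZOS3)
print(tf.reduce_max(tf.abs(by_string - by_enum)).numpy())  # 0.0

# antialias only changes the result when downsampling with a kernel
# that supports it; compare bilinear with and without anti-aliasing.
plain = tf.image.resize(image, [4, 4], method='bilinear')
smooth = tf.image.resize(image, [4, 4], method='bilinear', antialias=True)
print(bool(tf.reduce_any(plain != smooth).numpy()))  # True when downsampling
```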
tensorflow tf.image.rgb_to_hsv tf.image.rgb\_to\_hsv ===================== Converts one or more images from RGB to HSV. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.rgb_to_hsv`](https://www.tensorflow.org/api_docs/python/tf/image/rgb_to_hsv) ``` tf.image.rgb_to_hsv( images, name=None ) ``` Outputs a tensor of the same shape as the `images` tensor, containing the HSV value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`. `output[..., 0]` contains hue, `output[..., 1]` contains saturation, and `output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue. #### Usage Example: ``` blue_image = tf.stack([ tf.zeros([5,5]), tf.zeros([5,5]), tf.ones([5,5])], axis=-1) blue_hsv_image = tf.image.rgb_to_hsv(blue_image) blue_hsv_image[0,0].numpy() array([0.6666667, 1. , 1. ], dtype=float32) ``` | Args | | `images` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. 1-D or higher rank. RGB data to convert. Last dimension must be size 3. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `images`. | tensorflow tf.image.stateless_random_brightness tf.image.stateless\_random\_brightness ====================================== Adjust the brightness of images by a random factor deterministically. ``` tf.image.stateless_random_brightness( image, max_delta, seed ) ``` Equivalent to `adjust_brightness()` using a `delta` randomly picked in the interval `[-max_delta, max_delta)`. Guarantees the same results given the same `seed` independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)). #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] seed = (1, 2) tf.image.stateless_random_brightness(x, 0.2, seed) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[ 1.1376241, 2.1376243, 3.1376243], [ 4.1376243, 5.1376243, 6.1376243]], [[ 7.1376243, 8.137624 , 9.137624 ], [10.137624 , 11.137624 , 12.137624 ]]], dtype=float32)> ``` | Args | | `image` | An image or images to adjust. | | `max_delta` | float, must be non-negative. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | Returns | | The brightness-adjusted image(s). | | Raises | | `ValueError` | if `max_delta` is negative. | tensorflow tf.image.random_crop tf.image.random\_crop ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/random_ops.py#L360-L412) | Randomly crops a tensor to a given size. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.random_crop`](https://www.tensorflow.org/api_docs/python/tf/image/random_crop), [`tf.compat.v1.random_crop`](https://www.tensorflow.org/api_docs/python/tf/image/random_crop) ``` tf.image.random_crop( value, size, seed=None, name=None ) ``` Slices a shape `size` portion out of `value` at a uniformly chosen offset. Requires `value.shape >= size`. If a dimension should not be cropped, pass the full size of that dimension. For example, RGB images can be cropped with `size = [crop_height, crop_width, 3]`.
#### Example usage: ``` image = [[1, 2, 3], [4, 5, 6]] result = tf.image.random_crop(value=image, size=(1, 3)) result.shape.as_list() [1, 3] ``` For producing deterministic results given a `seed` value, use [`tf.image.stateless_random_crop`](stateless_random_crop). Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed). | Args | | `value` | Input tensor to crop. | | `size` | 1-D tensor with size the rank of `value`. | | `seed` | Python integer. Used to create a random seed. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `name` | A name for this operation (optional). | | Returns | | A cropped tensor of the same rank as `value` and shape `size`. | tensorflow tf.image.combined_non_max_suppression tf.image.combined\_non\_max\_suppression ======================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L5056-L5138) | Greedily selects a subset of bounding boxes in descending order of score. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.combined_non_max_suppression`](https://www.tensorflow.org/api_docs/python/tf/image/combined_non_max_suppression) ``` tf.image.combined_non_max_suppression( boxes, scores, max_output_size_per_class, max_total_size, iou_threshold=0.5, score_threshold=float('-inf'), pad_per_class=False, clip_boxes=True, name=None ) ``` This operation performs non\_max\_suppression on the inputs per batch, across all classes. Prunes away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system. Also note that this algorithm is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm. The output of this operation is the final boxes, scores and classes tensor returned after performing non\_max\_suppression. | Args | | `boxes` | A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q` is 1 then the same boxes are used for all classes; otherwise, if `q` is equal to the number of classes, class-specific boxes are used. | | `scores` | A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]` representing a single score corresponding to each box (each row of boxes). | | `max_output_size_per_class` | A scalar integer `Tensor` representing the maximum number of boxes to be selected by non-max suppression per class. | | `max_total_size` | An int32 scalar representing the maximum number of boxes retained over all classes. Note that setting this value to a large number may result in an OOM error depending on the system workload. | | `iou_threshold` | A float representing the threshold for deciding whether boxes overlap too much with respect to IOU. | | `score_threshold` | A float representing the threshold for deciding when to remove boxes based on score.
| | `pad_per_class` | If false, the output nmsed boxes, scores and classes are padded/clipped to `max_total_size`. If true, the output nmsed boxes, scores and classes are padded to be of length `max_size_per_class`\*`num_classes`, unless it exceeds `max_total_size` in which case it is clipped to `max_total_size`. Defaults to false. | | `clip_boxes` | If true, the coordinates of output nmsed boxes will be clipped to [0, 1]. If false, output the box coordinates as it is. Defaults to true. | | `name` | A name for the operation (optional). | | Returns | | `'nmsed_boxes'` | A [batch\_size, max\_detections, 4] float32 tensor containing the non-max suppressed boxes. | | `'nmsed_scores'` | A [batch\_size, max\_detections] float32 tensor containing the scores for the boxes. | | `'nmsed_classes'` | A [batch\_size, max\_detections] float32 tensor containing the class for boxes. | | `'valid_detections'` | A [batch\_size] int32 tensor indicating the number of valid detections per batch item. Only the top valid\_detections[i] entries in nms\_boxes[i], nms\_scores[i] and nms\_class[i] are valid. The rest of the entries are zero paddings. | tensorflow tf.image.image_gradients tf.image.image\_gradients ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4562-L4633) | Returns image gradients (dy, dx) for each color channel. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.image_gradients`](https://www.tensorflow.org/api_docs/python/tf/image/image_gradients) ``` tf.image.image_gradients( image ) ``` Both output tensors have the same shape as the input: [batch\_size, h, w, d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in location (x, y). That means that dy will always have zeros in the last row, and dx will always have zeros in the last column. #### Usage Example: ``` BATCH_SIZE = 1 IMAGE_HEIGHT = 5 IMAGE_WIDTH = 5 CHANNELS = 1 image = tf.reshape(tf.range(IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS, delta=1, dtype=tf.float32), shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS)) dy, dx = tf.image.image_gradients(image) print(image[0, :,:,0]) tf.Tensor( [[ 0. 1. 2. 3. 4.] [ 5. 6. 7. 8. 9.] [10. 11. 12. 13. 14.] [15. 16. 17. 18. 19.] [20. 21. 22. 23. 24.]], shape=(5, 5), dtype=float32) print(dy[0, :,:,0]) tf.Tensor( [[5. 5. 5. 5. 5.] [5. 5. 5. 5. 5.] [5. 5. 5. 5. 5.] [5. 5. 5. 5. 5.] [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32) print(dx[0, :,:,0]) tf.Tensor( [[1. 1. 1. 1. 0.] [1. 1. 1. 1. 0.] [1. 1. 1. 1. 0.] [1. 1. 1. 1. 0.] [1. 1. 1. 1. 0.]], shape=(5, 5), dtype=float32) ``` | Args | | `image` | Tensor with shape [batch\_size, h, w, d]. | | Returns | | Pair of tensors (dy, dx) holding the vertical and horizontal image gradients (1-step finite difference). | | Raises | | `ValueError` | If `image` is not a 4D tensor. | tensorflow tf.image.crop_to_bounding_box tf.image.crop\_to\_bounding\_box ================================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L1150-L1248) | Crops an `image` to a specified bounding box. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.image.crop_to_bounding_box`](https://www.tensorflow.org/api_docs/python/tf/image/crop_to_bounding_box) ``` tf.image.crop_to_bounding_box( image, offset_height, offset_width, target_height, target_width ) ``` This op cuts a rectangular bounding box out of `image`. The top-left corner of the bounding box is at `offset_height, offset_width` in `image`, and the lower-right corner is at `offset_height + target_height, offset_width + target_width`. #### Example Usage: ``` image = tf.constant(np.arange(1, 28, dtype=np.float32), shape=[3, 3, 3]) image[:,:,0] # print the first channel of the 3-D tensor <tf.Tensor: shape=(3, 3), dtype=float32, numpy= array([[ 1., 4., 7.], [10., 13., 16.], [19., 22., 25.]], dtype=float32)> cropped_image = tf.image.crop_to_bounding_box(image, 0, 0, 2, 2) cropped_image[:,:,0] # print the first channel of the cropped 3-D tensor <tf.Tensor: shape=(2, 2), dtype=float32, numpy= array([[ 1., 4.], [10., 13.]], dtype=float32)> ``` | Args | | `image` | 4-D `Tensor` of shape `[batch, height, width, channels]` or 3-D `Tensor` of shape `[height, width, channels]`. | | `offset_height` | Vertical coordinate of the top-left corner of the bounding box in `image`. | | `offset_width` | Horizontal coordinate of the top-left corner of the bounding box in `image`. | | `target_height` | Height of the bounding box. | | `target_width` | Width of the bounding box. | | Returns | | If `image` was 4-D, a 4-D `Tensor` of shape `[batch, target_height, target_width, channels]`. If `image` was 3-D, a 3-D `Tensor` of shape `[target_height, target_width, channels]`. It has the same dtype as `image`. | | Raises | | `ValueError` | `image` is not a 3-D or 4-D `Tensor`. | | `ValueError` | `offset_width < 0` or `offset_height < 0`. | | `ValueError` | `target_height <= 0` or `target_width <= 0`. | | `ValueError` | `width < offset_width + target_width` or `height < offset_height + target_height`. | tensorflow tf.image.transpose tf.image.transpose ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L775-L842) | Transpose image(s) by swapping the height and width dimension. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.transpose`](https://www.tensorflow.org/api_docs/python/tf/image/transpose), [`tf.compat.v1.image.transpose_image`](https://www.tensorflow.org/api_docs/python/tf/image/transpose) ``` tf.image.transpose( image, name=None ) ``` #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.transpose(x) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[ 1., 2., 3.], [ 7., 8., 9.]], [[ 4., 5., 6.], [10., 11., 12.]]], dtype=float32)> ``` | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `name` | A name for this operation (optional). | | Returns | | If `image` was 4-D, a 4-D float Tensor of shape `[batch, width, height, channels]`. If `image` was 3-D, a 3-D float Tensor of shape `[width, height, channels]`. | | Raises | | `ValueError` | if the shape of `image` is not supported.
| #### Usage Example: ``` image = [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]] image = tf.constant(image) tf.image.transpose(image) <tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy= array([[[ 1, 2], [ 5, 6], [ 9, 10]], [[ 3, 4], [ 7, 8], [11, 12]]], dtype=int32)> ``` tensorflow tf.image.per_image_standardization tf.image.per\_image\_standardization ==================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L1932-L1988) | Linearly scales each image in `image` to have mean 0 and variance 1. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.per_image_standardization`](https://www.tensorflow.org/api_docs/python/tf/image/per_image_standardization) ``` tf.image.per_image_standardization( image ) ``` For each 3-D image `x` in `image`, computes `(x - mean) / adjusted_stddev`, where * `mean` is the average of all values in `x` * `adjusted_stddev = max(stddev, 1.0/sqrt(N))` is capped away from 0 to protect against division by 0 when handling uniform images + `N` is the number of elements in `x` + `stddev` is the standard deviation of all values in `x` #### Example Usage: ``` image = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) image # 3-D tensor <tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy= array([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]], dtype=int32)> new_image = tf.image.per_image_standardization(image) new_image # 3-D tensor with mean ~= 0 and variance ~= 1 <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[-1.593255 , -1.3035723 , -1.0138896 ], [-0.7242068 , -0.4345241 , -0.14484136]], [[ 0.14484136, 0.4345241 , 0.7242068 ], [ 1.0138896 , 1.3035723 , 1.593255 ]]], dtype=float32)> ``` | Args | | `image` | An n-D `Tensor` with at least 3 dimensions, the last 3 of which are the dimensions of each image. | | Returns | | A `Tensor` with the same shape as `image` and its dtype is `float32`. | | Raises | | `ValueError` | The shape of `image` has fewer than 3 dimensions. | tensorflow tf.image.random_flip_up_down tf.image.random\_flip\_up\_down =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L328-L372) | Randomly flips an image vertically (upside down). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.random_flip_up_down`](https://www.tensorflow.org/api_docs/python/tf/image/random_flip_up_down) ``` tf.image.random_flip_up_down( image, seed=None ) ``` With a 1 in 2 chance, outputs the contents of `image` flipped along the first dimension, which is `height`. Otherwise, output the image as-is. When passing a batch of images, each image will be randomly flipped independent of other images. #### Example usage: ``` image = np.array([[[1], [2]], [[3], [4]]]) tf.image.random_flip_up_down(image, 3).numpy().tolist() [[[3], [4]], [[1], [2]]] ``` Randomly flip multiple images. ``` images = np.array( [ [[[1], [2]], [[3], [4]]], [[[5], [6]], [[7], [8]]] ]) tf.image.random_flip_up_down(images, 4).numpy().tolist() [[[[3], [4]], [[1], [2]]], [[[5], [6]], [[7], [8]]]] ``` For producing deterministic results given a `seed` value, use [`tf.image.stateless_random_flip_up_down`](stateless_random_flip_up_down). 
Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed). | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `seed` | A Python integer. Used to create a random seed. See [`tf.compat.v1.set_random_seed`](../compat/v1/set_random_seed) for behavior. | | Returns | | A tensor of the same type and shape as `image`. | | Raises | | `ValueError` | if the shape of `image` is not supported. | tensorflow tf.image.crop_and_resize tf.image.crop\_and\_resize ========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4776-L4857) | Extracts crops from the input image tensor and resizes them. ``` tf.image.crop_and_resize( image, boxes, box_indices, crop_size, method='bilinear', extrapolation_value=0.0, name=None ) ``` Extracts crops from the input image tensor and resizes them using bilinear sampling or nearest neighbor sampling (possibly with aspect ratio change) to a common output size specified by `crop_size`. This is more general than the `crop_to_bounding_box` op, which extracts a fixed-size slice from the input image and does not allow resizing or aspect ratio change. Returns a tensor with `crops` from the input `image` at positions defined by the bounding box locations in `boxes`. The cropped boxes are all resized (with bilinear or nearest neighbor interpolation) to a fixed `size = [crop_height, crop_width]`. The result is a 4-D tensor `[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned. In particular, if `boxes = [[0, 0, 1, 1]]`, the method will give identical results to using [`tf.compat.v1.image.resize_bilinear()`](../compat/v1/image/resize_bilinear) or [`tf.compat.v1.image.resize_nearest_neighbor()`](../compat/v1/image/resize_nearest_neighbor) (depending on the `method` argument) with `align_corners=True`. | Args | | `image` | A 4-D tensor of shape `[batch, image_height, image_width, depth]`. Both `image_height` and `image_width` need to be positive. | | `boxes` | A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so that the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values. | | `box_indices` | A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`. The value of `box_ind[i]` specifies the image that the `i`-th box refers to. | | `crop_size` | A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both `crop_height` and `crop_width` need to be positive. | | `method` | An optional string specifying the sampling method for resizing.
It can be either `"bilinear"` or `"nearest"` and default to `"bilinear"`. Currently two sampling methods are supported: Bilinear and Nearest Neighbor. | | `extrapolation_value` | An optional `float`. Defaults to `0.0`. Value used for extrapolation, when applicable. | | `name` | A name for the operation (optional). | | Returns | | A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`. | #### Example: ``` import tensorflow as tf BATCH_SIZE = 1 NUM_BOXES = 5 IMAGE_HEIGHT = 256 IMAGE_WIDTH = 256 CHANNELS = 3 CROP_SIZE = (24, 24) image = tf.random.normal(shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS) ) boxes = tf.random.uniform(shape=(NUM_BOXES, 4)) box_indices = tf.random.uniform(shape=(NUM_BOXES,), minval=0, maxval=BATCH_SIZE, dtype=tf.int32) output = tf.image.crop_and_resize(image, boxes, box_indices, CROP_SIZE) output.shape #=> (5, 24, 24, 3) ```
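The corner-aligned sampling described above can be cross-checked with a tiny sketch (the index-valued image is chosen only for illustration): a full-image box `[0, 0, 1, 1]` places the corners of the crop exactly on the corner pixels of the input.

```
import tensorflow as tf

# One 4x4 image whose pixel values equal their flat index 0..15.
image = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
boxes = tf.constant([[0.0, 0.0, 1.0, 1.0]])    # normalized [y1, x1, y2, x2]
box_indices = tf.constant([0], dtype=tf.int32)  # box 0 refers to image 0

crops = tf.image.crop_and_resize(image, boxes, box_indices, crop_size=[2, 2])
print(crops[0, ..., 0].numpy())
# [[ 0.  3.]
#  [12. 15.]] -- the four corner pixels of the input, because the y and x
# sample positions span 0..(image_height - 1) and 0..(image_width - 1).
```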
tensorflow tf.image.flip_up_down tf.image.flip\_up\_down ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L574-L606) | Flip an image vertically (upside down). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.flip_up_down`](https://www.tensorflow.org/api_docs/python/tf/image/flip_up_down) ``` tf.image.flip_up_down( image ) ``` Outputs the contents of `image` flipped along the height dimension. See also `reverse()`. #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.flip_up_down(x) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[ 7., 8., 9.], [10., 11., 12.]], [[ 1., 2., 3.], [ 4., 5., 6.]]], dtype=float32)> ``` | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | Returns | | A `Tensor` of the same type and shape as `image`. | | Raises | | `ValueError` | if the shape of `image` is not supported. | tensorflow tf.image.ssim_multiscale tf.image.ssim\_multiscale ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4433-L4559) | Computes the MS-SSIM between img1 and img2. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.ssim_multiscale`](https://www.tensorflow.org/api_docs/python/tf/image/ssim_multiscale) ``` tf.image.ssim_multiscale( img1, img2, max_val, power_factors=_MSSSIM_WEIGHTS, filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03 ) ``` This function assumes that `img1` and `img2` are image batches, i.e. the last three dimensions are [height, width, channels]. > > **Note:** The true SSIM is only defined on grayscale. This function does not perform any colorspace transform. (If the input is already YUV, then it will compute YUV SSIM average.) > Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." Signals, Systems and Computers, 2004. | Args | | `img1` | First image batch with only positive pixel values. | | `img2` | Second image batch with only positive pixel values. Must have the same rank as img1. | | `max_val` | The dynamic range of the images (i.e., the difference between the maximum and minimum allowed values). | | `power_factors` | Iterable of weights for each of the scales. The number of scales used is the length of the list. Index 0 is the unscaled resolution's weight and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper. | | `filter_size` | Default value 11 (size of gaussian filter). | | `filter_sigma` | Default value 1.5 (width of gaussian filter). | | `k1` | Default value 0.01. | | `k2` | Default value 0.03 (SSIM is less sensitive to `k2` for small values, so it is best kept in the range 0 < k2 < 0.4). | | Returns | | A tensor containing an MS-SSIM value for each image in batch. The values are in range [0, 1]. Returns a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).
| tensorflow tf.image.stateless_random_hue tf.image.stateless\_random\_hue =============================== Adjust the hue of RGB images by a random factor deterministically. ``` tf.image.stateless_random_hue( image, max_delta, seed ) ``` Equivalent to `adjust_hue()` but uses a `delta` randomly picked in the interval `[-max_delta, max_delta)`. Guarantees the same results given the same `seed` independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)). `max_delta` must be in the interval `[0, 0.5]`. #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] seed = (1, 2) tf.image.stateless_random_hue(x, 0.2, seed) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[ 1.6514902, 1. , 3. ], [ 4.65149 , 4. , 6. ]], [[ 7.65149 , 7. , 9. ], [10.65149 , 10. , 12. ]]], dtype=float32)> ``` | Args | | `image` | RGB image or images. The size of the last dimension must be 3. | | `max_delta` | float. The maximum value for the random delta. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | Returns | | Adjusted image(s), same shape and DType as `image`. | | Raises | | `ValueError` | if `max_delta` is invalid. | tensorflow tf.image.stateless_sample_distorted_bounding_box tf.image.stateless\_sample\_distorted\_bounding\_box ==================================================== Generate a randomly distorted bounding box for an image deterministically. ``` tf.image.stateless_sample_distorted_bounding_box( image_size, bounding_boxes, seed, min_object_covered=0.1, aspect_ratio_range=None, area_range=None, max_attempts=None, use_image_if_no_bounding_boxes=None, name=None ) ``` Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. *data augmentation*. This Op, given the same `seed`, deterministically outputs a randomly distorted localization of an object, i.e. bounding box, given an `image_size`, `bounding_boxes` and a series of constraints. The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: `begin`, `size` and `bboxes`. The first 2 tensors can be fed directly into [`tf.slice`](../slice) to crop the image. The latter may be supplied to [`tf.image.draw_bounding_boxes`](draw_bounding_boxes) to visualize what the bounding box looks like. Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and the height of the underlying image. The output of this Op is guaranteed to be the same given the same `seed` and is independent of how many times the function is called, and independent of global seed settings (e.g. [`tf.random.set_seed`](../random/set_seed)). #### Example usage: ``` image = np.array([[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]]) bbox = tf.constant( [0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4]) seed = (1, 2) # Generate a single distorted bounding box. bbox_begin, bbox_size, bbox_draw = ( tf.image.stateless_sample_distorted_bounding_box( tf.shape(image), bounding_boxes=bbox, seed=seed)) # Employ the bounding box to distort the image. 
tf.slice(image, bbox_begin, bbox_size) <tf.Tensor: shape=(2, 2, 1), dtype=int64, numpy= array([[[1], [2]], [[4], [5]]])> # Draw the bounding box in an image summary. colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]) tf.image.draw_bounding_boxes( tf.expand_dims(tf.cast(image, tf.float32),0), bbox_draw, colors) <tf.Tensor: shape=(1, 3, 3, 1), dtype=float32, numpy= array([[[[1.], [1.], [3.]], [[1.], [1.], [6.]], [[7.], [8.], [9.]]]], dtype=float32)> ``` Note that if no bounding box information is available, setting `use_image_if_no_bounding_boxes = true` will assume there is a single implicit bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is false and no bounding boxes are supplied, an error is raised. | Args | | `image_size` | A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`. | | `bounding_boxes` | A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]` describing the N bounding boxes associated with the image. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | `min_object_covered` | A Tensor of type `float32`. Defaults to `0.1`. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied. | | `aspect_ratio_range` | An optional list of `floats`. Defaults to `[0.75, 1.33]`. The cropped area of the image must have an aspect `ratio = width / height` within this range. | | `area_range` | An optional list of `floats`. Defaults to `[0.05, 1]`. The cropped area of the image must contain a fraction of the supplied image within this range. | | `max_attempts` | An optional `int`. Defaults to `100`. Number of attempts at generating a cropped region of the image of the specified constraints. After `max_attempts` failures, return the entire image. | | `use_image_if_no_bounding_boxes` | An optional `bool`. Defaults to `False`. Controls behavior if no bounding boxes supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error. | | `name` | A name for the operation (optional). | | Returns | | A tuple of `Tensor` objects (begin, size, bboxes). | | `begin` | A `Tensor`. Has the same type as `image_size`. 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to [`tf.slice`](../slice). | | `size` | A `Tensor`. Has the same type as `image_size`. 1-D, containing `[target_height, target_width, -1]`. Provide as input to [`tf.slice`](../slice). | | `bboxes` | A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing the distorted bounding box. Provide as input to [`tf.image.draw_bounding_boxes`](draw_bounding_boxes). | tensorflow tf.image.grayscale_to_rgb tf.image.grayscale\_to\_rgb =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2569-L2602) | Converts one or more images from Grayscale to RGB. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.grayscale_to_rgb`](https://www.tensorflow.org/api_docs/python/tf/image/grayscale_to_rgb) ``` tf.image.grayscale_to_rgb( images, name=None ) ``` Outputs a tensor of the same `DType` and rank as `images`. 
The size of the last dimension of the output is 3, containing the RGB value of the pixels. The input images' last dimension must be size 1. ``` original = tf.constant([[[1.0], [2.0], [3.0]]]) converted = tf.image.grayscale_to_rgb(original) print(converted.numpy()) [[[1. 1. 1.] [2. 2. 2.] [3. 3. 3.]]] ``` | Args | | `images` | The grayscale tensor to convert. The last dimension must be size 1. | | `name` | A name for the operation (optional). | | Returns | | The converted grayscale image(s). | tensorflow tf.image.flip_left_right tf.image.flip\_left\_right ========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L539-L571) | Flip an image horizontally (left to right). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.flip_left_right`](https://www.tensorflow.org/api_docs/python/tf/image/flip_left_right) ``` tf.image.flip_left_right( image ) ``` Outputs the contents of `image` flipped along the width dimension. See also [`tf.reverse`](../reverse). #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.flip_left_right(x) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[ 4., 5., 6.], [ 1., 2., 3.]], [[10., 11., 12.], [ 7., 8., 9.]]], dtype=float32)> ``` | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | Returns | | A tensor of the same type and shape as `image`. | | Raises | | `ValueError` | if the shape of `image` is not supported. | tensorflow tf.image.total_variation tf.image.total\_variation ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3299-L3368) | Calculate and return the total variation for one or more images. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.total_variation`](https://www.tensorflow.org/api_docs/python/tf/image/total_variation) ``` tf.image.total_variation( images, name=None ) ``` The total variation is the sum of the absolute differences for neighboring pixel-values in the input images. This measures how much noise is in the images. This can be used as a loss-function during optimization so as to suppress noise in images. If you have a batch of images, then you should calculate the scalar loss-value as the sum: `loss = tf.reduce_sum(tf.image.total_variation(images))` This implements the anisotropic 2-D version of the formula described here: <https://en.wikipedia.org/wiki/Total_variation_denoising> | Args | | `images` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `name` | A name for the operation (optional). | | Raises | | `ValueError` | if `images` is not a 3-D or 4-D tensor. | | Returns | | The total variation of `images`. If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the total variation for each image in the batch. If `images` was 3-D, return a scalar float with the total variation for that image. | tensorflow tf.image.resize_with_pad tf.image.resize\_with\_pad ========================== Resizes and pads an image to a target width and height.
``` tf.image.resize_with_pad( image, target_height, target_width, method=ResizeMethod.BILINEAR, antialias=False ) ``` Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeros to match the requested dimensions. | Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `target_height` | Target height. | | `target_width` | Target width. | | `method` | Method to use for resizing image. See [`image.resize()`](resize). | | `antialias` | Whether to use anti-aliasing when resizing. See [`image.resize()`](resize). | | Raises | | `ValueError` | if `target_height` or `target_width` are zero or negative. | | Returns | | Resized and padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`. | tensorflow tf.image.extract_glimpse tf.image.extract\_glimpse ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4969-L5053) | Extracts a glimpse from the input tensor. ``` tf.image.extract_glimpse( input, size, offsets, centered=True, normalized=True, noise='uniform', name=None ) ``` Returns a set of windows called glimpses extracted at location `offsets` from the input tensor. If the windows only partially overlap the inputs, the non-overlapping areas will be filled with random noise. The result is a 4-D tensor of shape `[batch_size, glimpse_height, glimpse_width, channels]`. The channels and batch dimensions are the same as those of the input tensor. The height and width of the output windows are specified in the `size` parameter. The arguments `normalized` and `centered` control how the windows are built: * If the coordinates are normalized but not centered, 0.0 and 1.0 correspond to the minimum and maximum of each height and width dimension. * If the coordinates are both normalized and centered, they range from -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper left corner, the lower right corner is located at (1.0, 1.0) and the center is at (0, 0). * If the coordinates are not normalized they are interpreted as numbers of pixels. #### Usage Example: ``` x = [[[[0.0], [1.0], [2.0]], [[3.0], [4.0], [5.0]], [[6.0], [7.0], [8.0]]]] tf.image.extract_glimpse(x, size=(2, 2), offsets=[[1, 1]], centered=False, normalized=False) <tf.Tensor: shape=(1, 2, 2, 1), dtype=float32, numpy= array([[[[4.], [5.]], [[7.], [8.]]]], dtype=float32)> ``` | Args | | `input` | A `Tensor` of type `float32`. A 4-D float tensor of shape `[batch_size, height, width, channels]`. | | `size` | A `Tensor` of type `int32`. A 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, followed by the glimpse width. | | `offsets` | A `Tensor` of type `float32`. A 2-D tensor of shape `[batch_size, 2]` containing the y, x locations of the center of each window. | | `centered` | An optional `bool`. Defaults to `True`. indicates if the offset coordinates are centered relative to the image, in which case the (0, 0) offset is relative to the center of the input images. If false, the (0,0) offset corresponds to the upper left corner of the input images. | | `normalized` | An optional `bool`. Defaults to `True`.
indicates if the offset coordinates are normalized. | | `noise` | An optional `string`. Defaults to `uniform`. indicates if the noise should be `uniform` (uniform distribution), `gaussian` (gaussian distribution), or `zero` (zero padding). | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `float32`. | tensorflow tf.image.rgb_to_grayscale tf.image.rgb\_to\_grayscale =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2533-L2566) | Converts one or more images from RGB to Grayscale. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.rgb_to_grayscale`](https://www.tensorflow.org/api_docs/python/tf/image/rgb_to_grayscale) ``` tf.image.rgb_to_grayscale( images, name=None ) ``` Outputs a tensor of the same `DType` and rank as `images`. The size of the last dimension of the output is 1, containing the Grayscale value of the pixels. ``` original = tf.constant([[[1.0, 2.0, 3.0]]]) converted = tf.image.rgb_to_grayscale(original) print(converted.numpy()) [[[1.81...]]] ``` | Args | | `images` | The RGB tensor to convert. The last dimension must have size 3 and should contain RGB values. | | `name` | A name for the operation (optional). | | Returns | | The converted grayscale image(s). |
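For intuition, the `1.81...` in the example above matches a weighted sum of the RGB channels with the ITU-R BT.601 luma weights; the exact weights are an implementation detail of this TF version, so the following sketch is only a cross-check, not a specification:

```
import tensorflow as tf

rgb = tf.constant([[[1.0, 2.0, 3.0]]])
gray = tf.image.rgb_to_grayscale(rgb)

# BT.601 luma weights (assumed): 0.2989*1 + 0.5870*2 + 0.1140*3 = 1.8149.
weights = tf.constant([0.2989, 0.5870, 0.1140])
by_hand = tf.reduce_sum(rgb * weights, axis=-1, keepdims=True)
print(gray.numpy(), by_hand.numpy())  # both approximately [[[1.8149]]]
```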
tensorflow tf.image.random_brightness tf.image.random\_brightness =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L1991-L2031) | Adjust the brightness of images by a random factor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.random_brightness`](https://www.tensorflow.org/api_docs/python/tf/image/random_brightness) ``` tf.image.random_brightness( image, max_delta, seed=None ) ``` Equivalent to `adjust_brightness()` using a `delta` randomly picked in the interval `[-max_delta, max_delta)`. For producing deterministic results given a `seed` value, use [`tf.image.stateless_random_brightness`](stateless_random_brightness). Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed). | Args | | `image` | An image or images to adjust. | | `max_delta` | float, must be non-negative. | | `seed` | A Python integer. Used to create a random seed. See [`tf.compat.v1.set_random_seed`](../compat/v1/set_random_seed) for behavior. | #### Usage Example: ``` x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.random_brightness(x, 0.2) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=...> ``` | Returns | | The brightness-adjusted image(s). | | Raises | | `ValueError` | if `max_delta` is negative. | tensorflow tf.image.convert_image_dtype tf.image.convert\_image\_dtype ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2350-L2530) | Convert `image` to `dtype`, scaling its values if needed. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.image.convert_image_dtype`](https://www.tensorflow.org/api_docs/python/tf/image/convert_image_dtype) ``` tf.image.convert_image_dtype( image, dtype, saturate=False, name=None ) ``` The operation supports data types (for `image` and `dtype`) of `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`, `float16`, `float32`, `float64`, `bfloat16`. Images that are represented using floating point values are expected to have values in the range [0,1). Image data stored in integer data types are expected to have values in the range `[0,MAX]`, where `MAX` is the largest positive representable number for the data type. This op converts between data types, scaling the values appropriately before casting. #### Usage Example: ``` x = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]] x_int8 = tf.convert_to_tensor(x, dtype=tf.int8) tf.image.convert_image_dtype(x_int8, dtype=tf.float16, saturate=False) <tf.Tensor: shape=(2, 2, 3), dtype=float16, numpy= array([[[0.00787, 0.01575, 0.02362], [0.0315 , 0.03937, 0.04724]], [[0.0551 , 0.063 , 0.07086], [0.07874, 0.0866 , 0.0945 ]]], dtype=float16)> ``` Converting integer types to floating point types returns normalized floating point values in the range [0, 1); the values are normalized by the `MAX` value of the input dtype. 
Consider the following two examples: ``` a = [[[1], [2]], [[3], [4]]] a_int8 = tf.convert_to_tensor(a, dtype=tf.int8) tf.image.convert_image_dtype(a_int8, dtype=tf.float32) <tf.Tensor: shape=(2, 2, 1), dtype=float32, numpy= array([[[0.00787402], [0.01574803]], [[0.02362205], [0.03149606]]], dtype=float32)> ``` ``` a_int32 = tf.convert_to_tensor(a, dtype=tf.int32) tf.image.convert_image_dtype(a_int32, dtype=tf.float32) <tf.Tensor: shape=(2, 2, 1), dtype=float32, numpy= array([[[4.6566129e-10], [9.3132257e-10]], [[1.3969839e-09], [1.8626451e-09]]], dtype=float32)> ``` Despite having identical values of `a` and output dtype of `float32`, the outputs differ due to the different input dtypes (`int8` vs. `int32`). This is, again, because the values are normalized by the `MAX` value of the input dtype. Note that converting floating point values to an integer type may lose precision. In the example below, an image tensor `b` of dtype `float32` is converted to `int8` and back to `float32`. The final output, however, is different from the original input `b` due to precision loss. ``` b = [[[0.12], [0.34]], [[0.56], [0.78]]] b_float32 = tf.convert_to_tensor(b, dtype=tf.float32) b_int8 = tf.image.convert_image_dtype(b_float32, dtype=tf.int8) tf.image.convert_image_dtype(b_int8, dtype=tf.float32) <tf.Tensor: shape=(2, 2, 1), dtype=float32, numpy= array([[[0.11811024], [0.33858266]], [[0.5590551 ], [0.77952754]]], dtype=float32)> ``` Scaling up from an integer type (input dtype) to another integer type (output dtype) will not map input dtype's `MAX` to output dtype's `MAX` but converting back and forth should result in no change. For example, as shown below, the `MAX` value of int8 (=127) is not mapped to the `MAX` value of int16 (=32,767) but, when scaled back, we get the same original values of `c`. ``` c = [[[1], [2]], [[127], [127]]] c_int8 = tf.convert_to_tensor(c, dtype=tf.int8) c_int16 = tf.image.convert_image_dtype(c_int8, dtype=tf.int16) print(c_int16) tf.Tensor( [[[ 256] [ 512]] [[32512] [32512]]], shape=(2, 2, 1), dtype=int16) c_int8_back = tf.image.convert_image_dtype(c_int16, dtype=tf.int8) print(c_int8_back) tf.Tensor( [[[ 1] [ 2]] [[127] [127]]], shape=(2, 2, 1), dtype=int8) ``` Scaling down from an integer type to another integer type can be a lossy conversion. Notice in the example below that converting `int16` to `uint8` and back to `int16` has lost precision. ``` d = [[[1000], [2000]], [[3000], [4000]]] d_int16 = tf.convert_to_tensor(d, dtype=tf.int16) d_uint8 = tf.image.convert_image_dtype(d_int16, dtype=tf.uint8) d_int16_back = tf.image.convert_image_dtype(d_uint8, dtype=tf.int16) print(d_int16_back) tf.Tensor( [[[ 896] [1920]] [[2944] [3968]]], shape=(2, 2, 1), dtype=int16) ``` Note that converting from floating point inputs to integer types may lead to over/underflow problems. Set `saturate` to `True` to avoid such problems. If enabled, saturation will clip the output into the allowed range before performing a potentially dangerous cast (and only before performing such a cast, i.e., when casting from a floating point to an integer type, and when casting from a signed to an unsigned type; `saturate` has no effect on casts between floats, or on casts that increase the type's range). | Args | | `image` | An image. | | `dtype` | A `DType` to convert `image` to. | | `saturate` | If `True`, clip the input before casting (if necessary). | | `name` | A name for this operation (optional). | | Returns | | `image`, converted to `dtype`.
| | Raises | | `AttributeError` | Raises an attribute error when dtype is neither float nor integer |

tensorflow tf.image.adjust_hue tf.image.adjust\_hue
====================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2706-L2778) | Adjust hue of RGB images.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.image.adjust_hue`](https://www.tensorflow.org/api_docs/python/tf/image/adjust_hue)

```
tf.image.adjust_hue(
    image, delta, name=None
)
```

This is a convenience method that converts an RGB image to float representation, converts it to HSV, adds an offset to the hue channel, converts back to RGB and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`image` is an RGB image. The image hue is adjusted by converting the image(s) to HSV and rotating the hue channel (H) by `delta`. The image is then converted back to RGB.

`delta` must be in the interval `[-1, 1]`.

#### Usage Example:

```
x = [[[1.0, 2.0, 3.0],
      [4.0, 5.0, 6.0]],
     [[7.0, 8.0, 9.0],
      [10.0, 11.0, 12.0]]]
tf.image.adjust_hue(x, 0.2)
<tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=
array([[[ 2.3999996,  1.       ,  3.       ],
        [ 5.3999996,  4.       ,  6.       ]],
       [[ 8.4      ,  7.       ,  9.       ],
        [11.4      , 10.       , 12.       ]]], dtype=float32)>
```

| Args | | `image` | RGB image or images. The size of the last dimension must be 3. | | `delta` | float. How much to add to the hue channel. | | `name` | A name for this operation (optional). | | Returns | | Adjusted image(s), same shape and DType as `image`. | | Raises | | `InvalidArgumentError` | image must have at least 3 dimensions. | | `InvalidArgumentError` | The size of the last dimension must be 3. | | `ValueError` | if `delta` is not in the interval of `[-1, 1]`. |

#### Usage Example:

```
image = [[[1, 2, 3], [4, 5, 6]],
         [[7, 8, 9], [10, 11, 12]],
         [[13, 14, 15], [16, 17, 18]]]
image = tf.constant(image)
tf.image.adjust_hue(image, 0.2)
<tf.Tensor: shape=(3, 2, 3), dtype=int32, numpy=
array([[[ 2,  1,  3],
        [ 5,  4,  6]],
       [[ 8,  7,  9],
        [11, 10, 12]],
       [[14, 13, 15],
        [17, 16, 18]]], dtype=int32)>
```

tensorflow tf.image.resize_with_crop_or_pad tf.image.resize\_with\_crop\_or\_pad
====================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L1251-L1405) | Crops and/or pads an image to a target width and height.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.image.resize_image_with_crop_or_pad`](https://www.tensorflow.org/api_docs/python/tf/image/resize_with_crop_or_pad), [`tf.compat.v1.image.resize_with_crop_or_pad`](https://www.tensorflow.org/api_docs/python/tf/image/resize_with_crop_or_pad)

```
tf.image.resize_with_crop_or_pad(
    image, target_height, target_width
)
```

Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros.

If `width` or `height` is greater than the specified `target_width` or `target_height` respectively, this op centrally crops along that dimension.

#### For example:

```
image = np.arange(75).reshape(5, 5, 3)  # create 3-D image input
image[:,:,0]  # print first channel just for demo purposes
array([[ 0,  3,  6,  9, 12],
       [15, 18, 21, 24, 27],
       [30, 33, 36, 39, 42],
       [45, 48, 51, 54, 57],
       [60, 63, 66, 69, 72]])
image = tf.image.resize_with_crop_or_pad(image, 3, 3)  # crop
# print first channel for demo purposes; centrally cropped output
image[:,:,0]
<tf.Tensor: shape=(3, 3), dtype=int64, numpy=
array([[18, 21, 24],
       [33, 36, 39],
       [48, 51, 54]])>
```

If `width` or `height` is smaller than the specified `target_width` or `target_height` respectively, this op centrally pads with 0 along that dimension.

#### For example:

```
image = np.arange(1, 28).reshape(3, 3, 3)  # create 3-D image input
image[:,:,0]  # print first channel just for demo purposes
array([[ 1,  4,  7],
       [10, 13, 16],
       [19, 22, 25]])
image = tf.image.resize_with_crop_or_pad(image, 5, 5)  # pad
# print first channel for demo purposes; we should see 0 paddings
image[:,:,0]
<tf.Tensor: shape=(5, 5), dtype=int64, numpy=
array([[ 0,  0,  0,  0,  0],
       [ 0,  1,  4,  7,  0],
       [ 0, 10, 13, 16,  0],
       [ 0, 19, 22, 25,  0],
       [ 0,  0,  0,  0,  0]])>
```

| Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `target_height` | Target height. | | `target_width` | Target width. | | Raises | | `ValueError` | if `target_height` or `target_width` are zero or negative. | | Returns | | Cropped and/or padded image. If `images` was 4-D, a 4-D float Tensor of shape `[batch, new_height, new_width, channels]`. If `images` was 3-D, a 3-D float Tensor of shape `[new_height, new_width, channels]`. |

tensorflow tf.image.random_flip_left_right tf.image.random\_flip\_left\_right
==================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L375-L420) | Randomly flip an image horizontally (left to right).

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.image.random_flip_left_right`](https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right)

```
tf.image.random_flip_left_right(
    image, seed=None
)
```

With a 1 in 2 chance, outputs the contents of `image` flipped along the second dimension, which is `width`. Otherwise, outputs the image as-is. When passing a batch of images, each image will be randomly flipped independently of the other images.

#### Example usage:

```
image = np.array([[[1], [2]], [[3], [4]]])
tf.image.random_flip_left_right(image, 5).numpy().tolist()
[[[2], [1]], [[4], [3]]]
```

Randomly flip multiple images.

```
images = np.array(
    [
        [[[1], [2]], [[3], [4]]],
        [[[5], [6]], [[7], [8]]]
    ])
tf.image.random_flip_left_right(images, 6).numpy().tolist()
[[[[2], [1]], [[4], [3]]], [[[5], [6]], [[7], [8]]]]
```

For producing deterministic results given a `seed` value, use [`tf.image.stateless_random_flip_left_right`](stateless_random_flip_left_right). Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed).

| Args | | `image` | 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`. | | `seed` | A Python integer. Used to create a random seed. See [`tf.compat.v1.set_random_seed`](../compat/v1/set_random_seed) for behavior. | | Returns | | A tensor of the same type and shape as `image`. | | Raises | | `ValueError` | if the shape of `image` is not supported. |

tensorflow tf.image.sample_distorted_bounding_box tf.image.sample\_distorted\_bounding\_box
=========================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3371-L3494) | Generate a single randomly distorted bounding box for an image.

```
tf.image.sample_distorted_bounding_box(
    image_size,
    bounding_boxes,
    seed=0,
    min_object_covered=0.1,
    aspect_ratio_range=None,
    area_range=None,
    max_attempts=None,
    use_image_if_no_bounding_boxes=None,
    name=None
)
```

Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. *data augmentation*. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an `image_size`, `bounding_boxes` and a series of constraints.

The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: `begin`, `size` and `bboxes`. The first 2 tensors can be fed directly into [`tf.slice`](../slice) to crop the image. The latter may be supplied to [`tf.image.draw_bounding_boxes`](draw_bounding_boxes) to visualize what the bounding box looks like.

Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and the height of the underlying image.

For example,

```
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes,
    min_object_covered=0.1)

# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
                                              bbox_for_draw)
tf.compat.v1.summary.image('images_with_box', image_with_box)

# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)
```

Note that if no bounding box information is available, setting `use_image_if_no_bounding_boxes = true` will assume there is a single implicit bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is false and no bounding boxes are supplied, an error is raised.

For producing deterministic results given a `seed` value, use [`tf.image.stateless_sample_distorted_bounding_box`](stateless_sample_distorted_bounding_box). Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed).

| Args | | `image_size` | A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`. | | `bounding_boxes` | A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]` describing the N bounding boxes associated with the image. | | `seed` | An optional `int`. Defaults to `0`. If `seed` is set to non-zero, the random number generator is seeded by the given `seed`. Otherwise, it is seeded by a random seed. | | `min_object_covered` | A Tensor of type `float32`. Defaults to `0.1`. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied. | | `aspect_ratio_range` | An optional list of `floats`. Defaults to `[0.75, 1.33]`. The cropped area of the image must have an aspect `ratio = width / height` within this range. | | `area_range` | An optional list of `floats`. Defaults to `[0.05, 1]`. The cropped area of the image must contain a fraction of the supplied image within this range. | | `max_attempts` | An optional `int`. Defaults to `100`. Number of attempts at generating a cropped region of the image of the specified constraints. After `max_attempts` failures, return the entire image. | | `use_image_if_no_bounding_boxes` | An optional `bool`. Defaults to `False`. Controls behavior if no bounding boxes supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error. | | `name` | A name for the operation (optional). | | Returns | | A tuple of `Tensor` objects (begin, size, bboxes). | | `begin` | A `Tensor`. Has the same type as `image_size`. 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to [`tf.slice`](../slice). | | `size` | A `Tensor`. Has the same type as `image_size`. 1-D, containing `[target_height, target_width, -1]`. Provide as input to [`tf.slice`](../slice). | | `bboxes` | A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing the distorted bounding box. Provide as input to [`tf.image.draw_bounding_boxes`](draw_bounding_boxes). | | Raises | | `ValueError` | If no seed is specified and op determinism is enabled. |

tensorflow tf.image.generate_bounding_box_proposals tf.image.generate\_bounding\_box\_proposals
===========================================

Generate bounding box proposals from encoded bounding boxes.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.image.generate_bounding_box_proposals`](https://www.tensorflow.org/api_docs/python/tf/image/generate_bounding_box_proposals)

```
tf.image.generate_bounding_box_proposals(
    scores,
    bbox_deltas,
    image_info,
    anchors,
    nms_threshold=0.7,
    pre_nms_topn=6000,
    min_size=16,
    post_nms_topn=300,
    name=None
)
```

| Args | | `scores` | A 4-D float `Tensor` of shape `[num_images, height, width, num_anchors]` containing scores of the boxes for given anchors, can be unsorted. | | `bbox_deltas` | A 4-D float `Tensor` of shape `[num_images, height, width, 4 x num_anchors]` encoding boxes with respect to each anchor. Coordinates are given in the form `[dy, dx, dh, dw]`. | | `image_info` | A 2-D float `Tensor` of shape `[num_images, 5]` containing image information: height, width, and scale. | | `anchors` | A 2-D float `Tensor` of shape `[num_anchors, 4]` describing the anchor boxes. Boxes are formatted in the form `[y1, x1, y2, x2]`. | | `nms_threshold` | A scalar float `Tensor` for the non-maximal-suppression threshold. Defaults to 0.7. | | `pre_nms_topn` | A scalar int `Tensor` for the number of top scoring boxes to be used as input. Defaults to 6000. | | `min_size` | A scalar float `Tensor`. Any box that has a smaller size than min\_size will be discarded. Defaults to 16. | | `post_nms_topn` | An integer. Maximum number of rois in the output. | | `name` | A name for this operation (optional). | | Returns | | `rois` | Region of interest boxes sorted by their scores. | | `roi_probabilities` | Scores of the boxes in the `rois` tensor. |
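To make the `tf.image.sample_distorted_bounding_box` workflow above fully self-contained, here is a minimal runnable sketch; the image contents, box coordinates, coverage fraction, and seed are illustrative assumptions, not values prescribed by the API:

```
import tensorflow as tf

# A random 100x100 RGB image and one ground-truth box given as
# [y_min, x_min, y_max, x_max] in relative coordinates.
image = tf.random.uniform([100, 100, 3], maxval=256, dtype=tf.float32)
bounding_boxes = tf.constant([[[0.1, 0.2, 0.5, 0.9]]])  # shape [batch=1, N=1, 4]

# Sample a crop window covering at least 40% of the supplied box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image), bounding_boxes=bounding_boxes,
    min_object_covered=0.4, seed=42)

# `begin` and `size` feed directly into tf.slice to produce the crop.
distorted_image = tf.slice(image, begin, size)
```

The third output, `bbox_for_draw`, has shape `[1, 1, 4]` and can be passed to `tf.image.draw_bounding_boxes` to visualize the sampled window.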
tensorflow tf.image.sobel_edges tf.image.sobel\_edges
=====================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L4636-L4702) | Returns a tensor holding Sobel edge maps.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.image.sobel_edges`](https://www.tensorflow.org/api_docs/python/tf/image/sobel_edges)

```
tf.image.sobel_edges(
    image
)
```

#### Example usage:

For general usage, `image` would be loaded from a file as below:

```
image_bytes = tf.io.read_file(path_to_image_file)
image = tf.image.decode_image(image_bytes)
image = tf.cast(image, tf.float32)
image = tf.expand_dims(image, 0)
```

But for demo purposes, we are using randomly generated values for `image`:

```
image = tf.random.uniform(
    maxval=255, shape=[1, 28, 28, 3], dtype=tf.float32)
sobel = tf.image.sobel_edges(image)
sobel_y = np.asarray(sobel[0, :, :, :, 0])  # sobel in y-direction
sobel_x = np.asarray(sobel[0, :, :, :, 1])  # sobel in x-direction
```

For displaying the sobel results, PIL's [Image Module](https://pillow.readthedocs.io/en/stable/reference/Image.html) can be used:

```
# Display edge maps for the first channel (at index 0)
Image.fromarray(sobel_y[..., 0] / 4 + 0.5).show()
Image.fromarray(sobel_x[..., 0] / 4 + 0.5).show()
```

| Args | | `image` | Image tensor with shape [batch\_size, h, w, d] and type float32 or float64. The image(s) must be 2x2 or larger. | | Returns | | Tensor holding edge maps for each channel. Returns a tensor with shape [batch\_size, h, w, d, 2] where the last two dimensions hold [[dy[0], dx[0]], [dy[1], dx[1]], ..., [dy[d-1], dx[d-1]]] calculated using the Sobel filter. |

tensorflow tf.image.random_hue tf.image.random\_hue
====================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L2606-L2652) | Adjust the hue of RGB images by a random factor.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.image.random_hue`](https://www.tensorflow.org/api_docs/python/tf/image/random_hue)

```
tf.image.random_hue(
    image, max_delta, seed=None
)
```

Equivalent to `adjust_hue()` but uses a `delta` randomly picked in the interval `[-max_delta, max_delta)`.

`max_delta` must be in the interval `[0, 0.5]`.

#### Usage Example:

```
x = [[[1.0, 2.0, 3.0],
      [4.0, 5.0, 6.0]],
     [[7.0, 8.0, 9.0],
      [10.0, 11.0, 12.0]]]
tf.image.random_hue(x, 0.2)
<tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=...>
```

For producing deterministic results given a `seed` value, use [`tf.image.stateless_random_hue`](stateless_random_hue). Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set\_seed).

| Args | | `image` | RGB image or images. The size of the last dimension must be 3. | | `max_delta` | float. The maximum value for the random delta. | | `seed` | An operation-specific seed. It will be used in conjunction with the graph-level seed to determine the real seeds that will be used in this operation. Please see the documentation of set\_random\_seed for its interaction with the graph-level random seed. | | Returns | | Adjusted image(s), same shape and DType as `image`. | | Raises | | `ValueError` | if `max_delta` is invalid. |

tensorflow tf.image.draw_bounding_boxes tf.image.draw\_bounding\_boxes
==============================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L5711-L5763) | Draw bounding boxes on a batch of images.

```
tf.image.draw_bounding_boxes(
    images, boxes, colors, name=None
)
```

Outputs a copy of `images` but draws on top of the pixels zero or more bounding boxes specified by the locations in `boxes`. The coordinates of each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and the height of the underlying image.

For example, if an image is 100 x 200 pixels (height x width) and the bounding box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).

Parts of the bounding box may fall outside the image.

| Args | | `images` | A `Tensor`. Must be one of the following types: `float32`, `half`. 4-D with shape `[batch, height, width, depth]`. A batch of images. | | `boxes` | A `Tensor` of type `float32`. 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding boxes. | | `colors` | A `Tensor` of type `float32`. 2-D. A list of RGBA colors to cycle through for the boxes. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `images`. |

#### Usage Example:

```
# create an empty image
img = tf.zeros([1, 3, 3, 3])

# draw a box around the image
box = np.array([0, 0, 1, 1])
boxes = box.reshape([1, 1, 4])

# alternate between red and blue
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])

tf.image.draw_bounding_boxes(img, boxes, colors)
<tf.Tensor: shape=(1, 3, 3, 3), dtype=float32, numpy=
array([[[[1., 0., 0.],
         [1., 0., 0.],
         [1., 0., 0.]],
        [[1., 0., 0.],
         [0., 0., 0.],
         [1., 0., 0.]],
        [[1., 0., 0.],
         [1., 0., 0.],
         [1., 0., 0.]]]], dtype=float32)>
```

tensorflow tf.image.adjust_saturation tf.image.adjust\_saturation
===========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/image_ops_impl.py#L3072-L3122) | Adjust saturation of RGB images.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.image.adjust_saturation`](https://www.tensorflow.org/api_docs/python/tf/image/adjust_saturation)

```
tf.image.adjust_saturation(
    image, saturation_factor, name=None
)
```

This is a convenience method that converts RGB images to float representation, converts them to HSV, scales the saturation channel, converts back to RGB and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

`image` is an RGB image or images. The image saturation is adjusted by converting the images to HSV and multiplying the saturation (S) channel by `saturation_factor` and clipping. The images are then converted back to RGB.

#### Usage Example:

```
x = [[[1.0, 2.0, 3.0],
      [4.0, 5.0, 6.0]],
     [[7.0, 8.0, 9.0],
      [10.0, 11.0, 12.0]]]
tf.image.adjust_saturation(x, 0.5)
<tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=
array([[[ 2. ,  2.5,  3. ],
        [ 5. ,  5.5,  6. ]],
       [[ 8. ,  8.5,  9. ],
        [11. , 11.5, 12. ]]], dtype=float32)>
```

| Args | | `image` | RGB image or images. The size of the last dimension must be 3. | | `saturation_factor` | float. Factor to multiply the saturation by. | | `name` | A name for this operation (optional). | | Returns | | Adjusted image(s), same shape and DType as `image`. | | Raises | | `InvalidArgumentError` | input must have 3 channels |

tensorflow tf.dtypes.saturate_cast tf.dtypes.saturate\_cast
========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1008-L1039) | Performs a safe saturating cast of `value` to `dtype`.

#### View aliases

**Main aliases**

[`tf.saturate_cast`](https://www.tensorflow.org/api_docs/python/tf/dtypes/saturate_cast)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.dtypes.saturate_cast`](https://www.tensorflow.org/api_docs/python/tf/dtypes/saturate_cast), [`tf.compat.v1.saturate_cast`](https://www.tensorflow.org/api_docs/python/tf/dtypes/saturate_cast)

```
tf.dtypes.saturate_cast(
    value, dtype, name=None
)
```

This function casts the input to `dtype` without applying any scaling. If there is a danger that values would overflow or underflow in the cast, this op applies the appropriate clamping before the cast.

| Args | | `value` | A `Tensor`. | | `dtype` | The desired output `DType`. | | `name` | A name for the operation (optional). | | Returns | | `value` safely cast to `dtype`. |

tensorflow tf.dtypes.DType tf.dtypes.DType
===============

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/dtypes.py#L33-L199) | Represents the type of the elements in a `Tensor`.

#### View aliases

**Main aliases**

[`tf.DType`](https://www.tensorflow.org/api_docs/python/tf/dtypes/DType)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.DType`](https://www.tensorflow.org/api_docs/python/tf/dtypes/DType), [`tf.compat.v1.dtypes.DType`](https://www.tensorflow.org/api_docs/python/tf/dtypes/DType)

```
tf.dtypes.DType()
```

`DType`s are used to specify the output data type for operations which require it, or to inspect the data type of existing `Tensor`s.

#### Examples:

```
tf.constant(1, dtype=tf.int64)
<tf.Tensor: shape=(), dtype=int64, numpy=1>
tf.constant(1.0).dtype
tf.float32
```

See [`tf.dtypes`](../dtypes) for a complete list of `DType`s defined.

| Attributes | | `as_datatype_enum` | Returns a `types_pb2.DataType` enum value based on this data type. | | `as_numpy_dtype` | Returns a Python `type` object based on this `DType`. | | `base_dtype` | Returns a non-reference `DType` based on this `DType`. | | `is_bool` | Returns whether this is a boolean data type. | | `is_complex` | Returns whether this is a complex floating point type. | | `is_floating` | Returns whether this is a (non-quantized, real) floating point type. | | `is_integer` | Returns whether this is a (non-quantized) integer type. | | `is_numpy_compatible` | Returns whether this data type has a compatible NumPy data type. | | `is_quantized` | Returns whether this is a quantized data type. | | `is_unsigned` | Returns whether this type is unsigned. Non-numeric, unordered, and quantized types are not considered unsigned, and this function returns `False`. | | `limits` | Return intensity limits, i.e. (min, max) tuple, of the dtype. Args: clip\_negative (bool, optional): If True, clip the negative range (i.e. return 0 for min intensity) even if the image dtype allows negative values. Returns: min, max (tuple): Lower and upper intensity limits. | | `max` | Returns the maximum representable value in this data type. | | `min` | Returns the minimum representable value in this data type. | | `name` | | | `real_dtype` | Returns the `DType` corresponding to this `DType`'s real part. | | `size` | |

Methods
-------

### `is_compatible_with`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/dtypes.py#L155-L173)

```
is_compatible_with(
    other
)
```

Returns True if the `other` DType will be converted to this DType.

The conversion rules are as follows:

```
DType(T) .is_compatible_with(DType(T)) == True
```

| Args | | `other` | A `DType` (or object that may be converted to a `DType`). | | Returns | | True if a Tensor of the `other` `DType` will be implicitly converted to this `DType`. |

### `__eq__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/dtypes.py#L175-L186)

```
__eq__(
    other
)
```

Returns True iff this DType refers to the same type as `other`.

### `__ne__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/dtypes.py#L188-L190)

```
__ne__(
    other
)
```

Returns True iff self != other.

tensorflow tf.dtypes.complex tf.dtypes.complex
=================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L698-L743) | Converts two real numbers to a complex number.

#### View aliases

**Main aliases**

[`tf.complex`](https://www.tensorflow.org/api_docs/python/tf/dtypes/complex)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.complex`](https://www.tensorflow.org/api_docs/python/tf/dtypes/complex), [`tf.compat.v1.dtypes.complex`](https://www.tensorflow.org/api_docs/python/tf/dtypes/complex)

```
tf.dtypes.complex(
    real, imag, name=None
)
```

Given a tensor `real` representing the real part of a complex number, and a tensor `imag` representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form `a + bj`, where *a* represents the `real` part and *b* represents the `imag` part.

The input tensors `real` and `imag` must have the same shape.

#### For example:

```
real = tf.constant([2.25, 3.25])
imag = tf.constant([4.75, 5.75])
tf.complex(real, imag)  # [2.25+4.75j, 3.25+5.75j]
```

| Args | | `real` | A `Tensor`. Must be one of the following types: `float32`, `float64`. | | `imag` | A `Tensor`. Must have the same type as `real`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `complex64` or `complex128`. | | Raises | | `TypeError` | Real and imag must be correct types |

tensorflow tf.dtypes.as_dtype tf.dtypes.as\_dtype
===================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/dtypes.py#L678-L722) | Converts the given `type_value` to a `DType`.

#### View aliases

**Main aliases**

[`tf.as_dtype`](https://www.tensorflow.org/api_docs/python/tf/dtypes/as_dtype)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.as_dtype`](https://www.tensorflow.org/api_docs/python/tf/dtypes/as_dtype), [`tf.compat.v1.dtypes.as_dtype`](https://www.tensorflow.org/api_docs/python/tf/dtypes/as_dtype)

```
tf.dtypes.as_dtype(
    type_value
)
```

> **Note:** `DType` values are interned. When passed a new `DType` object, `as_dtype` always returns the interned value.

| Args | | `type_value` | A value that can be converted to a [`tf.DType`](dtype) object. This may currently be a [`tf.DType`](dtype) object, a [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), a string type name, or a [`numpy.dtype`](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html). | | Returns | | A `DType` corresponding to `type_value`. | | Raises | | `TypeError` | If `type_value` cannot be converted to a `DType`. |

tensorflow tf.quantization.fake_quant_with_min_max_vars_gradient tf.quantization.fake\_quant\_with\_min\_max\_vars\_gradient
===========================================================

Compute gradients for a FakeQuantWithMinMaxVars operation.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.fake_quant_with_min_max_vars_gradient`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_gradient), [`tf.compat.v1.quantization.fake_quant_with_min_max_vars_gradient`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_gradient)

```
tf.quantization.fake_quant_with_min_max_vars_gradient(
    gradients, inputs, min, max, num_bits=8, narrow_range=False, name=None
)
```

| Args | | `gradients` | A `Tensor` of type `float32`. Backpropagated gradients above the FakeQuantWithMinMaxVars operation. | | `inputs` | A `Tensor` of type `float32`. Values passed as inputs to the FakeQuantWithMinMaxVars operation. min, max: Quantization interval, scalar floats. | | `min` | A `Tensor` of type `float32`. | | `max` | A `Tensor` of type `float32`. | | `num_bits` | An optional `int`. Defaults to `8`. The bitwidth of the quantization; between 2 and 8, inclusive. | | `narrow_range` | An optional `bool`. Defaults to `False`. Whether to quantize into 2^num\_bits - 1 distinct values. | | `name` | A name for the operation (optional). | | Returns | | A tuple of `Tensor` objects (backprops\_wrt\_input, backprop\_wrt\_min, backprop\_wrt\_max). | | `backprops_wrt_input` | A `Tensor` of type `float32`. | | `backprop_wrt_min` | A `Tensor` of type `float32`. | | `backprop_wrt_max` | A `Tensor` of type `float32`. |

tensorflow tf.quantization.fake_quant_with_min_max_args tf.quantization.fake\_quant\_with\_min\_max\_args
=================================================

Fake-quantize the 'inputs' tensor of type float to an 'outputs' tensor of the same type.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.fake_quant_with_min_max_args`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args), [`tf.compat.v1.quantization.fake_quant_with_min_max_args`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args)

```
tf.quantization.fake_quant_with_min_max_args(
    inputs, min=-6, max=6, num_bits=8, narrow_range=False, name=None
)
```

Attributes

* `[min; max]` define the clamping range for the `inputs` data.
* `inputs` values are quantized into the quantization range ( `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval.
* `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, `min` and `max` values are adjusted with the following logic. It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, the behavior can be unexpected:

* If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.
* If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.
* If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1)`, `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.

Quantization is called fake since the output is still in floating point.

| Args | | `inputs` | A `Tensor` of type `float32`. | | `min` | An optional `float`. Defaults to `-6`. | | `max` | An optional `float`. Defaults to `6`. | | `num_bits` | An optional `int`. Defaults to `8`. | | `narrow_range` | An optional `bool`. Defaults to `False`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `float32`. |

tensorflow tf.quantization.quantized_concat tf.quantization.quantized\_concat
=================================

Concatenates quantized tensors along one dimension.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.quantization.quantized_concat`](https://www.tensorflow.org/api_docs/python/tf/quantization/quantized_concat), [`tf.compat.v1.quantized_concat`](https://www.tensorflow.org/api_docs/python/tf/quantization/quantized_concat)

```
tf.quantization.quantized_concat(
    concat_dim, values, input_mins, input_maxes, name=None
)
```

| Args | | `concat_dim` | A `Tensor` of type `int32`. 0-D. The dimension along which to concatenate. Must be in the range [0, rank(values)). | | `values` | A list of at least 2 `Tensor` objects with the same type. The `N` Tensors to concatenate. Their ranks and types must match, and their sizes must match in all dimensions except `concat_dim`. | | `input_mins` | A list with the same length as `values` of `Tensor` objects with type `float32`. The minimum scalar values for each of the input tensors. | | `input_maxes` | A list with the same length as `values` of `Tensor` objects with type `float32`. The maximum scalar values for each of the input tensors. | | `name` | A name for the operation (optional). | | Returns | | A tuple of `Tensor` objects (output, output\_min, output\_max). | | `output` | A `Tensor`. Has the same type as `values`. | | `output_min` | A `Tensor` of type `float32`. | | `output_max` | A `Tensor` of type `float32`. |
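As a rough, runnable sketch of how `tf.quantization.quantized_concat` composes with `tf.quantization.quantize` to produce its quantized inputs (the common float range `[0, 4]` and the shapes below are illustrative assumptions):

```
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0, 4.0]])

# Quantize both inputs to quint8 over an assumed shared float range [0, 4].
qa, qa_min, qa_max = tf.quantization.quantize(a, 0.0, 4.0, tf.quint8)
qb, qb_min, qb_max = tf.quantization.quantize(b, 0.0, 4.0, tf.quint8)

# Concatenate along dimension 0; the op also returns the combined range.
out, out_min, out_max = tf.quantization.quantized_concat(
    0, [qa, qb], [qa_min, qb_min], [qa_max, qb_max])
# out has shape [2, 2] and dtype quint8.
```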
tensorflow tf.quantization.quantize tf.quantization.quantize
========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L6125-L6161) | Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.quantization.quantize`](https://www.tensorflow.org/api_docs/python/tf/quantization/quantize), [`tf.compat.v1.quantize`](https://www.tensorflow.org/api_docs/python/tf/quantization/quantize)

```
tf.quantization.quantize(
    input,
    min_range,
    max_range,
    T,
    mode='MIN_COMBINED',
    round_mode='HALF_AWAY_FROM_ZERO',
    name=None,
    narrow_range=False,
    axis=None,
    ensure_minimum_range=0.01
)
```

[min\_range, max\_range] are scalar floats that specify the range for the 'input' data. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents. The 'round\_mode' attribute controls which rounding tie-breaking algorithm is used when rounding float values to their quantized equivalents.

In 'MIN\_COMBINED' mode, each value of the tensor will undergo the following:

```
out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8: out[i] -= (range(T) + 1) / 2.0
```

here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`

*MIN\_COMBINED Mode Example*

Assume the input is type float and has a possible range of [0.0, 6.0] and the output type is quint8 ([0, 255]). The min\_range and max\_range values should be specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each value of the input by 255/6 and cast to quint8. If the output type is qint8 ([-128, 127]), the operation will additionally subtract 128 from each value prior to casting, so that the range of values aligns with the range of qint8.

If the mode is 'MIN\_FIRST', then this approach is used:

```
num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = num_discrete_values / range
quantized = round(input * range_scale) - round(range_min * range_scale) +
  numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())
```

The biggest difference between this and MIN\_COMBINED is that the minimum range is rounded first, before it's subtracted from the rounded value. With MIN\_COMBINED, a small bias is introduced where repeated iterations of quantizing and dequantizing will introduce a larger and larger error.

*SCALED mode Example*

`SCALED` mode matches the quantization approach used in `QuantizeAndDequantize{V2|V3}`.

If the mode is `SCALED`, the quantization is performed by multiplying each input value by a scaling\_factor. The scaling\_factor is determined from `min_range` and `max_range` to be as large as possible such that the range from `min_range` to `max_range` is representable within values of type T.

```
const int min_T = std::numeric_limits<T>::min();
const int max_T = std::numeric_limits<T>::max();
const float max_float = std::numeric_limits<float>::max();

const float scale_factor_from_min_side =
    (min_T * min_range > 0) ? min_T / min_range : max_float;
const float scale_factor_from_max_side =
    (max_T * max_range > 0) ? max_T / max_range : max_float;

const float scale_factor = std::min(scale_factor_from_min_side,
                                    scale_factor_from_max_side);
```

We next use the scale\_factor to adjust min\_range and max\_range as follows:

```
min_range = min_T / scale_factor;
max_range = max_T / scale_factor;
```

e.g. if T = qint8, and initially min\_range = -10, and max\_range = 9, we would compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling\_factor = 12.8. In this case, min\_range would remain -10, but max\_range would be adjusted to 127 / 12.8 = 9.921875.

So we will quantize input values in the range (-10, 9.921875) to (-128, 127).

The input tensor can now be quantized by clipping values to the range `min_range` to `max_range`, then multiplying by scale\_factor as follows:

```
result = round(min(max_range, max(min_range, input)) * scale_factor)
```

The adjusted `min_range` and `max_range` are returned as outputs 2 and 3 of this operation. These outputs should be used as the range for any further calculations.

*narrow\_range (bool) attribute*

If true, we do not use the minimum quantized value. i.e. for int8 quantized output, it would be restricted to the range -127..127 instead of the full -128..127 range. This is provided for compatibility with certain inference backends. (Only applies to SCALED mode)

*axis (int) attribute*

An optional `axis` attribute can specify a dimension index of the input tensor, such that quantization ranges will be calculated and applied separately for each slice of the tensor along that dimension. This is useful for per-channel quantization. If `axis` is specified, `min_range` and `max_range` must be 1-D tensors whose size matches the `axis` dimension of the input; if `axis` is None, per-tensor quantization is performed as normal.

*ensure\_minimum\_range (float) attribute*

Ensures the minimum quantization range is at least this value. The legacy default value for this is 0.01, but it is strongly suggested to set it to 0 for new uses.

| Args | | `input` | A `Tensor` of type `float32`. | | `min_range` | A `Tensor` of type `float32`. The minimum value of the quantization range. This value may be adjusted by the op depending on other parameters. The adjusted value is written to `output_min`. If the `axis` attribute is specified, this must be a 1-D tensor whose size matches the `axis` dimension of the input and output tensors. | | `max_range` | A `Tensor` of type `float32`. The maximum value of the quantization range. This value may be adjusted by the op depending on other parameters. The adjusted value is written to `output_max`. If the `axis` attribute is specified, this must be a 1-D tensor whose size matches the `axis` dimension of the input and output tensors. | | `T` | A [`tf.DType`](../dtypes/dtype) from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. | | `mode` | An optional `string` from: `"MIN_COMBINED", "MIN_FIRST", "SCALED"`. Defaults to `"MIN_COMBINED"`. | | `round_mode` | An optional `string` from: `"HALF_AWAY_FROM_ZERO", "HALF_TO_EVEN"`. Defaults to `"HALF_AWAY_FROM_ZERO"`. | | `narrow_range` | An optional `bool`. Defaults to `False`. | | `axis` | An optional `int`. Defaults to `-1`. | | `ensure_minimum_range` | An optional `float`. Defaults to `0.01`. | | `name` | A name for the operation (optional). | | Returns | | A tuple of `Tensor` objects (output, output\_min, output\_max). | | `output` | A `Tensor` of type `T`. | | `output_min` | A `Tensor` of type `float32`. | | `output_max` | A `Tensor` of type `float32`. |

tensorflow tf.quantization.fake_quant_with_min_max_vars tf.quantization.fake\_quant\_with\_min\_max\_vars
=================================================

Fake-quantize the 'inputs' tensor of type float via global float scalars `min` and `max`.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.fake_quant_with_min_max_vars`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars), [`tf.compat.v1.quantization.fake_quant_with_min_max_vars`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars)

```
tf.quantization.fake_quant_with_min_max_vars(
    inputs, min, max, num_bits=8, narrow_range=False, name=None
)
```

Fake-quantize the `inputs` tensor of type float via global float scalars `min` and `max` to `outputs` tensor of same shape as `inputs`.

Attributes

* `[min; max]` define the clamping range for the `inputs` data.
* `inputs` values are quantized into the quantization range ( `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval.
* `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, `min` and `max` values are adjusted with the following logic. It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, the behavior can be unexpected:

* If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.
* If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.
* If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1)`, `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.

This operation has a gradient and thus allows for training `min` and `max` values.

| Args | | `inputs` | A `Tensor` of type `float32`. | | `min` | A `Tensor` of type `float32`. | | `max` | A `Tensor` of type `float32`. | | `num_bits` | An optional `int`. Defaults to `8`. | | `narrow_range` | An optional `bool`. Defaults to `False`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `float32`. |

tensorflow tf.quantization.fake_quant_with_min_max_args_gradient tf.quantization.fake\_quant\_with\_min\_max\_args\_gradient
===========================================================

Compute gradients for a FakeQuantWithMinMaxArgs operation.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.fake_quant_with_min_max_args_gradient`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args_gradient), [`tf.compat.v1.quantization.fake_quant_with_min_max_args_gradient`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args_gradient)

```
tf.quantization.fake_quant_with_min_max_args_gradient(
    gradients, inputs, min=-6, max=6, num_bits=8, narrow_range=False, name=None
)
```

| Args | | `gradients` | A `Tensor` of type `float32`. Backpropagated gradients above the FakeQuantWithMinMaxArgs operation. | | `inputs` | A `Tensor` of type `float32`. Values passed as inputs to the FakeQuantWithMinMaxArgs operation. | | `min` | An optional `float`. Defaults to `-6`. | | `max` | An optional `float`. Defaults to `6`. | | `num_bits` | An optional `int`. Defaults to `8`. | | `narrow_range` | An optional `bool`. Defaults to `False`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `float32`. |

tensorflow tf.quantization.quantize_and_dequantize_v2 tf.quantization.quantize\_and\_dequantize\_v2
=============================================

Quantizes then dequantizes a tensor.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.quantization.quantize_and_dequantize_v2`](https://www.tensorflow.org/api_docs/python/tf/quantization/quantize_and_dequantize_v2)

```
tf.quantization.quantize_and_dequantize_v2(
    input,
    input_min,
    input_max,
    signed_input=True,
    num_bits=8,
    range_given=False,
    round_mode='HALF_TO_EVEN',
    name=None,
    narrow_range=False,
    axis=None
)
```

Updates the gradient definition for quantization that is outside the range to be 0. To simulate the V1 behavior of tf.quantization.quantize\_and\_dequantize(...), use tf.grad\_pass\_through(tf.quantization.quantize\_and\_dequantize\_v2)(...).

#### Example usage:

```
def getQuantizeOp(input):
    input_tensor = tf.placeholder(tf.float32, shape=[4, 4])
    net = tf.quantization.quantize_and_dequantize(input,
                                                  input_min=min_threshold,
                                                  input_max=max_threshold,
                                                  range_given=True)

# To simulate v1 behavior:
def testDecomposeQuantizeDequantize(self):
    def f(input_tensor):
      return tf.quantization.quantize_and_dequantize_v2(input_tensor,
                                                        input_min=5.0,
                                                        input_max=-10.0,
                                                        range_given=True)
    input_tensor = tf.placeholder(tf.float32, shape=[4, 4])
    net = tf.grad_pass_through(f)(input_tensor)
```

| Args | | `input` | A `Tensor` to quantize and dequantize. | | `input_min` | If range\_given=True, the minimum input value, that needs to be represented in the quantized representation. If axis is specified, this should be a vector of minimum values for each slice along axis. | | `input_max` | If range\_given=True, the maximum input value that needs to be represented in the quantized representation. If axis is specified, this should be a vector of maximum values for each slice along axis. | | `signed_input` | Whether the quantization is signed or unsigned. | | `num_bits` | The bitwidth of the quantization. | | `range_given` | If true use `input_min` and `input_max` for the range of the input, otherwise determine min and max from the input `Tensor`. | | `round_mode` | Rounding mode when rounding from float values to quantized ones. One of ['HALF\_TO\_EVEN', 'HALF\_UP']. | | `name` | Optional name for the operation. | | `narrow_range` | If true, then the absolute value of the quantized minimum value is the same as the quantized maximum value, instead of 1 greater. i.e. for 8 bit quantization, the minimum value is -127 instead of -128. | | `axis` | Integer. If specified, refers to a dimension of the input tensor, such that quantization will be per slice along that dimension. | | Returns | | A `Tensor`. Each element is the result of quantizing and dequantizing the corresponding element of `input`. |

tensorflow tf.quantization.fake_quant_with_min_max_vars_per_channel tf.quantization.fake\_quant\_with\_min\_max\_vars\_per\_channel
===============================================================

Fake-quantize the 'inputs' tensor of type float via per-channel floats `min` and `max`.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.fake_quant_with_min_max_vars_per_channel`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel), [`tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel)

```
tf.quantization.fake_quant_with_min_max_vars_per_channel(
    inputs, min, max, num_bits=8, narrow_range=False, name=None
)
```

Fake-quantize the `inputs` tensor of type float per-channel and one of the shapes: `[d]`, `[b, d]` `[b, h, w, d]` via per-channel floats `min` and `max` of shape `[d]` to `outputs` tensor of same shape as `inputs`.

Attributes

* `[min; max]` define the clamping range for the `inputs` data.
* `inputs` values are quantized into the quantization range ( `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval.
* `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, `min` and `max` values are adjusted with the following logic. It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, the behavior can be unexpected:

* If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.
* If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.
* If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1)`, `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.

This operation has a gradient and thus allows for training `min` and `max` values.

| Args | | `inputs` | A `Tensor` of type `float32`. | | `min` | A `Tensor` of type `float32`. | | `max` | A `Tensor` of type `float32`. | | `num_bits` | An optional `int`. Defaults to `8`. | | `narrow_range` | An optional `bool`. Defaults to `False`. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `float32`. |

tensorflow tf.quantization.dequantize tf.quantization.dequantize
==========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L6164-L6195) | Dequantize the 'input' tensor into a float or bfloat16 Tensor.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.dequantize`](https://www.tensorflow.org/api_docs/python/tf/quantization/dequantize), [`tf.compat.v1.quantization.dequantize`](https://www.tensorflow.org/api_docs/python/tf/quantization/dequantize)

```
tf.quantization.dequantize(
    input,
    min_range,
    max_range,
    mode='MIN_COMBINED',
    name=None,
    axis=None,
    narrow_range=False,
    dtype=tf.dtypes.float32
)
```

[min\_range, max\_range] are scalar floats that specify the range for the output. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.

In 'MIN\_COMBINED' mode, each value of the tensor will undergo the following:

```
if T == qint8: in[i] += (range(T) + 1) / 2.0
out[i] = min_range + (in[i] * (max_range - min_range) / range(T))
```

here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`

*MIN\_COMBINED Mode Example*

If the input comes from a QuantizedRelu6, the output type is quint8 (range of 0-255) but the possible range of QuantizedRelu6 is 0-6. The min\_range and max\_range values are therefore 0.0 and 6.0. Dequantize on quint8 will take each value, cast to float, and multiply by 6 / 255. Note that if the quantized type is qint8, the operation will additionally add 128 to each value prior to casting.

If the mode is 'MIN\_FIRST', then this approach is used:

```
num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = range / num_discrete_values
const double offset_input = static_cast<double>(input) - lowest_quantized;
result = range_min + ((input - numeric_limits<T>::min()) * range_scale)
```

If the mode is `SCALED`, dequantization is performed by multiplying each input value by a scaling\_factor. (Thus an input of 0 always maps to 0.0). The scaling\_factor is determined from `min_range`, `max_range`, and `narrow_range` in a way that is compatible with `QuantizeAndDequantize{V2|V3}` and `QuantizeV2`, using the following algorithm:

```
const int min_expected_T = std::numeric_limits<T>::min() +
    (narrow_range ? 1 : 0);
const int max_expected_T = std::numeric_limits<T>::max();

const float scale_factor =
    (std::numeric_limits<T>::min() == 0)
        ? (max_range / max_expected_T)
        : std::max(min_range / min_expected_T, max_range / max_expected_T);
```

| Args | | `input` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. | | `min_range` | A `Tensor` of type `float32`. The minimum scalar value possibly produced for the input. | | `max_range` | A `Tensor` of type `float32`. The maximum scalar value possibly produced for the input. | | `mode` | An optional `string` from: `"MIN_COMBINED", "MIN_FIRST", "SCALED"`. Defaults to `"MIN_COMBINED"`. | | `narrow_range` | An optional `bool`. Defaults to `False`. | | `axis` | An optional `int`. Defaults to `-1`. | | `dtype` | An optional [`tf.DType`](../dtypes/dtype) from: `tf.bfloat16, tf.float32`. Defaults to [`tf.float32`](../../tf#float32). Type of the output tensor. Currently Dequantize supports float and bfloat16. If 'dtype' is 'bfloat16', it only supports 'MIN\_COMBINED' mode. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `dtype`. |
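A short, runnable round trip through `tf.quantization.quantize` and `tf.quantization.dequantize` in the default 'MIN\_COMBINED' mode; the float range `[0, 6]` and the input values are illustrative assumptions:

```
import tensorflow as tf

x = tf.constant([0.0, 1.0, 2.5, 6.0])

# Quantize to quint8 over the assumed float range [0, 6] ...
q, out_min, out_max = tf.quantization.quantize(x, 0.0, 6.0, tf.quint8)

# ... then dequantize using the (possibly adjusted) range the op returned.
x_approx = tf.quantization.dequantize(q, out_min, out_max)
# x_approx is float32 and close to x, up to quantization error.
```

Using the returned `out_min`/`out_max` rather than the original bounds follows the guidance above that the adjusted range should be used for any further calculations.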
tensorflow tf.quantization.quantize_and_dequantize tf.quantization.quantize\_and\_dequantize ========================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L6201-L6265) | Quantizes then dequantizes a tensor. (deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.quantization.quantize_and_dequantize`](https://www.tensorflow.org/api_docs/python/tf/quantization/quantize_and_dequantize) ``` tf.quantization.quantize_and_dequantize( input, input_min, input_max, signed_input=True, num_bits=8, range_given=False, round_mode='HALF_TO_EVEN', name=None, narrow_range=False, axis=None ) ``` | Args | | `input` | A `Tensor` to quantize and dequantize. | | `input_min` | If range\_given=True, the minimum input value, that needs to be represented in the quantized representation. If axis is specified, this should be a vector of minimum values for each slice along axis. | | `input_max` | If range\_given=True, the maximum input value that needs to be represented in the quantized representation. If axis is specified, this should be a vector of maximum values for each slice along axis. | | `signed_input` | True if the quantization is signed or unsigned. | | `num_bits` | The bitwidth of the quantization. | | `range_given` | If true use `input_min` and `input_max` for the range of the input, otherwise determine min and max from the input `Tensor`. | | `round_mode` | Rounding mode when rounding from float values to quantized ones. one of ['HALF\_TO\_EVEN', 'HALF\_UP'] | | `name` | Optional name for the operation. | | `narrow_range` | If true, then the absolute value of the quantized minimum value is the same as the quantized maximum value, instead of 1 greater. i.e. for 8 bit quantization, the minimum value is -127 instead of -128. | | `axis` | Integer. If specified, refers to a dimension of the input tensor, such that quantization will be per slice along that dimension. | | Returns | | A `Tensor`. Each element is the result of quantizing and dequantizing the corresponding element of `input`. | tensorflow tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient tf.quantization.fake\_quant\_with\_min\_max\_vars\_per\_channel\_gradient ========================================================================= Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.fake_quant_with_min_max_vars_per_channel_gradient`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient), [`tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel_gradient`](https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient) ``` tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient( gradients, inputs, min, max, num_bits=8, narrow_range=False, name=None ) ``` | Args | | `gradients` | A `Tensor` of type `float32`. Backpropagated gradients above the FakeQuantWithMinMaxVars operation, shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`. | | `inputs` | A `Tensor` of type `float32`. Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape same as `gradients`. min, max: Quantization interval, floats of shape `[d]`. | | `min` | A `Tensor` of type `float32`. 
| | `max` | A `Tensor` of type `float32`. | | `num_bits` | An optional `int`. Defaults to `8`. The bitwidth of the quantization; between 2 and 16, inclusive. | | `narrow_range` | An optional `bool`. Defaults to `False`. Whether to quantize into 2^num\_bits - 1 distinct values. | | `name` | A name for the operation (optional). | | Returns | | A tuple of `Tensor` objects (backprops\_wrt\_input, backprop\_wrt\_min, backprop\_wrt\_max). | | `backprops_wrt_input` | A `Tensor` of type `float32`. | | `backprop_wrt_min` | A `Tensor` of type `float32`. | | `backprop_wrt_max` | A `Tensor` of type `float32`. | tensorflow Module: tf.mlir.experimental Module: tf.mlir.experimental ============================ Public API for tf.mlir.experimental namespace. Functions --------- [`convert_function(...)`](experimental/convert_function): Import a ConcreteFunction and convert it to a textual MLIR module. [`convert_graph_def(...)`](experimental/convert_graph_def): Import a GraphDef and convert it to a textual MLIR module. tensorflow tf.mlir.experimental.convert_graph_def tf.mlir.experimental.convert\_graph\_def ======================================== Import a GraphDef and convert it to a textual MLIR module. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.mlir.experimental.convert_graph_def`](https://www.tensorflow.org/api_docs/python/tf/mlir/experimental/convert_graph_def) ``` tf.mlir.experimental.convert_graph_def( graph_def, pass_pipeline='tf-standard-pipeline', show_debug_info=False ) ``` This API is only intended for inspecting the internals of TensorFlow and the string returned is at the moment intended for debugging purposes. | Args | | `graph_def` | An object of type graph\_pb2.GraphDef or a textual proto representation of a valid GraphDef. | | `pass_pipeline` | A textual description of an MLIR Pass Pipeline to run on the module, see MLIR documentation for the [textual pass pipeline syntax](https://mlir.llvm.org/docs/PassManagement/#textual-pass-pipeline-specification). | | `show_debug_info` | Whether to include locations in the emitted textual form. | | Returns | | A textual representation of the MLIR module corresponding to the graphdef. | | Raises | | `InvalidArgumentError` | if graph\_def is invalid or cannot be converted to MLIR. | tensorflow tf.mlir.experimental.convert_function tf.mlir.experimental.convert\_function ====================================== Import a ConcreteFunction and convert it to a textual MLIR module. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.mlir.experimental.convert_function`](https://www.tensorflow.org/api_docs/python/tf/mlir/experimental/convert_function) ``` tf.mlir.experimental.convert_function( concrete_function, pass_pipeline='tf-standard-pipeline', show_debug_info=False ) ``` This API is only intended for inspecting the internals of TensorFlow and the string returned is at the moment intended for debugging purposes. A [tf.function](https://www.tensorflow.org/api_docs/python/tf/function) can be imported and converted from TensorFlow to TensorFlow MLIR with this API by extracting its ConcreteFunction (eagerly-executing wrapper around a [tf.Graph](https://www.tensorflow.org/api_docs/python/tf/Graph)). 
#### For example: ``` @tf.function def add(a, b): return a + b ``` ``` concrete_function = add.get_concrete_function( tf.TensorSpec(None, tf.dtypes.float32), tf.TensorSpec(None, tf.dtypes.float32)) tf.mlir.experimental.convert_function(concrete_function) '...module attributes {...} {...}...' ``` | Args | | `concrete_function` | An object of type ConcreteFunction. | | `pass_pipeline` | A textual description of an MLIR Pass Pipeline to run on the module, see MLIR documentation for the [textual pass pipeline syntax](https://mlir.llvm.org/docs/PassManagement/#textual-pass-pipeline-specification). | | `show_debug_info` | Whether to include locations in the emitted textual form. | | Returns | | A textual representation of the MLIR module corresponding to the ConcreteFunction. | | Raises | | `InvalidArgumentError` | if concrete\_function is invalid or cannot be converted to MLIR. | tensorflow tf.distribute.has_strategy tf.distribute.has\_strategy =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribution_strategy_context.py#L253-L266) | Return if there is a current non-default [`tf.distribute.Strategy`](strategy). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.has_strategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/has_strategy) ``` tf.distribute.has_strategy() ``` ``` assert not tf.distribute.has_strategy() with strategy.scope(): assert tf.distribute.has_strategy() ``` | Returns | | True if inside a `with strategy.scope():`. | tensorflow tf.distribute.Strategy tf.distribute.Strategy ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1673-L1858) | A state & compute distribution policy on a list of devices. ``` tf.distribute.Strategy( extended ) ``` See [the guide](https://www.tensorflow.org/guide/distributed_training) for overview and examples. See [`tf.distribute.StrategyExtended`](strategyextended) and [`tf.distribute`](https://www.tensorflow.org/api_docs/python/tf/distribute) for a glossary of concepts mentioned on this page such as "per-replica", *replica*, and *reduce*. #### In short: * To use it with Keras `compile`/`fit`, [please read](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_keras). * You may pass descendant of [`tf.distribute.Strategy`](strategy) to [`tf.estimator.RunConfig`](../estimator/runconfig) to specify how a [`tf.estimator.Estimator`](../estimator/estimator) should distribute its computation. See [guide](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_estimator_limited_support). * Otherwise, use [`tf.distribute.Strategy.scope`](strategy#scope) to specify that a strategy should be used when building an executing your model. (This puts you in the "cross-replica context" for this strategy, which means the strategy is put in control of things like variable placement.) * If you are writing a custom training loop, you will need to call a few more methods, [see the guide](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_custom_training_loops): + Start by creating a [`tf.data.Dataset`](../data/dataset) normally. 
+ Use [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) to convert a [`tf.data.Dataset`](../data/dataset) to something that produces "per-replica" values. If you want to manually specify how the dataset should be partitioned across replicas, use [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) instead. + Use [`tf.distribute.Strategy.run`](strategy#run) to run a function once per replica, taking values that may be "per-replica" (e.g. from a [`tf.distribute.DistributedDataset`](distributeddataset) object) and returning "per-replica" values. This function is executed in "replica context", which means each operation is performed separately on each replica. + Finally, use a method (such as [`tf.distribute.Strategy.reduce`](strategy#reduce)) to convert the resulting "per-replica" values into ordinary `Tensor`s. A custom training loop can be as simple as: ``` with my_strategy.scope(): @tf.function def distribute_train_epoch(dataset): def replica_fn(input): # process input and return result return result total_result = 0 for x in dataset: per_replica_result = my_strategy.run(replica_fn, args=(x,)) total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_result, axis=None) return total_result dist_dataset = my_strategy.experimental_distribute_dataset(dataset) for _ in range(EPOCHS): train_result = distribute_train_epoch(dist_dataset) ``` This takes an ordinary `dataset` and `replica_fn` and runs it in a distributed fashion using the [`tf.distribute.Strategy`](strategy) named `my_strategy` above. Any variables created in `replica_fn` are created using `my_strategy`'s policy, and library functions called by `replica_fn` can use the `get_replica_context()` API to implement distributed-specific behavior. You can use the `reduce` API to aggregate results across replicas and use this as a return value from one iteration over a [`tf.distribute.DistributedDataset`](distributeddataset). Or you can use [`tf.keras.metrics`](../keras/metrics) (such as loss, accuracy, etc.) to accumulate metrics across steps in a given epoch. See the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training) for a more detailed example. > > **Note:** [`tf.distribute.Strategy`](strategy) currently does not support TensorFlow's partitioned variables (where a single variable is split across multiple devices). > | Attributes | | `cluster_resolver` | Returns the cluster resolver associated with this strategy. In general, when using a multi-worker [`tf.distribute`](../distribute) strategy such as [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy()`](tpustrategy), there is a [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) must set the relevant attribute, or override this property; otherwise, `None` is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver), and in those cases this property will return `None`. 
The [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) may be useful when the user needs to access information such as the cluster spec, task type or task id. For example, ``` os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"], 'ps': ["localhost:34567"] }, 'task': {'type': 'worker', 'index': 0} }) # This implicitly uses TF_CONFIG for the cluster and current task info. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ... if strategy.cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. Since we set this # as a worker above, this block will run on this particular instance. ... elif strategy.cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. Since we # set this as a worker above, this block will not run on this particular # instance. ... ``` For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver)'s API docstring. | | `extended` | [`tf.distribute.StrategyExtended`](strategyextended) with additional methods. | | `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. | Methods ------- ### `distribute_datasets_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1110-L1187) ``` distribute_datasets_from_function( dataset_fn, options=None ) ``` Distributes [`tf.data.Dataset`](../data/dataset) instances created by calls to `dataset_fn`. The argument `dataset_fn` that users pass in is an input function that has a [`tf.distribute.InputContext`](inputcontext) argument and returns a [`tf.data.Dataset`](../data/dataset) instance. It is expected that the returned dataset from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) does not batch or shard the [`tf.data.Dataset`](../data/dataset) instance returned from the input function. `dataset_fn` will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, `tf.distribute.experimental_distribute_dataset` does batching and sharding for you.) For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The `dataset_fn` should take a [`tf.distribute.InputContext`](inputcontext) instance where information about batching and input replication can be accessed. You can use the `element_spec` property of the [`tf.distribute.DistributedDataset`](distributeddataset) returned by this API to query the [`tf.TypeSpec`](../typespec) of the elements returned by the iterator. This can be used to set the `input_signature` property of a [`tf.function`](../function). 
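To make that contract concrete, here is a minimal sketch of a `dataset_fn` that batches by the per-replica batch size and shards manually via the [`tf.distribute.InputContext`](inputcontext); the two-GPU `MirroredStrategy`, the global batch size, and the toy range dataset are illustrative assumptions, not part of the API:

```
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
GLOBAL_BATCH_SIZE = 8  # illustrative assumption

def dataset_fn(input_context):
  # Batch by the per-replica batch size, as this API expects.
  batch_size = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
  dataset = tf.data.Dataset.range(64)
  # Shard manually using the pipeline information from the context.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(batch_size)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
```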
Follow [`tf.distribute.DistributedDataset.element_spec`](distributeddataset#element_spec) to see an example. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) is not guaranteed. Ordered output is typically required if you are using [`tf.distribute`](../distribute) to scale prediction. You can, however, insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_datasets_from_function). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset_fn` | A function taking a [`tf.distribute.InputContext`](inputcontext) instance and returning a [`tf.data.Dataset`](../data/dataset). | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](distributeddataset). | ### `experimental_distribute_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L989-L1108) ``` experimental_distribute_dataset( dataset, options=None ) ``` Creates [`tf.distribute.DistributedDataset`](distributeddataset) from [`tf.data.Dataset`](../data/dataset). The returned [`tf.distribute.DistributedDataset`](distributeddataset) can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a [`tf.distribute.DistributedDataset`](distributeddataset). You can only create an iterator or examine the [`tf.TypeSpec`](../typespec) of the data generated by it. See API docs of [`tf.distribute.DistributedDataset`](distributeddataset) to learn more. The following is an example: ``` global_batch_size = 2 # Passing the devices is optional. 
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) # Create a dataset dataset = tf.data.Dataset.range(4).batch(global_batch_size) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def replica_fn(input): return input*2 result = [] # Iterate over the `tf.distribute.DistributedDataset` for x in dist_dataset: # process dataset elements result.append(strategy.run(replica_fn, args=(x,))) print(result) [PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])> }, PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])> }] ``` Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, `dataset` is batched by `global_batch_size`, and calling `experimental_distribute_dataset` on it rebatches `dataset` to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. `x` is a [`tf.distribute.DistributedValues`](distributedvalues) containing data for all replicas, and each replica gets data of the new batch size. [`tf.distribute.Strategy.run`](strategy#run) will take care of feeding the right per-replica data in `x` to the right `replica_fn` executed on each replica. Sharding covers autosharding across multiple workers and sharding within every worker. First, in multi-worker distributed training (i.e. when you use [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy`](tpustrategy)), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right [`tf.data.experimental.AutoShardPolicy`](../data/experimental/autoshardpolicy) is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using [`tf.data.experimental.DistributeOptions`](../data/experimental/distributeoptions). Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding. > > **Note:** for autosharding across multiple workers, the default mode is [`tf.data.experimental.AutoShardPolicy.AUTO`](../data/experimental/autoshardpolicy#AUTO). This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. [`tf.data.TFRecordDataset`](../data/tfrecorddataset), [`tf.data.TextLineDataset`](../data/textlinedataset), etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the [`tf.data.experimental.DistributeOptions.auto_shard_policy`](../data/experimental/distributeoptions#auto_shard_policy) to be [`tf.data.experimental.AutoShardPolicy.OFF`](../data/experimental/autoshardpolicy#OFF). > By default, this method adds a prefetch transformation at the end of the user-provided [`tf.data.Dataset`](../data/dataset) instance. 
The `buffer_size` argument of the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) instead, which does not do any automatic batching or sharding for you. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) is not guaranteed. Ordered output is typically required if you are using [`tf.distribute`](../distribute) to scale prediction. You can, however, insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_dataset). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset` | [`tf.data.Dataset`](../data/dataset) that will be sharded across all replicas using the rules stated above. | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](distributeddataset). | ### `experimental_distribute_values_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751) ``` experimental_distribute_values_from_function( value_fn ) ``` Generates [`tf.distribute.DistributedValues`](distributedvalues) from `value_fn`. This function generates [`tf.distribute.DistributedValues`](distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets. | Args | | `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. | | Returns | | A [`tf.distribute.DistributedValues`](distributedvalues) containing a value for each replica. | #### Example usage: 1. Return a constant value per replica: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) ``` 2. 
Distribute values in array based on replica\_id: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) array_value = np.array([3., 2., 1.]) def value_fn(ctx): return array_value[ctx.replica_id_in_sync_group] distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (3.0, 2.0) ``` 3. Specify values using num\_replicas\_in\_sync: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return ctx.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) ``` 4. Place values on devices and distribute: ``` strategy = tf.distribute.TPUStrategy() worker_devices = strategy.extended.worker_devices multiple_values = [] for i in range(strategy.num_replicas_in_sync): with tf.device(worker_devices[i]): multiple_values.append(tf.constant(1.0)) def value_fn(ctx): return multiple_values[ctx.replica_id_in_sync_group] distributed_values = strategy.experimental_distribute_values_from_function( value_fn) ``` ### `experimental_local_results` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1541-L1559) ``` experimental_local_results( value ) ``` Returns the list of all local per-replica values contained in `value`. > > **Note:** This only returns values on the worker initiated by this client. When using a [`tf.distribute.Strategy`](strategy) like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy), each worker will be its own client, and this function will only return values computed on that worker. > | Args | | `value` | A value returned by `experimental_run()`, `run()`, or a variable created in `scope`. | | Returns | | A tuple of values contained in `value` where the ith element corresponds to the ith replica. If `value` represents a single value, this returns `(value,)`. | ### `gather` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858) ``` gather( value, axis ) ``` Gather `value` across replicas along `axis` to the current device. Given a [`tf.distribute.DistributedValues`](distributedvalues) or [`tf.Tensor`](../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy), this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](replicacontext#all_gather). > > **Note:** For all strategies except [`tf.distribute.TPUStrategy`](tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument. 
For example, given a [`tf.distribute.DistributedValues`](distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](tpustrategy#gather), all tensors must have exactly the same rank and same shape. > > > **Note:** Given a [`tf.distribute.DistributedValues`](distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../expand_dims) before gathering them. > ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)> ``` Consider the following example for more combinations: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)> ``` | Args | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../indexedslices). | | `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). | | Returns | | A `Tensor` that's the concatenation of `value` across replicas along `axis` dimension. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1314-L1516) ``` reduce( reduce_op, value, axis ) ``` Reduce `value` across replicas and return result on current device. 
``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) total = strategy.reduce("SUM", per_replica_result, axis=None) total <tf.Tensor: shape=(), dtype=int32, numpy=1> ``` To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: ``` strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) # Check devices on which per replica result is: strategy.experimental_local_results(per_replica_result)[0].device # /job:localhost/replica:0/task:0/device:GPU:0 strategy.experimental_local_results(per_replica_result)[1].device # /job:localhost/replica:0/task:0/device:GPU:1 total = strategy.reduce("SUM", per_replica_result, axis=None) # Check device on which reduced result is: total.device # /job:localhost/replica:0/task:0/device:CPU:0 ``` This API is typically used for aggregating the results returned from different replicas, for reporting, etc. For example, loss computed from different replicas can be averaged using this API before printing. > > **Note:** The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For `TPUStrategy`, it is the first TPU host. For multi-client `MultiWorkerMirroredStrategy`, this is the CPU of each worker. > There are a number of different tf.distribute APIs for reducing values across replicas: * [`tf.distribute.ReplicaContext.all_reduce`](replicacontext#all_reduce): This differs from [`Strategy.reduce`](mirroredstrategy#reduce) in that it is for replica context and does not copy the results to the host device. `all_reduce` should be typically used for reductions inside the training step such as gradients. * [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to) and [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to): These APIs are more advanced versions of [`Strategy.reduce`](mirroredstrategy#reduce) as they allow customizing the destination of the result. They are also called in cross replica context. *What should axis be?* Given a per-replica value returned by `run`, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and `[4, 5, 6, 7]` will be on replica 1. With `axis=None`, `reduce` will aggregate only across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). ``` strategy.reduce("sum", per_replica_result, axis=None) ``` Sometimes, you will want to aggregate across both the global batch *and* all replicas. You can get this behavior by specifying the batch dimension as the `axis`, typically `axis=0`. In this case it would return a scalar `0+1+2+3+4+5+6+7`. ``` strategy.reduce("sum", per_replica_result, axis=0) ``` If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. 
So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify `axis=0`. If you specify [`tf.distribute.ReduceOp.MEAN`](reduceop#MEAN), using `axis=0` will use the correct denominator of 6. Contrast this with computing `reduce_mean` to get a scalar value on each replica and then using this function to average those means, which will weigh some values `1/8` and others `1/4`. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with `OneDeviceStrategy` or default strategy. | | `axis` | specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). | | Returns | | A `Tensor`. | ### `run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1197-L1312) ``` run( fn, args=(), kwargs=None, options=None ) ``` Invokes `fn` on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes `fn` on each replica. If `args` or `kwargs` have [`tf.distribute.DistributedValues`](distributedvalues), such as those produced by a [`tf.distribute.DistributedDataset`](distributeddataset) from [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function), when `fn` is executed on a particular replica, it will be executed with the component of [`tf.distribute.DistributedValues`](distributedvalues) that correspond to that replica. `fn` is invoked under a replica context. `fn` may call [`tf.distribute.get_replica_context()`](get_replica_context) to access members such as `all_reduce`. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in `args` or `kwargs` can be a nested structure of tensors, e.g. a list of tensors, in which case `args` and `kwargs` will be passed to the `fn` invoked on each replica. Or `args` or `kwargs` can be [`tf.distribute.DistributedValues`](distributedvalues) containing tensors or composite tensors, i.e. [`tf.compat.v1.TensorInfo.CompositeTensor`](../compat/v1/tensorinfo/compositetensor), in which case each `fn` call will get the component of a [`tf.distribute.DistributedValues`](distributedvalues) corresponding to its replica. Note that arbitrary Python values that are not of the types above are not supported. #### Example usage: 1. Constant tensor input. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) tensor_input = tf.constant(3.0) @tf.function def replica_fn(input): return input*2.0 result = strategy.run(replica_fn, args=(tensor_input,)) result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>, 1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0> } ``` 2. DistributedValues input. 
``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def run(): def value_fn(value_context): return value_context.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn2(input): return input*2 return strategy.run(replica_fn2, args=(distributed_values,)) result = run() result <tf.Tensor: shape=(), dtype=int32, numpy=4> ``` 3. Use [`tf.distribute.ReplicaContext`](replicacontext) to allreduce values. ``` strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"]) @tf.function def run(): def value_fn(value_context): return tf.constant(value_context.replica_id_in_sync_group) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn(input): return tf.distribute.get_replica_context().all_reduce("sum", input) return strategy.run(replica_fn, args=(distributed_values,)) result = run() result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=int32, numpy=1>, 1: <tf.Tensor: shape=(), dtype=int32, numpy=1> } ``` | Args | | `fn` | The function to run on each replica. | | `args` | Optional positional arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](distributedvalues). | | `kwargs` | Optional keyword arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](distributedvalues). | | `options` | An optional instance of [`tf.distribute.RunOptions`](runoptions) specifying the options to run `fn`. | | Returns | | Merged return value of `fn` across replicas. The structure of the return value is the same as the return value from `fn`. Each element in the structure can either be [`tf.distribute.DistributedValues`](distributedvalues), `Tensor` objects, or `Tensor`s (for example, if running on a single replica). | ### `scope` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L863-L955) ``` scope() ``` Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # Variable created inside scope: with strategy.scope(): mirrored_variable = tf.Variable(1.) mirrored_variable MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0> } # Variable created outside scope: regular_variable = tf.Variable(1.) regular_variable <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> ``` *What happens when Strategy.scope is entered?* * `strategy` is installed in the global context as the "current" strategy. Inside this scope, [`tf.distribute.get_strategy()`](get_strategy) will now return this strategy. Outside this scope, it returns the default no-op strategy. * Entering the scope also enters the "cross-replica context". See [`tf.distribute.StrategyExtended`](strategyextended) for an explanation on cross-replica and replica contexts. * Variable creation inside `scope` is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like `MirroredStrategy`, `TPUStrategy` and `MultiWorkerMirroredStrategy` create variables replicated on each replica, whereas `ParameterServerStrategy` creates variables on the parameter servers. This is done using a custom [`tf.variable_creator_scope`](../variable_creator_scope). 
* In some strategies, a default device scope may also be entered: in `MultiWorkerMirroredStrategy`, a default device scope of "/CPU:0" is entered on each worker. > > **Note:** Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras `model.fit`. If you're not using `model.fit`, you need to use the `strategy.run` API to explicitly distribute that computation. See an example in the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training). > *What should be in scope and what should be outside?* There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). A short sketch illustrating these rules appears below. * Anything that creates variables that should be distributed variables must be called in a `strategy.scope`. This can be accomplished either by directly calling the variable creating function within the scope context, or by relying on another API like `strategy.run` or [`keras.Model.fit`](../keras/model#fit) to automatically enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Some common objects that create variables in TF are Models, Optimizers, Metrics. Such objects should always be initialized in the scope, and any functions that may lazily create variables (e.g., `Model.__call__()`, tracing a [`tf.function`](../function), etc.) should similarly be called within scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the `strategy.scope` can also work seamlessly, without the user having to enter the scope. * Some strategy APIs (such as `strategy.run` and `strategy.reduce`) which need to be called within a strategy's scope enter the scope automatically, which means when using those APIs you don't need to explicitly enter the scope yourself. * When a [`tf.keras.Model`](../keras/model) is created inside a `strategy.scope`, the Model object captures the scope information. When high level training framework methods such as `model.compile`, `model.fit`, etc. are then called, the captured scope will be automatically entered, and the associated strategy will be used to distribute the training, etc. See a detailed example in [distributed keras tutorial](https://www.tensorflow.org/tutorials/distribute/keras). WARNING: Simply calling `model(..)` does not automatically enter the captured scope -- only high level training framework APIs support this behavior: `model.compile`, `model.fit`, `model.evaluate`, `model.predict` and `model.save` can all be called inside or outside the scope. * The following can be either inside or outside the scope: + Creating the input datasets + Defining [`tf.function`](../function)s that represent your training step + Saving APIs such as [`tf.saved_model.save`](../saved_model/save). Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. + Checkpoint saving. As mentioned above - `checkpoint.restore` may sometimes need to be inside scope if it creates variables. | Returns | | A context manager. |
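As a minimal sketch of the scoping rules above (the two-GPU `MirroredStrategy` and the toy model are illustrative assumptions): variable-creating objects go inside the scope, dataset creation may stay outside, and `model.fit` re-enters the captured scope automatically:

```
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

with strategy.scope():
  # Objects that create variables (model, optimizer, metrics) belong in scope.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
  optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

# Creating the input dataset can happen outside the scope.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform((8, 4)), tf.random.uniform((8, 1)))).batch(4)

# model.compile/model.fit automatically enter the captured scope.
model.compile(optimizer=optimizer, loss="mse")
model.fit(dataset, epochs=1)
```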
tensorflow tf.distribute.DistributedValues tf.distribute.DistributedValues =============================== Base class for representing distributed values. ``` tf.distribute.DistributedValues( values ) ``` A subclass instance of [`tf.distribute.DistributedValues`](distributedvalues) is created when creating variables within a distribution strategy, iterating a [`tf.distribute.DistributedDataset`](distributeddataset) or through [`tf.distribute.Strategy.run`](strategy#run). This base class should never be instantiated directly. [`tf.distribute.DistributedValues`](distributedvalues) contains a value per replica. Depending on the subclass, the values could either be synced on update, synced on demand, or never synced. A [`tf.distribute.DistributedValues`](distributedvalues) can be reduced to obtain a single value across replicas, used as input into [`tf.distribute.Strategy.run`](strategy#run), or its per-replica values can be inspected using [`tf.distribute.Strategy.experimental_local_results`](strategy#experimental_local_results). #### Example usage: 1. Created from a [`tf.distribute.DistributedDataset`](distributeddataset): ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2) dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset)) distributed_values = next(dataset_iterator) ``` 2. Returned by `run`: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def run(): ctx = tf.distribute.get_replica_context() return ctx.replica_id_in_sync_group distributed_values = strategy.run(run) ``` 3. As input into `run`: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2) dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset)) distributed_values = next(dataset_iterator) @tf.function def run(input): return input + 1.0 updated_value = strategy.run(run, args=(distributed_values,)) ``` 4. Reduce value: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2) dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset)) distributed_values = next(dataset_iterator) reduced_value = strategy.reduce(tf.distribute.ReduceOp.SUM, distributed_values, axis=0) ``` 5. Inspect local replica values: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2) dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset)) distributed_values = next(dataset_iterator) per_replica_values = strategy.experimental_local_results( distributed_values) per_replica_values (<tf.Tensor: shape=(1,), dtype=float32, numpy=array([5.], dtype=float32)>, <tf.Tensor: shape=(1,), dtype=float32, numpy=array([6.], dtype=float32)>) ``` tensorflow tf.distribute.ReduceOp tf.distribute.ReduceOp ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/reduce_util.py#L24-L47) | Indicates how a set of values should be reduced. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.ReduceOp`](https://www.tensorflow.org/api_docs/python/tf/distribute/ReduceOp) * `SUM`: Add all the values. * `MEAN`: Take the arithmetic mean ("average") of the values. 
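As a small illustration (a sketch assuming a two-GPU `MirroredStrategy`; the `step_fn` below is illustrative), either the enum member or its string name can be passed to [`tf.distribute.Strategy.reduce`](strategy#reduce):

```
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

def step_fn():
  # Each replica returns its replica id (0. and 1.) as a float tensor.
  ctx = tf.distribute.get_replica_context()
  return tf.cast(ctx.replica_id_in_sync_group, tf.float32)

per_replica = strategy.run(step_fn)
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)   # 1.0
mean = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=None)  # 0.5
```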
| Class Variables | | MEAN | `<ReduceOp.MEAN: 'MEAN'>` | | SUM | `<ReduceOp.SUM: 'SUM'>` | tensorflow Module: tf.distribute.cluster_resolver Module: tf.distribute.cluster\_resolver ======================================= Library imports for ClusterResolvers. This library contains all implementations of ClusterResolvers. ClusterResolvers are a way of specifying cluster information for distributed execution. Built on top of the existing `ClusterSpec` framework, ClusterResolvers give TensorFlow a way to communicate with various cluster management systems (e.g. GCE, AWS, etc...). Classes ------- [`class ClusterResolver`](cluster_resolver/clusterresolver): Abstract class for all implementations of ClusterResolvers. [`class GCEClusterResolver`](cluster_resolver/gceclusterresolver): ClusterResolver for Google Compute Engine. [`class KubernetesClusterResolver`](cluster_resolver/kubernetesclusterresolver): ClusterResolver for Kubernetes. [`class SimpleClusterResolver`](cluster_resolver/simpleclusterresolver): Simple implementation of ClusterResolver that accepts all attributes. [`class SlurmClusterResolver`](cluster_resolver/slurmclusterresolver): ClusterResolver for systems with the Slurm workload manager. [`class TFConfigClusterResolver`](cluster_resolver/tfconfigclusterresolver): Implementation of a ClusterResolver which reads the TF\_CONFIG EnvVar. [`class TPUClusterResolver`](cluster_resolver/tpuclusterresolver): Cluster Resolver for Google Cloud TPUs. [`class UnionResolver`](cluster_resolver/unionresolver): Performs a union on underlying ClusterResolvers. tensorflow tf.distribute.TPUStrategy tf.distribute.TPUStrategy ========================= Synchronous training on TPUs and TPU Pods. Inherits From: [`Strategy`](strategy) ``` tf.distribute.TPUStrategy( tpu_cluster_resolver=None, experimental_device_assignment=None, experimental_spmd_xla_partitioning=False ) ``` To construct a TPUStrategy object, you need to run the initialization code as below: ``` resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(resolver) tf.tpu.experimental.initialize_tpu_system(resolver) strategy = tf.distribute.TPUStrategy(resolver) ``` While using distribution strategies, the variables created within the strategy's scope will be replicated across all the replicas and can be kept in sync using all-reduce algorithms. To run TF2 programs on TPUs, you can either use `.compile` and `.fit` APIs in [`tf.keras`](../keras) with TPUStrategy, or write your own customized training loop by calling `strategy.run` directly. Note that TPUStrategy doesn't support pure eager execution, so please make sure the function passed into `strategy.run` is a [`tf.function`](../function) or `strategy.run` is called inside a [`tf.function`](../function) if eager behavior is enabled. See more details in https://www.tensorflow.org/guide/tpu. `distribute_datasets_from_function` and `experimental_distribute_dataset` APIs can be used to distribute the dataset across the TPU workers when writing your own training loop. If you are using `fit` and `compile` methods available in [`tf.keras.Model`](../keras/model), then Keras will handle the distribution for you. 
An example of writing a customized training loop on TPUs: ``` with strategy.scope(): model = tf.keras.Sequential([ tf.keras.layers.Dense(2, input_shape=(5,)), ]) optimizer = tf.keras.optimizers.SGD(learning_rate=0.1) ``` ``` def dataset_fn(ctx): x = np.random.random((2, 5)).astype(np.float32) y = np.random.randint(2, size=(2, 1)) dataset = tf.data.Dataset.from_tensor_slices((x, y)) return dataset.repeat().batch(1, drop_remainder=True) dist_dataset = strategy.distribute_datasets_from_function( dataset_fn) iterator = iter(dist_dataset) ``` ``` @tf.function() def train_step(iterator): def step_fn(inputs): features, labels = inputs with tf.GradientTape() as tape: logits = model(features, training=True) loss = tf.keras.losses.sparse_categorical_crossentropy( labels, logits) grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) strategy.run(step_fn, args=(next(iterator),)) ``` ``` train_step(iterator) ``` For advanced use cases like model parallelism, you can set the `experimental_device_assignment` argument when creating TPUStrategy to specify the number of replicas and the number of logical devices. Below is an example that initializes the TPU system with 2 logical devices and 1 replica. ``` resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(resolver) topology = tf.tpu.experimental.initialize_tpu_system(resolver) device_assignment = tf.tpu.experimental.DeviceAssignment.build( topology, computation_shape=[1, 1, 1, 2], num_replicas=1) strategy = tf.distribute.TPUStrategy( resolver, experimental_device_assignment=device_assignment) ``` Then you can run a [`tf.add`](../math/add) operation only on logical device 0. ``` @tf.function() def step_fn(inputs): features, _ = inputs output = tf.add(features, features) # Add operation will be executed on logical device 0. output = strategy.experimental_assign_to_logical_device(output, 0) return output dist_dataset = strategy.distribute_datasets_from_function( dataset_fn) iterator = iter(dist_dataset) strategy.run(step_fn, args=(next(iterator),)) ``` `experimental_spmd_xla_partitioning` enables the experimental XLA SPMD feature for model parallelism. This flag can reduce the compilation time and HBM requirements. When running in this mode, every input tensor must either be partitioned (via `strategy.experimental_split_to_logical_devices`) or fully replicated (via `strategy.experimental_replicate_to_logical_devices`) to all logical devices. Calling `strategy.experimental_assign_to_logical_device` will result in a ValueError in this mode. | Args | | `tpu_cluster_resolver` | A [`tf.distribute.cluster_resolver.TPUClusterResolver`](cluster_resolver/tpuclusterresolver) instance, which provides information about the TPU cluster. If None, it will assume running on a local TPU worker. | | `experimental_device_assignment` | Optional [`tf.tpu.experimental.DeviceAssignment`](../tpu/experimental/deviceassignment) to specify the placement of replicas on the TPU cluster. | | `experimental_spmd_xla_partitioning` | If True, enable the SPMD (Single Program Multiple Data) mode in XLA compiler. This flag only affects the performance of XLA compilation and the HBM requirement of the compiled TPU program. Caveat: if this flag is True, calling [`tf.distribute.TPUStrategy.experimental_assign_to_logical_device`](tpustrategy#experimental_assign_to_logical_device) will result in a ValueError. 
| | Attributes | | `cluster_resolver` | Returns the cluster resolver associated with this strategy. [`tf.distribute.TPUStrategy`](tpustrategy) provides the associated [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver). If the user provides one in `__init__`, that instance is returned; if the user does not, a default [`tf.distribute.cluster_resolver.TPUClusterResolver`](cluster_resolver/tpuclusterresolver) is provided. | | `extended` | [`tf.distribute.StrategyExtended`](strategyextended) with additional methods. | | `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. | Methods ------- ### `distribute_datasets_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1110-L1187) ``` distribute_datasets_from_function( dataset_fn, options=None ) ``` Distributes [`tf.data.Dataset`](../data/dataset) instances created by calls to `dataset_fn`. The argument `dataset_fn` that users pass in is an input function that has a [`tf.distribute.InputContext`](inputcontext) argument and returns a [`tf.data.Dataset`](../data/dataset) instance. It is expected that the returned dataset from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) does not batch or shard the [`tf.data.Dataset`](../data/dataset) instance returned from the input function. `dataset_fn` will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, `tf.distribute.experimental_distribute_dataset` does batching and sharding for you.) For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The `dataset_fn` should take a [`tf.distribute.InputContext`](inputcontext) instance where information about batching and input replication can be accessed. You can use the `element_spec` property of the [`tf.distribute.DistributedDataset`](distributeddataset) returned by this API to query the [`tf.TypeSpec`](../typespec) of the elements returned by the iterator. This can be used to set the `input_signature` property of a [`tf.function`](../function). Follow [`tf.distribute.DistributedDataset.element_spec`](distributeddataset#element_spec) to see an example. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) is not guaranteed. Ordered output is typically required if you are using [`tf.distribute`](../distribute) to scale prediction. You can, however, insert an index for each element in the batch and order outputs accordingly. 
Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_datasets_from_function). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset_fn` | A function taking a [`tf.distribute.InputContext`](inputcontext) instance and returning a [`tf.data.Dataset`](../data/dataset). | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](distributeddataset). | ### `experimental_assign_to_logical_device` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/tpu_strategy.py#L441-L502) ``` experimental_assign_to_logical_device( tensor, logical_device_id ) ``` Adds annotation that `tensor` will be assigned to a logical device. This adds an annotation to `tensor` specifying that operations on `tensor` will be invoked on logical core device id `logical_device_id`. When model parallelism is used, the default behavior is that all ops are placed on the zero-th logical device. ``` # Initializing TPU system with 2 logical devices and 4 replicas. resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(resolver) topology = tf.tpu.experimental.initialize_tpu_system(resolver) device_assignment = tf.tpu.experimental.DeviceAssignment.build( topology, computation_shape=[1, 1, 1, 2], num_replicas=4) strategy = tf.distribute.TPUStrategy( resolver, experimental_device_assignment=device_assignment) iterator = iter(inputs) @tf.function() def step_fn(inputs): output = tf.add(inputs, inputs) # Add operation will be executed on logical device 0. output = strategy.experimental_assign_to_logical_device(output, 0) return output strategy.run(step_fn, args=(next(iterator),)) ``` | Args | | `tensor` | Input tensor to annotate. | | `logical_device_id` | Id of the logical core to which the tensor will be assigned. | | Raises | | `ValueError` | The logical device id presented is not consistent with total number of partitions specified by the device assignment or the TPUStrategy is constructed with `experimental_spmd_xla_partitioning=True`. | | Returns | | Annotated tensor with identical value as `tensor`. | ### `experimental_distribute_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L989-L1108) ``` experimental_distribute_dataset( dataset, options=None ) ``` Creates [`tf.distribute.DistributedDataset`](distributeddataset) from [`tf.data.Dataset`](../data/dataset). 
The returned [`tf.distribute.DistributedDataset`](distributeddataset) can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a [`tf.distribute.DistributedDataset`](distributeddataset). You can only create an iterator or examine the [`tf.TypeSpec`](../typespec) of the data generated by it. See API docs of [`tf.distribute.DistributedDataset`](distributeddataset) to learn more. The following is an example: ``` global_batch_size = 2 # Passing the devices is optional. strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) # Create a dataset dataset = tf.data.Dataset.range(4).batch(global_batch_size) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def replica_fn(input): return input*2 result = [] # Iterate over the `tf.distribute.DistributedDataset` for x in dist_dataset: # process dataset elements result.append(strategy.run(replica_fn, args=(x,))) print(result) [PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])> }, PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])> }] ``` Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, `dataset` is batched by `global_batch_size`, and calling `experimental_distribute_dataset` on it rebatches `dataset` to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. `x` is a [`tf.distribute.DistributedValues`](distributedvalues) containing data for all replicas, and each replica gets data of the new batch size. [`tf.distribute.Strategy.run`](strategy#run) will take care of feeding the right per-replica data in `x` to the right `replica_fn` executed on each replica. Sharding covers autosharding across multiple workers and sharding within every worker. First, in multi-worker distributed training (i.e. when you use [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy`](tpustrategy)), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right [`tf.data.experimental.AutoShardPolicy`](../data/experimental/autoshardpolicy) is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using [`tf.data.experimental.DistributeOptions`](../data/experimental/distributeoptions). Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding. > > **Note:** for autosharding across multiple workers, the default mode is [`tf.data.experimental.AutoShardPolicy.AUTO`](../data/experimental/autoshardpolicy#AUTO). This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. [`tf.data.TFRecordDataset`](../data/tfrecorddataset), [`tf.data.TextLineDataset`](../data/textlinedataset), etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. 
However, if you have fewer than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the [`tf.data.experimental.DistributeOptions.auto_shard_policy`](../data/experimental/distributeoptions#auto_shard_policy) to be [`tf.data.experimental.AutoShardPolicy.OFF`](../data/experimental/autoshardpolicy#OFF). > By default, this method adds a prefetch transformation at the end of the user provided [`tf.data.Dataset`](../data/dataset) instance. The `buffer_size` argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) instead, which does not do any automatic batching or sharding for you. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) is not guaranteed. This is typically required if you are using [`tf.distribute`](../distribute) to scale prediction. You can, however, insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_dataset). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset` | [`tf.data.Dataset`](../data/dataset) that will be sharded across all replicas using the rules stated above. | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](distributeddataset). | ### `experimental_distribute_values_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751) ``` experimental_distribute_values_from_function( value_fn ) ``` Generates [`tf.distribute.DistributedValues`](distributedvalues) from `value_fn`. This function is to generate [`tf.distribute.DistributedValues`](distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets. | Args | | `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. | | Returns | | A [`tf.distribute.DistributedValues`](distributedvalues) containing a value for each replica.
| #### Example usage: 1. Return constant value per replica: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) ``` 2. Distribute values in array based on replica\_id: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) array_value = np.array([3., 2., 1.]) def value_fn(ctx): return array_value[ctx.replica_id_in_sync_group] distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (3.0, 2.0) ``` 3. Specify values using num\_replicas\_in\_sync: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return ctx.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) ``` 4. Place values on devices and distribute: ``` strategy = tf.distribute.TPUStrategy() worker_devices = strategy.extended.worker_devices multiple_values = [] for i in range(strategy.num_replicas_in_sync): with tf.device(worker_devices[i]): multiple_values.append(tf.constant(1.0)) def value_fn(ctx): return multiple_values[ctx.replica_id_in_sync_group] distributed_values = strategy.experimental_distribute_values_from_function( value_fn) ``` ### `experimental_local_results` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1541-L1559) ``` experimental_local_results( value ) ``` Returns the list of all local per-replica values contained in `value`. > > **Note:** This only returns values on the worker initiated by this client. When using a [`tf.distribute.Strategy`](strategy) like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy), each worker will be its own client, and this function will only return values computed on that worker. > | Args | | `value` | A value returned by `experimental_run()`, `run()`, or a variable created in `scope`. | | Returns | | A tuple of values contained in `value` where the i-th element corresponds to the i-th replica. If `value` represents a single value, this returns `(value,)`. | ### `experimental_replicate_to_logical_devices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/tpu_strategy.py#L605-L651) ``` experimental_replicate_to_logical_devices( tensor ) ``` Adds annotation that `tensor` will be replicated to all logical devices. This adds an annotation to `tensor` specifying that operations on `tensor` will be invoked on all logical devices. ``` # Initializing TPU system with 8 logical devices and 1 replica.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(resolver) topology = tf.tpu.experimental.initialize_tpu_system(resolver) device_assignment = tf.tpu.experimental.DeviceAssignment.build( topology, computation_shape=[1, 2, 2, 2], num_replicas=1) strategy = tf.distribute.TPUStrategy( resolver, experimental_device_assignment=device_assignment) iterator = iter(inputs) @tf.function() def step_fn(inputs): images, labels = inputs images = strategy.experimental_split_to_logical_devices( images, [1, 2, 4, 1]) # model() function will be executed on 8 logical devices with `images` # split 2 * 4 ways. output = model(images) # For loss calculation, all logical devices share the same logits # and labels. labels = strategy.experimental_replicate_to_logical_devices(labels) output = strategy.experimental_replicate_to_logical_devices(output) loss = loss_fn(labels, output) return loss strategy.run(step_fn, args=(next(iterator),)) ``` | Args | | `tensor` | Input tensor to annotate. | | Returns | | Annotated tensor with identical value as `tensor`. | ### `experimental_split_to_logical_devices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/tpu_strategy.py#L504-L603) ``` experimental_split_to_logical_devices( tensor, partition_dimensions ) ``` Adds annotation that `tensor` will be split across logical devices. This adds an annotation to `tensor` specifying that operations on `tensor` will be split among multiple logical devices. Tensor `tensor` will be split across dimensions specified by `partition_dimensions`. The dimensions of `tensor` must be divisible by the corresponding value in `partition_dimensions`. For example, for a system with 8 logical devices, if `tensor` is an image tensor with shape (batch\_size, width, height, channel) and `partition_dimensions` is [1, 2, 4, 1], then `tensor` will be split 2 ways in the width dimension and 4 ways in the height dimension, and the split tensor values will be fed into 8 logical devices. ``` # Initializing TPU system with 8 logical devices and 1 replica. resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(resolver) topology = tf.tpu.experimental.initialize_tpu_system(resolver) device_assignment = tf.tpu.experimental.DeviceAssignment.build( topology, computation_shape=[1, 2, 2, 2], num_replicas=1) # Construct the TPUStrategy. Since we are going to split the image across # logical devices, here we set `experimental_spmd_xla_partitioning=True` # so that the partitioning can be compiled in SPMD mode, which usually # results in faster compilation and smaller HBM requirement if the size of # input and activation tensors are much bigger than that of the model # parameters. Note that this flag is suggested but not a hard requirement # for `experimental_split_to_logical_devices`. strategy = tf.distribute.TPUStrategy( resolver, experimental_device_assignment=device_assignment, experimental_spmd_xla_partitioning=True) iterator = iter(inputs) @tf.function() def step_fn(inputs): inputs = strategy.experimental_split_to_logical_devices( inputs, [1, 2, 4, 1]) # model() function will be executed on 8 logical devices with `inputs` # split 2 * 4 ways. output = model(inputs) return output strategy.run(step_fn, args=(next(iterator),)) ``` | Args | | `tensor` | Input tensor to annotate. | | `partition_dimensions` | An unnested list of integers with size equal to the rank of `tensor`, specifying how `tensor` will be partitioned.
The product of all elements in `partition_dimensions` must equal the total number of logical devices per replica. | | Raises | | `ValueError` | 1) If the size of `partition_dimensions` does not equal the rank of `tensor`, or 2) if the product of elements of `partition_dimensions` does not match the number of logical devices per replica defined by the implementing DistributionStrategy's device specification, or 3) if a known size of `tensor` is not divisible by the corresponding value in `partition_dimensions`. | | Returns | | Annotated tensor with identical value as `tensor`. | ### `gather` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858) ``` gather( value, axis ) ``` Gather `value` across replicas along `axis` to the current device. Given a [`tf.distribute.DistributedValues`](distributedvalues) or [`tf.Tensor`](../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy), this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](replicacontext#all_gather). > > **Note:** For all strategies except [`tf.distribute.TPUStrategy`](tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument. For example, given a [`tf.distribute.DistributedValues`](distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](tpustrategy#gather), all tensors must have exactly the same rank and same shape. > > > **Note:** Given a [`tf.distribute.DistributedValues`](distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../expand_dims) before gathering them.
> ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)> ``` Consider the following example for more combinations: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)> ``` | Args | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../indexedslices). | | `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). | | Returns | | A `Tensor` that's the concatenation of `value` across replicas along `axis` dimension. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1314-L1516) ``` reduce( reduce_op, value, axis ) ``` Reduce `value` across replicas and return result on current device. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) total = strategy.reduce("SUM", per_replica_result, axis=None) total <tf.Tensor: shape=(), dtype=int32, numpy=1> ``` To see on which devices the per-replica results and the reduced result are placed, consider the same example again: ``` strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) # Check the devices on which the per-replica results are: strategy.experimental_local_results(per_replica_result)[0].device # /job:localhost/replica:0/task:0/device:GPU:0 strategy.experimental_local_results(per_replica_result)[1].device # /job:localhost/replica:0/task:0/device:GPU:1 total = strategy.reduce("SUM", per_replica_result, axis=None) # Check the device on which the reduced result is: total.device # /job:localhost/replica:0/task:0/device:CPU:0 ``` This API is typically used for aggregating the results returned from different replicas, for reporting, etc.
For example, loss computed from different replicas can be averaged using this API before printing. > > **Note:** The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For `TPUStrategy`, it is the first TPU host. For multi-client `MultiWorkerMirroredStrategy`, this is the CPU of each worker. > There are a number of different tf.distribute APIs for reducing values across replicas: * [`tf.distribute.ReplicaContext.all_reduce`](replicacontext#all_reduce): This differs from [`Strategy.reduce`](mirroredstrategy#reduce) in that it is for replica context and does not copy the results to the host device. `all_reduce` should typically be used for reductions inside the training step such as gradients. * [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to) and [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to): These APIs are more advanced versions of [`Strategy.reduce`](mirroredstrategy#reduce) as they allow customizing the destination of the result. They are also called in the cross-replica context. *What should axis be?* Given a per-replica value returned by `run`, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and `[4, 5, 6, 7]` will be on replica 1. With `axis=None`, `reduce` will aggregate only across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). ``` strategy.reduce("sum", per_replica_result, axis=None) ``` Sometimes, you will want to aggregate across both the global batch *and* all replicas. You can get this behavior by specifying the batch dimension as the `axis`, typically `axis=0`. In this case it would return a scalar `0+1+2+3+4+5+6+7`. ``` strategy.reduce("sum", per_replica_result, axis=0) ``` If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify `axis=0`. If you specify [`tf.distribute.ReduceOp.MEAN`](reduceop#MEAN), using `axis=0` will use the correct denominator of 6. Contrast this with first computing `reduce_mean` on each replica to get a scalar value and then using this function to average those means: that would weigh some values `1/8` and others `1/4`. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with `OneDeviceStrategy` or the default strategy. | | `axis` | specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). | | Returns | | A `Tensor`. |
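As a minimal sketch of the `axis` semantics above (assuming `import tensorflow as tf` and two local GPUs; the batch sizes are chosen only for illustration):

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Global batch size 4, so each of the 2 replicas sees a batch of 2.
dataset = tf.data.Dataset.range(4).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def step_fn(x):
  return x  # each replica returns its per-replica batch

per_replica = strategy.run(step_fn, args=(next(iter(dist_dataset)),))
# Aggregate across replicas only; the per-example dimension remains.
strategy.reduce("SUM", per_replica, axis=None)  # [0+2, 1+3]
# Aggregate across replicas and the batch dimension; a scalar results.
strategy.reduce("SUM", per_replica, axis=0)  # 0+1+2+3
```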
### `run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/tpu_strategy.py#L372-L428) ``` run( fn, args=(), kwargs=None, options=None ) ``` Run the computation defined by `fn` on each TPU replica. Executes ops specified by `fn` on each replica. If `args` or `kwargs` have [`tf.distribute.DistributedValues`](distributedvalues), such as those produced by a [`tf.distribute.DistributedDataset`](distributeddataset) from [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function), when `fn` is executed on a particular replica, it will be executed with the component of [`tf.distribute.DistributedValues`](distributedvalues) that corresponds to that replica. `fn` may call [`tf.distribute.get_replica_context()`](get_replica_context) to access members such as `all_reduce`. All arguments in `args` or `kwargs` should be either a nest of tensors or [`tf.distribute.DistributedValues`](distributedvalues) containing tensors or composite tensors. #### Example usage: ``` resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(resolver) tf.tpu.experimental.initialize_tpu_system(resolver) strategy = tf.distribute.TPUStrategy(resolver) @tf.function def run(): def value_fn(value_context): return value_context.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function(value_fn)) def replica_fn(input): return input * 2 return strategy.run(replica_fn, args=(distributed_values,)) result = run() ``` | Args | | `fn` | The function to run. The output must be a [`tf.nest`](../nest) of `Tensor`s. | | `args` | (Optional) Positional arguments to `fn`. | | `kwargs` | (Optional) Keyword arguments to `fn`. | | `options` | (Optional) An instance of [`tf.distribute.RunOptions`](runoptions) specifying the options to run `fn`. | | Returns | | Merged return value of `fn` across replicas. The structure of the return value is the same as the return value from `fn`. Each element in the structure can be either [`tf.distribute.DistributedValues`](distributedvalues) or `Tensor` objects (the latter, for example, if running on a single replica). | ### `scope` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L863-L955) ``` scope() ``` Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # Variable created inside scope: with strategy.scope(): mirrored_variable = tf.Variable(1.) mirrored_variable MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0> } # Variable created outside scope: regular_variable = tf.Variable(1.) regular_variable <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> ``` *What happens when Strategy.scope is entered?* * `strategy` is installed in the global context as the "current" strategy. Inside this scope, [`tf.distribute.get_strategy()`](get_strategy) will now return this strategy. Outside this scope, it returns the default no-op strategy. * Entering the scope also enters the "cross-replica context". See [`tf.distribute.StrategyExtended`](strategyextended) for an explanation on cross-replica and replica contexts.
* Variable creation inside `scope` is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like `MirroredStrategy`, `TPUStrategy` and `MultiWorkerMirroredStrategy` create variables replicated on each replica, whereas `ParameterServerStrategy` creates variables on the parameter servers. This is done using a custom [`tf.variable_creator_scope`](../variable_creator_scope). * In some strategies, a default device scope may also be entered: in `MultiWorkerMirroredStrategy`, a default device scope of "/CPU:0" is entered on each worker. > > **Note:** Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras `model.fit`. If you're not using `model.fit`, you need to use the `strategy.run` API to explicitly distribute that computation. See an example in the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training). > *What should be in scope and what should be outside?* There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). * Anything that creates variables that should be distributed variables must be called in a `strategy.scope`. This can be accomplished either by directly calling the variable creating function within the scope context, or by relying on another API like `strategy.run` or [`keras.Model.fit`](../keras/model#fit) to automatically enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Some common objects that create variables in TF are Models, Optimizers, Metrics. Such objects should always be initialized in the scope, and any functions that may lazily create variables (e.g., `Model.__call__()`, tracing a [`tf.function`](../function), etc.) should similarly be called within scope (see the sketch at the end of this section). Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the `strategy.scope` can also work seamlessly, without the user having to enter the scope. * Some strategy APIs (such as `strategy.run` and `strategy.reduce`) which need to be in a strategy's scope enter the scope automatically, which means when using those APIs you don't need to explicitly enter the scope yourself. * When a [`tf.keras.Model`](../keras/model) is created inside a `strategy.scope`, the Model object captures the scope information. When high level training framework methods such as `model.compile`, `model.fit`, etc. are then called, the captured scope will be automatically entered, and the associated strategy will be used to distribute the training etc. See a detailed example in [distributed keras tutorial](https://www.tensorflow.org/tutorials/distribute/keras). WARNING: Simply calling `model(...)` does not automatically enter the captured scope -- only high level training framework APIs support this behavior: `model.compile`, `model.fit`, `model.evaluate`, `model.predict` and `model.save` can all be called inside or outside the scope.
* The following can be either inside or outside the scope: + Creating the input datasets + Defining [`tf.function`](../function)s that represent your training step + Saving APIs such as [`tf.saved_model.save`](../saved_model/save). Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. + Checkpoint saving. As mentioned above, `checkpoint.restore` may sometimes need to be inside scope if it creates variables. | Returns | | A context manager. |
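As a minimal sketch of the scope guidance above (assuming `import tensorflow as tf` and two local GPUs): variable-creating objects such as the model, optimizer, and metrics are built inside `strategy.scope()`, while dataset creation may stay outside:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

with strategy.scope():
  # Objects that create variables belong inside the scope.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
  optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
  metric = tf.keras.metrics.Mean()

# Creating the input dataset can happen inside or outside the scope.
dataset = tf.data.Dataset.from_tensors(
    (tf.ones((4,)), tf.ones((1,)))).repeat(8).batch(2)
```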
tensorflow tf.distribute.CrossDeviceOps tf.distribute.CrossDeviceOps ============================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L248-L575) | Base class for cross-device reduction and broadcasting algorithms. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.CrossDeviceOps`](https://www.tensorflow.org/api_docs/python/tf/distribute/CrossDeviceOps) ``` tf.distribute.CrossDeviceOps() ``` The main purpose of this class is to be passed to [`tf.distribute.MirroredStrategy`](mirroredstrategy) in order to choose among different cross-device communication implementations. Prefer using the methods of [`tf.distribute.Strategy`](strategy) instead of the ones of this class. #### Implementations: * [`tf.distribute.ReductionToOneDevice`](reductiontoonedevice) * [`tf.distribute.NcclAllReduce`](ncclallreduce) * [`tf.distribute.HierarchicalCopyAllReduce`](hierarchicalcopyallreduce) Methods ------- ### `batch_reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L397-L444) ``` batch_reduce( reduce_op, value_destination_pairs, options=None ) ``` Reduce values to destinations in batches. See [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to). This can only be called in the cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `value_destination_pairs` | a sequence of (value, destinations) pairs. See [`tf.distribute.CrossDeviceOps.reduce`](crossdeviceops#reduce) for descriptions. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A list of [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues), one per pair in `value_destination_pairs`. | | Raises | | `ValueError` | if `value_destination_pairs` is not an iterable of tuples of [`tf.distribute.DistributedValues`](distributedvalues) and destinations. | ### `batch_reduce_implementation` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L497-L522) ``` batch_reduce_implementation( reduce_op, value_destination_pairs, options ) ``` Implementation of `batch_reduce`. Overriding this method is useful for subclass implementers. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `value_destination_pairs` | a sequence of (value, destinations) pairs. See `reduce` for descriptions. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A list of [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues), one per pair in `value_destination_pairs`. | | Raises | | `ValueError` | if `value_destination_pairs` is not an iterable of tuples of [`tf.distribute.DistributedValues`](distributedvalues) and destinations. |
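As a minimal sketch of the class's main purpose described above (the two-GPU device list is an assumption, and `ReductionToOneDevice` is just one of the listed implementations):

```
strategy = tf.distribute.MirroredStrategy(
    devices=["GPU:0", "GPU:1"],
    cross_device_ops=tf.distribute.ReductionToOneDevice())
```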
### `broadcast` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L446-L463) ``` broadcast( tensor, destinations ) ``` Broadcast `tensor` to `destinations`. This can only be called in the cross-replica context. | Args | | `tensor` | a [`tf.Tensor`](../tensor) like object. The value to broadcast. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor)-like object, or a device string. It specifies the devices to broadcast to. Note that if it's a [`tf.Variable`](../variable), the value is broadcasted to the devices of that variable, and this method doesn't update the variable. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues). | ### `broadcast_implementation` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L524-L544) ``` broadcast_implementation( tensor, destinations ) ``` Implementation of `broadcast`. | Args | | `tensor` | a [`tf.Tensor`](../tensor) like object. The value to broadcast. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor)-like object, or a device string. It specifies the devices to broadcast to. Note that if it's a [`tf.Variable`](../variable), the value is broadcasted to the devices of that variable, and this method doesn't update the variable. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues). | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L271-L317) ``` reduce( reduce_op, per_replica_value, destinations, options=None ) ``` Reduce `per_replica_value` to `destinations`. See [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to). This can only be called in the cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `per_replica_value` | a [`tf.distribute.DistributedValues`](distributedvalues), or a [`tf.Tensor`](../tensor) like object. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor)-like object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to `value` and `destinations`. Note that if it's a [`tf.Variable`](../variable), the value is reduced to the devices of that variable, and this method doesn't update the variable. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues). | | Raises | | `ValueError` | if per\_replica\_value can't be converted to a [`tf.distribute.DistributedValues`](distributedvalues) or if destinations is not a string, [`tf.Variable`](../variable) or [`tf.distribute.DistributedValues`](distributedvalues). | ### `reduce_implementation` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L465-L495) ``` reduce_implementation( reduce_op, per_replica_value, destinations, options ) ``` Implementation of `reduce`.
Overriding this method is useful for subclass implementers. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `per_replica_value` | a [`tf.distribute.DistributedValues`](distributedvalues), or a [`tf.Tensor`](../tensor) like object. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor)-like object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to `value` and `destinations`. Note that if it's a [`tf.Variable`](../variable), the value is reduced to the devices of that variable, and this method doesn't update the variable. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues). | | Raises | | `ValueError` | if per\_replica\_value can't be converted to a [`tf.distribute.DistributedValues`](distributedvalues) or if destinations is not a string, [`tf.Variable`](../variable) or [`tf.distribute.DistributedValues`](distributedvalues). | tensorflow tf.distribute.get_replica_context tf.distribute.get\_replica\_context =================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribution_strategy_context.py#L144-L189) | Returns the current [`tf.distribute.ReplicaContext`](replicacontext) or `None`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.get_replica_context`](https://www.tensorflow.org/api_docs/python/tf/distribute/get_replica_context) ``` tf.distribute.get_replica_context() ``` Returns `None` if in a cross-replica context. #### Note that execution: 1. starts in the default (single-replica) replica context (this function will return the default `ReplicaContext` object); 2. switches to cross-replica context (in which case this will return `None`) when entering a `with tf.distribute.Strategy.scope():` block; 3. switches to a (non-default) replica context inside `strategy.run(fn, ...)`; 4. if `fn` calls `get_replica_context().merge_call(merge_fn, ...)`, then inside `merge_fn` you are back in the cross-replica context (and again this function will return `None`). Most [`tf.distribute.Strategy`](strategy) methods may only be executed in a cross-replica context; in a replica context you should instead use the API of the [`tf.distribute.ReplicaContext`](replicacontext) object returned by this method. ``` assert tf.distribute.get_replica_context() is not None # default with strategy.scope(): assert tf.distribute.get_replica_context() is None def f(): replica_context = tf.distribute.get_replica_context() # for strategy assert replica_context is not None tf.print("Replica id: ", replica_context.replica_id_in_sync_group, " of ", replica_context.num_replicas_in_sync) strategy.run(f) ``` | Returns | | The current [`tf.distribute.ReplicaContext`](replicacontext) object when in a replica context scope, else `None`. Within a particular block, exactly one of these two things will be true: * `get_replica_context()` returns non-`None`, or * `tf.distribute.is_cross_replica_context()` returns True. |
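A minimal sketch of steps 1 and 4 in the list above (assuming `import tensorflow as tf` and that no strategy scope has been entered):

```
ctx = tf.distribute.get_replica_context()
assert ctx is not None  # the default (single-replica) replica context

def merge_fn(strategy):
  # Inside `merge_call`, execution is back in the cross-replica context.
  return tf.distribute.get_replica_context() is None

ctx.merge_call(merge_fn)  # True
```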
tensorflow tf.distribute.OneDeviceStrategy tf.distribute.OneDeviceStrategy =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/one_device_strategy.py#L38-L235) | A distribution strategy for running on a single device. Inherits From: [`Strategy`](strategy) ``` tf.distribute.OneDeviceStrategy( device ) ``` Using this strategy will place any variables created in its scope on the specified device. Input distributed through this strategy will be prefetched to the specified device. Moreover, any functions called via `strategy.run` will also be placed on the specified device. Typical usage of this strategy could be testing your code with the tf.distribute.Strategy API before switching to other strategies which actually distribute to multiple devices/machines. #### For example: ``` strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") with strategy.scope(): v = tf.Variable(1.0) print(v.device) # /job:localhost/replica:0/task:0/device:GPU:0 def step_fn(x): return x * 2 result = 0 for i in range(10): result += strategy.run(step_fn, args=(i,)) print(result) # 90 ``` | Args | | `device` | Device string identifier for the device on which the variables should be placed. See class docs for more details on how the device is used. Examples: "/cpu:0", "/gpu:0", "/device:CPU:0", "/device:GPU:0" | | Attributes | | `cluster_resolver` | Returns the cluster resolver associated with this strategy. In general, when using a multi-worker [`tf.distribute`](../distribute) strategy such as [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy()`](tpustrategy), there is a [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) must set the relevant attribute, or override this property; otherwise, `None` is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver), and in those cases this property will return `None`. The [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) may be useful when the user needs to access information such as the cluster spec, task type, or task id. For example, ``` os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"], 'ps': ["localhost:34567"] }, 'task': {'type': 'worker', 'index': 0} }) # This implicitly uses TF_CONFIG for the cluster and current task info. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ... if strategy.cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. Since we set this # as a worker above, this block will run on this particular instance. elif strategy.cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. Since we # set this as a worker above, this block will not run on this particular # instance. ``` For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver)'s API docstring.
| | `extended` | [`tf.distribute.StrategyExtended`](strategyextended) with additional methods. | | `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. | Methods ------- ### `distribute_datasets_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/one_device_strategy.py#L110-L152) ``` distribute_datasets_from_function( dataset_fn, options=None ) ``` Distributes [`tf.data.Dataset`](../data/dataset) instances created by calls to `dataset_fn`. `dataset_fn` will be called once for each worker in the strategy. In this case, we only have one worker and one device, so `dataset_fn` is called once. The `dataset_fn` should take a [`tf.distribute.InputContext`](inputcontext) instance where information about batching and input replication can be accessed: ``` def dataset_fn(input_context): batch_size = input_context.get_per_replica_batch_size(global_batch_size) d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) return d.shard( input_context.num_input_pipelines, input_context.input_pipeline_id) inputs = strategy.distribute_datasets_from_function(dataset_fn) for batch in inputs: replica_results = strategy.run(replica_fn, args=(batch,)) ``` | Args | | `dataset_fn` | A function taking a [`tf.distribute.InputContext`](inputcontext) instance and returning a [`tf.data.Dataset`](../data/dataset). | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A "distributed `Dataset`", which the caller can iterate over like regular datasets. | ### `experimental_distribute_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/one_device_strategy.py#L81-L108) ``` experimental_distribute_dataset( dataset, options=None ) ``` Distributes a tf.data.Dataset instance provided via `dataset`. In this case, there is only one device, so this is only a thin wrapper around the input dataset. It will, however, prefetch the input data to the specified device. The returned distributed dataset can be iterated over similarly to regular datasets. > > **Note:** Currently, the user cannot add any more transformations to a distributed dataset. > #### Example: ``` strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") dataset = tf.data.Dataset.range(10).batch(2) dist_dataset = strategy.experimental_distribute_dataset(dataset) for x in dist_dataset: print(x) # [0, 1], [2, 3],... ``` | Args | | `dataset` | [`tf.data.Dataset`](../data/dataset) to be prefetched to device. | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A "distributed `Dataset`" that the caller can iterate over. | ### `experimental_distribute_values_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751) ``` experimental_distribute_values_from_function( value_fn ) ``` Generates [`tf.distribute.DistributedValues`](distributedvalues) from `value_fn`. This function is to generate [`tf.distribute.DistributedValues`](distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets. | Args | | `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
| | Returns | | A [`tf.distribute.DistributedValues`](distributedvalues) containing a value for each replica. | #### Example usage: 1. Return constant value per replica: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) ``` 2. Distribute values in array based on replica\_id: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) array_value = np.array([3., 2., 1.]) def value_fn(ctx): return array_value[ctx.replica_id_in_sync_group] distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (3.0, 2.0) ``` 3. Specify values using num\_replicas\_in\_sync: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return ctx.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) ``` 4. Place values on devices and distribute: ``` strategy = tf.distribute.TPUStrategy() worker_devices = strategy.extended.worker_devices multiple_values = [] for i in range(strategy.num_replicas_in_sync): with tf.device(worker_devices[i]): multiple_values.append(tf.constant(1.0)) def value_fn(ctx): return multiple_values[ctx.replica_id_in_sync_group] distributed_values = strategy.experimental_distribute_values_from_function( value_fn) ``` ### `experimental_local_results` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/one_device_strategy.py#L154-L168) ``` experimental_local_results( value ) ``` Returns the list of all local per-replica values contained in `value`. In `OneDeviceStrategy`, the `value` is always expected to be a single value, so the result is just the value in a tuple. | Args | | `value` | A value returned by `experimental_run()`, `run()`, `extended.call_for_each_replica()`, or a variable created in `scope`. | | Returns | | A tuple of values contained in `value`. If `value` represents a single value, this returns `(value,)`. | ### `gather` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858) ``` gather( value, axis ) ``` Gather `value` across replicas along `axis` to the current device. Given a [`tf.distribute.DistributedValues`](distributedvalues) or [`tf.Tensor`](../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy), this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](replicacontext#all_gather).
> > **Note:** For all strategies except [`tf.distribute.TPUStrategy`](tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument. For example, given a [`tf.distribute.DistributedValues`](distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](tpustrategy#gather), all tensors must have exactly the same rank and same shape. > > > **Note:** Given a [`tf.distribute.DistributedValues`](distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../expand_dims) before gathering them. > ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)> ``` Consider the following example for more combinations: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)> ``` | Args | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../indexedslices). | | `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). | | Returns | | A `Tensor` that's the concatenation of `value` across replicas along `axis` dimension. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/one_device_strategy.py#L188-L219) ``` reduce( reduce_op, value, axis ) ``` Reduce `value` across replicas. In `OneDeviceStrategy`, there is only one replica, so if `axis=None`, `value` is simply returned. If `axis` is specified as something other than `None`, such as `axis=0`, `value` is reduced along that axis and returned.
#### Example: ``` t = tf.range(10) result = strategy.reduce(tf.distribute.ReduceOp.SUM, t, axis=None).numpy() # result: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] result = strategy.reduce(tf.distribute.ReduceOp.SUM, t, axis=0).numpy() # result: 45 ``` | Args | | `reduce_op` | A [`tf.distribute.ReduceOp`](reduceop) value specifying how values should be combined. | | `value` | A "per replica" value, e.g. returned by `run` to be combined into a single tensor. | | `axis` | Specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). | | Returns | | A `Tensor`. | ### `run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/one_device_strategy.py#L170-L186) ``` run( fn, args=(), kwargs=None, options=None ) ``` Run `fn` on each replica, with the given arguments. In `OneDeviceStrategy`, `fn` is simply called within a device scope for the given device, with the provided arguments. | Args | | `fn` | The function to run. The output must be a [`tf.nest`](../nest) of `Tensor`s. | | `args` | (Optional) Positional arguments to `fn`. | | `kwargs` | (Optional) Keyword arguments to `fn`. | | `options` | (Optional) An instance of [`tf.distribute.RunOptions`](runoptions) specifying the options to run `fn`. | | Returns | | Return value from running `fn`. | ### `scope` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/one_device_strategy.py#L221-L235) ``` scope() ``` Returns a context manager selecting this Strategy as current. Inside a `with strategy.scope():` code block, this thread will use a variable creator set by `strategy`, and will enter its "cross-replica context". In `OneDeviceStrategy`, all variables created inside `strategy.scope()` will be on `device` specified at strategy construction time. See example in the docs for this class. | Returns | | A context manager to use for creating variables with this strategy. |
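Putting `scope`, `run` and `reduce` together, a minimal end-to-end sketch (assuming `import tensorflow as tf`; the `/cpu:0` device string is an illustration, any visible device works):

```
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")

with strategy.scope():
  v = tf.Variable(2.0)  # placed on the strategy's device

def step_fn(x):
  return v * x  # executed within the device scope for /cpu:0

result = strategy.run(step_fn, args=(tf.constant(3.0),))
total = strategy.reduce(tf.distribute.ReduceOp.SUM, result, axis=None)
# total == 6.0; with a single replica and axis=None the value is
# returned as-is.
```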
tensorflow tf.distribute.MultiWorkerMirroredStrategy tf.distribute.MultiWorkerMirroredStrategy ========================================= A distribution strategy for synchronous training on multiple workers. Inherits From: [`Strategy`](strategy) ``` tf.distribute.MultiWorkerMirroredStrategy( cluster_resolver=None, communication_options=None ) ``` This strategy implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to [`tf.distribute.MirroredStrategy`](mirroredstrategy), it replicates all variables and computations to each local device. The difference is that it uses a distributed collective implementation (e.g. all-reduce), so that multiple workers can work together. You need to launch your program on each worker and configure `cluster_resolver` correctly. For example, if you are using [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](cluster_resolver/tfconfigclusterresolver), each worker needs to have its corresponding `task_type` and `task_id` set in the `TF_CONFIG` environment variable. An example TF\_CONFIG on worker-0 of a two-worker cluster is: ``` TF_CONFIG = '{"cluster": {"worker": ["localhost:12345", "localhost:23456"]}, "task": {"type": "worker", "index": 0} }' ``` Your program runs on each worker as-is. Note that collectives require each worker to participate. All [`tf.distribute`](../distribute) and non-[`tf.distribute`](../distribute) APIs may use collectives internally, e.g. checkpointing and saving, since reading a [`tf.Variable`](../variable) with [`tf.VariableSynchronization.ON_READ`](../variablesynchronization#ON_READ) all-reduces the value. Therefore, it's recommended to run exactly the same program on each worker. Dispatching based on `task_type` or `task_id` of the worker is error-prone. `cluster_resolver.num_accelerators()` determines the number of GPUs the strategy uses. If it's zero, the strategy uses the CPU. All workers need to use the same number of devices; otherwise, the behavior is undefined. This strategy is not intended for TPU. Use [`tf.distribute.TPUStrategy`](tpustrategy) instead. After setting up TF\_CONFIG, using this strategy is similar to using [`tf.distribute.MirroredStrategy`](mirroredstrategy) and [`tf.distribute.TPUStrategy`](tpustrategy). ``` strategy = tf.distribute.MultiWorkerMirroredStrategy() with strategy.scope(): model = tf.keras.Sequential([ tf.keras.layers.Dense(2, input_shape=(5,)), ]) optimizer = tf.keras.optimizers.SGD(learning_rate=0.1) def dataset_fn(ctx): x = np.random.random((2, 5)).astype(np.float32) y = np.random.randint(2, size=(2, 1)) dataset = tf.data.Dataset.from_tensor_slices((x, y)) return dataset.repeat().batch(1, drop_remainder=True) dist_dataset = strategy.distribute_datasets_from_function(dataset_fn) model.compile() model.fit(dist_dataset) ``` You can also write your own training loop: ``` @tf.function def train_step(iterator): def step_fn(inputs): features, labels = inputs with tf.GradientTape() as tape: logits = model(features, training=True) loss = tf.keras.losses.sparse_categorical_crossentropy( labels, logits) grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) strategy.run(step_fn, args=(next(iterator),)) for _ in range(NUM_STEP): train_step(iterator) ``` See [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) for a detailed tutorial. **Saving** You need to save and checkpoint on all workers instead of just one.
This is because variables with `synchronization=ON_READ` trigger aggregation during saving. It's recommended to save to a different path on each worker to avoid race conditions. Each worker saves the same thing. See [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#model_saving_and_loading) tutorial for examples. **Known Issues** * [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](cluster_resolver/tfconfigclusterresolver) does not return the correct number of accelerators. The strategy uses all available GPUs if `cluster_resolver` is [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](cluster_resolver/tfconfigclusterresolver) or `None`. * In eager mode, the strategy needs to be created before calling any other TensorFlow API. | Args | | `cluster_resolver` | optional [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver). If `None`, [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](cluster_resolver/tfconfigclusterresolver) is used. | | `communication_options` | optional [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). This configures the default options for cross device communications. It can be overridden by options provided to the communication APIs like [`tf.distribute.ReplicaContext.all_reduce`](replicacontext#all_reduce). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Attributes | | `cluster_resolver` | Returns the cluster resolver associated with this strategy. As a multi-worker strategy, [`tf.distribute.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy) provides the associated [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver). If the user provides one in `__init__`, that instance is returned; if the user does not, a default `TFConfigClusterResolver` is provided. | | `extended` | [`tf.distribute.StrategyExtended`](strategyextended) with additional methods. | | `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. | Methods ------- ### `distribute_datasets_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1110-L1187) ``` distribute_datasets_from_function( dataset_fn, options=None ) ``` Distributes [`tf.data.Dataset`](../data/dataset) instances created by calls to `dataset_fn`. The argument `dataset_fn` that users pass in is an input function that has a [`tf.distribute.InputContext`](inputcontext) argument and returns a [`tf.data.Dataset`](../data/dataset) instance. It is expected that the returned dataset from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) does not batch or shard the [`tf.data.Dataset`](../data/dataset) instance returned from the input function. `dataset_fn` will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic.
In contrast, `tf.distribute.experimental_distribute_dataset` does batching and sharding for you. For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The `dataset_fn` should take a [`tf.distribute.InputContext`](inputcontext) instance where information about batching and input replication can be accessed. You can use the `element_spec` property of the [`tf.distribute.DistributedDataset`](distributeddataset) returned by this API to query the [`tf.TypeSpec`](../typespec) of the elements returned by the iterator. This can be used to set the `input_signature` property of a [`tf.function`](../function). Follow [`tf.distribute.DistributedDataset.element_spec`](distributeddataset#element_spec) to see an example. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) is not guaranteed. This is typically required if you are using [`tf.distribute`](../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_datasets_from_function). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset_fn` | A function taking a [`tf.distribute.InputContext`](inputcontext) instance and returning a [`tf.data.Dataset`](../data/dataset). | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](distributeddataset). | ### `experimental_distribute_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L989-L1108) ``` experimental_distribute_dataset( dataset, options=None ) ``` Creates [`tf.distribute.DistributedDataset`](distributeddataset) from [`tf.data.Dataset`](../data/dataset). The returned [`tf.distribute.DistributedDataset`](distributeddataset) can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a [`tf.distribute.DistributedDataset`](distributeddataset).
You can only create an iterator or examine the [`tf.TypeSpec`](../typespec) of the data generated by it. See API docs of [`tf.distribute.DistributedDataset`](distributeddataset) to learn more. The following is an example: ``` global_batch_size = 2 # Passing the devices is optional. strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) # Create a dataset dataset = tf.data.Dataset.range(4).batch(global_batch_size) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def replica_fn(input): return input*2 result = [] # Iterate over the `tf.distribute.DistributedDataset` for x in dist_dataset: # process dataset elements result.append(strategy.run(replica_fn, args=(x,))) print(result) [PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])> }, PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])> }] ``` Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, `dataset` is batched by `global_batch_size`, and calling `experimental_distribute_dataset` on it rebatches `dataset` to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. `x` is a [`tf.distribute.DistributedValues`](distributedvalues) containing data for all replicas, and each replica gets data of the new batch size. [`tf.distribute.Strategy.run`](strategy#run) will take care of feeding the right per-replica data in `x` to the right `replica_fn` executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy`](tpustrategy)), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right [`tf.data.experimental.AutoShardPolicy`](../data/experimental/autoshardpolicy) is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using [`tf.data.experimental.DistributeOptions`](../data/experimental/distributeoptions). Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding. > > **Note:** for autosharding across multiple workers, the default mode is [`tf.data.experimental.AutoShardPolicy.AUTO`](../data/experimental/autoshardpolicy#AUTO). This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. [`tf.data.TFRecordDataset`](../data/tfrecorddataset), [`tf.data.TextLineDataset`](../data/textlinedataset), etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have fewer input files than workers, we suggest that you disable dataset autosharding across workers by setting the [`tf.data.experimental.DistributeOptions.auto_shard_policy`](../data/experimental/distributeoptions#auto_shard_policy) to be [`tf.data.experimental.AutoShardPolicy.OFF`](../data/experimental/autoshardpolicy#OFF). >
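As a small sketch of that setting (the dataset here is a placeholder):

```
import tensorflow as tf

dataset = tf.data.Dataset.range(16).batch(4)  # placeholder dataset
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
# dist_dataset = strategy.experimental_distribute_dataset(dataset)
```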
By default, this method adds a prefetch transformation at the end of the user-provided [`tf.data.Dataset`](../data/dataset) instance. The `buffer_size` argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) instead, which does not do any automatic batching or sharding for you. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) is not guaranteed. This is typically required if you are using [`tf.distribute`](../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_dataset). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset` | [`tf.data.Dataset`](../data/dataset) that will be sharded across all replicas using the rules stated above. | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](distributeddataset). | ### `experimental_distribute_values_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751) ``` experimental_distribute_values_from_function( value_fn ) ``` Generates [`tf.distribute.DistributedValues`](distributedvalues) from `value_fn`. This function generates [`tf.distribute.DistributedValues`](distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets. | Args | | `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. | | Returns | | A [`tf.distribute.DistributedValues`](distributedvalues) containing a value for each replica.
| #### Example usage: 1. Return constant value per replica: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) ``` 2. Distribute values in array based on replica\_id: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) array_value = np.array([3., 2., 1.]) def value_fn(ctx): return array_value[ctx.replica_id_in_sync_group] distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (3.0, 2.0) ``` 3. Specify values using num\_replicas\_in\_sync: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return ctx.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) ``` 4. Place values on devices and distribute: ``` strategy = tf.distribute.TPUStrategy() worker_devices = strategy.extended.worker_devices multiple_values = [] for i in range(strategy.num_replicas_in_sync): with tf.device(worker_devices[i]): multiple_values.append(tf.constant(1.0)) def value_fn(ctx): return multiple_values[ctx.replica_id_in_sync_group] distributed_values = strategy.experimental_distribute_values_from_function( value_fn) ``` ### `experimental_local_results` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1541-L1559) ``` experimental_local_results( value ) ``` Returns the list of all local per-replica values contained in `value`. > > **Note:** This only returns values on the worker initiated by this client. When using a [`tf.distribute.Strategy`](strategy) like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy), each worker will be its own client, and this function will only return values computed on that worker. > | Args | | `value` | A value returned by `experimental_run()`, `run()`, or a variable created in `scope`. | | Returns | | A tuple of values contained in `value` where the ith element corresponds to the ith replica. If `value` represents a single value, this returns `(value,)`. | ### `gather` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858) ``` gather( value, axis ) ``` Gather `value` across replicas along `axis` to the current device. Given a [`tf.distribute.DistributedValues`](distributedvalues) or [`tf.Tensor`](../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy), this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](replicacontext#all_gather).
> > **Note:** For all strategies except [`tf.distribute.TPUStrategy`](tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument. For example, given a [`tf.distribute.DistributedValues`](distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](tpustrategy#gather), all tensors must have exactly the same rank and same shape. > > > **Note:** Given a [`tf.distribute.DistributedValues`](distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../expand_dims) before gathering them. > ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)> ``` Consider the following example for more combinations: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)> ``` | Args | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../indexedslices). | | `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). | | Returns | | A `Tensor` that's the concatenation of `value` across replicas along `axis` dimension. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1314-L1516) ``` reduce( reduce_op, value, axis ) ``` Reduce `value` across replicas and return result on current device.
``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) total = strategy.reduce("SUM", per_replica_result, axis=None) total <tf.Tensor: shape=(), dtype=int32, numpy=1> ``` To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: ``` strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) # Check devices on which per replica result is: strategy.experimental_local_results(per_replica_result)[0].device # /job:localhost/replica:0/task:0/device:GPU:0 strategy.experimental_local_results(per_replica_result)[1].device # /job:localhost/replica:0/task:0/device:GPU:1 total = strategy.reduce("SUM", per_replica_result, axis=None) # Check device on which reduced result is: total.device # /job:localhost/replica:0/task:0/device:CPU:0 ``` This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing. > > **Note:** The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For `TPUStrategy`, it is the first TPU host. For multi client `MultiWorkerMirroredStrategy`, this is CPU of each worker. > There are a number of different tf.distribute APIs for reducing values across replicas: * [`tf.distribute.ReplicaContext.all_reduce`](replicacontext#all_reduce): This differs from [`Strategy.reduce`](mirroredstrategy#reduce) in that it is for replica context and does not copy the results to the host device. `all_reduce` should be typically used for reductions inside the training step such as gradients. * [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to) and [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to): These APIs are more advanced versions of [`Strategy.reduce`](mirroredstrategy#reduce) as they allow customizing the destination of the result. They are also called in cross replica context. *What should axis be?* Given a per-replica value returned by `run`, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and `[4, 5, 6, 7]` will be on replica 1. With `axis=None`, `reduce` will aggregate only across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). ``` strategy.reduce("sum", per_replica_result, axis=None) ``` Sometimes, you will want to aggregate across both the global batch *and* all replicas. You can get this behavior by specifying the batch dimension as the `axis`, typically `axis=0`. In this case it would return a scalar `0+1+2+3+4+5+6+7`. ``` strategy.reduce("sum", per_replica_result, axis=0) ``` If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. 
So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify `axis=0`. If you specify [`tf.distribute.ReduceOp.MEAN`](reduceop#MEAN), using `axis=0` will use the correct denominator of 6. Contrast this with computing `reduce_mean` to get a scalar value on each replica and this function to average those means, which will weigh some values `1/8` and others `1/4`. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with `OneDeviceStrategy` or default strategy. | | `axis` | specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). | | Returns | | A `Tensor`. | ### `run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1197-L1312) ``` run( fn, args=(), kwargs=None, options=None ) ``` Invokes `fn` on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes `fn` on each replica. If `args` or `kwargs` have [`tf.distribute.DistributedValues`](distributedvalues), such as those produced by a [`tf.distribute.DistributedDataset`](distributeddataset) from [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function), when `fn` is executed on a particular replica, it will be executed with the component of [`tf.distribute.DistributedValues`](distributedvalues) that corresponds to that replica. `fn` is invoked under a replica context. `fn` may call [`tf.distribute.get_replica_context()`](get_replica_context) to access members such as `all_reduce`. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in `args` or `kwargs` can be a nested structure of tensors, e.g. a list of tensors, in which case `args` and `kwargs` will be passed to the `fn` invoked on each replica. Or `args` or `kwargs` can be [`tf.distribute.DistributedValues`](distributedvalues) containing tensors or composite tensors, i.e. [`tf.compat.v1.TensorInfo.CompositeTensor`](../compat/v1/tensorinfo/compositetensor), in which case each `fn` call will get the component of a [`tf.distribute.DistributedValues`](distributedvalues) corresponding to its replica. Note that arbitrary Python values that are not of the types above are not supported. #### Example usage: 1. Constant tensor input. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) tensor_input = tf.constant(3.0) @tf.function def replica_fn(input): return input*2.0 result = strategy.run(replica_fn, args=(tensor_input,)) result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>, 1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0> } ``` 2. DistributedValues input.
``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def run(): def value_fn(value_context): return value_context.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn2(input): return input*2 return strategy.run(replica_fn2, args=(distributed_values,)) result = run() result <tf.Tensor: shape=(), dtype=int32, numpy=4> ``` 3. Use [`tf.distribute.ReplicaContext`](replicacontext) to allreduce values. ``` strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"]) @tf.function def run(): def value_fn(value_context): return tf.constant(value_context.replica_id_in_sync_group) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn(input): return tf.distribute.get_replica_context().all_reduce("sum", input) return strategy.run(replica_fn, args=(distributed_values,)) result = run() result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=int32, numpy=1>, 1: <tf.Tensor: shape=(), dtype=int32, numpy=1> } ``` | Args | | `fn` | The function to run on each replica. | | `args` | Optional positional arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](distributedvalues). | | `kwargs` | Optional keyword arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](distributedvalues). | | `options` | An optional instance of [`tf.distribute.RunOptions`](runoptions) specifying the options to run `fn`. | | Returns | | Merged return value of `fn` across replicas. The structure of the return value is the same as the return value from `fn`. Each element in the structure can either be [`tf.distribute.DistributedValues`](distributedvalues), `Tensor` objects, or `Tensor`s (for example, if running on a single replica). | ### `scope` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L863-L955) ``` scope() ``` Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # Variable created inside scope: with strategy.scope(): mirrored_variable = tf.Variable(1.) mirrored_variable MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0> } # Variable created outside scope: regular_variable = tf.Variable(1.) regular_variable <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> ``` *What happens when Strategy.scope is entered?* * `strategy` is installed in the global context as the "current" strategy. Inside this scope, [`tf.distribute.get_strategy()`](get_strategy) will now return this strategy. Outside this scope, it returns the default no-op strategy. * Entering the scope also enters the "cross-replica context". See [`tf.distribute.StrategyExtended`](strategyextended) for an explanation on cross-replica and replica contexts. * Variable creation inside `scope` is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like `MirroredStrategy`, `TPUStrategy` and `MultiWorkerMirroredStrategy` create variables replicated on each replica, whereas `ParameterServerStrategy` creates variables on the parameter servers. This is done using a custom [`tf.variable_creator_scope`](../variable_creator_scope).
* In some strategies, a default device scope may also be entered: in `MultiWorkerMirroredStrategy`, a default device scope of "/CPU:0" is entered on each worker. > > **Note:** Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras `model.fit`. If you're not using `model.fit`, you need to use the `strategy.run` API to explicitly distribute that computation. See an example in the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training). > *What should be in scope and what should be outside?* There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). * Anything that creates variables that should be distributed variables must be called in a `strategy.scope`. This can be accomplished either by directly calling the variable-creating function within the scope context, or by relying on another API like `strategy.run` or [`keras.Model.fit`](../keras/model#fit) to automatically enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Some common objects that create variables in TF are Models, Optimizers, Metrics. Such objects should always be initialized in the scope, and any functions that may lazily create variables (e.g., `Model.__call__()`, tracing a [`tf.function`](../function), etc.) should similarly be called within scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the `strategy.scope` can also work seamlessly, without the user having to enter the scope. * Some strategy APIs (such as `strategy.run` and `strategy.reduce`) that require being in a strategy's scope enter the scope automatically, which means when using those APIs you don't need to explicitly enter the scope yourself. * When a [`tf.keras.Model`](../keras/model) is created inside a `strategy.scope`, the Model object captures the scope information. When high-level training framework methods such as `model.compile`, `model.fit`, etc. are then called, the captured scope will be automatically entered, and the associated strategy will be used to distribute the training etc. See a detailed example in [distributed keras tutorial](https://www.tensorflow.org/tutorials/distribute/keras). WARNING: Simply calling `model(...)` does not automatically enter the captured scope -- only high-level training framework APIs support this behavior: `model.compile`, `model.fit`, `model.evaluate`, `model.predict` and `model.save` can all be called inside or outside the scope. * The following can be either inside or outside the scope: + Creating the input datasets + Defining [`tf.function`](../function)s that represent your training step + Saving APIs such as [`tf.saved_model.save`](../saved_model/save). Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. + Checkpoint saving. As mentioned above - `checkpoint.restore` may sometimes need to be inside scope if it creates variables. | Returns | | A context manager. |
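To make these rules concrete, here is a minimal sketch (device names, shapes, and the loss are illustrative) of what typically goes inside and outside the scope:

```
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

# Variable-creating objects (model, optimizer) go inside the scope.
with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
  optimizer = tf.keras.optimizers.SGD()

# Datasets and step functions can be defined outside the scope.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform((8, 4)), tf.random.uniform((8, 1)))).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(inputs):
  def step_fn(inputs):
    x, y = inputs
    with tf.GradientTape() as tape:
      loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
  # `strategy.run` enters the scope automatically.
  return strategy.run(step_fn, args=(inputs,))

for batch in dist_dataset:
  train_step(batch)
```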
tensorflow tf.distribute.HierarchicalCopyAllReduce tf.distribute.HierarchicalCopyAllReduce ======================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L993-L1032) | Hierarchical copy all-reduce implementation of CrossDeviceOps. Inherits From: [`CrossDeviceOps`](crossdeviceops) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.HierarchicalCopyAllReduce`](https://www.tensorflow.org/api_docs/python/tf/distribute/HierarchicalCopyAllReduce) ``` tf.distribute.HierarchicalCopyAllReduce( num_packs=1 ) ``` It reduces to one GPU along edges in some hierarchy and broadcasts back to each GPU along the same path. For the batch API, tensors will be repacked or aggregated for more efficient cross-device transportation. This reduction was created for the Nvidia DGX-1; it assumes GPUs are connected like those on a DGX-1 machine. If you have different GPU inter-connections, it is likely to be slower than [`tf.distribute.ReductionToOneDevice`](reductiontoonedevice). For reductions that are not all-reduce, it falls back to [`tf.distribute.ReductionToOneDevice`](reductiontoonedevice). Here is how you can use `HierarchicalCopyAllReduce` in [`tf.distribute.MirroredStrategy`](mirroredstrategy): ``` strategy = tf.distribute.MirroredStrategy( cross_device_ops=tf.distribute.HierarchicalCopyAllReduce()) ``` | Args | | `num_packs` | a non-negative integer. The number of packs to split values into. If zero, no packing will be done. | | Raises | | ValueError if `num_packs` is negative. | Methods ------- ### `batch_reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L397-L444) ``` batch_reduce( reduce_op, value_destination_pairs, options=None ) ``` Reduce values to destinations in batches. See [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to). This can only be called in the cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `value_destination_pairs` | a sequence of (value, destinations) pairs. See [`tf.distribute.CrossDeviceOps.reduce`](crossdeviceops#reduce) for descriptions. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A list of [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues), one per pair in `value_destination_pairs`. | | Raises | | `ValueError` | if `value_destination_pairs` is not an iterable of tuples of [`tf.distribute.DistributedValues`](distributedvalues) and destinations. | ### `broadcast` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L446-L463) ``` broadcast( tensor, destinations ) ``` Broadcast `tensor` to `destinations`. This can only be called in the cross-replica context. | Args | | `tensor` | a [`tf.Tensor`](../tensor)-like object. The value to broadcast. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor)-like object, or a device string. It specifies the devices to broadcast to.
Note that if it's a [`tf.Variable`](../variable), the value is broadcast to the devices of that variable; this method doesn't update the variable. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues). | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L271-L317) ``` reduce( reduce_op, per_replica_value, destinations, options=None ) ``` Reduce `per_replica_value` to `destinations`. See [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to). This can only be called in the cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `per_replica_value` | a [`tf.distribute.DistributedValues`](distributedvalues), or a [`tf.Tensor`](../tensor)-like object. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor)-like object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to `value` and `destinations`. Note that if it's a [`tf.Variable`](../variable), the value is reduced to the devices of that variable, and this method doesn't update the variable. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues). | | Raises | | `ValueError` | if per\_replica\_value can't be converted to a [`tf.distribute.DistributedValues`](distributedvalues) or if destinations is not a string, [`tf.Variable`](../variable) or [`tf.distribute.DistributedValues`](distributedvalues). | tensorflow tf.distribute.get_strategy tf.distribute.get\_strategy =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribution_strategy_context.py#L233-L250) | Returns the current [`tf.distribute.Strategy`](strategy) object. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.get_strategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/get_strategy) ``` tf.distribute.get_strategy() ``` Typically only used in a cross-replica context: ``` if tf.distribute.in_cross_replica_context(): strategy = tf.distribute.get_strategy() ... ``` | Returns | | A [`tf.distribute.Strategy`](strategy) object. Inside a `with strategy.scope()` block, it returns `strategy`, otherwise it returns the default (single-replica) [`tf.distribute.Strategy`](strategy) object. | tensorflow tf.distribute.MirroredStrategy tf.distribute.MirroredStrategy ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/mirrored_strategy.py#L200-L290) | Synchronous training across multiple replicas on one machine. Inherits From: [`Strategy`](strategy) ``` tf.distribute.MirroredStrategy( devices=None, cross_device_ops=None ) ``` This strategy is typically used for training on one machine with multiple GPUs. For TPUs, use [`tf.distribute.TPUStrategy`](tpustrategy).
To use `MirroredStrategy` with multiple workers, please refer to [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy). Variables created under this strategy are mirrored across all replicas; for example, a variable created under a `MirroredStrategy` is a `MirroredVariable`. If no devices are specified in the constructor argument of the strategy, then it will use all the available GPUs. If no GPUs are found, it will use the available CPUs. Note that TensorFlow treats all CPUs on a machine as a single device, and uses threads internally for parallelism. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) with strategy.scope(): x = tf.Variable(1.) x MirroredVariable:{ 0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable ... shape=() dtype=float32, numpy=1.0> } ``` While using distribution strategies, all the variable creation should be done within the strategy's scope. This will replicate the variables across all the replicas and keep them in sync using an all-reduce algorithm. Variables created inside a `MirroredStrategy` scope, even from within a function wrapped with [`tf.function`](../function), are still `MirroredVariables`. ``` x = [] @tf.function # Wrap the function with tf.function. def create_variable(): if not x: x.append(tf.Variable(1.)) return x[0] strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) with strategy.scope(): _ = create_variable() print(x[0]) MirroredVariable:{ 0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable ... shape=() dtype=float32, numpy=1.0> } ``` `experimental_distribute_dataset` can be used to distribute the dataset across the replicas when writing your own training loop. If you are using `.fit` and `.compile` methods available in [`tf.keras`](../keras), then [`tf.keras`](../keras) will handle the distribution for you. #### For example: ``` my_strategy = tf.distribute.MirroredStrategy() with my_strategy.scope(): @tf.function def distribute_train_epoch(dataset): def replica_fn(input): # process input and return result return result total_result = 0 for x in dataset: per_replica_result = my_strategy.run(replica_fn, args=(x,)) total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_result, axis=None) return total_result dist_dataset = my_strategy.experimental_distribute_dataset(dataset) for _ in range(EPOCHS): train_result = distribute_train_epoch(dist_dataset) ``` | Args | | `devices` | a list of device strings such as `['/gpu:0', '/gpu:1']`. If `None`, all available GPUs are used. If no GPUs are found, CPU is used. | | `cross_device_ops` | optional, a descendant of `CrossDeviceOps`. If this is not set, `NcclAllReduce()` will be used by default. One would customize this if NCCL isn't available or if a special implementation that exploits the particular hardware is available. | | Attributes | | `cluster_resolver` | Returns the cluster resolver associated with this strategy. In general, when using a multi-worker [`tf.distribute`](../distribute) strategy such as [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy()`](tpustrategy), there is a [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) must set the relevant attribute, or override this property; otherwise, `None` is returned by default.
Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver), and in those cases this property will return `None`. The [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver) may be useful when the user needs to access information such as the cluster spec, task type or task id. For example, ``` os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"], 'ps': ["localhost:34567"] }, 'task': {'type': 'worker', 'index': 0} }) # This implicitly uses TF_CONFIG for the cluster and current task info. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ... if strategy.cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. Since we set this # as a worker above, this block will run on this particular instance. elif strategy.cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. Since we # set this as a worker above, this block will not run on this particular # instance. ``` For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](cluster_resolver/clusterresolver)'s API docstring. | | `extended` | [`tf.distribute.StrategyExtended`](strategyextended) with additional methods. | | `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. | Methods ------- ### `distribute_datasets_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1110-L1187) ``` distribute_datasets_from_function( dataset_fn, options=None ) ``` Distributes [`tf.data.Dataset`](../data/dataset) instances created by calls to `dataset_fn`. The argument `dataset_fn` that users pass in is an input function that has a [`tf.distribute.InputContext`](inputcontext) argument and returns a [`tf.data.Dataset`](../data/dataset) instance. It is expected that the returned dataset from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) does not batch or shard the [`tf.data.Dataset`](../data/dataset) instance returned from the input function. `dataset_fn` will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, `tf.distribute.experimental_distribute_dataset` does batching and sharding for you.) For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The `dataset_fn` should take a [`tf.distribute.InputContext`](inputcontext) instance where information about batching and input replication can be accessed; the sketch below uses it to shard input files manually.
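A minimal sketch of such manual file sharding, assuming a hypothetical file pattern and batch size; only documented `tf.distribute.InputContext` members are used:

```
import tensorflow as tf

def dataset_fn(ctx):
  # Shard the input files across workers (one input pipeline per worker).
  files = tf.data.Dataset.list_files(
      '/path/to/data-*.tfrecord', shuffle=False)  # hypothetical pattern
  files = files.shard(ctx.num_input_pipelines, ctx.input_pipeline_id)
  dataset = files.interleave(tf.data.TFRecordDataset)
  # Batch by the per-replica batch size.
  return dataset.batch(ctx.get_per_replica_batch_size(64))

# dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
```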
You can use the `element_spec` property of the [`tf.distribute.DistributedDataset`](distributeddataset) returned by this API to query the [`tf.TypeSpec`](../typespec) of the elements returned by the iterator. This can be used to set the `input_signature` property of a [`tf.function`](../function). Follow [`tf.distribute.DistributedDataset.element_spec`](distributeddataset#element_spec) to see an example. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) is not guaranteed. This is typically required if you are using [`tf.distribute`](../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_datasets_from_function). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset_fn` | A function taking a [`tf.distribute.InputContext`](inputcontext) instance and returning a [`tf.data.Dataset`](../data/dataset). | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](distributeddataset). | ### `experimental_distribute_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L989-L1108) ``` experimental_distribute_dataset( dataset, options=None ) ``` Creates [`tf.distribute.DistributedDataset`](distributeddataset) from [`tf.data.Dataset`](../data/dataset). The returned [`tf.distribute.DistributedDataset`](distributeddataset) can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a [`tf.distribute.DistributedDataset`](distributeddataset). You can only create an iterator or examine the [`tf.TypeSpec`](../typespec) of the data generated by it. See API docs of [`tf.distribute.DistributedDataset`](distributeddataset) to learn more. The following is an example: ``` global_batch_size = 2 # Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) # Create a dataset dataset = tf.data.Dataset.range(4).batch(global_batch_size) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def replica_fn(input): return input*2 result = [] # Iterate over the `tf.distribute.DistributedDataset` for x in dist_dataset: # process dataset elements result.append(strategy.run(replica_fn, args=(x,))) print(result) [PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])> }, PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])> }] ``` Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, `dataset` is batched by `global_batch_size`, and calling `experimental_distribute_dataset` on it rebatches `dataset` to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. `x` is a [`tf.distribute.DistributedValues`](distributedvalues) containing data for all replicas, and each replica gets data of the new batch size. [`tf.distribute.Strategy.run`](strategy#run) will take care of feeding the right per-replica data in `x` to the right `replica_fn` executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy`](tpustrategy)), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right [`tf.data.experimental.AutoShardPolicy`](../data/experimental/autoshardpolicy) is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using [`tf.data.experimental.DistributeOptions`](../data/experimental/distributeoptions). Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding. > > **Note:** for autosharding across multiple workers, the default mode is [`tf.data.experimental.AutoShardPolicy.AUTO`](../data/experimental/autoshardpolicy#AUTO). This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. [`tf.data.TFRecordDataset`](../data/tfrecorddataset), [`tf.data.TextLineDataset`](../data/textlinedataset), etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have fewer input files than workers, we suggest that you disable dataset autosharding across workers by setting the [`tf.data.experimental.DistributeOptions.auto_shard_policy`](../data/experimental/distributeoptions#auto_shard_policy) to be [`tf.data.experimental.AutoShardPolicy.OFF`](../data/experimental/autoshardpolicy#OFF). > By default, this method adds a prefetch transformation at the end of the user-provided [`tf.data.Dataset`](../data/dataset) instance.
The `buffer_size` argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) instead, which does not do any automatic batching or sharding for you. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function) is not guaranteed. This is typically required if you are using [`tf.distribute`](../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_dataset). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset` | [`tf.data.Dataset`](../data/dataset) that will be sharded across all replicas using the rules stated above. | | `options` | [`tf.distribute.InputOptions`](inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](distributeddataset). | ### `experimental_distribute_values_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751) ``` experimental_distribute_values_from_function( value_fn ) ``` Generates [`tf.distribute.DistributedValues`](distributedvalues) from `value_fn`. This function generates [`tf.distribute.DistributedValues`](distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets. | Args | | `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. | | Returns | | A [`tf.distribute.DistributedValues`](distributedvalues) containing a value for each replica. | #### Example usage: 1. Return constant value per replica: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) ``` 2.
Distribute values in array based on replica\_id: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) array_value = np.array([3., 2., 1.]) def value_fn(ctx): return array_value[ctx.replica_id_in_sync_group] distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (3.0, 2.0) ``` 3. Specify values using num\_replicas\_in\_sync: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return ctx.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) ``` 4. Place values on devices and distribute: ``` strategy = tf.distribute.TPUStrategy() worker_devices = strategy.extended.worker_devices multiple_values = [] for i in range(strategy.num_replicas_in_sync): with tf.device(worker_devices[i]): multiple_values.append(tf.constant(1.0)) def value_fn(ctx): return multiple_values[ctx.replica_id_in_sync_group] distributed_values = strategy.experimental_distribute_values_from_function( value_fn) ``` ### `experimental_local_results` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1541-L1559) ``` experimental_local_results( value ) ``` Returns the list of all local per-replica values contained in `value`. > > **Note:** This only returns values on the worker initiated by this client. When using a [`tf.distribute.Strategy`](strategy) like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy), each worker will be its own client, and this function will only return values computed on that worker. > | Args | | `value` | A value returned by `experimental_run()`, `run()`, or a variable created in `scope`. | | Returns | | A tuple of values contained in `value` where the ith element corresponds to the ith replica. If `value` represents a single value, this returns `(value,)`. | ### `gather` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858) ``` gather( value, axis ) ``` Gather `value` across replicas along `axis` to the current device. Given a [`tf.distribute.DistributedValues`](distributedvalues) or [`tf.Tensor`](../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy), this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](replicacontext#all_gather). > > **Note:** For all strategies except [`tf.distribute.TPUStrategy`](tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument.
For example, given a [`tf.distribute.DistributedValues`](distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](tpustrategy#gather), all tensors must have exactly the same rank and same shape. > > > **Note:** Given a [`tf.distribute.DistributedValues`](distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../expand_dims) before gathering them. > ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)> ``` Consider the following example for more combinations: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)> ``` | Args | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../indexedslices). | | `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). | | Returns | | A `Tensor` that's the concatenation of `value` across replicas along `axis` dimension. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1314-L1516) ``` reduce( reduce_op, value, axis ) ``` Reduce `value` across replicas and return result on current device. 
``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) total = strategy.reduce("SUM", per_replica_result, axis=None) total <tf.Tensor: shape=(), dtype=int32, numpy=1> ``` To see on which devices the per-replica results and the reduced result are placed, consider the same example with explicit device checks: ``` strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) # Check devices on which per replica result is: strategy.experimental_local_results(per_replica_result)[0].device # /job:localhost/replica:0/task:0/device:GPU:0 strategy.experimental_local_results(per_replica_result)[1].device # /job:localhost/replica:0/task:0/device:GPU:1 total = strategy.reduce("SUM", per_replica_result, axis=None) # Check device on which reduced result is: total.device # /job:localhost/replica:0/task:0/device:CPU:0 ``` This API is typically used for aggregating the results returned from different replicas, e.g. for reporting. For example, losses computed on different replicas can be averaged using this API before printing. > > **Note:** The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For `TPUStrategy`, it is the first TPU host. For multi-client `MultiWorkerMirroredStrategy`, this is the CPU of each worker. > There are a number of different tf.distribute APIs for reducing values across replicas: * [`tf.distribute.ReplicaContext.all_reduce`](replicacontext#all_reduce): This differs from [`Strategy.reduce`](mirroredstrategy#reduce) in that it is for replica context and does not copy the results to the host device. `all_reduce` should typically be used for reductions inside the training step, such as gradients. * [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to) and [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to): These APIs are more advanced versions of [`Strategy.reduce`](mirroredstrategy#reduce) as they allow customizing the destination of the result. They are also called in the cross-replica context. *What should axis be?* Given a per-replica value returned by `run`, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and `[4, 5, 6, 7]` will be on replica 1. With `axis=None`, `reduce` will aggregate only across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). ``` strategy.reduce("sum", per_replica_result, axis=None) ``` Sometimes, you will want to aggregate across both the global batch *and* all replicas. You can get this behavior by specifying the batch dimension as the `axis`, typically `axis=0`. In this case it would return a scalar `0+1+2+3+4+5+6+7`. ``` strategy.reduce("sum", per_replica_result, axis=0) ``` If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas.
So if the last batch has size 6 and is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify `axis=0`. If you specify [`tf.distribute.ReduceOp.MEAN`](reduceop#MEAN), using `axis=0` will use the correct denominator of 6. Contrast this with computing `reduce_mean` to get a scalar value on each replica and then using this function to average those means: that would weigh some values `1/8` (the 4 examples on the first replica) and others `1/4` (the 2 examples on the second). | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues) instance, e.g. returned by [`Strategy.run`](mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with `OneDeviceStrategy` or default strategy. | | `axis` | specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). | | Returns | | A `Tensor`. | ### `run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1197-L1312) ``` run( fn, args=(), kwargs=None, options=None ) ``` Invokes `fn` on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes `fn` on each replica. If `args` or `kwargs` have [`tf.distribute.DistributedValues`](distributedvalues), such as those produced by a [`tf.distribute.DistributedDataset`](distributeddataset) from [`tf.distribute.Strategy.experimental_distribute_dataset`](strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](strategy#distribute_datasets_from_function), when `fn` is executed on a particular replica, it will be executed with the component of [`tf.distribute.DistributedValues`](distributedvalues) that corresponds to that replica. `fn` is invoked under a replica context. `fn` may call [`tf.distribute.get_replica_context()`](get_replica_context) to access members such as `all_reduce`. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in `args` or `kwargs` can be a nested structure of tensors, e.g. a list of tensors, in which case `args` and `kwargs` will be passed to the `fn` invoked on each replica. Or `args` or `kwargs` can be [`tf.distribute.DistributedValues`](distributedvalues) containing tensors or composite tensors, i.e. [`tf.compat.v1.TensorInfo.CompositeTensor`](../compat/v1/tensorinfo/compositetensor), in which case each `fn` call will get the component of a [`tf.distribute.DistributedValues`](distributedvalues) corresponding to its replica. Note that arbitrary Python values that are not of the types above are not supported. #### Example usage: 1. Constant tensor input. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) tensor_input = tf.constant(3.0) @tf.function def replica_fn(input): return input*2.0 result = strategy.run(replica_fn, args=(tensor_input,)) result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>, 1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0> } ``` 2. DistributedValues input.
``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def run(): def value_fn(value_context): return value_context.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn2(input): return input*2 return strategy.run(replica_fn2, args=(distributed_values,)) result = run() result <tf.Tensor: shape=(), dtype=int32, numpy=4> ``` 3. Use [`tf.distribute.ReplicaContext`](replicacontext) to all-reduce values. ``` strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"]) @tf.function def run(): def value_fn(value_context): return tf.constant(value_context.replica_id_in_sync_group) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn(input): return tf.distribute.get_replica_context().all_reduce("sum", input) return strategy.run(replica_fn, args=(distributed_values,)) result = run() result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=int32, numpy=1>, 1: <tf.Tensor: shape=(), dtype=int32, numpy=1> } ``` | Args | | `fn` | The function to run on each replica. | | `args` | Optional positional arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](distributedvalues). | | `kwargs` | Optional keyword arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](distributedvalues). | | `options` | An optional instance of [`tf.distribute.RunOptions`](runoptions) specifying the options to run `fn`. | | Returns | | Merged return value of `fn` across replicas. The structure of the return value is the same as the return value from `fn`. Each element in the structure can be either [`tf.distribute.DistributedValues`](distributedvalues) or `Tensor` objects (for example, if running on a single replica). | ### `scope` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L863-L955) ``` scope() ``` Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # Variable created inside scope: with strategy.scope(): mirrored_variable = tf.Variable(1.) mirrored_variable MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0> } # Variable created outside scope: regular_variable = tf.Variable(1.) regular_variable <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> ``` *What happens when Strategy.scope is entered?* * `strategy` is installed in the global context as the "current" strategy. Inside this scope, [`tf.distribute.get_strategy()`](get_strategy) will now return this strategy. Outside this scope, it returns the default no-op strategy. * Entering the scope also enters the "cross-replica context". See [`tf.distribute.StrategyExtended`](strategyextended) for an explanation on cross-replica and replica contexts. * Variable creation inside `scope` is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like `MirroredStrategy`, `TPUStrategy` and `MultiWorkerMirroredStrategy` create variables replicated on each replica, whereas `ParameterServerStrategy` creates variables on the parameter servers. This is done using a custom [`tf.variable_creator_scope`](../variable_creator_scope).
* In some strategies, a default device scope may also be entered: in `MultiWorkerMirroredStrategy`, a default device scope of "/CPU:0" is entered on each worker. > > **Note:** Entering a scope does not automatically distribute a computation, except in the case of a high-level training framework like Keras `model.fit`. If you're not using `model.fit`, you need to use the `strategy.run` API to explicitly distribute that computation. See an example in the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training). > *What should be in scope and what should be outside?* There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). * Anything that creates variables that should be distributed variables must be called in a `strategy.scope`. This can be accomplished either by directly calling the variable-creating function within the scope context, or by relying on another API like `strategy.run` or [`keras.Model.fit`](../keras/model#fit) to automatically enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Some common objects that create variables in TF are Models, Optimizers, Metrics. Such objects should always be initialized in the scope, and any functions that may lazily create variables (e.g., `Model.__call__()`, tracing a [`tf.function`](../function), etc.) should similarly be called within scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the `strategy.scope` can also work seamlessly, without the user having to enter the scope. * Some strategy APIs (such as `strategy.run` and `strategy.reduce`) which require being in a strategy's scope enter the scope automatically, which means when using those APIs you don't need to explicitly enter the scope yourself. * When a [`tf.keras.Model`](../keras/model) is created inside a `strategy.scope`, the Model object captures the scope information. When high-level training framework methods such as `model.compile`, `model.fit`, etc. are then called, the captured scope will be automatically entered, and the associated strategy will be used to distribute the training etc. See a detailed example in [distributed keras tutorial](https://www.tensorflow.org/tutorials/distribute/keras). WARNING: Simply calling `model(..)` does not automatically enter the captured scope -- only high-level training framework APIs support this behavior: `model.compile`, `model.fit`, `model.evaluate`, `model.predict` and `model.save` can all be called inside or outside the scope. * The following can be either inside or outside the scope: + Creating the input datasets + Defining [`tf.function`](../function)s that represent your training step + Saving APIs such as [`tf.saved_model.save`](../saved_model/save). Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. + Checkpoint saving. As mentioned above - `checkpoint.restore` may sometimes need to be inside scope if it creates variables. | Returns | | A context manager. |
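To make this split concrete, here is a minimal custom-training sketch (not part of the original reference) that follows the rules above: the model and optimizer, which create variables, are built inside the scope, while the input pipeline and the step function are defined outside it.

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

# Variable-creating objects (model, optimizer) are built inside the scope.
with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
  optimizer = tf.keras.optimizers.SGD()

# Input datasets and step functions can be defined outside the scope.
global_batch_size = 4
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(8).batch(
    global_batch_size)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(inputs):
  features, labels = inputs
  with tf.GradientTape() as tape:
    per_example_loss = tf.square(model(features) - labels)
    # Scale by the global batch size so gradients sum correctly
    # across replicas.
    loss = tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=global_batch_size)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss

for batch in dist_dataset:
  # strategy.run enters the scope automatically, as noted above.
  strategy.run(train_step, args=(batch,))
```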
tensorflow tf.distribute.ReductionToOneDevice tf.distribute.ReductionToOneDevice ================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L579-L643) | A CrossDeviceOps implementation that copies values to one device to reduce. Inherits From: [`CrossDeviceOps`](crossdeviceops) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.ReductionToOneDevice`](https://www.tensorflow.org/api_docs/python/tf/distribute/ReductionToOneDevice) ``` tf.distribute.ReductionToOneDevice( reduce_to_device=None, accumulation_fn=None ) ``` This implementation always copies values to one device to reduce them, then broadcasts the reduced values to the destinations. It doesn't support efficient batching. Here is how you can use `ReductionToOneDevice` in [`tf.distribute.MirroredStrategy`](mirroredstrategy): ``` strategy = tf.distribute.MirroredStrategy( cross_device_ops=tf.distribute.ReductionToOneDevice()) ``` | Args | | `reduce_to_device` | the intermediate device to reduce to. If None, reduce to the first device in `destinations` of the `reduce` method. | | `accumulation_fn` | a function that does accumulation. If None, [`tf.math.add_n`](../math/add_n) is used. | Methods ------- ### `batch_reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L397-L444) ``` batch_reduce( reduce_op, value_destination_pairs, options=None ) ``` Reduce values to destinations in batches. See [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to). This can only be called in the cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `value_destination_pairs` | a sequence of (value, destinations) pairs. See [`tf.distribute.CrossDeviceOps.reduce`](crossdeviceops#reduce) for descriptions. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A list of [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues), one per pair in `value_destination_pairs`. | | Raises | | `ValueError` | if `value_destination_pairs` is not an iterable of tuples of [`tf.distribute.DistributedValues`](distributedvalues) and destinations. | ### `broadcast` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L446-L463) ``` broadcast( tensor, destinations ) ``` Broadcast `tensor` to `destinations`. This can only be called in the cross-replica context. | Args | | `tensor` | a [`tf.Tensor`](../tensor) like object. The value to broadcast. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor) alike object, or a device string. It specifies the devices to broadcast to. Note that if it's a [`tf.Variable`](../variable), the value is broadcasted to the devices of that variable; this method doesn't update the variable. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues).
| ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L271-L317) ``` reduce( reduce_op, per_replica_value, destinations, options=None ) ``` Reduce `per_replica_value` to `destinations`. See [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to). This can only be called in the cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `per_replica_value` | a [`tf.distribute.DistributedValues`](distributedvalues), or a [`tf.Tensor`](../tensor) like object. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor) alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to `value` and `destinations`. Note that if it's a [`tf.Variable`](../variable), the value is reduced to the devices of that variable, and this method doesn't update the variable. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues). | | Raises | | `ValueError` | if per\_replica\_value can't be converted to a [`tf.distribute.DistributedValues`](distributedvalues) or if destinations is not a string, [`tf.Variable`](../variable) or [`tf.distribute.DistributedValues`](distributedvalues). | tensorflow Module: tf.distribute.coordinator Module: tf.distribute.coordinator ================================= Public API for tf.distribute.coordinator namespace. Classes ------- [`class ClusterCoordinator`](experimental/coordinator/clustercoordinator): An object to schedule and coordinate remote function execution. [`class PerWorkerValue`](experimental/coordinator/perworkervalues): A container that holds a list of values, one value per worker. [`class RemoteValue`](experimental/coordinator/remotevalue): An asynchronously available value of a scheduled function. tensorflow tf.distribute.InputOptions tf.distribute.InputOptions ========================== Run options for `experimental_distribute_dataset` and `experimental_distribute_datasets_from_function`. ``` tf.distribute.InputOptions( experimental_fetch_to_device=None, experimental_replication_mode=tf.distribute.InputReplicationMode.PER_WORKER, experimental_place_dataset_on_device=False, experimental_per_replica_buffer_size=1 ) ``` This can be used to hold some strategy-specific configs. ``` # Setup TPUStrategy resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(resolver) tf.tpu.experimental.initialize_tpu_system(resolver) strategy = tf.distribute.TPUStrategy(resolver) dataset = tf.data.Dataset.range(16) distributed_dataset_on_host = ( strategy.experimental_distribute_dataset( dataset, tf.distribute.InputOptions( experimental_replication_mode= tf.distribute.InputReplicationMode.PER_WORKER, experimental_place_dataset_on_device=False, experimental_per_replica_buffer_size=1))) ``` | Attributes | | `experimental_fetch_to_device` | Boolean. If True, dataset elements will be prefetched to accelerator device memory. When False, dataset elements are prefetched to host device memory. Must be False when using the TPUEmbedding API. experimental\_fetch\_to\_device can only be used with experimental\_replication\_mode=PER\_WORKER.
The default behavior is the same as setting it to True. | | `experimental_replication_mode` | Replication mode for the input function. Currently, InputReplicationMode.PER\_REPLICA is only supported with `tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function`. The default value is InputReplicationMode.PER\_WORKER. | | `experimental_place_dataset_on_device` | Boolean. Defaults to False. When True, the dataset will be placed on the device, otherwise it will remain on the host. experimental\_place\_dataset\_on\_device=True can only be used with experimental\_replication\_mode=PER\_REPLICA | | `experimental_per_replica_buffer_size` | Integer. Defaults to 1. Indicates the prefetch buffer size in the replica device memory. Users can set it to 0 to completely disable prefetching behavior, or to a number greater than 1 to use a larger buffer size. Note that this option is still valid with `experimental_fetch_to_device=False`. | tensorflow tf.distribute.StrategyExtended tf.distribute.StrategyExtended ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L2038-L2737) | Additional APIs for algorithms that need to be distribution-aware. ``` tf.distribute.StrategyExtended( container_strategy ) ``` > > **Note:** For most usage of [`tf.distribute.Strategy`](strategy), there should be no need to call these methods, since TensorFlow libraries (such as optimizers) already call these methods when needed on your behalf. > Some common use cases of functions on this page: * *Locality* [`tf.distribute.DistributedValues`](distributedvalues) can have the same *locality* as a *distributed variable*, which leads to a mirrored value residing on the same devices as the variable (as opposed to the compute devices). Such values may be passed to a call to [`tf.distribute.StrategyExtended.update`](strategyextended#update) to update the value of a variable. You may use [`tf.distribute.StrategyExtended.colocate_vars_with`](strategyextended#colocate_vars_with) to give a variable the same locality as another variable. You may convert a "PerReplica" value to a variable's locality by using [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to) or [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to). * *How to update a distributed variable* A distributed variable is a variable created on multiple devices. As discussed in the [glossary](https://www.tensorflow.org/api_docs/python/tf/distribute), mirrored variables and SyncOnRead variables are two examples. The standard pattern for updating distributed variables is to: 1. In your function passed to [`tf.distribute.Strategy.run`](strategy#run), compute a list of (update, variable) pairs. For example, the update might be a gradient of the loss with respect to the variable. 2. Switch to cross-replica mode by calling `tf.distribute.get_replica_context().merge_call()` with the updates and variables as arguments. 3. Call [`tf.distribute.StrategyExtended.reduce_to(VariableAggregation.SUM, t, v)`](strategyextended#reduce_to) (for one variable) or [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to) (for a list of variables) to sum the updates. 4. Call [`tf.distribute.StrategyExtended.update(v)`](strategyextended#update) for each variable to update its value (see the sketch below).
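As a minimal sketch of these four steps (adapted from the `reduce_to` example further down on this page; the per-replica update here is just a constant standing in for a gradient):

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

with strategy.scope():
  v = tf.Variable(0.)

@tf.function
def step_fn(var):
  def merge_fn(strategy, update, var):
    # Step 3: sum the per-replica updates (cross-replica context).
    reduced = strategy.extended.reduce_to(
        tf.distribute.ReduceOp.SUM, update, destinations=var)
    # Step 4: apply the reduced update to each component variable.
    strategy.extended.update(
        var, lambda var, u: var.assign_add(u), args=(reduced,))

  # Step 1: compute the per-replica update (a constant here).
  update = tf.identity(1.)
  # Step 2: switch to cross-replica mode via merge_call.
  tf.distribute.get_replica_context().merge_call(
      merge_fn, args=(update, var))

strategy.run(step_fn, args=(v,))
# v is now 2.0 on each replica (1.0 from each of the 2 replicas, summed).
```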
Steps 2 through 4 are done automatically by class [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) if you call its [`tf.keras.optimizers.Optimizer.apply_gradients`](../keras/optimizers/optimizer#apply_gradients) method in a replica context. In fact, a higher-level solution to update a distributed variable is to call `assign` on the variable, as you would do with a regular [`tf.Variable`](../variable). You can call the method in both *replica context* and *cross-replica context*. For a *mirrored variable*, calling `assign` in *replica context* requires you to specify the `aggregation` type in the variable constructor. In that case, the context switching and sync described in steps 2 through 4 are handled for you. If you call `assign` on a *mirrored variable* in *cross-replica context*, you can only assign a single value or assign values from another mirrored variable or a mirrored [`tf.distribute.DistributedValues`](distributedvalues). For a *SyncOnRead variable*, in *replica context*, you can simply call `assign` on it and no aggregation happens under the hood. In *cross-replica context*, you can only assign a single value to a SyncOnRead variable. One example case is restoring from a checkpoint: if the `aggregation` type of the variable is [`tf.VariableAggregation.SUM`](../variableaggregation#SUM), it is assumed that replica values were added before checkpointing, so at the time of restoring, the value is divided by the number of replicas and then assigned to each replica; if the `aggregation` type is [`tf.VariableAggregation.MEAN`](../variableaggregation#MEAN), the value is assigned to each replica directly. | Attributes | | `experimental_require_static_shapes` | Returns `True` if static shape is required; `False` otherwise. | | `parameter_devices` | Returns the tuple of all devices used to place variables. | | `worker_devices` | Returns the tuple of all devices used for compute replica execution. | Methods ------- ### `batch_reduce_to` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L2386-L2456) ``` batch_reduce_to( reduce_op, value_destination_pairs, options=None ) ``` Combine multiple `reduce_to` calls into one for faster execution. Similar to `reduce_to`, but accepts a list of (value, destinations) pairs. It's more efficient than reducing each value separately. This API can currently only be called in the cross-replica context. Other variants to reduce values across replicas are: * [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to): the non-batch version of this API. * [`tf.distribute.ReplicaContext.all_reduce`](replicacontext#all_reduce): the counterpart of this API in replica context. It supports both batched and non-batched all-reduce. * [`tf.distribute.Strategy.reduce`](strategy#reduce): a more convenient method to reduce to the host in cross-replica context. See `reduce_to` for more information. ``` @tf.function def step_fn(var): def merge_fn(strategy, value, var): # All-reduce the value. Note that `value` here is a # `tf.distribute.DistributedValues`. reduced = strategy.extended.batch_reduce_to( tf.distribute.ReduceOp.SUM, [(value, var)])[0] strategy.extended.update(var, lambda var, value: var.assign(value), args=(reduced,)) value = tf.identity(1.) tf.distribute.get_replica_context().merge_call(merge_fn, args=(value, var)) def run(strategy): with strategy.scope(): v = tf.Variable(0.)
strategy.run(step_fn, args=(v,)) return v run(tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])) MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=2.0> } run(tf.distribute.experimental.CentralStorageStrategy( compute_devices=["GPU:0", "GPU:1"], parameter_device="CPU:0")) <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0> run(tf.distribute.OneDeviceStrategy("GPU:0")) <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> ``` | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". | | `value_destination_pairs` | a sequence of (value, destinations) pairs. See `tf.distribute.Strategy.reduce_to` for descriptions. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). Options to perform collective operations. This overrides the default options if the [`tf.distribute.Strategy`](strategy) takes one in the constructor. See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details of the options. | | Returns | | A list of reduced values, one per pair in `value_destination_pairs`. | ### `colocate_vars_with` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L2227-L2272) ``` colocate_vars_with( colocate_with_variable ) ``` Scope that controls which devices variables will be created on. No operations should be added to the graph inside this scope; it should only be used when creating variables (some implementations work by changing variable creation, others work by using a tf.compat.v1.colocate\_with() scope). This may only be used inside `self.scope()`. #### Example usage: ``` with strategy.scope(): var1 = tf.Variable(...) with strategy.extended.colocate_vars_with(var1): # var2 and var3 will be created on the same device(s) as var1 var2 = tf.Variable(...) var3 = tf.Variable(...) def fn(v1, v2, v3): # operates on v1 from var1, v2 from var2, and v3 from var3 # `fn` runs on every device `var1` is on, `var2` and `var3` will be there # too. strategy.extended.update(var1, fn, args=(var2, var3)) ``` | Args | | `colocate_with_variable` | A variable created in this strategy's `scope()`. Variables created while in the returned context manager will be on the same set of devices as `colocate_with_variable`. | | Returns | | A context manager. | ### `reduce_to` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L2298-L2381) ``` reduce_to( reduce_op, value, destinations, options=None ) ``` Combine (via e.g. sum or mean) values across replicas. `reduce_to` aggregates [`tf.distribute.DistributedValues`](distributedvalues) and distributed variables. It supports both dense values and [`tf.IndexedSlices`](../indexedslices). This API can currently only be called in the cross-replica context. Other variants to reduce values across replicas are: * [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to): the batch version of this API. * [`tf.distribute.ReplicaContext.all_reduce`](replicacontext#all_reduce): the counterpart of this API in replica context. It supports both batched and non-batched all-reduce. * [`tf.distribute.Strategy.reduce`](strategy#reduce): a more convenient method to reduce to the host in cross-replica context.
`destinations` specifies where to reduce the value to, e.g. "GPU:0". You can also pass in a `Tensor`, and the destinations will be the device of that tensor. For all-reduce, pass the same to `value` and `destinations`. It can be used in [`tf.distribute.ReplicaContext.merge_call`](replicacontext#merge_call) to write code that works for all [`tf.distribute.Strategy`](strategy). ``` @tf.function def step_fn(var): def merge_fn(strategy, value, var): # All-reduce the value. Note that `value` here is a # `tf.distribute.DistributedValues`. reduced = strategy.extended.reduce_to(tf.distribute.ReduceOp.SUM, value, destinations=var) strategy.extended.update(var, lambda var, value: var.assign(value), args=(reduced,)) value = tf.identity(1.) tf.distribute.get_replica_context().merge_call(merge_fn, args=(value, var)) def run(strategy): with strategy.scope(): v = tf.Variable(0.) strategy.run(step_fn, args=(v,)) return v run(tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])) MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=2.0> } run(tf.distribute.experimental.CentralStorageStrategy( compute_devices=["GPU:0", "GPU:1"], parameter_device="CPU:0")) <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0> run(tf.distribute.OneDeviceStrategy("GPU:0")) <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> ``` | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". | | `value` | a [`tf.distribute.DistributedValues`](distributedvalues), or a [`tf.Tensor`](../tensor) like object. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor) alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to `value` and `destinations`. Note that if it's a [`tf.Variable`](../variable), the value is reduced to the devices of that variable, and this method doesn't update the variable. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). Options to perform collective operations. This overrides the default options if the [`tf.distribute.Strategy`](strategy) takes one in the constructor. See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details of the options. | | Returns | | A tensor or value reduced to `destinations`. | ### `update` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L2557-L2633) ``` update( var, fn, args=(), kwargs=None, group=True ) ``` Run `fn` to update `var` using inputs mirrored to the same devices. [`tf.distribute.StrategyExtended.update`](strategyextended#update) takes a distributed variable `var` to be updated, an update function `fn`, and `args` and `kwargs` for `fn`. It applies `fn` to each component variable of `var` and passes corresponding values from `args` and `kwargs`. Neither `args` nor `kwargs` may contain per-replica values. If they contain mirrored values, they will be unwrapped before calling `fn`. For example, `fn` can be `assign_add` and `args` can be a mirrored DistributedValues where each component contains the value to be added to this mirrored variable `var`. 
Calling `update` will call `assign_add` on each component variable of `var` with the corresponding tensor value on that device. #### Example usage: ``` strategy = tf.distribute.MirroredStrategy(['GPU:0', 'GPU:1']) # With 2 devices with strategy.scope(): v = tf.Variable(5.0, aggregation=tf.VariableAggregation.SUM) def update_fn(v): return v.assign(1.0) result = strategy.extended.update(v, update_fn) # result is # Mirrored:{ # 0: tf.Tensor(1.0, shape=(), dtype=float32), # 1: tf.Tensor(1.0, shape=(), dtype=float32) # } ``` If `var` is mirrored across multiple devices, then this method implements the following logic: ``` results = {} for device, v in var: with tf.device(device): # args and kwargs will be unwrapped if they are mirrored. results[device] = fn(v, *args, **kwargs) return merged(results) ``` Otherwise, this method returns `fn(var, *args, **kwargs)` colocated with `var`. | Args | | `var` | Variable, possibly mirrored to multiple devices, to operate on. | | `fn` | Function to call. Should take the variable as the first argument. | | `args` | Tuple or list. Additional positional arguments to pass to `fn()`. | | `kwargs` | Dict with keyword arguments to pass to `fn()`. | | `group` | Boolean. Defaults to True. If False, the return value will be unwrapped. | | Returns | | By default, the merged return value of `fn` across all replicas. The merged result has dependencies to make sure that if it is evaluated at all, the side effects (updates) will happen on every replica. If instead "group=False" is specified, this function will return a nest of lists where each list has an element per replica, and the caller is responsible for ensuring all elements are executed. | ### `value_container` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L2659-L2672) ``` value_container( value ) ``` Returns the container that this per-replica `value` belongs to. | Args | | `value` | A value returned by `run()` or a variable created in `scope()`. | | Returns | | A container that `value` belongs to. If the value does not belong to any container (including the case where the container has been destroyed), returns the value itself. `value in experimental_local_results(value_container(value))` will always be true. | ### `variable_created_in_scope` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L2201-L2225) ``` variable_created_in_scope( v ) ``` Tests whether `v` was created while this strategy scope was active. Variables created inside the strategy scope are "owned" by it: ``` strategy = tf.distribute.MirroredStrategy() with strategy.scope(): v = tf.Variable(1.) strategy.extended.variable_created_in_scope(v) True ``` Variables created outside the strategy are not owned by it: ``` strategy = tf.distribute.MirroredStrategy() v = tf.Variable(1.) strategy.extended.variable_created_in_scope(v) False ``` | Args | | `v` | A [`tf.Variable`](../variable) instance. | | Returns | | True if `v` was created inside the scope, False if not. |
tensorflow tf.distribute.DistributedDataset tf.distribute.DistributedDataset ================================ Represents a dataset distributed among devices and machines. A [`tf.distribute.DistributedDataset`](distributeddataset) could be thought of as a "distributed" dataset. When you use the [`tf.distribute`](../distribute) API to scale training to multiple devices or machines, you also need to distribute the input data, which leads to a [`tf.distribute.DistributedDataset`](distributeddataset) instance, instead of a [`tf.data.Dataset`](../data/dataset) instance in the non-distributed case. In TF 2.x, [`tf.distribute.DistributedDataset`](distributeddataset) objects are Python iterables. > > **Note:** [`tf.distribute.DistributedDataset`](distributeddataset) instances are *not* of type [`tf.data.Dataset`](../data/dataset). It only supports two usages we will mention below: iteration and `element_spec`. We don't support any other APIs to transform or inspect the dataset. > There are two APIs to create a [`tf.distribute.DistributedDataset`](distributeddataset) object: [`tf.distribute.Strategy.experimental_distribute_dataset(dataset)`](strategy#experimental_distribute_dataset) and [`tf.distribute.Strategy.distribute_datasets_from_function(dataset_fn)`](strategy#distribute_datasets_from_function). *When to use which?* When you have a [`tf.data.Dataset`](../data/dataset) instance, and the regular batch splitting (i.e. re-batch the input [`tf.data.Dataset`](../data/dataset) instance with a new batch size that is equal to the global batch size divided by the number of replicas in sync) and autosharding (i.e. the [`tf.data.experimental.AutoShardPolicy`](../data/experimental/autoshardpolicy) options) work for you, use the former API. Otherwise, if you are *not* using a canonical [`tf.data.Dataset`](../data/dataset) instance, or you would like to customize the batch splitting or sharding, you can wrap this logic in a `dataset_fn` and use the latter API. Both APIs handle prefetching to the device for the user. For more details and examples, follow the links to the APIs. There are two main usages of a `DistributedDataset` object: 1. Iterate over it to generate the input for a single device or multiple devices, which is a [`tf.distribute.DistributedValues`](distributedvalues) instance. To do this, you can: * use a Pythonic for-loop construct: ``` global_batch_size = 4 strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(4).batch(global_batch_size) dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def train_step(input): features, labels = input return labels - 0.3 * features for x in dist_dataset: # train_step trains the model using the dataset elements loss = strategy.run(train_step, args=(x,)) print("Loss is", loss) Loss is PerReplica:{ 0: tf.Tensor( [[0.7] [0.7]], shape=(2, 1), dtype=float32), 1: tf.Tensor( [[0.7] [0.7]], shape=(2, 1), dtype=float32) } ``` Placing the loop inside a [`tf.function`](../function) will give a performance boost. However, `break` and `return` are currently not supported if the loop is placed inside a [`tf.function`](../function).
We also don't support placing the loop inside a [`tf.function`](../function) when using [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy) or [`tf.distribute.experimental.TPUStrategy`](experimental/tpustrategy) with multiple workers. * use `__iter__` to create an explicit iterator, which is of type [`tf.distribute.DistributedIterator`](distributediterator) ``` global_batch_size = 4 strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) train_dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(50).batch(global_batch_size) train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset) @tf.function def distributed_train_step(dataset_inputs): def train_step(input): loss = tf.constant(0.1) return loss per_replica_losses = strategy.run(train_step, args=(dataset_inputs,)) return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,axis=None) EPOCHS = 2 STEPS = 3 for epoch in range(EPOCHS): total_loss = 0.0 num_batches = 0 dist_dataset_iterator = iter(train_dist_dataset) for _ in range(STEPS): total_loss += distributed_train_step(next(dist_dataset_iterator)) num_batches += 1 average_train_loss = total_loss / num_batches template = ("Epoch {}, Loss: {:.4f}") print (template.format(epoch+1, average_train_loss)) Epoch 1, Loss: 0.2000 Epoch 2, Loss: 0.2000 ``` To achieve a performance improvement, you can also wrap the `strategy.run` call with a [`tf.range`](../range) inside a [`tf.function`](../function). This runs multiple steps in a [`tf.function`](../function). Autograph will convert it to a [`tf.while_loop`](../while_loop) on the worker. However, it is less flexible compared with running a single step inside [`tf.function`](../function). For example, you cannot run things eagerly or arbitrary Python code within the steps. 2. Inspect the [`tf.TypeSpec`](../typespec) of the data generated by `DistributedDataset`. [`tf.distribute.DistributedDataset`](distributeddataset) generates [`tf.distribute.DistributedValues`](distributedvalues) as input to the devices. If you pass the input to a [`tf.function`](../function) and would like to specify the shape and type of each Tensor argument to the function, you can pass a [`tf.TypeSpec`](../typespec) object to the `input_signature` argument of the [`tf.function`](../function). To get the [`tf.TypeSpec`](../typespec) of the input, you can use the `element_spec` property of the [`tf.distribute.DistributedDataset`](distributeddataset) or [`tf.distribute.DistributedIterator`](distributediterator) object. For example: ``` global_batch_size = 4 epochs = 1 steps_per_epoch = 1 mirrored_strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensors(([2.])).repeat(100).batch(global_batch_size) dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset) @tf.function(input_signature=[dist_dataset.element_spec]) def train_step(per_replica_inputs): def step_fn(inputs): return tf.square(inputs) return mirrored_strategy.run(step_fn, args=(per_replica_inputs,)) for _ in range(epochs): iterator = iter(dist_dataset) for _ in range(steps_per_epoch): output = train_step(next(iterator)) print(output) PerReplica:{ 0: tf.Tensor( [[4.] [4.]], shape=(2, 1), dtype=float32), 1: tf.Tensor( [[4.]
[4.]], shape=(2, 1), dtype=float32) } ``` Visit the [tutorial](https://www.tensorflow.org/tutorials/distribute/input) on distributed input for more examples and caveats. | Attributes | | `element_spec` | The type specification of an element of this [`tf.distribute.DistributedDataset`](distributeddataset). ``` global_batch_size = 16 strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensors(([1.],[2])).repeat(100).batch(global_batch_size) dist_dataset = strategy.experimental_distribute_dataset(dataset) dist_dataset.element_spec (PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.float32, name=None), TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)), PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.int32, name=None), TensorSpec(shape=(None, 1), dtype=tf.int32, name=None))) ``` | Methods ------- ### `__iter__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/input_lib.py#L383-L405) ``` __iter__() ``` Creates an iterator for the [`tf.distribute.DistributedDataset`](distributeddataset). The returned iterator implements the Python Iterator protocol. #### Example usage: ``` global_batch_size = 4 strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4]).repeat().batch(global_batch_size) distributed_iterator = iter(strategy.experimental_distribute_dataset(dataset)) print(next(distributed_iterator)) PerReplica:{ 0: tf.Tensor([1 2], shape=(2,), dtype=int32), 1: tf.Tensor([3 4], shape=(2,), dtype=int32) } ``` | Returns | | A [`tf.distribute.DistributedIterator`](distributediterator) instance for the given [`tf.distribute.DistributedDataset`](distributeddataset) object to enumerate over the distributed data. | tensorflow Module: tf.distribute.experimental Module: tf.distribute.experimental ================================== Experimental Distribution Strategy library. Modules ------- [`coordinator`](experimental/coordinator) module: Public API for tf.distribute.experimental.coordinator namespace. [`partitioners`](experimental/partitioners) module: Public API for tf.distribute.experimental.partitioners namespace. [`rpc`](experimental/rpc) module: Public API for tf.distribute.experimental.rpc namespace. Classes ------- [`class CentralStorageStrategy`](experimental/centralstoragestrategy): A one-machine strategy that puts all variables on a single device. [`class CollectiveCommunication`](experimental/communicationimplementation): Cross device communication implementation. [`class CollectiveHints`](experimental/collectivehints): Hints for collective operations like AllReduce. [`class CommunicationImplementation`](experimental/communicationimplementation): Cross device communication implementation. [`class CommunicationOptions`](experimental/communicationoptions): Options for cross device communications like All-reduce. [`class MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy): A distribution strategy for synchronous training on multiple workers. [`class ParameterServerStrategy`](experimental/parameterserverstrategy): A multi-worker tf.distribute strategy with parameter servers. [`class TPUStrategy`](experimental/tpustrategy): Synchronous training on TPUs and TPU Pods. [`class ValueContext`](experimental/valuecontext): A class wrapping information needed by a distribute function.
tensorflow tf.distribute.NcclAllReduce tf.distribute.NcclAllReduce =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L956-L989) | NCCL all-reduce implementation of CrossDeviceOps. Inherits From: [`CrossDeviceOps`](crossdeviceops) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.NcclAllReduce`](https://www.tensorflow.org/api_docs/python/tf/distribute/NcclAllReduce) ``` tf.distribute.NcclAllReduce( num_packs=1 ) ``` It uses NVIDIA NCCL for all-reduce. For the batch API, tensors will be repacked or aggregated for more efficient cross-device transportation. For reduces that are not all-reduce, it falls back to [`tf.distribute.ReductionToOneDevice`](reductiontoonedevice). Here is how you can use `NcclAllReduce` in [`tf.distribute.MirroredStrategy`](mirroredstrategy): ``` strategy = tf.distribute.MirroredStrategy( cross_device_ops=tf.distribute.NcclAllReduce()) ``` | Args | | `num_packs` | a non-negative integer. The number of packs to split values into. If zero, no packing will be done. | | Raises | | `ValueError` | if `num_packs` is negative. | Methods ------- ### `batch_reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L397-L444) ``` batch_reduce( reduce_op, value_destination_pairs, options=None ) ``` Reduce values to destinations in batches. See [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to). This can only be called in the cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `value_destination_pairs` | a sequence of (value, destinations) pairs. See [`tf.distribute.CrossDeviceOps.reduce`](crossdeviceops#reduce) for descriptions. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A list of [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues), one per pair in `value_destination_pairs`. | | Raises | | `ValueError` | if `value_destination_pairs` is not an iterable of tuples of [`tf.distribute.DistributedValues`](distributedvalues) and destinations. | ### `broadcast` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L446-L463) ``` broadcast( tensor, destinations ) ``` Broadcast `tensor` to `destinations`. This can only be called in the cross-replica context. | Args | | `tensor` | a [`tf.Tensor`](../tensor) like object. The value to broadcast. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor) alike object, or a device string. It specifies the devices to broadcast to. Note that if it's a [`tf.Variable`](../variable), the value is broadcasted to the devices of that variable; this method doesn't update the variable. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues).
| ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cross_device_ops.py#L271-L317) ``` reduce( reduce_op, per_replica_value, destinations, options=None ) ``` Reduce `per_replica_value` to `destinations`. See [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to). This can only be called in the cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) specifying how values should be combined. | | `per_replica_value` | a [`tf.distribute.DistributedValues`](distributedvalues), or a [`tf.Tensor`](../tensor) like object. | | `destinations` | a [`tf.distribute.DistributedValues`](distributedvalues), a [`tf.Variable`](../variable), a [`tf.Tensor`](../tensor) alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to `value` and `destinations`. Note that if it's a [`tf.Variable`](../variable), the value is reduced to the devices of that variable, and this method doesn't update the variable. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details. | | Returns | | A [`tf.Tensor`](../tensor) or [`tf.distribute.DistributedValues`](distributedvalues). | | Raises | | `ValueError` | if per\_replica\_value can't be converted to a [`tf.distribute.DistributedValues`](distributedvalues) or if destinations is not a string, [`tf.Variable`](../variable) or [`tf.distribute.DistributedValues`](distributedvalues). | tensorflow tf.distribute.RunOptions tf.distribute.RunOptions ======================== Run options for `strategy.run`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.RunOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/RunOptions) ``` tf.distribute.RunOptions( experimental_enable_dynamic_batch_size=True, experimental_bucketizing_dynamic_shape=False, experimental_xla_options=None ) ``` This can be used to hold some strategy-specific configs. | Attributes | | `experimental_enable_dynamic_batch_size` | Boolean. Only applies to TPUStrategy. Defaults to True. If True, TPUStrategy will enable the dynamic padder to support dynamic batch sizes for the inputs. Otherwise only static shape inputs are allowed. | | `experimental_bucketizing_dynamic_shape` | Boolean. Only applies to TPUStrategy. Defaults to False. If True, TPUStrategy will automatically bucketize inputs passed into `run` if the input shape is dynamic. This is a performance optimization to reduce XLA recompilation, which should not have an impact on correctness. | | `experimental_xla_options` | A [`tf.tpu.XLAOptions`](../tpu/xlaoptions) instance. Only applies to TPUStrategy. Controls the XLA compiling options on TPUs. Defaults to None. | tensorflow tf.distribute.Server tf.distribute.Server ==================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L96-L239) | An in-process TensorFlow server, for use in distributed training. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.distribute.Server`](https://www.tensorflow.org/api_docs/python/tf/distribute/Server), [`tf.compat.v1.train.Server`](https://www.tensorflow.org/api_docs/python/tf/distribute/Server) ``` tf.distribute.Server( server_or_cluster_def, job_name=None, task_index=None, protocol=None, config=None, start=True ) ``` A [`tf.distribute.Server`](server) instance encapsulates a set of devices and a [`tf.compat.v1.Session`](../compat/v1/session) target that can participate in distributed training. A server belongs to a cluster (specified by a [`tf.train.ClusterSpec`](../train/clusterspec)), and corresponds to a particular task in a named job. The server can communicate with any other server in the same cluster. | Args | | `server_or_cluster_def` | A [`tf.train.ServerDef`](../train/serverdef) or [`tf.train.ClusterDef`](../train/clusterdef) protocol buffer, or a [`tf.train.ClusterSpec`](../train/clusterspec) object, describing the server to be created and/or the cluster of which it is a member. | | `job_name` | (Optional.) Specifies the name of the job of which the server is a member. Defaults to the value in `server_or_cluster_def`, if specified. | | `task_index` | (Optional.) Specifies the task index of the server in its job. Defaults to the value in `server_or_cluster_def`, if specified. Otherwise defaults to 0 if the server's job has only one task. | | `protocol` | (Optional.) Specifies the protocol to be used by the server. Acceptable values include `"grpc", "grpc+verbs"`. Defaults to the value in `server_or_cluster_def`, if specified. Otherwise defaults to `"grpc"`. | | `config` | (Optional.) A [`tf.compat.v1.ConfigProto`](../compat/v1/configproto) that specifies default configuration options for all sessions that run on this server. | | `start` | (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to `True`. | | Raises | | [`tf.errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Or one of its subclasses if an error occurs while creating the TensorFlow server. | | Attributes | | `server_def` | Returns the [`tf.train.ServerDef`](../train/serverdef) for this server. | | `target` | Returns the target for a [`tf.compat.v1.Session`](../compat/v1/session) to connect to this server. To create a [`tf.compat.v1.Session`](../compat/v1/session) that connects to this server, use the following snippet: ``` server = tf.distribute.Server(...) with tf.compat.v1.Session(server.target): # ... ``` | Methods ------- ### `create_local_server` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L216-L239) ``` @staticmethod create_local_server( config=None, start=True ) ``` Creates a new single-process cluster running on the local host. This method is a convenience wrapper for creating a [`tf.distribute.Server`](server) with a [`tf.train.ServerDef`](../train/serverdef) that specifies a single-process cluster containing a single task in a job called `"local"`. | Args | | `config` | (Optional.) A [`tf.compat.v1.ConfigProto`](../compat/v1/configproto) that specifies default configuration options for all sessions that run on this server. | | `start` | (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to `True`. | | Returns | | A local [`tf.distribute.Server`](server). | ### `join` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L176-L185) ``` join() ``` Blocks until the server has shut down. 
This method currently blocks forever. | Raises | | [`tf.errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Or one of its subclasses if an error occurs while joining the TensorFlow server. | ### `start` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L167-L174) ``` start() ``` Starts this server. | Raises | | [`tf.errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Or one of its subclasses if an error occurs while starting the TensorFlow server. |
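For quick local experimentation, `create_local_server` starts a single-process cluster. A minimal sketch follows; the constant payload is just an assumed example, and eager execution is disabled because `tf.compat.v1.Session` is a graph-mode API:

```
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # Session is a TF1-style, graph-mode API

# Start an in-process, single-task cluster on the local host.
server = tf.distribute.Server.create_local_server()

# Connect a session to this server and run a trivial graph on it.
with tf.compat.v1.Session(server.target) as sess:
    c = tf.constant(42)
    print(sess.run(c))  # -> 42
```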
tensorflow tf.distribute.in_cross_replica_context tf.distribute.in\_cross\_replica\_context ========================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribution_strategy_context.py#L208-L230) | Returns `True` if in a cross-replica context. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.in_cross_replica_context`](https://www.tensorflow.org/api_docs/python/tf/distribute/in_cross_replica_context) ``` tf.distribute.in_cross_replica_context() ``` See [`tf.distribute.get_replica_context`](get_replica_context) for details. ``` assert not tf.distribute.in_cross_replica_context() with strategy.scope(): assert tf.distribute.in_cross_replica_context() def f(): assert not tf.distribute.in_cross_replica_context() strategy.run(f) ``` | Returns | | `True` if in a cross-replica context (`get_replica_context()` returns `None`), or `False` if in a replica context (`get_replica_context()` returns non-`None`). | tensorflow tf.distribute.ReplicaContext tf.distribute.ReplicaContext ============================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L3295-L3542) | A class with a collection of APIs that can be called in a replica context. ``` tf.distribute.ReplicaContext( strategy, replica_id_in_sync_group ) ``` You can use [`tf.distribute.get_replica_context`](get_replica_context), which can only be called inside the function passed to [`tf.distribute.Strategy.run`](strategy#run), to get an instance of `ReplicaContext`. ``` strategy = tf.distribute.MirroredStrategy(['GPU:0', 'GPU:1']) def func(): replica_context = tf.distribute.get_replica_context() return replica_context.replica_id_in_sync_group strategy.run(func) PerReplica:{ 0: <tf.Tensor: shape=(), dtype=int32, numpy=0>, 1: <tf.Tensor: shape=(), dtype=int32, numpy=1> } ``` | Args | | `strategy` | A [`tf.distribute.Strategy`](strategy). | | `replica_id_in_sync_group` | An integer, a `Tensor` or None. Prefer an integer whenever possible to avoid issues with nested [`tf.function`](../function). It accepts a `Tensor` only to be compatible with `tpu.replicate`. | | Attributes | | `devices` | Returns the devices this replica is to be executed on, as a tuple of strings. (deprecated) **Note:** For [`tf.distribute.MirroredStrategy`](mirroredstrategy) and [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](experimental/multiworkermirroredstrategy), this returns a nested list of device strings, e.g., [["GPU:0"]]. | | `num_replicas_in_sync` | Returns number of replicas that are kept in sync. | | `replica_id_in_sync_group` | Returns the id of the replica. This identifies the replica among all replicas that are kept in sync. The value of the replica id can range from 0 to [`tf.distribute.ReplicaContext.num_replicas_in_sync`](replicacontext#num_replicas_in_sync) - 1. **Note:** This is not guaranteed to be the same ID as the XLA replica ID used for low-level operations such as collective\_permute. | | `strategy` | The current [`tf.distribute.Strategy`](strategy) object. | Methods ------- ### `all_gather` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L3299-L3454) ``` all_gather( value, axis, options=None ) ``` All-gathers `value` across all replicas along `axis`. 
> > **Note:** An `all_gather` method can only be called in replica context. For a cross-replica context counterpart, see [`tf.distribute.Strategy.gather`](strategy#gather). All replicas need to participate in the all-gather, otherwise this operation hangs. So if `all_gather` is called in any replica, it must be called in all replicas. > > > **Note:** If there are multiple `all_gather` calls, they need to be executed in the same order on all replicas. Dispatching `all_gather` based on conditions is usually error-prone. > For all strategies except [`tf.distribute.TPUStrategy`](tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal to the `axis` argument. For example, given a [`tf.distribute.DistributedValues`](distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `all_gather(..., axis=1, ...)` on it, but not `all_gather(..., axis=0, ...)` or `all_gather(..., axis=2, ...)`. However, with [`tf.distribute.TPUStrategy`](tpustrategy), all tensors must have exactly the same rank and same shape. > > **Note:** The input `value` must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../expand_dims) before gathering them. > You can pass in a single tensor to all-gather: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def gather_value(): ctx = tf.distribute.get_replica_context() local_value = tf.constant([1, 2, 3]) return ctx.all_gather(local_value, axis=0) result = strategy.run(gather_value) result PerReplica:{ 0: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>, 1: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)> } strategy.experimental_local_results(result) (<tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>, <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>) ``` You can also pass in a nested structure of tensors to all-gather, say, a list: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def gather_nest(): ctx = tf.distribute.get_replica_context() value_1 = tf.constant([1, 2, 3]) value_2 = tf.constant([[1, 2], [3, 4]]) # all_gather a nest of `tf.distribute.DistributedValues` return ctx.all_gather([value_1, value_2], axis=0) result = strategy.run(gather_nest) result [PerReplica:{ 0: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>, 1: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 2), dtype=int32, numpy= array([[1, 2], [3, 4], [1, 2], [3, 4]], dtype=int32)>, 1: <tf.Tensor: shape=(4, 2), dtype=int32, numpy= array([[1, 2], [3, 4], [1, 2], [3, 4]], dtype=int32)> }] strategy.experimental_local_results(result) ([<tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>, <tf.Tensor: shape=(4, 2), dtype=int32, numpy= array([[1, 2], [3, 4], [1, 2], [3, 4]], dtype=int32)>], [<tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>, <tf.Tensor: shape=(4, 2), dtype=int32, numpy= array([[1, 2], [3, 4], [1, 2], [3, 4]], dtype=int32)>]) ``` What if you are all-gathering tensors with different shapes on different replicas? 
Consider the following example with two replicas, where you have `value` as a nested structure consisting of two items to all-gather, `a` and `b`. * On Replica 0, `value` is `{'a': [0], 'b': [[0, 1]]}`. * On Replica 1, `value` is `{'a': [1], 'b': [[2, 3], [4, 5]]}`. * Result for `all_gather` with `axis=0` (on each of the replicas) is: ``` {'a': [1, 2], 'b': [[0, 1], [2, 3], [4, 5]]} ``` | Args | | `value` | a nested structure of [`tf.Tensor`](../tensor) which [`tf.nest.flatten`](../nest/flatten) accepts, or a [`tf.distribute.DistributedValues`](distributedvalues) instance. The structure of the [`tf.Tensor`](../tensor) need to be same on all replicas. The underlying tensor constructs can only be dense tensors with non-zero rank, NOT [`tf.IndexedSlices`](../indexedslices). | | `axis` | 0-D int32 Tensor. Dimension along which to gather. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). Options to perform collective operations. This overrides the default options if the [`tf.distribute.Strategy`](strategy) takes one in the constructor. See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details of the options. | | Returns | | A nested structure of [`tf.Tensor`](../tensor) with the gathered values. The structure is the same as `value`. | ### `all_reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L3171-L3281) ``` all_reduce( reduce_op, value, options=None ) ``` All-reduces `value` across all replicas. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): ctx = tf.distribute.get_replica_context() value = tf.identity(1.) return ctx.all_reduce(tf.distribute.ReduceOp.SUM, value) strategy.experimental_local_results(strategy.run(step_fn)) (<tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>) ``` It supports batched operations. You can pass a list of values and it attempts to batch them when possible. You can also specify `options` to indicate the desired batching behavior, e.g. batch the values into multiple packs so that they can better overlap with computations. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): ctx = tf.distribute.get_replica_context() value1 = tf.identity(1.) value2 = tf.identity(2.) return ctx.all_reduce(tf.distribute.ReduceOp.SUM, [value1, value2]) strategy.experimental_local_results(strategy.run(step_fn)) ([<tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=4.0>], [<tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=4.0>]) ``` Note that all replicas need to participate in the all-reduce, otherwise this operation hangs. Note that if there're multiple all-reduces, they need to execute in the same order on all replicas. Dispatching all-reduce based on conditions is usually error-prone. Known limitation: if `value` contains [`tf.IndexedSlices`](../indexedslices), attempting to compute gradient w.r.t `value` would result in an error. This API currently can only be called in the replica context. Other variants to reduce values across replicas are: * [`tf.distribute.StrategyExtended.reduce_to`](strategyextended#reduce_to): the reduce and all-reduce API in the cross-replica context. * [`tf.distribute.StrategyExtended.batch_reduce_to`](strategyextended#batch_reduce_to): the batched reduce and all-reduce API in the cross-replica context. 
* [`tf.distribute.Strategy.reduce`](strategy#reduce): a more convenient method to reduce to the host in cross-replica context. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". | | `value` | a potentially nested structure of [`tf.Tensor`](../tensor) or [`tf.IndexedSlices`](../indexedslices) which [`tf.nest.flatten`](../nest/flatten) accepts. The structure and the shapes of `value` need to be the same on all replicas. | | `options` | a [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions). Options to perform collective operations. This overrides the default options if the [`tf.distribute.Strategy`](strategy) takes one in the constructor. See [`tf.distribute.experimental.CommunicationOptions`](experimental/communicationoptions) for details of the options. | | Returns | | A nested structure of [`tf.Tensor`](../tensor) with the reduced values. The structure is the same as `value`. | ### `merge_call` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L3070-L3103) ``` merge_call( merge_fn, args=(), kwargs=None ) ``` Merge args across replicas and run `merge_fn` in a cross-replica context. This allows communication and coordination when there are multiple calls to the step\_fn triggered by a call to `strategy.run(step_fn, ...)`. See [`tf.distribute.Strategy.run`](strategy#run) for an explanation. If not inside a distributed scope, this is equivalent to: ``` strategy = tf.distribute.get_strategy() with cross-replica-context(strategy): return merge_fn(strategy, *args, **kwargs) ``` | Args | | `merge_fn` | Function that joins arguments from threads that are given as PerReplica. It accepts a [`tf.distribute.Strategy`](strategy) object as the first argument. | | `args` | List or tuple with positional per-thread arguments for `merge_fn`. | | `kwargs` | Dict with keyword per-thread arguments for `merge_fn`. | | Returns | | The return value of `merge_fn`, except for `PerReplica` values which are unpacked. | tensorflow tf.distribute.InputContext tf.distribute.InputContext ========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L474-L543) | A class wrapping information needed by an input function. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.InputContext`](https://www.tensorflow.org/api_docs/python/tf/distribute/InputContext) ``` tf.distribute.InputContext( num_input_pipelines=1, input_pipeline_id=0, num_replicas_in_sync=1 ) ``` This is a context class that is passed to the user's input function and contains information about the compute replicas and input pipelines. The number of compute replicas (in sync training) helps compute the local batch size from the desired global batch size for each replica. The input pipeline information can be used to return a different subset of the input in each replica (e.g., shard the input pipeline, use a different input source, etc.). | Args | | `num_input_pipelines` | the number of input pipelines in a cluster. | | `input_pipeline_id` | the current input pipeline id, should be an int in [0,`num_input_pipelines`). | | `num_replicas_in_sync` | the number of replicas that are in sync. 
| | Attributes | | `input_pipeline_id` | Returns the input pipeline ID. | | `num_input_pipelines` | Returns the number of input pipelines. | | `num_replicas_in_sync` | Returns the number of compute replicas in sync. | Methods ------- ### `get_per_replica_batch_size` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L521-L539) ``` get_per_replica_batch_size( global_batch_size ) ``` Returns the per-replica batch size. | Args | | `global_batch_size` | the global batch size which should be divisible by `num_replicas_in_sync`. | | Returns | | the per-replica batch size. | | Raises | | `ValueError` | if `global_batch_size` not divisible by `num_replicas_in_sync`. | tensorflow tf.distribute.experimental_set_strategy tf.distribute.experimental\_set\_strategy ========================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribution_strategy_context.py#L274-L320) | Set a [`tf.distribute.Strategy`](strategy) as current without `with strategy.scope()`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.experimental_set_strategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental_set_strategy) ``` tf.distribute.experimental_set_strategy( strategy ) ``` ``` tf.distribute.experimental_set_strategy(strategy1) f() tf.distribute.experimental_set_strategy(strategy2) g() tf.distribute.experimental_set_strategy(None) h() ``` is equivalent to: ``` with strategy1.scope(): f() with strategy2.scope(): g() h() ``` In general, you should use the `with strategy.scope():` API, but this alternative may be convenient in notebooks where you would have to put each cell in a `with strategy.scope():` block. > > **Note:** This should only be called outside of any TensorFlow scope to avoid improper nesting. > | Args | | `strategy` | A [`tf.distribute.Strategy`](strategy) object or None. | | Raises | | `RuntimeError` | If called inside a `with strategy.scope():`. | tensorflow tf.distribute.DistributedIterator tf.distribute.DistributedIterator ================================= An iterator over [`tf.distribute.DistributedDataset`](distributeddataset). [`tf.distribute.DistributedIterator`](distributediterator) is the primary mechanism for enumerating elements of a [`tf.distribute.DistributedDataset`](distributeddataset). It supports the Python Iterator protocol, which means it can be iterated over using a for-loop or by fetching individual elements explicitly via `get_next()`. You can create a [`tf.distribute.DistributedIterator`](distributediterator) by calling `iter` on a [`tf.distribute.DistributedDataset`](distributeddataset) or creating a python loop over a [`tf.distribute.DistributedDataset`](distributeddataset). Visit the [tutorial](https://www.tensorflow.org/tutorials/distribute/input) on distributed input for more examples and caveats. | Attributes | | `element_spec` | The type specification of an element of [`tf.distribute.DistributedIterator`](distributediterator). 
``` global_batch_size = 16 strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.from_tensors(([1.],[2])).repeat(100).batch(global_batch_size) distributed_iterator = iter(strategy.experimental_distribute_dataset(dataset)) distributed_iterator.element_spec (PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.float32, name=None), TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)), PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.int32, name=None), TensorSpec(shape=(None, 1), dtype=tf.int32, name=None))) ``` | Methods ------- ### `get_next` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/input_lib.py#L139-L166) ``` get_next() ``` Returns the next input from the iterator for all replicas. #### Example use: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) dataset = tf.data.Dataset.range(100).batch(2) dist_dataset = strategy.experimental_distribute_dataset(dataset) dist_dataset_iterator = iter(dist_dataset) @tf.function def one_step(input): return input step_num = 5 for _ in range(step_num): strategy.run(one_step, args=(dist_dataset_iterator.get_next(),)) strategy.experimental_local_results(dist_dataset_iterator.get_next()) (<tf.Tensor: shape=(1,), dtype=int64, numpy=array([10])>, <tf.Tensor: shape=(1,), dtype=int64, numpy=array([11])>) ``` | Returns | | A single [`tf.Tensor`](../tensor) or a [`tf.distribute.DistributedValues`](distributedvalues) which contains the next input for all replicas. | | Raises | | [`tf.errors.OutOfRangeError`](../errors/outofrangeerror): If the end of the iterator has been reached. | ### `get_next_as_optional` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/input_lib.py#L194-L230) ``` get_next_as_optional() ``` Returns a [`tf.experimental.Optional`](../experimental/optional) that contains the next value for all replicas. If the [`tf.distribute.DistributedIterator`](distributediterator) has reached the end of the sequence, the returned [`tf.experimental.Optional`](../experimental/optional) will have no value. #### Example usage: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) global_batch_size = 2 steps_per_loop = 2 dataset = tf.data.Dataset.range(10).batch(global_batch_size) distributed_iterator = iter( strategy.experimental_distribute_dataset(dataset)) def step_fn(x): # train the model with inputs return x @tf.function def train_fn(distributed_iterator): for _ in tf.range(steps_per_loop): optional_data = distributed_iterator.get_next_as_optional() if not optional_data.has_value(): break per_replica_results = strategy.run(step_fn, args=(optional_data.get_value(),)) tf.print(strategy.experimental_local_results(per_replica_results)) train_fn(distributed_iterator) # ([0 1], [2 3]) # ([4], []) ``` | Returns | | An [`tf.experimental.Optional`](../experimental/optional) object representing the next value from the [`tf.distribute.DistributedIterator`](distributediterator) (if it has one) or no value. | ### `__iter__` ``` __iter__() ```
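Because the iterator implements `__iter__`, it can also be consumed with a plain Python for-loop. A small sketch under assumed toy data:

```
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(8).batch(4)  # assumed toy dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def step_fn(batch):
    return batch * 2  # stand-in for a real training step

# `iter(...)` returns a tf.distribute.DistributedIterator; the for-loop
# drives its `__iter__`/`__next__` protocol and stops at the end of data.
for distributed_batch in iter(dist_dataset):
    results = strategy.run(step_fn, args=(distributed_batch,))
    print(strategy.experimental_local_results(results))
```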
tensorflow tf.distribute.InputReplicationMode tf.distribute.InputReplicationMode ================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L457-L470) | Replication mode for input functions. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.InputReplicationMode`](https://www.tensorflow.org/api_docs/python/tf/distribute/InputReplicationMode) * `PER_WORKER`: The input function will be called on each worker independently, creating as many input pipelines as the number of workers. Replicas will dequeue from the local Dataset on their worker. [`tf.distribute.Strategy`](strategy) doesn't manage any state sharing between such separate input pipelines. * `PER_REPLICA`: The input function will be called on each replica separately. [`tf.distribute.Strategy`](strategy) doesn't manage any state sharing between such separate input pipelines. | Class Variables | | PER\_REPLICA | `<InputReplicationMode.PER_REPLICA: 'PER_REPLICA'>` | | PER\_WORKER | `<InputReplicationMode.PER_WORKER: 'PER_WORKER'>` | tensorflow tf.distribute.experimental.ParameterServerStrategy tf.distribute.experimental.ParameterServerStrategy ================================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/parameter_server_strategy_v2.py#L63-L540) | A multi-worker tf.distribute strategy with parameter servers. Inherits From: [`Strategy`](../strategy) #### View aliases **Main aliases** [`tf.distribute.ParameterServerStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/ParameterServerStrategy) ``` tf.distribute.experimental.ParameterServerStrategy( cluster_resolver, variable_partitioner=None ) ``` Parameter server training is a common data-parallel method to scale up a machine learning model on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and they are read and updated by workers in each step. By default, workers read and update these variables independently without synchronizing with each other. This configuration is known as asynchronous training. In TensorFlow 2, we recommend an architecture based on central coordination for parameter server training. Each worker and parameter server runs a [`tf.distribute.Server`](../server), and on top of that, a coordinator task is responsible for creating resources on workers and parameter servers, dispatching functions, and coordinating the training. The coordinator uses a [`tf.distribute.experimental.coordinator.ClusterCoordinator`](coordinator/clustercoordinator) to coordinate the cluster, and a [`tf.distribute.experimental.ParameterServerStrategy`](parameterserverstrategy) to define variables on parameter servers and computation on workers. For the training to work, the coordinator dispatches [`tf.function`](../../function)s to be executed on remote workers. Upon receiving requests from the coordinator, a worker executes the [`tf.function`](../../function) by reading the variables from parameter servers, executing the ops, and updating the variables on the parameter servers. Each worker only processes requests from the coordinator and communicates with parameter servers, without direct interaction with other workers in the cluster. 
As a result, failures of some workers do not prevent the cluster from continuing the work, and this allows the cluster to train with instances that can be occasionally unavailable (e.g. preemptible or spot instances). The coordinator and parameter servers, though, must be available at all times for the cluster to make progress. Note that the coordinator is not one of the training workers. Instead, it creates resources such as variables and datasets, dispatches [`tf.function`](../../function)s, saves checkpoints and so on. In addition to workers, parameter servers and the coordinator, an optional evaluator can be run on the side that periodically reads the checkpoints saved by the coordinator and runs evaluations against each checkpoint. `ParameterServerStrategy` is supported with two training APIs: [Custom Training Loop (CTL)](https://www.tensorflow.org/tutorials/distribute/custom_training) and [Keras Training API, also known as `Model.fit`](https://www.tensorflow.org/tutorials/distribute/keras). CTL is recommended when users prefer to define the details of their training loop, and [`Model.fit`](../../keras/model#fit) is recommended when users prefer a high-level abstraction and handling of training. When using a CTL, `ParameterServerStrategy` has to work in conjunction with a [`tf.distribute.experimental.coordinator.ClusterCoordinator`](coordinator/clustercoordinator) object. When using [`Model.fit`](../../keras/model#fit), currently only the [`tf.keras.utils.experimental.DatasetCreator`](../../keras/utils/experimental/datasetcreator) input type is supported. **Example code for coordinator** This section provides code snippets that are intended to be run on the single task that is designated as the coordinator. Note that `cluster_resolver`, `variable_partitioner`, and `dataset_fn` arguments are explained in the following "Cluster setup", "Variable partitioning", and "Dataset preparation" sections. With a CTL, ``` # Prepare a strategy to use with the cluster and variable partitioning info. strategy = tf.distribute.experimental.ParameterServerStrategy( cluster_resolver=..., variable_partitioner=...) coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator( strategy=strategy) # Prepare a distributed dataset that will place datasets on the workers. distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn=...) with strategy.scope(): model = ... optimizer, metrics = ... # Keras optimizer/metrics are great choices checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer) checkpoint_manager = tf.train.CheckpointManager( checkpoint, checkpoint_dir, max_to_keep=2) # `load_checkpoint` infers initial epoch from `optimizer.iterations`. initial_epoch = load_checkpoint(checkpoint_manager) or 0 @tf.function def worker_fn(iterator): def replica_fn(inputs): batch_data, labels = inputs # calculate gradients, apply gradients, update metrics, etc. strategy.run(replica_fn, args=(next(iterator),)) for epoch in range(initial_epoch, num_epoch): distributed_iterator = iter(distributed_dataset) # Reset iterator state. for step in range(steps_per_epoch): # Asynchronously schedule the `worker_fn` to be executed on an arbitrary # worker. This call returns immediately. coordinator.schedule(worker_fn, args=(distributed_iterator,)) # `join` blocks until all scheduled `worker_fn`s finish execution. Once it # returns, we can read the metrics and save checkpoints as needed. 
coordinator.join() logging.info('Metric result: %r', metrics.result()) train_accuracy.reset_states() checkpoint_manager.save() ``` With [`Model.fit`](../../keras/model#fit), ``` # Prepare a strategy to use with the cluster and variable partitioning info. strategy = tf.distribute.experimental.ParameterServerStrategy( cluster_resolver=..., variable_partitioner=...) # A dataset function takes an `input_context` and returns a `Dataset` def dataset_fn(input_context): dataset = tf.data.Dataset.from_tensors(...) return dataset.repeat().shard(...).batch(...).prefetch(...) # With `Model.fit`, a `DatasetCreator` needs to be used. input = tf.keras.utils.experimental.DatasetCreator(dataset_fn=...) with strategy.scope(): model = ... # Make sure the `Model` is created within scope. model.compile(optimizer="rmsprop", loss="mse", steps_per_execution=..., ...) # Optional callbacks to checkpoint the model, back up the progress, etc. callbacks = [tf.keras.callbacks.ModelCheckpoint(...), ...] # `steps_per_epoch` is required with `ParameterServerStrategy`. model.fit(input, epochs=..., steps_per_epoch=..., callbacks=callbacks) ``` **Example code for worker and parameter servers** In addition to the coordinator, there should be tasks designated as "worker" or "ps". They should run the following code to start a TensorFlow server, waiting for the coordinator's requests: ``` # Provide a `tf.distribute.cluster_resolver.ClusterResolver` that serves # the cluster information. See the "Cluster setup" section below. cluster_resolver = ... server = tf.distribute.Server( cluster_resolver.cluster_spec(), job_name=cluster_resolver.task_type, task_index=cluster_resolver.task_id, protocol="grpc") # Blocking the process that starts a server from exiting. server.join() ``` **Cluster setup** In order for the tasks in the cluster to know other tasks' addresses, a [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) is required to be used in coordinator, worker, and ps. The [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) is responsible for providing the cluster information, as well as the task type and id of the current task. See [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) for more information. If the `TF_CONFIG` environment variable is set, a [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](../cluster_resolver/tfconfigclusterresolver) should be used as well. Since there are assumptions in [`tf.distribute.experimental.ParameterServerStrategy`](parameterserverstrategy) around the naming of the task types, "chief", "ps", and "worker" should be used in the [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) to refer to the coordinator, parameter servers, and workers, respectively. The following example demonstrates setting `TF_CONFIG` for the task designated as a parameter server (task type "ps") and index 1 (the second task), in a cluster with 1 chief, 2 parameter servers, and 3 workers. Note that it needs to be set before the use of [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](../cluster_resolver/tfconfigclusterresolver). 
Example code for cluster setup: ``` os.environ['TF_CONFIG'] = ''' { "cluster": { "chief": ["chief.example.com:2222"], "ps": ["ps0.example.com:2222", "ps1.example.com:2222"], "worker": ["worker0.example.com:2222", "worker1.example.com:2222", "worker2.example.com:2222"] }, "task": { "type": "ps", "index": 1 } } ''' ``` If you prefer to run the same binary for all tasks, you will need to let the binary branch into different roles at the beginning of the program: ``` # If coordinator, create a strategy and start the training program. if cluster_resolver.task_type == 'chief': strategy = tf.distribute.experimental.ParameterServerStrategy( cluster_resolver) ... # If worker/ps, create a server elif cluster_resolver.task_type in ("worker", "ps"): server = tf.distribute.Server(...) ... ``` Alternatively, you can also start a bunch of TensorFlow servers in advance and connect to them later. The coordinator can be in the same cluster or on any machine that has connectivity to workers and parameter servers. This is covered in our guide and tutorial. **Variable creation with `strategy.scope()`** [`tf.distribute.experimental.ParameterServerStrategy`](parameterserverstrategy) follows the [`tf.distribute`](../../distribute) API contract where variable creation is expected to be inside the context manager returned by `strategy.scope()`, in order to be correctly placed on parameter servers in a round-robin manner: ``` # In this example, we assume there are 3 ps tasks. strategy = tf.distribute.experimental.ParameterServerStrategy( cluster_resolver=...) coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator( strategy=strategy) # Variables should be created inside scope to be placed on parameter servers. # If created outside scope such as `v1` here, it would be placed on the # coordinator. v1 = tf.Variable(initial_value=0.0) with strategy.scope(): v2 = tf.Variable(initial_value=1.0) v3 = tf.Variable(initial_value=2.0) v4 = tf.Variable(initial_value=3.0) v5 = tf.Variable(initial_value=4.0) # v2 through v5 are created in scope and are distributed on parameter servers. # Default placement is round-robin but the order should not be relied on. assert v2.device == "/job:ps/replica:0/task:0/device:CPU:0" assert v3.device == "/job:ps/replica:0/task:1/device:CPU:0" assert v4.device == "/job:ps/replica:0/task:2/device:CPU:0" assert v5.device == "/job:ps/replica:0/task:0/device:CPU:0" ``` See [`distribute.Strategy.scope`](../mirroredstrategy#scope) for more information. **Variable partitioning** Having dedicated servers to store variables means being able to divide up, or "shard", the variables across the parameter servers. Partitioning a large variable among parameter servers is a commonly used technique to boost training throughput and mitigate memory constraints. It enables parallel computations and updates on different shards of a variable, and often yields better load balancing across parameter servers. Without sharding, models with large variables (e.g., embeddings) that can't fit into one machine's memory would otherwise be unable to train. With [`tf.distribute.experimental.ParameterServerStrategy`](parameterserverstrategy), if a `variable_partitioner` is provided to `__init__` and certain conditions are satisfied, the resulting variables created in scope are sharded across the parameter servers, in a round-robin fashion. The variable reference returned from [`tf.Variable`](../../variable) becomes a type that serves as the container of the sharded variables. 
One can access the `variables` attribute of this container for the actual variable components. If building a model with [`tf.Module`](../../module) or Keras, the variable components are collected in `variables`-like attributes. It is recommended to use size-based partitioners like [`tf.distribute.experimental.partitioners.MinSizePartitioner`](partitioners/minsizepartitioner) to avoid partitioning small variables, which could have a negative impact on model training speed. ``` # Partition the embedding layer into 2 shards. variable_partitioner = ( tf.distribute.experimental.partitioners.MinSizePartitioner( min_shard_bytes=(256 << 10), max_shards = 2)) strategy = tf.distribute.experimental.ParameterServerStrategy( cluster_resolver=..., variable_partitioner = variable_partitioner) with strategy.scope(): embedding = tf.keras.layers.Embedding(input_dim=1024, output_dim=1024) assert len(embedding.variables) == 2 assert isinstance(embedding.variables[0], tf.Variable) assert isinstance(embedding.variables[1], tf.Variable) assert embedding.variables[0].shape == (512, 1024) assert embedding.variables[1].shape == (512, 1024) ``` The sharded variable container can be converted to a `Tensor` via [`tf.convert_to_tensor`](../../convert_to_tensor). This means the container can be directly used in most Python Ops where such `Tensor` conversion automatically happens. For example, an expression such as `x * w`, where `w` is such a container, would implicitly apply this tensor conversion. Note that such conversion can be expensive, as the variable components need to be transferred from multiple parameter servers to where the value is used. [`tf.nn.embedding_lookup`](../../nn/embedding_lookup) on the other hand doesn't apply the tensor conversion, and performs parallel lookups on the variable components instead. This is crucial to scale up embedding lookups when the embedding table variable is large. When a partitioned variable is saved to a `SavedModel`, it will be saved as if it is one single variable. This improves serving efficiency by eliminating a number of Ops that handle the partition aspects. Known limitations of variable partitioning: * Number of partitions must not change across Checkpoint saving/loading. * After saving partitioned variables to a SavedModel, the SavedModel can't be loaded via [`tf.saved_model.load`](../../saved_model/load). * Partitioned variables don't directly work with [`tf.GradientTape`](../../gradienttape); please use the `variables` attribute to get the actual variable components and use them in gradient APIs instead. **Dataset preparation** With [`tf.distribute.experimental.ParameterServerStrategy`](parameterserverstrategy), a dataset is created in each of the workers to be used for training. This is done by creating a `dataset_fn` that takes no argument and returns a [`tf.data.Dataset`](../../data/dataset), and passing the `dataset_fn` into `tf.distribute.experimental.coordinator.ClusterCoordinator.create_per_worker_dataset`. We recommend that the dataset be shuffled and repeated so that the examples run through the training as evenly as possible. ``` def dataset_fn(): filenames = ... dataset = tf.data.Dataset.from_tensor_slices(filenames) # Dataset is recommended to be shuffled and repeated. return dataset.shuffle(buffer_size=...).repeat().batch(batch_size=...) coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(strategy=...) 
distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn) ``` **Limitations** * [`tf.distribute.experimental.ParameterServerStrategy`](parameterserverstrategy) in TF2 is experimental, and the API is subject to further changes. * When using [`Model.fit`](../../keras/model#fit), [`tf.distribute.experimental.ParameterServerStrategy`](parameterserverstrategy) must be used with a [`tf.keras.utils.experimental.DatasetCreator`](../../keras/utils/experimental/datasetcreator), and `steps_per_epoch` must be specified. | Args | | `cluster_resolver` | a [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) object. | | `variable_partitioner` | a [`distribute.experimental.partitioners.Partitioner`](partitioners/partitioner) that specifies how to partition variables. If `None`, variables will not be partitioned. * Predefined partitioners in [`tf.distribute.experimental.partitioners`](partitioners) can be used for this argument. A commonly used partitioner is `MinSizePartitioner(min_shard_bytes = 256 << 10, max_shards = num_ps)`, which allocates at least 256K per shard, and each ps gets at most one shard. * `variable_partitioner` will be called for each variable created under strategy `scope` to instruct how the variable should be partitioned. Variables that have only one partition along the partitioning axis (i.e., no need for partition) will be created as a normal [`tf.Variable`](../../variable). * Only the first / outermost axis partitioning is supported. * Div partition strategy is used to partition variables. Assuming we assign consecutive integer ids along the first axis of a variable, then ids are assigned to shards in a contiguous manner, while attempting to keep each shard size identical. If the ids do not evenly divide the number of shards, each of the first several shards will be assigned one more id. For instance, a variable whose first dimension is 13 has 13 ids, and they are split across 5 shards as: `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`. * Variables created under `strategy.extended.colocate_vars_with` will not be partitioned. | | Attributes | | `cluster_resolver` | Returns the cluster resolver associated with this strategy. In general, when using a multi-worker [`tf.distribute`](../../distribute) strategy such as [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy()`](../tpustrategy), there is a [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) must set the relevant attribute, or override this property; otherwise, `None` is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver), and in those cases this property will return `None`. The [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) may be useful when the user needs to access information such as the cluster spec, task type or task id. 
For example, ``` os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"], 'ps': ["localhost:34567"] }, 'task': {'type': 'worker', 'index': 0} }) # This implicitly uses TF_CONFIG for the cluster and current task info. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ... if strategy.cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. Since we set this # as a worker above, this block will run on this particular instance. elif strategy.cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. Since we # set this as a worker above, this block will not run on this particular # instance. ``` For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver)'s API docstring. | | `extended` | [`tf.distribute.StrategyExtended`](../strategyextended) with additional methods. | | `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. | Methods ------- ### `distribute_datasets_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1110-L1187) ``` distribute_datasets_from_function( dataset_fn, options=None ) ``` Distributes [`tf.data.Dataset`](../../data/dataset) instances created by calls to `dataset_fn`. The argument `dataset_fn` that users pass in is an input function that has a [`tf.distribute.InputContext`](../inputcontext) argument and returns a [`tf.data.Dataset`](../../data/dataset) instance. It is expected that the returned dataset from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) does not batch or shard the [`tf.data.Dataset`](../../data/dataset) instance returned from the input function. `dataset_fn` will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, `tf.distribute.experimental_distribute_dataset` does batching and sharding for you.) For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The `dataset_fn` should take an [`tf.distribute.InputContext`](../inputcontext) instance where information about batching and input replication can be accessed. You can use `element_spec` property of the [`tf.distribute.DistributedDataset`](../distributeddataset) returned by this API to query the [`tf.TypeSpec`](../../typespec) of the elements returned by the iterator. This can be used to set the `input_signature` property of a [`tf.function`](../../function). Follow [`tf.distribute.DistributedDataset.element_spec`](../distributeddataset#element_spec) to see an example. 
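A minimal sketch of such a `dataset_fn`, which shards by input pipeline ID and batches to the per-replica size; the global batch size and the range dataset are assumptions for illustration, and `MirroredStrategy` is used here only so the snippet is self-contained (the same `dataset_fn` shape applies to `ParameterServerStrategy`):

```
import tensorflow as tf

GLOBAL_BATCH_SIZE = 16  # assumed for illustration

def dataset_fn(input_context):
    # Derive the per-replica batch size from the desired global batch size.
    batch_size = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
    dataset = tf.data.Dataset.range(1024)  # stand-in for real input data
    # Shard manually so each input pipeline sees a disjoint slice.
    dataset = dataset.shard(input_context.num_input_pipelines,
                            input_context.input_pipeline_id)
    return dataset.batch(batch_size).prefetch(2)

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
```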
> > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) is not guaranteed. Ordering is typically required if you are using [`tf.distribute`](../../distribute) to scale prediction. You can, however, insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_datasets_from_function). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset_fn` | A function taking a [`tf.distribute.InputContext`](../inputcontext) instance and returning a [`tf.data.Dataset`](../../data/dataset). | | `options` | [`tf.distribute.InputOptions`](../inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](../distributeddataset). | ### `experimental_distribute_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L989-L1108) ``` experimental_distribute_dataset( dataset, options=None ) ``` Creates [`tf.distribute.DistributedDataset`](../distributeddataset) from [`tf.data.Dataset`](../../data/dataset). The returned [`tf.distribute.DistributedDataset`](../distributeddataset) can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a [`tf.distribute.DistributedDataset`](../distributeddataset). You can only create an iterator or examine the [`tf.TypeSpec`](../../typespec) of the data generated by it. See API docs of [`tf.distribute.DistributedDataset`](../distributeddataset) to learn more. The following is an example: ``` global_batch_size = 2 # Passing the devices is optional. 
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) # Create a dataset dataset = tf.data.Dataset.range(4).batch(global_batch_size) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def replica_fn(input): return input*2 result = [] # Iterate over the `tf.distribute.DistributedDataset` for x in dist_dataset: # process dataset elements result.append(strategy.run(replica_fn, args=(x,))) print(result) [PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])> }, PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])> }] ``` Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, `dataset` is batched by `global_batch_size`, and calling `experimental_distribute_dataset` on it rebatches `dataset` to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. `x` is a [`tf.distribute.DistributedValues`](../distributedvalues) containing data for all replicas, and each replica gets data of the new batch size. [`tf.distribute.Strategy.run`](../strategy#run) will take care of feeding the right per-replica data in `x` to the right `replica_fn` executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy`](../tpustrategy)), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right [`tf.data.experimental.AutoShardPolicy`](../../data/experimental/autoshardpolicy) is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using [`tf.data.experimental.DistributeOptions`](../../data/experimental/distributeoptions). Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding. > > **Note:** for autosharding across multiple workers, the default mode is [`tf.data.experimental.AutoShardPolicy.AUTO`](../../data/experimental/autoshardpolicy#AUTO). This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. [`tf.data.TFRecordDataset`](../../data/tfrecorddataset), [`tf.data.TextLineDataset`](../../data/textlinedataset), etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the [`tf.data.experimental.DistributeOptions.auto_shard_policy`](../../data/experimental/distributeoptions#auto_shard_policy) to be [`tf.data.experimental.AutoShardPolicy.OFF`](../../data/experimental/autoshardpolicy#OFF). > By default, this method adds a prefetch transformation at the end of the user-provided [`tf.data.Dataset`](../../data/dataset) instance. 
The `buffer_size` argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) instead, which does not do any automatic batching or sharding for you. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) is not guaranteed. Ordering is typically required if you are using [`tf.distribute`](../../distribute) to scale prediction. You can, however, insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_dataset). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset` | [`tf.data.Dataset`](../../data/dataset) that will be sharded across all replicas using the rules stated above. | | `options` | [`tf.distribute.InputOptions`](../inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](../distributeddataset). | ### `experimental_distribute_values_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751) ``` experimental_distribute_values_from_function( value_fn ) ``` Generates [`tf.distribute.DistributedValues`](../distributedvalues) from `value_fn`. This function generates [`tf.distribute.DistributedValues`](../distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets. | Args | | `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. | | Returns | | A [`tf.distribute.DistributedValues`](../distributedvalues) containing a value for each replica. | #### Example usage: 1. Return a constant value per replica: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) ``` 1. 
Distribute values in array based on replica\_id:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
  return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
```

3. Specify values using num\_replicas\_in\_sync:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return ctx.num_replicas_in_sync
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
```

4. Place values on devices and distribute:

```
strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
  with tf.device(worker_devices[i]):
    multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
  return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
```

### `experimental_local_results`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1541-L1559)

```
experimental_local_results(
    value
)
```

Returns the list of all local per-replica values contained in `value`.

> **Note:** This only returns values on the worker initiated by this client. When using a [`tf.distribute.Strategy`](../strategy) like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy), each worker will be its own client, and this function will only return values computed on that worker.
>

| Args |
| `value` | A value returned by `experimental_run()`, `run()`, or a variable created in `scope`. |

| Returns |
| A tuple of values contained in `value` where the ith element corresponds to the ith replica. If `value` represents a single value, this returns `(value,)`. |

### `gather`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858)

```
gather(
    value, axis
)
```

Gather `value` across replicas along `axis` to the current device.

Given a [`tf.distribute.DistributedValues`](../distributedvalues) or [`tf.Tensor`](../../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](../tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](../multiworkermirroredstrategy), this is the CPU of each worker.

This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](../replicacontext#all_gather).

> **Note:** For all strategies except [`tf.distribute.TPUStrategy`](../tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument.
> For example, given a [`tf.distribute.DistributedValues`](../distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](../tpustrategy#gather), all tensors must have exactly the same rank and same shape.
>

> **Note:** Given a [`tf.distribute.DistributedValues`](../distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../../expand_dims) before gathering them.
>

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(
    lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
  return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
       [2],
       [1],
       [2]], dtype=int32)>
```

Consider the following example for more combinations:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(
    lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
  return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2], [3, 4, 5]],
       [[0, 1, 2], [3, 4, 5]],
       [[0, 1, 2], [3, 4, 5]],
       [[0, 1, 2], [3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5],
        [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
        [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
```

| Args |
| `value` | a [`tf.distribute.DistributedValues`](../distributedvalues) instance, e.g. returned by [`Strategy.run`](../mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](../onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../../indexedslices). |
| `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). |

| Returns |
| A `Tensor` that's the concatenation of `value` across replicas along `axis` dimension. |

### `reduce`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1314-L1516)

```
reduce(
    reduce_op, value, axis
)
```

Reduce `value` across replicas and return result on current device.
```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)

per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
```

To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs:

```
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)

per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1

total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
```

This API is typically used for aggregating the results returned from different replicas, for reporting, etc. For example, loss computed from different replicas can be averaged using this API before printing.

> **Note:** The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For `TPUStrategy`, it is the first TPU host. For multi-client `MultiWorkerMirroredStrategy`, this is the CPU of each worker.
>

There are a number of different tf.distribute APIs for reducing values across replicas:

* [`tf.distribute.ReplicaContext.all_reduce`](../replicacontext#all_reduce): This differs from [`Strategy.reduce`](../mirroredstrategy#reduce) in that it is for replica context and does not copy the results to the host device. `all_reduce` should typically be used for reductions inside the training step such as gradients.
* [`tf.distribute.StrategyExtended.reduce_to`](../strategyextended#reduce_to) and [`tf.distribute.StrategyExtended.batch_reduce_to`](../strategyextended#batch_reduce_to): These APIs are more advanced versions of [`Strategy.reduce`](../mirroredstrategy#reduce) as they allow customizing the destination of the result. They are also called in cross replica context.

*What should axis be?*

Given a per-replica value returned by `run`, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly.

For example, if you have a global batch size of 8 and 2 replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and `[4, 5, 6, 7]` will be on replica 1. With `axis=None`, `reduce` will aggregate only across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).

```
strategy.reduce("sum", per_replica_result, axis=None)
```

Sometimes, you will want to aggregate across both the global batch *and* all replicas. You can get this behavior by specifying the batch dimension as the `axis`, typically `axis=0`. In this case it would return a scalar `0+1+2+3+4+5+6+7`.

```
strategy.reduce("sum", per_replica_result, axis=0)
```

If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas.
So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify `axis=0`. If you specify [`tf.distribute.ReduceOp.MEAN`](../reduceop#MEAN), using `axis=0` will use the correct denominator of 6. Contrast this with first computing `reduce_mean` to get a scalar value on each replica, and then using this function to average those means: that would weigh some values `1/8` and others `1/4`.

| Args |
| `reduce_op` | a [`tf.distribute.ReduceOp`](../reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". |
| `value` | a [`tf.distribute.DistributedValues`](../distributedvalues) instance, e.g. returned by [`Strategy.run`](../mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with `OneDeviceStrategy` or default strategy. |
| `axis` | specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). |

| Returns |
| A `Tensor`. |

### `run`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1197-L1312)

```
run(
    fn, args=(), kwargs=None, options=None
)
```

Invokes `fn` on each replica, with the given arguments.

This method is the primary way to distribute your computation with a tf.distribute object. It invokes `fn` on each replica. If `args` or `kwargs` have [`tf.distribute.DistributedValues`](../distributedvalues), such as those produced by a [`tf.distribute.DistributedDataset`](../distributeddataset) from [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function), when `fn` is executed on a particular replica, it will be executed with the component of [`tf.distribute.DistributedValues`](../distributedvalues) that corresponds to that replica.

`fn` is invoked under a replica context. `fn` may call [`tf.distribute.get_replica_context()`](../get_replica_context) to access members such as `all_reduce`. Please see the module-level docstring of tf.distribute for the concept of replica context.

All arguments in `args` or `kwargs` can be a nested structure of tensors, e.g. a list of tensors, in which case `args` and `kwargs` will be passed to the `fn` invoked on each replica. Or `args` or `kwargs` can be [`tf.distribute.DistributedValues`](../distributedvalues) containing tensors or composite tensors, i.e. [`tf.compat.v1.TensorInfo.CompositeTensor`](../../compat/v1/tensorinfo/compositetensor), in which case each `fn` call will get the component of a [`tf.distribute.DistributedValues`](../distributedvalues) corresponding to its replica. Note that arbitrary Python values that are not of the types above are not supported.

#### Example usage:

1. Constant tensor input.

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
  return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
  0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
  1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
```

2. DistributedValues input.
```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
  def value_fn(value_context):
    return value_context.num_replicas_in_sync
  distributed_values = (
      strategy.experimental_distribute_values_from_function(
          value_fn))
  def replica_fn2(input):
    return input*2
  return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
```

3. Use [`tf.distribute.ReplicaContext`](../replicacontext) to allreduce values.

```
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
  def value_fn(value_context):
    return tf.constant(value_context.replica_id_in_sync_group)
  distributed_values = (
      strategy.experimental_distribute_values_from_function(
          value_fn))
  def replica_fn(input):
    return tf.distribute.get_replica_context().all_reduce("sum", input)
  return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
  0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
  1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
```

| Args |
| `fn` | The function to run on each replica. |
| `args` | Optional positional arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](../distributedvalues). |
| `kwargs` | Optional keyword arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](../distributedvalues). |
| `options` | An optional instance of [`tf.distribute.RunOptions`](../runoptions) specifying the options to run `fn`. |

| Returns |
| Merged return value of `fn` across replicas. The structure of the return value is the same as the return value from `fn`. Each element in the structure can be [`tf.distribute.DistributedValues`](../distributedvalues), `Tensor` objects, or `Tensor`s (for example, if running on a single replica). |

### `scope`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L863-L955)

```
scope()
```

Context manager to make the strategy current and distribute variables.

This method returns a context manager, and is used as follows:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
  mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
  0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
  1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
```

*What happens when Strategy.scope is entered?*

* `strategy` is installed in the global context as the "current" strategy. Inside this scope, [`tf.distribute.get_strategy()`](../get_strategy) will now return this strategy. Outside this scope, it returns the default no-op strategy.
* Entering the scope also enters the "cross-replica context". See [`tf.distribute.StrategyExtended`](../strategyextended) for an explanation on cross-replica and replica contexts.
* Variable creation inside `scope` is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like `MirroredStrategy`, `TPUStrategy` and `MultiWorkerMirroredStrategy` create variables replicated on each replica, whereas `ParameterServerStrategy` creates variables on the parameter servers.
This is done using a custom [`tf.variable_creator_scope`](../../variable_creator_scope).

* In some strategies, a default device scope may also be entered: in `MultiWorkerMirroredStrategy`, a default device scope of "/CPU:0" is entered on each worker.

> **Note:** Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras `model.fit`. If you're not using `model.fit`, you need to use the `strategy.run` API to explicitly distribute that computation. See an example in the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training).
>

*What should be in scope and what should be outside?*

There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).

* Anything that creates variables that should be distributed variables must be called in a `strategy.scope`. This can be accomplished either by directly calling the variable creating function within the scope context, or by relying on another API like `strategy.run` or [`keras.Model.fit`](../../keras/model#fit) to automatically enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Some common objects that create variables in TF are Models, Optimizers, Metrics. Such objects should always be initialized in the scope, and any functions that may lazily create variables (e.g., `Model.__call__()`, tracing a [`tf.function`](../../function), etc.) should similarly be called within scope (see the sketch after this list). Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the `strategy.scope` can also work seamlessly, without the user having to enter the scope.
* Some strategy APIs (such as `strategy.run` and `strategy.reduce`) which need to be called in a strategy's scope, enter the scope automatically, which means when using those APIs you don't need to explicitly enter the scope yourself.
* When a [`tf.keras.Model`](../../keras/model) is created inside a `strategy.scope`, the Model object captures the scope information. When high level training framework methods such as `model.compile`, `model.fit`, etc. are then called, the captured scope will be automatically entered, and the associated strategy will be used to distribute the training etc. See a detailed example in [distributed keras tutorial](https://www.tensorflow.org/tutorials/distribute/keras). WARNING: Simply calling `model(..)` does not automatically enter the captured scope -- only high level training framework APIs support this behavior: `model.compile`, `model.fit`, `model.evaluate`, `model.predict` and `model.save` can all be called inside or outside the scope.
* The following can be either inside or outside the scope:
  + Creating the input datasets
  + Defining [`tf.function`](../../function)s that represent your training step
  + Saving APIs such as [`tf.saved_model.save`](../../saved_model/save). Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way.
  + Checkpoint saving. As mentioned above - `checkpoint.restore` may sometimes need to be inside scope if it creates variables.
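For illustration, a minimal sketch of the pattern above (the two GPUs, the toy dataset, and the one-layer model here are hypothetical, not part of this API): variable-creating objects go inside `scope`, while the dataset and the train-step `tf.function` are created outside it.

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

# Variable-creating objects (model, optimizer) go inside the scope so
# that their variables are created as distributed variables.
with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

# Dataset creation and the train-step `tf.function` may live outside
# the scope; `strategy.run` enters the scope automatically.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([8, 4]), tf.ones([8, 1]))).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(inputs):
  def step_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
      preds = model(features, training=True)
      loss = tf.reduce_mean(tf.square(preds - labels))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
  return strategy.run(step_fn, args=(inputs,))

for batch in dist_dataset:
  train_step(batch)
```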
| Returns | | A context manager. |
tensorflow tf.distribute.experimental.CollectiveHints

tf.distribute.experimental.CollectiveHints
==========================================

Hints for collective operations like AllReduce.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.distribute.experimental.CollectiveHints`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CollectiveHints)

```
tf.distribute.experimental.CollectiveHints(
    bytes_per_pack=0, timeout_seconds=None
)
```

This can be passed to methods like `tf.distribute.get_replica_context().all_reduce()` to optimize collective operation performance. Note that these are only hints, which may or may not change the actual behavior. Some options only apply to certain strategies and are ignored by others.

One common optimization is to break gradients all-reduce into multiple packs so that weight updates can overlap with gradient all-reduce.

#### Examples:

* bytes\_per\_pack

```
hints = tf.distribute.experimental.CollectiveHints(
    bytes_per_pack=50 * 1024 * 1024)
grads = tf.distribute.get_replica_context().all_reduce(
    'sum', grads, experimental_hints=hints)
optimizer.apply_gradients(zip(grads, vars),
    experimental_aggregate_gradients=False)
```

* timeout\_seconds

```
strategy = tf.distribute.MirroredStrategy()
hints = tf.distribute.experimental.CollectiveHints(
    timeout_seconds=120.0)
try:
  strategy.reduce("sum", v, axis=None, experimental_hints=hints)
except tf.errors.DeadlineExceededError:
  do_something()
```

| Args |
| `bytes_per_pack` | a non-negative integer. Breaks collective operations into packs of certain size. If it's zero, the value is determined automatically. This only applies to all-reduce with `MultiWorkerMirroredStrategy` currently. |
| `timeout_seconds` | a float or None, timeout in seconds. If not None, the collective raises [`tf.errors.DeadlineExceededError`](../../errors/deadlineexceedederror) if it takes longer than this timeout. This can be useful when debugging hanging issues. This should only be used for debugging since it creates a new thread for each collective, i.e. an overhead of `timeout_seconds * num_collectives_per_second` more threads. This only works for [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy). |

| Raises |
| `ValueError` | When arguments have invalid value. |

tensorflow tf.distribute.experimental.TPUStrategy

tf.distribute.experimental.TPUStrategy
======================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/tpu_strategy.py#L656-L733) |

Synchronous training on TPUs and TPU Pods.

Inherits From: [`Strategy`](../strategy)

```
tf.distribute.experimental.TPUStrategy(
    tpu_cluster_resolver=None, device_assignment=None
)
```

To construct a TPUStrategy object, you need to run the initialization code as below:

```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
```

While using distribution strategies, the variables created within the strategy's scope will be replicated across all the replicas and can be kept in sync using all-reduce algorithms.
To run TF2 programs on TPUs, you can either use `.compile` and `.fit` APIs in [`tf.keras`](../../keras) with TPUStrategy, or write your own customized training loop by calling `strategy.run` directly. Note that TPUStrategy doesn't support pure eager execution, so please make sure the function passed into `strategy.run` is a [`tf.function`](../../function) or `strategy.run` is called inside a [`tf.function`](../../function) if eager behavior is enabled.

| Args |
| `tpu_cluster_resolver` | A tf.distribute.cluster\_resolver.TPUClusterResolver, which provides information about the TPU cluster. |
| `device_assignment` | Optional [`tf.tpu.experimental.DeviceAssignment`](../../tpu/experimental/deviceassignment) to specify the placement of replicas on the TPU cluster. |

| Attributes |
| `cluster_resolver` | Returns the cluster resolver associated with this strategy. [`tf.distribute.experimental.TPUStrategy`](tpustrategy) provides the associated [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver). If the user provides one in `__init__`, that instance is returned; if the user does not, a default [`tf.distribute.cluster_resolver.TPUClusterResolver`](../cluster_resolver/tpuclusterresolver) is provided. |
| `extended` | [`tf.distribute.StrategyExtended`](../strategyextended) with additional methods. |
| `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. |

Methods
-------

### `distribute_datasets_from_function`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1110-L1187)

```
distribute_datasets_from_function(
    dataset_fn, options=None
)
```

Distributes [`tf.data.Dataset`](../../data/dataset) instances created by calls to `dataset_fn`.

The argument `dataset_fn` that users pass in is an input function that has a [`tf.distribute.InputContext`](../inputcontext) argument and returns a [`tf.data.Dataset`](../../data/dataset) instance. It is expected that the returned dataset from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) does not batch or shard the [`tf.data.Dataset`](../../data/dataset) instance returned from the input function. `dataset_fn` will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step).

This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, `tf.distribute.experimental_distribute_dataset` does batching and sharding for you.) For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed.

The `dataset_fn` should take a [`tf.distribute.InputContext`](../inputcontext) instance where information about batching and input replication can be accessed.
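As a rough sketch of that pattern (the global batch size and the toy dataset here are hypothetical, not part of this API), a `dataset_fn` can derive its per-replica batch size and shard assignment from the `tf.distribute.InputContext` it receives:

```
def dataset_fn(input_context):
  # Hypothetical global batch size; convert it to a per-replica size.
  global_batch_size = 64
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.from_tensor_slices(tf.range(1024))
  # Shard manually per input pipeline, since this method does not shard
  # the dataset for you.
  d = d.shard(input_context.num_input_pipelines,
              input_context.input_pipeline_id)
  return d.batch(batch_size).prefetch(2)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
```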
You can use `element_spec` property of the [`tf.distribute.DistributedDataset`](../distributeddataset) returned by this API to query the [`tf.TypeSpec`](../../typespec) of the elements returned by the iterator. This can be used to set the `input_signature` property of a [`tf.function`](../../function). Follow [`tf.distribute.DistributedDataset.element_spec`](../distributeddataset#element_spec) to see an example.

> **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) is not guaranteed. An ordering guarantee is typically required if you are using [`tf.distribute`](../../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs.
>

> **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed.
>

For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_datasets_from_function). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches).

| Args |
| `dataset_fn` | A function taking a [`tf.distribute.InputContext`](../inputcontext) instance and returning a [`tf.data.Dataset`](../../data/dataset). |
| `options` | [`tf.distribute.InputOptions`](../inputoptions) used to control options on how this dataset is distributed. |

| Returns |
| A [`tf.distribute.DistributedDataset`](../distributeddataset). |

### `experimental_distribute_dataset`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L989-L1108)

```
experimental_distribute_dataset(
    dataset, options=None
)
```

Creates [`tf.distribute.DistributedDataset`](../distributeddataset) from [`tf.data.Dataset`](../../data/dataset).

The returned [`tf.distribute.DistributedDataset`](../distributeddataset) can be iterated over similar to regular datasets.

NOTE: The user cannot add any more transformations to a [`tf.distribute.DistributedDataset`](../distributeddataset). You can only create an iterator or examine the [`tf.TypeSpec`](../../typespec) of the data generated by it. See API docs of [`tf.distribute.DistributedDataset`](../distributeddataset) to learn more.

The following is an example:

```
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
  return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
  0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
  1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
  0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
  1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
```

Three key actions happen under the hood of this method: batching, sharding, and prefetching.

In the code snippet above, `dataset` is batched by `global_batch_size`, and calling `experimental_distribute_dataset` on it rebatches `dataset` to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. `x` is a [`tf.distribute.DistributedValues`](../distributedvalues) containing data for all replicas, and each replica gets data of the new batch size. [`tf.distribute.Strategy.run`](../strategy#run) will take care of feeding the right per-replica data in `x` to the right `replica_fn` executed on each replica.

Sharding covers both autosharding across multiple workers and sharding within every worker. First, in multi-worker distributed training (i.e. when you use [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy`](../tpustrategy)), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right [`tf.data.experimental.AutoShardPolicy`](../../data/experimental/autoshardpolicy) is set). This ensures that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using [`tf.data.experimental.DistributeOptions`](../../data/experimental/distributeoptions). Second, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This happens regardless of multi-worker autosharding.

> **Note:** for autosharding across multiple workers, the default mode is [`tf.data.experimental.AutoShardPolicy.AUTO`](../../data/experimental/autoshardpolicy#AUTO). This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. [`tf.data.TFRecordDataset`](../../data/tfrecorddataset), [`tf.data.TextLineDataset`](../../data/textlinedataset), etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have fewer than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the [`tf.data.experimental.DistributeOptions.auto_shard_policy`](../../data/experimental/distributeoptions#auto_shard_policy) to be [`tf.data.experimental.AutoShardPolicy.OFF`](../../data/experimental/autoshardpolicy#OFF).
>

By default, this method adds a prefetch transformation at the end of the user provided [`tf.data.Dataset`](../../data/dataset) instance.
The `buffer_size` argument of that prefetch transformation is equal to the number of replicas in sync.

If the above batch splitting and dataset sharding logic is undesirable, please use [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) instead, which does not do any automatic batching or sharding for you.

> **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) is not guaranteed. An ordering guarantee is typically required if you are using [`tf.distribute`](../../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs.
>

> **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed.
>

For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_dataset). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches).

| Args |
| `dataset` | [`tf.data.Dataset`](../../data/dataset) that will be sharded across all replicas using the rules stated above. |
| `options` | [`tf.distribute.InputOptions`](../inputoptions) used to control options on how this dataset is distributed. |

| Returns |
| A [`tf.distribute.DistributedDataset`](../distributeddataset). |

### `experimental_distribute_values_from_function`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751)

```
experimental_distribute_values_from_function(
    value_fn
)
```

Generates [`tf.distribute.DistributedValues`](../distributedvalues) from `value_fn`. This function is to generate [`tf.distribute.DistributedValues`](../distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets.

| Args |
| `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. |

| Returns |
| A [`tf.distribute.DistributedValues`](../distributedvalues) containing a value for each replica. |

#### Example usage:

1. Return constant value per replica:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return tf.constant(1.)
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
 <tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
```

2.
Distribute values in array based on replica\_id:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
  return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
```

3. Specify values using num\_replicas\_in\_sync:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return ctx.num_replicas_in_sync
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
```

4. Place values on devices and distribute:

```
strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
  with tf.device(worker_devices[i]):
    multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
  return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
```

### `experimental_local_results`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1541-L1559)

```
experimental_local_results(
    value
)
```

Returns the list of all local per-replica values contained in `value`.

> **Note:** This only returns values on the worker initiated by this client. When using a [`tf.distribute.Strategy`](../strategy) like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy), each worker will be its own client, and this function will only return values computed on that worker.
>

| Args |
| `value` | A value returned by `experimental_run()`, `run()`, or a variable created in `scope`. |

| Returns |
| A tuple of values contained in `value` where the ith element corresponds to the ith replica. If `value` represents a single value, this returns `(value,)`. |

### `gather`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858)

```
gather(
    value, axis
)
```

Gather `value` across replicas along `axis` to the current device.

Given a [`tf.distribute.DistributedValues`](../distributedvalues) or [`tf.Tensor`](../../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](../tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](../multiworkermirroredstrategy), this is the CPU of each worker.

This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](../replicacontext#all_gather).

> **Note:** For all strategies except [`tf.distribute.TPUStrategy`](../tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument.
> For example, given a [`tf.distribute.DistributedValues`](../distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](../tpustrategy#gather), all tensors must have exactly the same rank and same shape.
>

> **Note:** Given a [`tf.distribute.DistributedValues`](../distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../../expand_dims) before gathering them.
>

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(
    lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
  return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
       [2],
       [1],
       [2]], dtype=int32)>
```

Consider the following example for more combinations:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(
    lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
  return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2], [3, 4, 5]],
       [[0, 1, 2], [3, 4, 5]],
       [[0, 1, 2], [3, 4, 5]],
       [[0, 1, 2], [3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5],
        [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
        [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
```

| Args |
| `value` | a [`tf.distribute.DistributedValues`](../distributedvalues) instance, e.g. returned by [`Strategy.run`](../mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](../onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../../indexedslices). |
| `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). |

| Returns |
| A `Tensor` that's the concatenation of `value` across replicas along `axis` dimension. |

### `reduce`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1314-L1516)

```
reduce(
    reduce_op, value, axis
)
```

Reduce `value` across replicas and return result on current device.
```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)

per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
```

To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs:

```
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)

per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1

total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
```

This API is typically used for aggregating the results returned from different replicas, for reporting, etc. For example, loss computed from different replicas can be averaged using this API before printing.

> **Note:** The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For `TPUStrategy`, it is the first TPU host. For multi-client `MultiWorkerMirroredStrategy`, this is the CPU of each worker.
>

There are a number of different tf.distribute APIs for reducing values across replicas:

* [`tf.distribute.ReplicaContext.all_reduce`](../replicacontext#all_reduce): This differs from [`Strategy.reduce`](../mirroredstrategy#reduce) in that it is for replica context and does not copy the results to the host device. `all_reduce` should typically be used for reductions inside the training step such as gradients.
* [`tf.distribute.StrategyExtended.reduce_to`](../strategyextended#reduce_to) and [`tf.distribute.StrategyExtended.batch_reduce_to`](../strategyextended#batch_reduce_to): These APIs are more advanced versions of [`Strategy.reduce`](../mirroredstrategy#reduce) as they allow customizing the destination of the result. They are also called in cross replica context.

*What should axis be?*

Given a per-replica value returned by `run`, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly.

For example, if you have a global batch size of 8 and 2 replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and `[4, 5, 6, 7]` will be on replica 1. With `axis=None`, `reduce` will aggregate only across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).

```
strategy.reduce("sum", per_replica_result, axis=None)
```

Sometimes, you will want to aggregate across both the global batch *and* all replicas. You can get this behavior by specifying the batch dimension as the `axis`, typically `axis=0`. In this case it would return a scalar `0+1+2+3+4+5+6+7`.

```
strategy.reduce("sum", per_replica_result, axis=0)
```

If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas.
So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify `axis=0`. If you specify [`tf.distribute.ReduceOp.MEAN`](../reduceop#MEAN), using `axis=0` will use the correct denominator of 6. Contrast this with first computing `reduce_mean` to get a scalar value on each replica, and then using this function to average those means: that would weigh some values `1/8` and others `1/4`.

| Args |
| `reduce_op` | a [`tf.distribute.ReduceOp`](../reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". |
| `value` | a [`tf.distribute.DistributedValues`](../distributedvalues) instance, e.g. returned by [`Strategy.run`](../mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with `OneDeviceStrategy` or default strategy. |
| `axis` | specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). |

| Returns |
| A `Tensor`. |

### `run`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/tpu_strategy.py#L711-L721)

```
run(
    fn, args=(), kwargs=None, options=None
)
```

See base class.

### `scope`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L863-L955)

```
scope()
```

Context manager to make the strategy current and distribute variables.

This method returns a context manager, and is used as follows:

```
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
  mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
  0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
  1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
```

*What happens when Strategy.scope is entered?*

* `strategy` is installed in the global context as the "current" strategy. Inside this scope, [`tf.distribute.get_strategy()`](../get_strategy) will now return this strategy. Outside this scope, it returns the default no-op strategy.
* Entering the scope also enters the "cross-replica context". See [`tf.distribute.StrategyExtended`](../strategyextended) for an explanation on cross-replica and replica contexts.
* Variable creation inside `scope` is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like `MirroredStrategy`, `TPUStrategy` and `MultiWorkerMirroredStrategy` create variables replicated on each replica, whereas `ParameterServerStrategy` creates variables on the parameter servers. This is done using a custom [`tf.variable_creator_scope`](../../variable_creator_scope).
* In some strategies, a default device scope may also be entered: in `MultiWorkerMirroredStrategy`, a default device scope of "/CPU:0" is entered on each worker.

> **Note:** Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras `model.fit`. If you're not using `model.fit`, you need to use the `strategy.run` API to explicitly distribute that computation.
See an example in the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training).
>

*What should be in scope and what should be outside?*

There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).

* Anything that creates variables that should be distributed variables must be called in a `strategy.scope`. This can be accomplished either by directly calling the variable creating function within the scope context, or by relying on another API like `strategy.run` or [`keras.Model.fit`](../../keras/model#fit) to automatically enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Some common objects that create variables in TF are Models, Optimizers, Metrics. Such objects should always be initialized in the scope, and any functions that may lazily create variables (e.g., `Model.__call__()`, tracing a [`tf.function`](../../function), etc.) should similarly be called within scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the `strategy.scope` can also work seamlessly, without the user having to enter the scope.
* Some strategy APIs (such as `strategy.run` and `strategy.reduce`) which need to be called in a strategy's scope, enter the scope automatically, which means when using those APIs you don't need to explicitly enter the scope yourself.
* When a [`tf.keras.Model`](../../keras/model) is created inside a `strategy.scope`, the Model object captures the scope information. When high level training framework methods such as `model.compile`, `model.fit`, etc. are then called, the captured scope will be automatically entered, and the associated strategy will be used to distribute the training etc. See a detailed example in [distributed keras tutorial](https://www.tensorflow.org/tutorials/distribute/keras). WARNING: Simply calling `model(..)` does not automatically enter the captured scope -- only high level training framework APIs support this behavior: `model.compile`, `model.fit`, `model.evaluate`, `model.predict` and `model.save` can all be called inside or outside the scope.
* The following can be either inside or outside the scope:
  + Creating the input datasets
  + Defining [`tf.function`](../../function)s that represent your training step
  + Saving APIs such as [`tf.saved_model.save`](../../saved_model/save). Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way.
  + Checkpoint saving. As mentioned above - `checkpoint.restore` may sometimes need to be inside scope if it creates variables.

| Returns |
| A context manager. |
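Since TPUStrategy does not support pure eager execution (see the class description above), `strategy.run` needs to execute inside a [`tf.function`](../../function). A minimal sketch, assuming `strategy` was constructed with the initialization code shown earlier on this page:

```
@tf.function
def train_step(x):
  def step_fn(x):
    # Replica-local computation; runs once per TPU core.
    return x * 2.0
  return strategy.run(step_fn, args=(x,))

per_replica = train_step(tf.constant(1.0))
```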
tensorflow tf.distribute.experimental.MultiWorkerMirroredStrategy

tf.distribute.experimental.MultiWorkerMirroredStrategy
======================================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/collective_all_reduce_strategy.py#L56-L214) |

A distribution strategy for synchronous training on multiple workers.

Inherits From: [`MultiWorkerMirroredStrategy`](../multiworkermirroredstrategy), [`Strategy`](../strategy)

```
tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.AUTO,
    cluster_resolver=None
)
```

This strategy implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to [`tf.distribute.MirroredStrategy`](../mirroredstrategy), it replicates all variables and computations to each local device. The difference is that it uses a distributed collective implementation (e.g. all-reduce), so that multiple workers can work together.

You need to launch your program on each worker and configure `cluster_resolver` correctly. For example, if you are using [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](../cluster_resolver/tfconfigclusterresolver), each worker needs to have its corresponding `task_type` and `task_id` set in the `TF_CONFIG` environment variable. An example `TF_CONFIG` on worker-0 of a two-worker cluster is:

```
TF_CONFIG = '{"cluster": {"worker": ["localhost:12345", "localhost:23456"]}, "task": {"type": "worker", "index": 0} }'
```

Your program runs on each worker as-is. Note that collectives require each worker to participate. All [`tf.distribute`](../../distribute) and non-[`tf.distribute`](../../distribute) APIs may use collectives internally, e.g. checkpointing and saving, since reading a [`tf.Variable`](../../variable) with [`tf.VariableSynchronization.ON_READ`](../../variablesynchronization#ON_READ) all-reduces the value. Therefore it's recommended to run exactly the same program on each worker. Dispatching based on `task_type` or `task_id` of the worker is error-prone.

`cluster_resolver.num_accelerators()` determines the number of GPUs the strategy uses. If it's zero, the strategy uses the CPU. All workers need to use the same number of devices, otherwise the behavior is undefined.

This strategy is not intended for TPU. Use [`tf.distribute.TPUStrategy`](../tpustrategy) instead.

After setting up `TF_CONFIG`, using this strategy is similar to using [`tf.distribute.MirroredStrategy`](../mirroredstrategy) and [`tf.distribute.TPUStrategy`](../tpustrategy).
```
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
  model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=(5,)),
  ])
  optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

def dataset_fn(ctx):
  x = np.random.random((2, 5)).astype(np.float32)
  y = np.random.randint(2, size=(2, 1))
  dataset = tf.data.Dataset.from_tensor_slices((x, y))
  return dataset.repeat().batch(1, drop_remainder=True)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)

model.compile()
model.fit(dist_dataset)
```

You can also write your own training loop:

```
@tf.function
def train_step(iterator):
  def step_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
      logits = model(features, training=True)
      loss = tf.keras.losses.sparse_categorical_crossentropy(
          labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
  strategy.run(step_fn, args=(next(iterator),))

# Create an iterator over the distributed dataset defined above.
iterator = iter(dist_dataset)
NUM_STEP = 100  # number of training steps; pick a value for your job
for _ in range(NUM_STEP):
  train_step(iterator)
```

See [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) for a detailed tutorial.

**Saving**

You need to save and checkpoint on all workers instead of just one. This is because variables with `synchronization=ON_READ` trigger aggregation during saving. It's recommended to save to a different path on each worker to avoid race conditions. Each worker saves the same thing (a sketch of one way to pick per-worker paths follows the tables below). See [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#model_saving_and_loading) tutorial for examples.

**Known Issues**

* [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](../cluster_resolver/tfconfigclusterresolver) does not return the correct number of accelerators. The strategy uses all available GPUs if `cluster_resolver` is [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](../cluster_resolver/tfconfigclusterresolver) or `None`.
* In eager mode, the strategy needs to be created before calling any other Tensorflow API.

| Args |
| `communication` | optional [`tf.distribute.experimental.CommunicationImplementation`](communicationimplementation). This is a hint on the preferred collective communication implementation. Possible values include `AUTO`, `RING`, and `NCCL`. |
| `cluster_resolver` | optional [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver). If `None`, [`tf.distribute.cluster_resolver.TFConfigClusterResolver`](../cluster_resolver/tfconfigclusterresolver) is used. |

| Attributes |
| `cluster_resolver` | Returns the cluster resolver associated with this strategy. As a multi-worker strategy, [`tf.distribute.MultiWorkerMirroredStrategy`](../multiworkermirroredstrategy) provides the associated [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver). If the user provides one in `__init__`, that instance is returned; if the user does not, a default `TFConfigClusterResolver` is provided. |
| `extended` | [`tf.distribute.StrategyExtended`](../strategyextended) with additional methods. |
| `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated.
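As referenced in the **Saving** paragraph above, a minimal sketch of one way to derive per-worker save paths from `TF_CONFIG`; the directory names are hypothetical, and `model` is the model built in the earlier snippet:

```
import json
import os

task = json.loads(os.environ["TF_CONFIG"])["task"]
task_id = task["index"]

# Every worker saves (collectives require all workers to participate),
# but only the chief writes to the path you intend to keep.
if task["type"] == "worker" and task_id == 0:
  path = "/tmp/saved_model"                   # hypothetical chief path
else:
  path = "/tmp/saved_model_tmp_%d" % task_id  # hypothetical per-worker path
model.save(path)
```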
| Methods ------- ### `distribute_datasets_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1110-L1187) ``` distribute_datasets_from_function( dataset_fn, options=None ) ``` Distributes [`tf.data.Dataset`](../../data/dataset) instances created by calls to `dataset_fn`. The argument `dataset_fn` that users pass in is an input function that has a [`tf.distribute.InputContext`](../inputcontext) argument and returns a [`tf.data.Dataset`](../../data/dataset) instance. It is expected that the returned dataset from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) does not batch or shard the [`tf.data.Dataset`](../../data/dataset) instance returned from the input function. `dataset_fn` will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, `tf.distribute.experimental_distribute_dataset` does batching and sharding for you.) For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The `dataset_fn` should take a [`tf.distribute.InputContext`](../inputcontext) instance where information about batching and input replication can be accessed. You can use the `element_spec` property of the [`tf.distribute.DistributedDataset`](../distributeddataset) returned by this API to query the [`tf.TypeSpec`](../../typespec) of the elements returned by the iterator. This can be used to set the `input_signature` property of a [`tf.function`](../../function). Follow [`tf.distribute.DistributedDataset.element_spec`](../distributeddataset#element_spec) to see an example. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) is not guaranteed. This is typically required if you are using [`tf.distribute`](../../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
> For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_datasets_from_function). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset_fn` | A function taking a [`tf.distribute.InputContext`](../inputcontext) instance and returning a [`tf.data.Dataset`](../../data/dataset). | | `options` | [`tf.distribute.InputOptions`](../inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](../distributeddataset). | ### `experimental_distribute_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L989-L1108) ``` experimental_distribute_dataset( dataset, options=None ) ``` Creates [`tf.distribute.DistributedDataset`](../distributeddataset) from [`tf.data.Dataset`](../../data/dataset). The returned [`tf.distribute.DistributedDataset`](../distributeddataset) can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a [`tf.distribute.DistributedDataset`](../distributeddataset). You can only create an iterator or examine the [`tf.TypeSpec`](../../typespec) of the data generated by it. See API docs of [`tf.distribute.DistributedDataset`](../distributeddataset) to learn more. The following is an example: ``` global_batch_size = 2 # Passing the devices is optional. strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) # Create a dataset dataset = tf.data.Dataset.range(4).batch(global_batch_size) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def replica_fn(input): return input*2 result = [] # Iterate over the `tf.distribute.DistributedDataset` for x in dist_dataset: # process dataset elements result.append(strategy.run(replica_fn, args=(x,))) print(result) [PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])> }, PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])> }] ``` Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, `dataset` is batched by `global_batch_size`, and calling `experimental_distribute_dataset` on it rebatches `dataset` to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. `x` is a [`tf.distribute.DistributedValues`](../distributedvalues) containing data for all replicas, and each replica gets data of the new batch size. [`tf.distribute.Strategy.run`](../strategy#run) will take care of feeding the right per-replica data in `x` to the right `replica_fn` executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e.
when you use [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy`](../tpustrategy)), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right [`tf.data.experimental.AutoShardPolicy`](../../data/experimental/autoshardpolicy) is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using [`tf.data.experimental.DistributeOptions`](../../data/experimental/distributeoptions). Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding. > > **Note:** for autosharding across multiple workers, the default mode is [`tf.data.experimental.AutoShardPolicy.AUTO`](../../data/experimental/autoshardpolicy#AUTO). This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. [`tf.data.TFRecordDataset`](../../data/tfrecorddataset), [`tf.data.TextLineDataset`](../../data/textlinedataset), etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have fewer than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the [`tf.data.experimental.DistributeOptions.auto_shard_policy`](../../data/experimental/distributeoptions#auto_shard_policy) to be [`tf.data.experimental.AutoShardPolicy.OFF`](../../data/experimental/autoshardpolicy#OFF). > By default, this method adds a prefetch transformation at the end of the user-provided [`tf.data.Dataset`](../../data/dataset) instance. The `buffer_size` argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) instead, which does not do any automatic batching or sharding for you. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) is not guaranteed. This is typically required if you are using [`tf.distribute`](../../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
> For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_dataset). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset` | [`tf.data.Dataset`](../../data/dataset) that will be sharded across all replicas using the rules stated above. | | `options` | [`tf.distribute.InputOptions`](../inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](../distributeddataset). | ### `experimental_distribute_values_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751) ``` experimental_distribute_values_from_function( value_fn ) ``` Generates [`tf.distribute.DistributedValues`](../distributedvalues) from `value_fn`. This function is to generate [`tf.distribute.DistributedValues`](../distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets. | Args | | `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. | | Returns | | A [`tf.distribute.DistributedValues`](../distributedvalues) containing a value for each replica. | #### Example usage: 1. Return constant value per replica: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) ``` 2. Distribute values in array based on replica\_id: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) array_value = np.array([3., 2., 1.]) def value_fn(ctx): return array_value[ctx.replica_id_in_sync_group] distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (3.0, 2.0) ``` 3. Specify values using num\_replicas\_in\_sync: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return ctx.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) ``` 4. Place values on devices and distribute: ``` strategy = tf.distribute.TPUStrategy() worker_devices = strategy.extended.worker_devices multiple_values = [] for i in range(strategy.num_replicas_in_sync): with tf.device(worker_devices[i]): multiple_values.append(tf.constant(1.0)) def value_fn(ctx): return multiple_values[ctx.replica_id_in_sync_group] distributed_values = strategy.experimental_distribute_values_from_function( value_fn) ``` ### `experimental_local_results` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1541-L1559) ``` experimental_local_results( value ) ``` Returns the list of all local per-replica values contained in `value`. > > **Note:** This only returns values on the worker initiated by this client.
When using a [`tf.distribute.Strategy`](../strategy) like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy), each worker will be its own client, and this function will only return values computed on that worker. > | Args | | `value` | A value returned by `experimental_run()`, `run()`, or a variable created in `scope`. | | Returns | | A tuple of values contained in `value` where the i-th element corresponds to the i-th replica. If `value` represents a single value, this returns `(value,)`. | ### `gather` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858) ``` gather( value, axis ) ``` Gather `value` across replicas along `axis` to the current device. Given a [`tf.distribute.DistributedValues`](../distributedvalues) or [`tf.Tensor`](../../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](../tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](../multiworkermirroredstrategy), this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](../replicacontext#all_gather). > > **Note:** For all strategies except [`tf.distribute.TPUStrategy`](../tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension. In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument. For example, given a [`tf.distribute.DistributedValues`](../distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](../tpustrategy#gather), all tensors must have exactly the same rank and same shape. > > > **Note:** Given a [`tf.distribute.DistributedValues`](../distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../../expand_dims) before gathering them.
> ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)> ``` Consider the following example for more combinations: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)> ``` | Args | | `value` | a [`tf.distribute.DistributedValues`](../distributedvalues) instance, e.g. returned by [`Strategy.run`](../mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](../onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../../indexedslices). | | `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). | | Returns | | A `Tensor` that's the concatenation of `value` across replicas along the `axis` dimension. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1314-L1516) ``` reduce( reduce_op, value, axis ) ``` Reduce `value` across replicas and return result on current device. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) total = strategy.reduce("SUM", per_replica_result, axis=None) total <tf.Tensor: shape=(), dtype=int32, numpy=1> ``` To see which devices the per-replica and reduced values are placed on, consider the same example with MirroredStrategy and 2 GPUs: ``` strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) # Check devices on which per replica result is: strategy.experimental_local_results(per_replica_result)[0].device # /job:localhost/replica:0/task:0/device:GPU:0 strategy.experimental_local_results(per_replica_result)[1].device # /job:localhost/replica:0/task:0/device:GPU:1 total = strategy.reduce("SUM", per_replica_result, axis=None) # Check device on which reduced result is: total.device # /job:localhost/replica:0/task:0/device:CPU:0 ``` This API is typically used for aggregating the results returned from different replicas, for reporting etc.
For example, loss computed from different replicas can be averaged using this API before printing. > > **Note:** The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For `TPUStrategy`, it is the first TPU host. For multi-client `MultiWorkerMirroredStrategy`, this is the CPU of each worker. > There are a number of different tf.distribute APIs for reducing values across replicas: * [`tf.distribute.ReplicaContext.all_reduce`](../replicacontext#all_reduce): This differs from [`Strategy.reduce`](../mirroredstrategy#reduce) in that it is for replica context and does not copy the results to the host device. `all_reduce` should typically be used for reductions inside the training step such as gradients. * [`tf.distribute.StrategyExtended.reduce_to`](../strategyextended#reduce_to) and [`tf.distribute.StrategyExtended.batch_reduce_to`](../strategyextended#batch_reduce_to): These APIs are more advanced versions of [`Strategy.reduce`](../mirroredstrategy#reduce) as they allow customizing the destination of the result. They are also called in cross-replica context. *What should axis be?* Given a per-replica value returned by `run`, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and `[4, 5, 6, 7]` will be on replica 1. With `axis=None`, `reduce` will aggregate only across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). ``` strategy.reduce("sum", per_replica_result, axis=None) ``` Sometimes, you will want to aggregate across both the global batch *and* all replicas. You can get this behavior by specifying the batch dimension as the `axis`, typically `axis=0`. In this case it would return a scalar `0+1+2+3+4+5+6+7`. ``` strategy.reduce("sum", per_replica_result, axis=0) ``` If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify `axis=0`. If you specify [`tf.distribute.ReduceOp.MEAN`](../reduceop#MEAN), using `axis=0` will use the correct denominator of 6. Contrast this with computing `reduce_mean` to get a scalar value on each replica and then using this function to average those means, which would weigh some values `1/8` and others `1/4`. | Args | | `reduce_op` | a [`tf.distribute.ReduceOp`](../reduceop) value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". | | `value` | a [`tf.distribute.DistributedValues`](../distributedvalues) instance, e.g. returned by [`Strategy.run`](../mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with `OneDeviceStrategy` or default strategy. | | `axis` | specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). | | Returns | | A `Tensor`.
| ### `run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1197-L1312) ``` run( fn, args=(), kwargs=None, options=None ) ``` Invokes `fn` on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes `fn` on each replica. If `args` or `kwargs` have [`tf.distribute.DistributedValues`](../distributedvalues), such as those produced by a [`tf.distribute.DistributedDataset`](../distributeddataset) from [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function), when `fn` is executed on a particular replica, it will be executed with the component of [`tf.distribute.DistributedValues`](../distributedvalues) that corresponds to that replica. `fn` is invoked under a replica context. `fn` may call [`tf.distribute.get_replica_context()`](../get_replica_context) to access members such as `all_reduce`. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in `args` or `kwargs` can be a nested structure of tensors, e.g. a list of tensors, in which case `args` and `kwargs` will be passed to the `fn` invoked on each replica. Or `args` or `kwargs` can be [`tf.distribute.DistributedValues`](../distributedvalues) containing tensors or composite tensors, i.e. [`tf.compat.v1.TensorInfo.CompositeTensor`](../../compat/v1/tensorinfo/compositetensor), in which case each `fn` call will get the component of a [`tf.distribute.DistributedValues`](../distributedvalues) corresponding to its replica. Note that arbitrary Python values that are not of the types above are not supported. #### Example usage: 1. Constant tensor input. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) tensor_input = tf.constant(3.0) @tf.function def replica_fn(input): return input*2.0 result = strategy.run(replica_fn, args=(tensor_input,)) result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>, 1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0> } ``` 2. DistributedValues input. ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def run(): def value_fn(value_context): return value_context.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn2(input): return input*2 return strategy.run(replica_fn2, args=(distributed_values,)) result = run() result <tf.Tensor: shape=(), dtype=int32, numpy=4> ``` 3. Use [`tf.distribute.ReplicaContext`](../replicacontext) to all-reduce values. ``` strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"]) @tf.function def run(): def value_fn(value_context): return tf.constant(value_context.replica_id_in_sync_group) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn(input): return tf.distribute.get_replica_context().all_reduce("sum", input) return strategy.run(replica_fn, args=(distributed_values,)) result = run() result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=int32, numpy=1>, 1: <tf.Tensor: shape=(), dtype=int32, numpy=1> } ``` | Args | | `fn` | The function to run on each replica. | | `args` | Optional positional arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](../distributedvalues).
| | `kwargs` | Optional keyword arguments to `fn`. Its element can be a tensor, a nested structure of tensors or a [`tf.distribute.DistributedValues`](../distributedvalues). | | `options` | An optional instance of [`tf.distribute.RunOptions`](../runoptions) specifying the options to run `fn`. | | Returns | | Merged return value of `fn` across replicas. The structure of the return value is the same as the return value from `fn`. Each element in the structure can either be [`tf.distribute.DistributedValues`](../distributedvalues), `Tensor` objects, or `Tensor`s (for example, if running on a single replica). | ### `scope` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L863-L955) ``` scope() ``` Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # Variable created inside scope: with strategy.scope(): mirrored_variable = tf.Variable(1.) mirrored_variable MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0> } # Variable created outside scope: regular_variable = tf.Variable(1.) regular_variable <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> ``` *What happens when Strategy.scope is entered?* * `strategy` is installed in the global context as the "current" strategy. Inside this scope, [`tf.distribute.get_strategy()`](../get_strategy) will now return this strategy. Outside this scope, it returns the default no-op strategy. * Entering the scope also enters the "cross-replica context". See [`tf.distribute.StrategyExtended`](../strategyextended) for an explanation on cross-replica and replica contexts. * Variable creation inside `scope` is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like `MirroredStrategy`, `TPUStrategy` and `MultiWorkerMirroredStrategy` create variables replicated on each replica, whereas `ParameterServerStrategy` creates variables on the parameter servers. This is done using a custom [`tf.variable_creator_scope`](../../variable_creator_scope). * In some strategies, a default device scope may also be entered: in `MultiWorkerMirroredStrategy`, a default device scope of "/CPU:0" is entered on each worker. > > **Note:** Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras `model.fit`. If you're not using `model.fit`, you need to use the `strategy.run` API to explicitly distribute that computation. See an example in the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training). > *What should be in scope and what should be outside?* There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). * Anything that creates variables that should be distributed variables must be called in a `strategy.scope`. This can be accomplished either by directly calling the variable-creating function within the scope context, or by relying on another API like `strategy.run` or [`keras.Model.fit`](../../keras/model#fit) to automatically enter it for you.
Any variable that is created outside scope will not be distributed and may have performance implications. Some common objects that create variables in TF are Models, Optimizers, Metrics. Such objects should always be initialized in the scope, and any functions that may lazily create variables (e.g., `Model.__call__()`, tracing a [`tf.function`](../../function), etc.) should similarly be called within scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the `strategy.scope` can also work seamlessly, without the user having to enter the scope. * Some strategy APIs (such as `strategy.run` and `strategy.reduce`) that require being in a strategy's scope enter the scope automatically, which means when using those APIs you don't need to explicitly enter the scope yourself. * When a [`tf.keras.Model`](../../keras/model) is created inside a `strategy.scope`, the Model object captures the scope information. When high-level training framework methods such as `model.compile`, `model.fit`, etc. are then called, the captured scope will be automatically entered, and the associated strategy will be used to distribute the training etc. See a detailed example in [distributed Keras tutorial](https://www.tensorflow.org/tutorials/distribute/keras). WARNING: Simply calling `model(..)` does not automatically enter the captured scope -- only high-level training framework APIs support this behavior: `model.compile`, `model.fit`, `model.evaluate`, `model.predict` and `model.save` can all be called inside or outside the scope. * The following can be either inside or outside the scope: + Creating the input datasets + Defining [`tf.function`](../../function)s that represent your training step + Saving APIs such as [`tf.saved_model.save`](../../saved_model/save). Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. + Checkpoint saving. As mentioned above - `checkpoint.restore` may sometimes need to be inside scope if it creates variables. | Returns | | A context manager. |
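To make the saving guidance above concrete, here is a minimal sketch of per-worker saving (the `/tmp/model` base path and the per-worker directory scheme are illustrative assumptions, not part of the API):

```
import os
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(5,))])

# ... train the model ...

# Every worker must save, since saving may trigger collectives for
# ON_READ variables, but each worker writes to its own path to avoid
# race conditions; the saved contents are identical on every worker.
task_id = strategy.cluster_resolver.task_id
model.save(os.path.join("/tmp/model", f"worker_{task_id}"))
```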
tensorflow Module: tf.distribute.experimental.rpc Module: tf.distribute.experimental.rpc ====================================== Public API for tf.distribute.experimental.rpc namespace. Classes ------- [`class Client`](rpc/client): Client class for invoking RPCs to the server. [`class Server`](rpc/server): A Server base class for accepting RPCs for registered tf.functions. tensorflow tf.distribute.experimental.CommunicationImplementation tf.distribute.experimental.CommunicationImplementation ====================================================== Cross-device communication implementation. #### View aliases **Main aliases** [`tf.distribute.experimental.CollectiveCommunication`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.experimental.CollectiveCommunication`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation), [`tf.compat.v1.distribute.experimental.CommunicationImplementation`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation) * `AUTO`: Automatically chosen by TensorFlow. * `RING`: TensorFlow's ring algorithms for all-reduce and all-gather. * `NCCL`: NVIDIA®'s NCCL library. This is now only used for all-reduce on GPUs; all-reduce on CPU, all-gather, and broadcast fall back to RING. | Class Variables | | AUTO | `<CommunicationImplementation.AUTO: 'AUTO'>` | | NCCL | `<CommunicationImplementation.NCCL: 'NCCL'>` | | RING | `<CommunicationImplementation.RING: 'RING'>` | tensorflow Module: tf.distribute.experimental.coordinator Module: tf.distribute.experimental.coordinator ============================================== Public API for tf.distribute.experimental.coordinator namespace. Classes ------- [`class ClusterCoordinator`](coordinator/clustercoordinator): An object to schedule and coordinate remote function execution. [`class PerWorkerValues`](coordinator/perworkervalues): A container that holds a list of values, one value per worker. [`class RemoteValue`](coordinator/remotevalue): An asynchronously available value of a scheduled function. tensorflow tf.distribute.experimental.ValueContext tf.distribute.experimental.ValueContext ======================================= A class wrapping information needed by a distribute function. ``` tf.distribute.experimental.ValueContext( replica_id_in_sync_group=0, num_replicas_in_sync=1 ) ``` This is a context class that is passed to the `value_fn` in `strategy.experimental_distribute_values_from_function` and contains information about the compute replicas. The `num_replicas_in_sync` and `replica_id` can be used to customize the value on each replica. #### Example usage: 1. Directly constructed. ``` def value_fn(context): return context.replica_id_in_sync_group/context.num_replicas_in_sync context = tf.distribute.experimental.ValueContext( replica_id_in_sync_group=2, num_replicas_in_sync=4) per_replica_value = value_fn(context) per_replica_value 0.5 ``` 2. Passed in by `experimental_distribute_values_from_function`.
``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(value_context): return value_context.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) ``` | Args | | `replica_id_in_sync_group` | the current replica\_id, should be an int in [0, `num_replicas_in_sync`). | | `num_replicas_in_sync` | the number of replicas that are in sync. | | Attributes | | `num_replicas_in_sync` | Returns the number of compute replicas in sync. | | `replica_id_in_sync_group` | Returns the replica ID. | tensorflow tf.distribute.experimental.CommunicationOptions tf.distribute.experimental.CommunicationOptions =============================================== Options for cross-device communications like all-reduce. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.experimental.CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) ``` tf.distribute.experimental.CommunicationOptions( bytes_per_pack=0, timeout_seconds=None, implementation=tf.distribute.experimental.CollectiveCommunication.AUTO ) ``` This can be passed to methods like `tf.distribute.get_replica_context().all_reduce()` to optimize collective operation performance. Note that these are only hints, which may or may not change the actual behavior. Some options only apply to certain strategies and are ignored by others. One common optimization is to break gradients all-reduce into multiple packs so that weight updates can overlap with gradient all-reduce. #### Examples: ``` options = tf.distribute.experimental.CommunicationOptions( bytes_per_pack=50 * 1024 * 1024, timeout_seconds=120.0, implementation=tf.distribute.experimental.CommunicationImplementation.NCCL ) grads = tf.distribute.get_replica_context().all_reduce( 'sum', grads, options=options) optimizer.apply_gradients(zip(grads, vars), experimental_aggregate_gradients=False) ``` | Args | | `bytes_per_pack` | a non-negative integer. Breaks collective operations into packs of certain size. If it's zero, the value is determined automatically. This hint is respected by all multi-replica strategies except `TPUStrategy`. | | `timeout_seconds` | a float or None, timeout in seconds. If not None, the collective raises [`tf.errors.DeadlineExceededError`](../../errors/deadlineexceedederror) if it takes longer than this timeout. Zero disables timeout. This can be useful when debugging hanging issues. This should only be used for debugging since it creates a new thread for each collective, i.e. an overhead of `timeout_seconds * num_collectives_per_second` more threads. This only works for [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy). | | `implementation` | a [`tf.distribute.experimental.CommunicationImplementation`](communicationimplementation). This is a hint on the preferred communication implementation. Possible values include `AUTO`, `RING`, and `NCCL`. NCCL is generally more performant for GPU, but doesn't work for CPU. This only works for [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy). | | Raises | | `ValueError` | When arguments have invalid value.
| tensorflow Module: tf.distribute.experimental.partitioners Module: tf.distribute.experimental.partitioners =============================================== Public API for tf.distribute.experimental.partitioners namespace. Classes ------- [`class FixedShardsPartitioner`](partitioners/fixedshardspartitioner): Partitioner that allocates a fixed number of shards. [`class MaxSizePartitioner`](partitioners/maxsizepartitioner): Partitioner that keeps shards below `max_shard_bytes`. [`class MinSizePartitioner`](partitioners/minsizepartitioner): Partitioner that allocates a minimum size per shard. [`class Partitioner`](partitioners/partitioner): Partitioner base class: all partitioners inherit from this class. tensorflow tf.distribute.experimental.CentralStorageStrategy tf.distribute.experimental.CentralStorageStrategy ================================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/central_storage_strategy.py#L24-L209) | A one-machine strategy that puts all variables on a single device. Inherits From: [`Strategy`](../strategy) ``` tf.distribute.experimental.CentralStorageStrategy( compute_devices=None, parameter_device=None ) ``` Variables are assigned to the local CPU or the only GPU. If there is more than one GPU, compute operations (other than variable update operations) will be replicated across all GPUs. #### For Example: ``` strategy = tf.distribute.experimental.CentralStorageStrategy() # Create a dataset ds = tf.data.Dataset.range(5).batch(2) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(ds) with strategy.scope(): @tf.function def train_step(val): return val + 1 # Iterate over the distributed dataset for x in dist_dataset: # process dataset elements strategy.run(train_step, args=(x,)) ``` | Attributes | | `cluster_resolver` | Returns the cluster resolver associated with this strategy. In general, when using a multi-worker [`tf.distribute`](../../distribute) strategy such as [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](multiworkermirroredstrategy) or [`tf.distribute.TPUStrategy()`](../tpustrategy), there is a [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) must set the relevant attribute, or override this property; otherwise, `None` is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver), and in those cases this property will return `None`. The [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver) may be useful when the user needs to access information such as the cluster spec, task type or task id. For example, ``` os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"], 'ps': ["localhost:34567"] }, 'task': {'type': 'worker', 'index': 0} }) # This implicitly uses TF_CONFIG for the cluster and current task info. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ... if strategy.cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers.
Since we set this as a worker above, this block will run on this particular instance. elif strategy.cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. Since we set this as a worker above, this block will not run on this particular instance. ``` For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](../cluster_resolver/clusterresolver)'s API docstring. | | `extended` | [`tf.distribute.StrategyExtended`](../strategyextended) with additional methods. | | `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. | Methods ------- ### `distribute_datasets_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1110-L1187) ``` distribute_datasets_from_function( dataset_fn, options=None ) ``` Distributes [`tf.data.Dataset`](../../data/dataset) instances created by calls to `dataset_fn`. The argument `dataset_fn` that users pass in is an input function that has a [`tf.distribute.InputContext`](../inputcontext) argument and returns a [`tf.data.Dataset`](../../data/dataset) instance. It is expected that the returned dataset from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) does not batch or shard the [`tf.data.Dataset`](../../data/dataset) instance returned from the input function. `dataset_fn` will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, `tf.distribute.experimental_distribute_dataset` does batching and sharding for you.) For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The `dataset_fn` should take a [`tf.distribute.InputContext`](../inputcontext) instance where information about batching and input replication can be accessed. You can use the `element_spec` property of the [`tf.distribute.DistributedDataset`](../distributeddataset) returned by this API to query the [`tf.TypeSpec`](../../typespec) of the elements returned by the iterator. This can be used to set the `input_signature` property of a [`tf.function`](../../function). Follow [`tf.distribute.DistributedDataset.element_spec`](../distributeddataset#element_spec) to see an example. > > **Note:** If you are using TPUStrategy, the order in which the data is processed by the workers when using [`tf.distribute.Strategy.experimental_distribute_dataset`](../strategy#experimental_distribute_dataset) or [`tf.distribute.Strategy.distribute_datasets_from_function`](../strategy#distribute_datasets_from_function) is not guaranteed. This is typically required if you are using [`tf.distribute`](../../distribute) to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly.
Refer to [this snippet](https://www.tensorflow.org/tutorials/distribute/input#caveats) for an example of how to order outputs. > > > **Note:** Stateful dataset transformations are currently not supported with `tf.distribute.experimental_distribute_dataset` or `tf.distribute.distribute_datasets_from_function`. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses [`tf.random.uniform`](../../random/uniform) to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. > For a tutorial on more usage and properties of this method, refer to the [tutorial on distributed input](https://www.tensorflow.org/tutorials/distribute/input#tfdistributestrategyexperimental_distribute_datasets_from_function). If you are interested in last partial batch handling, read [this section](https://www.tensorflow.org/tutorials/distribute/input#partial_batches). | Args | | `dataset_fn` | A function taking a [`tf.distribute.InputContext`](../inputcontext) instance and returning a [`tf.data.Dataset`](../../data/dataset). | | `options` | [`tf.distribute.InputOptions`](../inputoptions) used to control options on how this dataset is distributed. | | Returns | | A [`tf.distribute.DistributedDataset`](../distributeddataset). | ### `experimental_distribute_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/central_storage_strategy.py#L73-L109) ``` experimental_distribute_dataset( dataset, options=None ) ``` Distributes a tf.data.Dataset instance provided via `dataset`. The returned dataset is a wrapped strategy dataset which creates a multidevice iterator under the hood. It prefetches the input data to the specified devices on the worker. The returned distributed dataset can be iterated over similarly to regular datasets. > > **Note:** Currently, the user cannot add any more transformations to a distributed dataset. > #### For Example: ``` strategy = tf.distribute.experimental.CentralStorageStrategy() # with 1 CPU and 1 GPU dataset = tf.data.Dataset.range(10).batch(2) dist_dataset = strategy.experimental_distribute_dataset(dataset) for x in dist_dataset: print(x) # Prints PerReplica values [0, 1], [2, 3],... ``` | Args | | `dataset` | [`tf.data.Dataset`](../../data/dataset) to be prefetched to device. | | `options` | [`tf.distribute.InputOptions`](../inputoptions) used to control options on how this dataset is distributed. | | Returns | | A "distributed `Dataset`" that the caller can iterate over. | ### `experimental_distribute_values_from_function` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1677-L1751) ``` experimental_distribute_values_from_function( value_fn ) ``` Generates [`tf.distribute.DistributedValues`](../distributedvalues) from `value_fn`. This function is to generate [`tf.distribute.DistributedValues`](../distributedvalues) to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets. | Args | | `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. | | Returns | | A [`tf.distribute.DistributedValues`](../distributedvalues) containing a value for each replica. | #### Example usage: 1.
Return constant value per replica: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return tf.constant(1.) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) ``` 2. Distribute values in array based on replica\_id: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) array_value = np.array([3., 2., 1.]) def value_fn(ctx): return array_value[ctx.replica_id_in_sync_group] distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (3.0, 2.0) ``` 3. Specify values using num\_replicas\_in\_sync: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def value_fn(ctx): return ctx.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) local_result = strategy.experimental_local_results(distributed_values) local_result (2, 2) ``` 4. Place values on devices and distribute: ``` strategy = tf.distribute.TPUStrategy() worker_devices = strategy.extended.worker_devices multiple_values = [] for i in range(strategy.num_replicas_in_sync): with tf.device(worker_devices[i]): multiple_values.append(tf.constant(1.0)) def value_fn(ctx): return multiple_values[ctx.replica_id_in_sync_group] distributed_values = strategy.experimental_distribute_values_from_function( value_fn) ``` ### `experimental_local_results` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/central_storage_strategy.py#L111-L125) ``` experimental_local_results( value ) ``` Returns the list of all local per-replica values contained in `value`. In `CentralStorageStrategy` there is a single worker so the value returned will be all the values on that worker. | Args | | `value` | A value returned by `run()`, `extended.call_for_each_replica()`, or a variable created in `scope`. | | Returns | | A tuple of values contained in `value`. If `value` represents a single value, this returns `(value,)`. | ### `gather` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L1753-L1858) ``` gather( value, axis ) ``` Gather `value` across replicas along `axis` to the current device. Given a [`tf.distribute.DistributedValues`](../distributedvalues) or [`tf.Tensor`](../../tensor)-like object `value`, this API gathers and concatenates `value` across replicas along the `axis`-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For [`tf.distribute.TPUStrategy`](../tpustrategy), it is the first TPU host. For multi-client [`tf.distribute.MultiWorkerMirroredStrategy`](../multiworkermirroredstrategy), this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see [`tf.distribute.ReplicaContext.all_gather`](../replicacontext#all_gather). > > **Note:** For all strategies except [`tf.distribute.TPUStrategy`](../tpustrategy), the input `value` on different replicas must have the same rank, and their shapes must be the same in all dimensions except the `axis`-th dimension.
In other words, their shapes cannot be different in a dimension `d` where `d` does not equal the `axis` argument. For example, given a [`tf.distribute.DistributedValues`](../distributedvalues) with component tensors of shape `(1, 2, 3)` and `(1, 3, 3)` on two replicas, you can call `gather(..., axis=1, ...)` on it, but not `gather(..., axis=0, ...)` or `gather(..., axis=2, ...)`. However, for [`tf.distribute.TPUStrategy.gather`](../tpustrategy#gather), all tensors must have exactly the same rank and same shape. > > > **Note:** Given a [`tf.distribute.DistributedValues`](../distributedvalues) `value`, its component tensors must have a non-zero rank. Otherwise, consider using [`tf.expand_dims`](../../expand_dims) before gathering them. > ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)> ``` Consider the following example for more combinations: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)> ``` | Args | | `value` | a [`tf.distribute.DistributedValues`](../distributedvalues) instance, e.g. returned by [`Strategy.run`](../mirroredstrategy#run), to be combined into a single tensor. It can also be a regular tensor when used with [`tf.distribute.OneDeviceStrategy`](../onedevicestrategy) or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a [`tf.IndexedSlices`](../../indexedslices). | | `axis` | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). | | Returns | | A `Tensor` that's the concatenation of `value` across replicas along the `axis` dimension. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/central_storage_strategy.py#L145-L209) ``` reduce( reduce_op, value, axis ) ``` Reduce `value` across replicas. Given a per-replica value returned by `run`, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements. For example, if you have a global batch size of 8 and 2 replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and `[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`.
This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient). More often you will want to aggregate across the global batch, which you can get by specifying the batch dimension as the `axis`, typically `axis=0`. In this case it would return a scalar `0+1+2+3+4+5+6+7`. If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify `axis=0`. If you specify [`tf.distribute.ReduceOp.MEAN`](../reduceop#MEAN), using `axis=0` will use the correct denominator of 6. Contrast this with computing `reduce_mean` to get a scalar value on each replica and this function to average those means, which will weigh some values `1/8` and others `1/4`. #### For Example: ``` strategy = tf.distribute.experimental.CentralStorageStrategy( compute_devices=['CPU:0', 'GPU:0'], parameter_device='CPU:0') ds = tf.data.Dataset.range(10) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(ds) with strategy.scope(): @tf.function def train_step(val): # pass through return val # Iterate over the distributed dataset for x in dist_dataset: result = strategy.run(train_step, args=(x,)) result = strategy.reduce(tf.distribute.ReduceOp.SUM, result, axis=None).numpy() # result: array([ 4, 6, 8, 10]) result = strategy.reduce(tf.distribute.ReduceOp.SUM, result, axis=0).numpy() # result: 28 ``` | Args | | `reduce_op` | A [`tf.distribute.ReduceOp`](../reduceop) value specifying how values should be combined. | | `value` | A "per replica" value, e.g. returned by `run` to be combined into a single tensor. | | `axis` | Specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or `None` to only reduce across replicas (e.g. if the tensor has no batch dimension). | | Returns | | A `Tensor`. | ### `run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/central_storage_strategy.py#L127-L143) ``` run( fn, args=(), kwargs=None, options=None ) ``` Run `fn` on each replica, with the given arguments. In `CentralStorageStrategy`, `fn` is called on each of the compute replicas, with the provided "per replica" arguments specific to that device. | Args | | `fn` | The function to run. The output must be a [`tf.nest`](../../nest) of `Tensor`s. | | `args` | (Optional) Positional arguments to `fn`. | | `kwargs` | (Optional) Keyword arguments to `fn`. | | `options` | (Optional) An instance of [`tf.distribute.RunOptions`](../runoptions) specifying the options to run `fn`. | | Returns | | Return value from running `fn`. | ### `scope` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/distribute_lib.py#L863-L955) ``` scope() ``` Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows: ``` strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # Variable created inside scope: with strategy.scope(): mirrored_variable = tf.Variable(1.) mirrored_variable MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0> } # Variable created outside scope: regular_variable = tf.Variable(1.) 
regular_variable <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> ``` *What happens when Strategy.scope is entered?* * `strategy` is installed in the global context as the "current" strategy. Inside this scope, [`tf.distribute.get_strategy()`](../get_strategy) will now return this strategy. Outside this scope, it returns the default no-op strategy. * Entering the scope also enters the "cross-replica context". See [`tf.distribute.StrategyExtended`](../strategyextended) for an explanation on cross-replica and replica contexts. * Variable creation inside `scope` is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like `MirroredStrategy`, `TPUStrategy` and `MultiWorkerMirroredStrategy` create variables replicated on each replica, whereas `ParameterServerStrategy` creates variables on the parameter servers. This is done using a custom [`tf.variable_creator_scope`](../../variable_creator_scope). * In some strategies, a default device scope may also be entered: in `MultiWorkerMirroredStrategy`, a default device scope of "/CPU:0" is entered on each worker. > > **Note:** Entering a scope does not automatically distribute a computation, except in the case of a high-level training framework like Keras `model.fit`. If you're not using `model.fit`, you need to use the `strategy.run` API to explicitly distribute that computation. See an example in the [custom training loop tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training). > *What should be in scope and what should be outside?* There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). * Anything that creates variables that should be distributed variables must be called in a `strategy.scope`. This can be accomplished either by directly calling the variable creating function within the scope context, or by relying on another API like `strategy.run` or [`keras.Model.fit`](../../keras/model#fit) to automatically enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Some common objects that create variables in TF are Models, Optimizers, Metrics. Such objects should always be initialized in the scope, and any functions that may lazily create variables (e.g., `Model.__call__()`, tracing a [`tf.function`](../../function), etc.) should similarly be called within scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the `strategy.scope` can also work seamlessly, without the user having to enter the scope. * Some strategy APIs (such as `strategy.run` and `strategy.reduce`) which require being in a strategy's scope enter the scope automatically, which means when using those APIs you don't need to explicitly enter the scope yourself. * When a [`tf.keras.Model`](../../keras/model) is created inside a `strategy.scope`, the Model object captures the scope information. When high-level training framework methods such as `model.compile`, `model.fit`, etc. are then called, the captured scope will be automatically entered, and the associated strategy will be used to distribute the training etc. 
See a detailed example in [distributed keras tutorial](https://www.tensorflow.org/tutorials/distribute/keras). WARNING: Simply calling `model(..)` does not automatically enter the captured scope -- only high level training framework APIs support this behavior: `model.compile`, `model.fit`, `model.evaluate`, `model.predict` and `model.save` can all be called inside or outside the scope. * The following can be either inside or outside the scope: + Creating the input datasets + Defining [`tf.function`](../../function)s that represent your training step + Saving APIs such as [`tf.saved_model.save`](../../saved_model/save). Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. + Checkpoint saving. As mentioned above - `checkpoint.restore` may sometimes need to be inside scope if it creates variables. | Returns | | A context manager. |
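To see how `scope`, `run`, and `reduce` fit together outside of Keras, here is a minimal custom-training-loop sketch. It assumes two local GPUs are available (swap in devices you actually have); the variable, dataset, and per-example loss are illustrative placeholders, not part of the API above:

```
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

with strategy.scope():
  # Created inside the scope, so the variable is mirrored on each replica.
  w = tf.Variable(2.0)

dataset = tf.data.Dataset.from_tensor_slices([1., 2., 3., 4.]).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(x):
  def replica_fn(inputs):
    # Each replica sees only its shard of the global batch.
    return tf.square(w * inputs - 1.0)  # illustrative per-example loss
  return strategy.run(replica_fn, args=(x,))

for x in dist_dataset:
  per_replica_losses = train_step(x)
  # `reduce` may be called outside the scope; it enters it automatically.
  # axis=0 also aggregates over the batch dimension, giving a scalar loss.
  loss = strategy.reduce(
      tf.distribute.ReduceOp.MEAN, per_replica_losses, axis=0)
```

Because `axis=0` is passed, the mean is taken over the full global batch rather than averaging per-replica means, matching the weighting caveat described in the `reduce` section above.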
tensorflow tf.distribute.experimental.partitioners.MinSizePartitioner tf.distribute.experimental.partitioners.MinSizePartitioner ========================================================== Partitioner that allocates a minimum size per shard. Inherits From: [`Partitioner`](partitioner) ``` tf.distribute.experimental.partitioners.MinSizePartitioner( min_shard_bytes=(256 << 10), max_shards=1, bytes_per_string=16 ) ``` This partitioner ensures each shard has at least `min_shard_bytes`, and tries to allocate as many shards as possible, i.e., keeping shard size as small as possible. The maximum number of such shards (upper bound) is given by `max_shards`. #### Examples: ``` partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=2) partitions = partitioner(tf.TensorShape([6, 1]), tf.float32) [2, 1] partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=10) partitions = partitioner(tf.TensorShape([6, 1]), tf.float32) [6, 1] # use in ParameterServerStrategy # strategy = tf.distribute.experimental.ParameterServerStrategy( # cluster_resolver=cluster_resolver, variable_partitioner=partitioner) ``` | Args | | `min_shard_bytes` | Minimum bytes of each shard. Defaults to 256K. | | `max_shards` | Upper bound on the number of shards. Defaults to 1. | | `bytes_per_string` | If the partition value is of type string, this provides an estimate of how large each string is. | Methods ------- ### `__call__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/sharded_variable.py#L162-L167) ``` __call__( shape, dtype, axis=0 ) ``` Partitions the given `shape` and returns the partition results. Examples of a partitioner that allocates a fixed number of shards: ``` partitioner = FixedShardsPartitioner(num_shards=2) partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0) print(partitions) # [2, 1] ``` | Args | | `shape` | a [`tf.TensorShape`](../../../tensorshape), the shape to partition. | | `dtype` | a `tf.dtypes.Dtype` indicating the type of the partition value. | | `axis` | The axis to partition along. Default: outermost axis. | | Returns | | A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. | tensorflow tf.distribute.experimental.partitioners.FixedShardsPartitioner tf.distribute.experimental.partitioners.FixedShardsPartitioner ============================================================== Partitioner that allocates a fixed number of shards. Inherits From: [`Partitioner`](partitioner) ``` tf.distribute.experimental.partitioners.FixedShardsPartitioner( num_shards ) ``` #### Examples: ``` # standalone usage: partitioner = FixedShardsPartitioner(num_shards=2) partitions = partitioner(tf.TensorShape([10, 3]), tf.float32) [2, 1] # use in ParameterServerStrategy # strategy = tf.distribute.experimental.ParameterServerStrategy( # cluster_resolver=cluster_resolver, variable_partitioner=partitioner) ``` | Args | | `num_shards` | `int`, number of shards to partition. | Methods ------- ### `__call__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/sharded_variable.py#L107-L111) ``` __call__( shape, dtype, axis=0 ) ``` Partitions the given `shape` and returns the partition results. 
Examples of a partitioner that allocates a fixed number of shards: ``` partitioner = FixedShardsPartitioner(num_shards=2) partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0) print(partitions) # [2, 1] ``` | Args | | `shape` | a [`tf.TensorShape`](../../../tensorshape), the shape to partition. | | `dtype` | a `tf.dtypes.Dtype` indicating the type of the partition value. | | `axis` | The axis to partition along. Default: outermost axis. | | Returns | | A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. | tensorflow tf.distribute.experimental.partitioners.MaxSizePartitioner tf.distribute.experimental.partitioners.MaxSizePartitioner ========================================================== Partitioner that keeps shards below `max_shard_bytes`. Inherits From: [`Partitioner`](partitioner) ``` tf.distribute.experimental.partitioners.MaxSizePartitioner( max_shard_bytes, max_shards=None, bytes_per_string=16 ) ``` This partitioner ensures each shard has at most `max_shard_bytes`, and tries to allocate as few shards as possible, i.e., keeping shard size as large as possible. If the partitioner hits the `max_shards` limit, then each shard may end up larger than `max_shard_bytes`. By default `max_shards` equals `None` and no limit on the number of shards is enforced. #### Examples: ``` partitioner = MaxSizePartitioner(max_shard_bytes=4) partitions = partitioner(tf.TensorShape([6, 1]), tf.float32) [6, 1] partitioner = MaxSizePartitioner(max_shard_bytes=4, max_shards=2) partitions = partitioner(tf.TensorShape([6, 1]), tf.float32) [2, 1] partitioner = MaxSizePartitioner(max_shard_bytes=1024) partitions = partitioner(tf.TensorShape([6, 1]), tf.float32) [1, 1] # use in ParameterServerStrategy # strategy = tf.distribute.experimental.ParameterServerStrategy( # cluster_resolver=cluster_resolver, variable_partitioner=partitioner) ``` | Args | | `max_shard_bytes` | The maximum size any given shard is allowed to be. | | `max_shards` | The maximum number of shards, an `int`, which takes precedence over `max_shard_bytes`. | | `bytes_per_string` | If the partition value is of type string, this provides an estimate of how large each string is. | Methods ------- ### `__call__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/sharded_variable.py#L223-L228) ``` __call__( shape, dtype, axis=0 ) ``` Partitions the given `shape` and returns the partition results. Examples of a partitioner that allocates a fixed number of shards: ``` partitioner = FixedShardsPartitioner(num_shards=2) partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0) print(partitions) # [2, 1] ``` | Args | | `shape` | a [`tf.TensorShape`](../../../tensorshape), the shape to partition. | | `dtype` | a `tf.dtypes.Dtype` indicating the type of the partition value. | | `axis` | The axis to partition along. Default: outermost axis. | | Returns | | A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. | tensorflow tf.distribute.experimental.partitioners.Partitioner tf.distribute.experimental.partitioners.Partitioner =================================================== Partitioner base class: all partitioners inherit from this class. Partitioners should implement a `__call__` method with the following signature: ``` def __call__(self, shape, dtype, axis=0): # Partitions the given `shape` and returns the partition results. 
# See docstring of `__call__` method for the format of partition results. ``` Methods ------- ### `__call__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/sharded_variable.py#L59-L79) ``` __call__( shape, dtype, axis=0 ) ``` Partitions the given `shape` and returns the partition results. Examples of a partitioner that allocates a fixed number of shards: ``` partitioner = FixedShardsPartitioner(num_shards=2) partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0) print(partitions) # [2, 1] ``` | Args | | `shape` | a [`tf.TensorShape`](../../../tensorshape), the shape to partition. | | `dtype` | a `tf.dtypes.Dtype` indicating the type of the partition value. | | `axis` | The axis to partition along. Default: outermost axis. | | Returns | | A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. | tensorflow tf.distribute.experimental.coordinator.ClusterCoordinator tf.distribute.experimental.coordinator.ClusterCoordinator ========================================================= An object to schedule and coordinate remote function execution. #### View aliases **Main aliases** [`tf.distribute.coordinator.ClusterCoordinator`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/coordinator/ClusterCoordinator) ``` tf.distribute.experimental.coordinator.ClusterCoordinator( strategy ) ``` This class is used to create fault-tolerant resources and dispatch functions to remote TensorFlow servers. Currently, this class is not supported for standalone use. It should be used in conjunction with a [`tf.distribute`](../../../distribute) strategy that is designed to work with it. The `ClusterCoordinator` class currently only works with [`tf.distribute.experimental.ParameterServerStrategy`](../parameterserverstrategy). **The `schedule`/`join` APIs** The most important API provided by this class is the `schedule`/`join` pair. The `schedule` API is non-blocking in that it queues a [`tf.function`](../../../function) and returns a `RemoteValue` immediately. The queued functions will be dispatched to remote workers in background threads and their `RemoteValue`s will be filled asynchronously. Since `schedule` doesn't require worker assignment, the [`tf.function`](../../../function) passed in can be executed on any available worker. If the worker it is executed on becomes unavailable before its completion, it will be migrated to another worker. Because of this, and because function execution is not atomic, a function may be executed more than once. **Handling Task Failure** This class, when used with [`tf.distribute.experimental.ParameterServerStrategy`](../parameterserverstrategy), comes with built-in fault tolerance for worker failures. That is, when some workers cannot be reached from the coordinator for any reason, training progress continues to be made with the remaining workers. Upon recovery of a failed worker, it will be added for function execution after datasets created by `create_per_worker_dataset` are re-built on it. When a parameter server fails, a [`tf.errors.UnavailableError`](../../../errors/unavailableerror) is raised by `schedule`, `join` or `done`. In this case, in addition to bringing back the failed parameter server, users should restart the coordinator so that it reconnects to workers and parameter servers, re-creates the variables, and loads checkpoints. 
If the coordinator fails, after the user brings it back, the program will automatically connect to workers and parameter servers, and continue the progress from a checkpoint. It is thus essential that, in the user's program, a checkpoint file is periodically saved, and restored at the start of the program. If a [`tf.keras.optimizers.Optimizer`](../../../keras/optimizers/optimizer) is checkpointed, after restoring from a checkpoint, its `iterations` property roughly indicates the number of steps that have been made. This can be used to decide how many epochs and steps are needed before training completes. See the [`tf.distribute.experimental.ParameterServerStrategy`](../parameterserverstrategy) docstring for an example usage of this API. This is currently under development, and the API as well as the implementation are subject to change. | Args | | `strategy` | a supported [`tf.distribute.Strategy`](../../strategy) object. Currently, only [`tf.distribute.experimental.ParameterServerStrategy`](../parameterserverstrategy) is supported. | | Raises | | `ValueError` | if the strategy being used is not supported. | | Attributes | | `strategy` | Returns the `Strategy` associated with the `ClusterCoordinator`. | Methods ------- ### `create_per_worker_dataset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/coordinator/cluster_coordinator.py#L1127-L1186) ``` create_per_worker_dataset( dataset_fn ) ``` Create dataset on workers by calling `dataset_fn` on worker devices. This creates the dataset generated by `dataset_fn` on the workers and returns an object that represents the collection of those individual datasets. Calling `iter` on such a collection of datasets returns a [`tf.distribute.experimental.coordinator.PerWorkerValues`](perworkervalues), which is a collection of iterators, where the iterators have been placed on respective workers. Calling `next` on a `PerWorkerValues` of iterators is unsupported. The iterator is meant to be passed as an argument into [`tf.distribute.experimental.coordinator.ClusterCoordinator.schedule`](clustercoordinator#schedule). When the scheduled function is about to be executed by a worker, the function will receive the individual iterator that corresponds to the worker. The `next` method can be called on an iterator inside a scheduled function when the iterator is an input of the function. Currently the `schedule` method assumes workers are all the same and thus assumes the datasets on different workers are the same, except they may be shuffled differently if they contain a `dataset.shuffle` operation and a random seed is not set. Because of this, we also recommend that the datasets be repeated indefinitely and that a finite number of steps be scheduled, instead of relying on the `OutOfRangeError` from a dataset. #### Example: ``` strategy = tf.distribute.experimental.ParameterServerStrategy( cluster_resolver=...) coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator( strategy=strategy) @tf.function def worker_fn(iterator): return next(iterator) def per_worker_dataset_fn(): return strategy.distribute_datasets_from_function( lambda x: tf.data.Dataset.from_tensor_slices([3] * 3)) per_worker_dataset = coordinator.create_per_worker_dataset( per_worker_dataset_fn) per_worker_iter = iter(per_worker_dataset) remote_value = coordinator.schedule(worker_fn, args=(per_worker_iter,)) assert remote_value.fetch() == 3 ``` | Args | | `dataset_fn` | The dataset function that returns a dataset. This is to be executed on the workers. 
| | Returns | | An object that represents the collection of those individual datasets. `iter` is expected to be called on this object that returns a [`tf.distribute.experimental.coordinator.PerWorkerValues`](perworkervalues) of the iterators (that are on the workers). | ### `done` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/coordinator/cluster_coordinator.py#L1109-L1125) ``` done() ``` Returns whether all the scheduled functions have finished execution. If any previously scheduled function raises an error, `done` will fail by raising any one of those errors. When `done` returns True or raises, it guarantees that there is no function that is still being executed. | Returns | | Whether all the scheduled functions have finished execution. | | Raises | | `Exception` | one of the exceptions caught by the coordinator by any previously scheduled function since the last time an error was thrown or since the beginning of the program. | ### `fetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/coordinator/cluster_coordinator.py#L1210-L1266) ``` fetch( val ) ``` Blocking call to fetch results from the remote values. This is a wrapper around [`tf.distribute.experimental.coordinator.RemoteValue.fetch`](remotevalue#fetch) for a `RemoteValue` structure; it returns the execution results of `RemoteValue`s. If not ready, wait for them while blocking the caller. #### Example: ``` strategy = ... coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator( strategy) def dataset_fn(): return tf.data.Dataset.from_tensor_slices([1, 1, 1]) with strategy.scope(): v = tf.Variable(initial_value=0) @tf.function def worker_fn(iterator): def replica_fn(x): v.assign_add(x) return v.read_value() return strategy.run(replica_fn, args=(next(iterator),)) distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn) distributed_iterator = iter(distributed_dataset) result = coordinator.schedule(worker_fn, args=(distributed_iterator,)) assert coordinator.fetch(result) == 1 ``` | Args | | `val` | The value to fetch the results from. If this is structure of [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue), `fetch()` will be called on the individual [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) to get the result. | | Returns | | If `val` is a [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) or a structure of [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue)s, return the fetched [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) values immediately if they are available, or block the call until they are available, and return the fetched [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) values with the same structure. If `val` is other types, return it as-is. | ### `join` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/coordinator/cluster_coordinator.py#L1088-L1107) ``` join() ``` Blocks until all the scheduled functions have finished execution. If any previously scheduled function raises an error, `join` will fail by raising any one of those errors, and clear the errors collected so far. If this happens, some of the previously scheduled functions may have not been executed. Users can call `fetch` on the returned [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) to inspect if they have executed, failed, or cancelled. 
If some functions that have been cancelled need to be rescheduled, users should call `schedule` with the function again. When `join` returns or raises, it guarantees that there is no function that is still being executed. | Raises | | `Exception` | one of the exceptions caught by the coordinator by any previously scheduled function since the last time an error was thrown or since the beginning of the program. | ### `schedule` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/coordinator/cluster_coordinator.py#L1017-L1086) ``` schedule( fn, args=None, kwargs=None ) ``` Schedules `fn` to be dispatched to a worker for asynchronous execution. This method is non-blocking in that it queues the `fn` which will be executed later and returns a [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) object immediately. `fetch` can be called on it to wait for the function execution to finish and retrieve its output from a remote worker. On the other hand, call [`tf.distribute.experimental.coordinator.ClusterCoordinator.join`](clustercoordinator#join) to wait for all scheduled functions to finish. `schedule` guarantees that `fn` will be executed on a worker at least once; it could be more than once if its corresponding worker fails in the middle of its execution. Note that since a worker can fail at any point when executing the function, it is possible that the function is partially executed, but [`tf.distribute.experimental.coordinator.ClusterCoordinator`](clustercoordinator) guarantees that in those events, the function will eventually be executed on any worker that is available. If any previously scheduled function raises an error, `schedule` will raise any one of those errors, and clear the errors collected so far. When this happens, some of the previously scheduled functions may not have been executed. Users can call `fetch` on the returned [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) to inspect whether they have executed, failed, or been cancelled, and reschedule the corresponding function if needed. When `schedule` raises, it guarantees that there is no function that is still being executed. At this time, there is no support for worker assignment for function execution, or for worker priority. `args` and `kwargs` are the arguments passed into `fn` when `fn` is executed on a worker. They can be [`tf.distribute.experimental.coordinator.PerWorkerValues`](perworkervalues), in which case the argument will be substituted with the corresponding component on the target worker. Arguments that are not [`tf.distribute.experimental.coordinator.PerWorkerValues`](perworkervalues) will be passed into `fn` as-is. Currently, [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) is not supported as an input in `args` or `kwargs`. | Args | | `fn` | A [`tf.function`](../../../function); the function to be dispatched to a worker for execution asynchronously. Scheduling a regular Python function is not supported. | | `args` | Positional arguments for `fn`. | | `kwargs` | Keyword arguments for `fn`. | | Returns | | A [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) object that represents the output of the function scheduled. | | Raises | | `Exception` | one of the exceptions caught by the coordinator from any previously scheduled function, since the last time an error was thrown or since the beginning of the program. |
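Putting `create_per_worker_dataset`, `schedule`, and `join` together, a typical coordinator program schedules a fixed number of steps per epoch and then blocks. The following is a minimal sketch; the cluster resolver is elided as in the examples above, and the step counts, variable, and dataset contents are illustrative placeholders:

```
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver=...)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
    strategy)

with strategy.scope():
  v = tf.Variable(initial_value=0.0)

@tf.function
def worker_fn(iterator):
  def replica_fn(x):
    v.assign_add(x)
    return v.read_value()
  return strategy.run(replica_fn, args=(next(iterator),))

def per_worker_dataset_fn():
  # Repeated indefinitely; a finite number of steps is scheduled instead
  # of relying on OutOfRangeError, as recommended above.
  return strategy.distribute_datasets_from_function(
      lambda _: tf.data.Dataset.from_tensor_slices([0.1] * 4).repeat())

per_worker_dataset = coordinator.create_per_worker_dataset(
    per_worker_dataset_fn)
per_worker_iterator = iter(per_worker_dataset)

num_epochs, steps_per_epoch = 2, 10  # illustrative values
for _ in range(num_epochs):
  for _ in range(steps_per_epoch):
    # Non-blocking: returns a RemoteValue immediately.
    coordinator.schedule(worker_fn, args=(per_worker_iterator,))
  # Block until all scheduled functions have finished, or raise if any
  # previously scheduled function failed.
  coordinator.join()
```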
tensorflow tf.distribute.experimental.coordinator.RemoteValue tf.distribute.experimental.coordinator.RemoteValue ================================================== An asynchronously available value of a scheduled function. #### View aliases **Main aliases** [`tf.distribute.coordinator.RemoteValue`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/coordinator/RemoteValue) This class is used as the return value of [`tf.distribute.experimental.coordinator.ClusterCoordinator.schedule`](clustercoordinator#schedule) where the underlying value becomes available at a later time once the function has been executed. Using [`tf.distribute.experimental.coordinator.RemoteValue`](remotevalue) as an input to a subsequent function scheduled with [`tf.distribute.experimental.coordinator.ClusterCoordinator.schedule`](clustercoordinator#schedule) is currently not supported. #### Example: ``` strategy = tf.distribute.experimental.ParameterServerStrategy( cluster_resolver=...) coordinator = ( tf.distribute.experimental.coordinator.ClusterCoordinator(strategy)) with strategy.scope(): v1 = tf.Variable(initial_value=0.0) v2 = tf.Variable(initial_value=1.0) @tf.function def worker_fn(): v1.assign_add(0.1) v2.assign_sub(0.2) return v1.read_value() / v2.read_value() result = coordinator.schedule(worker_fn) # Note that `fetch()` gives the actual result instead of a `tf.Tensor`. assert result.fetch() == 0.125 for _ in range(10): # `worker_fn` will be run on arbitrary workers that are available. The # `result` value will be available later. result = coordinator.schedule(worker_fn) ``` Methods ------- ### `fetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/coordinator/values.py#L116-L132) ``` fetch() ``` Wait for the result of `RemoteValue` and return the numpy result. This makes the value concrete by copying the remote value to local. | Returns | | The numpy array structure of the actual output of the [`tf.function`](../../../function) associated with this `RemoteValue`, previously returned by a [`tf.distribute.experimental.coordinator.ClusterCoordinator.schedule`](clustercoordinator#schedule) call. This can be a single value, or a structure of values, depending on the output of the [`tf.function`](../../../function). | | Raises | | [`tf.errors.CancelledError`](https://www.tensorflow.org/api_docs/python/tf/errors/CancelledError) | If the function that produces this `RemoteValue` is aborted or cancelled due to failure. | ### `get` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/coordinator/values.py#L134-L150) ``` get() ``` Wait for the result of `RemoteValue` and return the tensor result. This makes the value concrete by copying the remote tensor to local. | Returns | | The actual output (in the form of [`tf.Tensor`](../../../tensor)s) of the [`tf.function`](../../../function) associated with this `RemoteValue`, previously returned by a [`tf.distribute.experimental.coordinator.ClusterCoordinator.schedule`](clustercoordinator#schedule) call. This can be a single Tensor, or a structure of Tensors, depending on the output of the [`tf.function`](../../../function). | | Raises | | [`tf.errors.CancelledError`](https://www.tensorflow.org/api_docs/python/tf/errors/CancelledError) | If the function that produces this `RemoteValue` is aborted or cancelled due to failure. 
| tensorflow tf.distribute.experimental.coordinator.PerWorkerValues tf.distribute.experimental.coordinator.PerWorkerValues ====================================================== A container that holds a list of values, one value per worker. #### View aliases **Main aliases** [`tf.distribute.coordinator.PerWorkerValue`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/coordinator/PerWorkerValues) ``` tf.distribute.experimental.coordinator.PerWorkerValues( values ) ``` [`tf.distribute.experimental.coordinator.PerWorkerValues`](perworkervalues) contains a collection of values, where each of the values is located on its corresponding worker, and upon being used as one of the `args` or `kwargs` of [`tf.distribute.experimental.coordinator.ClusterCoordinator.schedule()`](clustercoordinator#schedule), the value specific to a worker will be passed into the function being executed at that corresponding worker. Currently, the only supported path to create an object of [`tf.distribute.experimental.coordinator.PerWorkerValues`](perworkervalues) is through calling `iter` on a [`ClusterCoordinator.create_per_worker_dataset`](clustercoordinator#create_per_worker_dataset)-returned distributed dataset instance. The mechanism to create a custom [`tf.distribute.experimental.coordinator.PerWorkerValues`](perworkervalues) is not yet supported. tensorflow tf.distribute.experimental.rpc.Client tf.distribute.experimental.rpc.Client ===================================== Client class for invoking RPCs to the server. Methods ------- ### `call` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/experimental/rpc/rpc_ops.py#L217-L257) ``` call( method_name: str, args: Optional[Sequence[core_tf_types.Tensor]] = None, output_specs=None, timeout_in_ms=0 ) ``` Method for making RPC calls to a remote server. This invokes RPC to the server, executing the registered `method_name` remotely. | Args | | `method_name` | Remote registered method to invoke. | | `args` | List of arguments for the registered method. | | `output_specs` | Output specs for the output from the method. For example, if the tf.function is `@tf.function(input_signature=[tf.TensorSpec([], tf.int32), tf.TensorSpec([], tf.int32)]) def multiply_fn(a, b): return tf.math.multiply(a, b)`, then the output spec is `tf.TensorSpec((), tf.int32)`. If you have access to the TF function, the output specs can be generated by calling `output_specs = tf.nest.map_structure(tf.type_spec_from_value, tf_function.get_concrete_function().structured_outputs)`. If output specs are not provided, a flattened list of tensors will be returned in response. | | `timeout_in_ms` | Timeout for this call. If 0, the default client timeout will be used. | | Returns | | An instance of `StatusOrResult` class with the following available methods. * `is_ok()`: Returns True if the RPC was successful. | * `get_error()`: Returns the TF error\_code and error message for the RPC. * `get_value()`: Returns the returned value from the remote TF function execution when the RPC is successful. Calling any of the above methods will block till the RPC is completed and the result is available. | ### `create` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/experimental/rpc/rpc_ops.py#L121-L215) ``` @staticmethod create( rpc_layer, address, name='', timeout_in_ms=0 ) ``` Create TF RPC client to connect to the given address. | Args | | `rpc_layer` | Communication layer between client and server. Only "grpc" rpc layer is supported at the moment. 
| | `address` | Address of the server to connect the RPC client to. | | `name` | Name of the RPC Client. You can create multiple clients connecting to same server and distinguish them using different names. | | `timeout_in_ms` | The default timeout to use for outgoing RPCs from client. 0 indicates no timeout. Exceeding timeout during RPC will raise DeadlineExceeded error. | | Returns | | An instance of [`tf.distribute.experimental.rpc.Client`](client) with the following dynamically added methods for eagerly created clients: * `Registered methods` e.g. multiply(\*\*args): If Client is created when executing eagerly, client will request the list of registered methods from server during client creation. The convenience methods for RPCs will be dynamically added to the created Client instance. For example, when a server has method "multiply" registered, the client object created in eager mode will have 'multiply' method available. Users can use client.multiply(..) to make RPC, instead of client.call("multiply", ...) Both "call" and "multiply" methods are non-blocking i.e. they return a StatusOrResult object which should be used to wait for getting value or error. Along with the above, blocking versions of the registered methods are also dynamically added to client instance. e.g. multiply\_blocking(\*\*args). These methods block till the RPC is finished and return response for successful RPC. Otherwise raise exception. These methods are not available when Client is created inside a tf.function. | | Raises | | A ValueError if rpc\_layer other than "grpc" is used. Only GRPC is supported at the moment. A DeadlineExceeded exception in eager mode if timeout exceeds while creating and listing client methods. | #### Example usage: ``` # Have server already started. import portpicker @tf.function(input_signature=[ tf.TensorSpec([], tf.int32), tf.TensorSpec([], tf.int32)]) def remote_fn(a, b): return tf.add(a, b) ``` ``` port = portpicker.pick_unused_port() address = "localhost:{}".format(port) server = tf.distribute.experimental.rpc.Server.create("grpc", address) server.register("addition", remote_fn) server.start() ``` ``` # Start client client = tf.distribute.experimental.rpc.Client.create("grpc", address=address, name="test_client") ``` ``` a = tf.constant(2, dtype=tf.int32) b = tf.constant(3, dtype=tf.int32) ``` ``` result = client.call( args=[a, b], method_name="addition", output_specs=tf.TensorSpec((), tf.int32)) ``` ``` if result.is_ok(): result.get_value() ``` ``` result = client.addition(a, b) ``` ``` if result.is_ok(): result.get_value() ``` ``` value = client.addition_blocking(a, b) ``` tensorflow tf.distribute.experimental.rpc.Server tf.distribute.experimental.rpc.Server ===================================== A Server base class for accepting RPCs for registered tf.functions. Functions can be registered on the server and are exposed via RPCs. Methods ------- ### `create` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/experimental/rpc/rpc_ops.py#L57-L91) ``` @staticmethod create( rpc_layer, address ) ``` Create TF RPC server at given address. | Args | | `rpc_layer` | Communication layer between client and server. Only "grpc" rpc layer is supported at the moment. | | `address` | Address where RPC server is hosted. | | Returns | | An instance of [`tf.distribute.experimental.rpc.Server`](server) class. | | Raises | | A ValueError if rpc\_layer other than "grpc" is used. Only GRPC is supported at the moment. 
| #### Example usage: ``` import portpicker @tf.function(input_signature=[ tf.TensorSpec([], tf.int32), tf.TensorSpec([], tf.int32)]) def remote_fn(a, b): return tf.add(a, b) ``` ``` port = portpicker.pick_unused_port() address = "localhost:{}".format(port) server = tf.distribute.experimental.rpc.Server.create("grpc", address) server.register("addition", remote_fn) server.start() ``` ### `register` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/experimental/rpc/rpc_ops.py#L93-L106) ``` register( method_name: str, func: Union[def_function.Function, tf_function.ConcreteFunction] ) ``` Method for registering tf.function on server. Registered methods can be invoked remotely from clients. | Args | | `method_name` | Name of the tf.function. Clients use this method\_name to make RPCs. | | `func` | A [`tf.function`](../../../function) or ConcreteFunction to register. | ### `start` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/experimental/rpc/rpc_ops.py#L108-L114) ``` start() ``` Starts the RPC server on provided address. Server listens for new requests from client, once it is started. tensorflow tf.distribute.cluster_resolver.SimpleClusterResolver tf.distribute.cluster\_resolver.SimpleClusterResolver ===================================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L289-L415) | Simple implementation of ClusterResolver that accepts all attributes. Inherits From: [`ClusterResolver`](clusterresolver) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver`](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/SimpleClusterResolver) ``` tf.distribute.cluster_resolver.SimpleClusterResolver( cluster_spec, master='', task_type=None, task_id=None, environment='', num_accelerators=None, rpc_layer=None ) ``` Please see the base class for documentation of arguments of its constructor. It is useful if you want to specify some or all attributes. Usage example with [`tf.distribute.Strategy`](../strategy): ``` cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222", "worker1.example.com:2222"]}) # On worker 0 cluster_resolver = SimpleClusterResolver(cluster, task_type="worker", task_id=0, num_accelerators={"GPU": 8}, rpc_layer="grpc") strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy( cluster_resolver=cluster_resolver) # On worker 1 cluster_resolver = SimpleClusterResolver(cluster, task_type="worker", task_id=1, num_accelerators={"GPU": 8}, rpc_layer="grpc") strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy( cluster_resolver=cluster_resolver) ``` | Attributes | | `environment` | Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. 
Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property. | | `rpc_layer` | | | `task_id` | Returns the task id this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.cluster_resolver.TPUClusterResolver`](tpuclusterresolver). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class docstring. | | `task_type` | Returns the task type this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See [Multi-worker configuration](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#multi-worker_configuration) for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class doc. 
| Methods ------- ### `cluster_spec` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L341-L343) ``` cluster_spec() ``` Returns the ClusterSpec passed into the constructor. ### `master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L345-L366) ``` master( task_type=None, task_id=None, rpc_layer=None ) ``` Returns the master address to use when creating a session. > > **Note:** this is only useful for TensorFlow 1.x. > | Args | | `task_type` | (Optional) The type of the TensorFlow task of the master. | | `task_id` | (Optional) The index of the TensorFlow task of the master. | | `rpc_layer` | (Optional) The RPC used by distributed TensorFlow. | | Returns | | The name or URL of the session master. | If a task\_type and task\_id are given, they will override the `master` string passed into the initialization function. ### `num_accelerators` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L388-L407) ``` num_accelerators( task_type=None, task_id=None, config_proto=None ) ``` Returns the number of accelerator cores per worker. The SimpleClusterResolver does not do automatic detection of accelerators, and thus all arguments are unused and we simply return the value provided in the constructor. | Args | | `task_type` | Unused. | | `task_id` | Unused. | | `config_proto` | Unused. | tensorflow tf.distribute.cluster_resolver.TPUClusterResolver tf.distribute.cluster\_resolver.TPUClusterResolver ================================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L56-L433) | Cluster Resolver for Google Cloud TPUs. Inherits From: [`ClusterResolver`](clusterresolver) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver`](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver) ``` tf.distribute.cluster_resolver.TPUClusterResolver( tpu=None, zone=None, project=None, job_name='worker', coordinator_name=None, coordinator_address=None, credentials='default', service=None, discovery_url=None ) ``` This is an implementation of cluster resolvers for the Google Cloud TPU service. TPUClusterResolver supports the following distinct environments: * Google Compute Engine * Google Kubernetes Engine * Google internal It can be passed into [`tf.distribute.TPUStrategy`](../tpustrategy) to support TF2 training on Cloud TPUs. | Args | | `tpu` | A string corresponding to the TPU to use. It can be the TPU name or TPU worker gRPC address. If not set, it will try to automatically resolve the TPU address on Cloud TPUs. If set to "local", it will assume that the TPU is directly connected to the VM instead of over the network. | | `zone` | Zone where the TPUs are located. If omitted or empty, we will assume that the zone of the TPU is the same as the zone of the GCE VM, which we will try to discover from the GCE metadata service. | | `project` | Name of the GCP project containing Cloud TPUs. If omitted or empty, we will try to discover the project name of the GCE VM from the GCE metadata service. | | `job_name` | Name of the TensorFlow job the TPUs belong to. 
| | `coordinator_name` | The name to use for the coordinator. Set to None if the coordinator should not be included in the computed ClusterSpec. | | `coordinator_address` | The address of the coordinator (typically an ip:port pair). If set to None, a TF server will be started. If coordinator\_name is None, a TF server will not be started even if coordinator\_address is None. | | `credentials` | GCE Credentials. If None, then we use default credentials from the oauth2client | | `service` | The GCE API object returned by the googleapiclient.discovery function. If you specify a custom service object, then the credentials parameter will be ignored. | | `discovery_url` | A URL template that points to the location of the discovery service. It should have two parameters {api} and {apiVersion} that when filled in produce an absolute URL to the discovery document for that service. The environment variable 'TPU\_API\_DISCOVERY\_URL' will override this. | | Raises | | `ImportError` | If the googleapiclient is not installed. | | `ValueError` | If no TPUs are specified. | | `RuntimeError` | If an empty TPU name is specified and this is running in a Google Cloud environment. | | Attributes | | `environment` | Returns the current environment which TensorFlow is running in. | | `task_id` | Returns the task id this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.cluster_resolver.TPUClusterResolver`](tpuclusterresolver). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class docstring. | | `task_type` | Returns the task type this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See [Multi-worker configuration](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#multi-worker_configuration) for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. 
For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class doc. | | `tpu_hardware_feature` | Returns the tpu topology info stored. | Methods ------- ### `cluster_spec` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L304-L341) ``` cluster_spec() ``` Returns a ClusterSpec object based on the latest TPU information. We retrieve the information from the GCE APIs every time this method is called. | Returns | | A ClusterSpec containing host information returned from Cloud TPUs, or None. | | Raises | | `RuntimeError` | If the provided TPU is not healthy. | ### `connect` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L71-L111) ``` @staticmethod connect( tpu=None, zone=None, project=None ) ``` Initializes TPU and returns a TPUClusterResolver. This API will connect to the remote TPU cluster and initialize the TPU hardware. Example usage: ``` resolver = tf.distribute.cluster_resolver.TPUClusterResolver.connect( tpu='') ``` It can be viewed as a convenient wrapper of the following code: ``` resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tf.config.experimental_connect_to_cluster(resolver) tf.tpu.experimental.initialize_tpu_system(resolver) ``` | Args | | `tpu` | A string corresponding to the TPU to use. It can be the TPU name or TPU worker gRPC address. If not set, it will try to automatically resolve the TPU address on Cloud TPUs. | | `zone` | Zone where the TPUs are located. If omitted or empty, we will assume that the zone of the TPU is the same as the zone of the GCE VM, which we will try to discover from the GCE metadata service. | | `project` | Name of the GCP project containing Cloud TPUs. If omitted or empty, we will try to discover the project name of the GCE VM from the GCE metadata service. | | Returns | | An instance of TPUClusterResolver object. | | Raises | | `NotFoundError` | If no TPU devices found in eager mode. 
| ### `get_job_name` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L276-L277) ``` get_job_name() ``` ### `get_master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L273-L274) ``` get_master() ``` ### `get_tpu_system_metadata` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L279-L302) ``` get_tpu_system_metadata() ``` Returns the metadata of the TPU system. Users can call this method to get some facts about the TPU system, like the total number of cores, the number of TPU workers, and the devices. E.g. ``` resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') tpu_system_metadata = resolver.get_tpu_system_metadata() num_hosts = tpu_system_metadata.num_hosts ``` | Returns | | A [`tf.tpu.experimental.TPUSystemMetadata`](../../tpu/experimental/tpusystemmetadata) object. | ### `master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L229-L271) ``` master( task_type=None, task_id=None, rpc_layer=None ) ``` Get the Master string to be used for the session. In the normal case, this returns the grpc path (grpc://1.2.3.4:8470) of the first instance in the ClusterSpec returned by the cluster\_spec function. If a non-TPU name is used when constructing a TPUClusterResolver, that will be returned instead (e.g. if the `tpu` argument's value when constructing this TPUClusterResolver was 'grpc://10.240.1.2:8470', 'grpc://10.240.1.2:8470' will be returned). | Args | | `task_type` | (Optional, string) The type of the TensorFlow task of the master. | | `task_id` | (Optional, integer) The index of the TensorFlow task of the master. | | `rpc_layer` | (Optional, string) The RPC protocol TensorFlow should use to communicate with TPUs. | | Returns | | string, the connection string to use when creating a session. | | Raises | | `ValueError` | If none of the TPUs specified exists. | ### `num_accelerators` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L343-L397) ``` num_accelerators( task_type=None, task_id=None, config_proto=None ) ``` Returns the number of TPU cores per worker. Connects to the master and lists all the devices present in the master, and counts them up. Also verifies that the device counts per host in the cluster are the same before returning the number of TPU cores per host. | Args | | `task_type` | Unused. | | `task_id` | Unused. | | `config_proto` | Used to create a connection to a TPU master in order to retrieve the system metadata. | | Raises | | `RuntimeError` | If we cannot talk to a TPU worker after retrying or if the number of TPU devices per host is different. | ### `set_tpu_topology` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L399-L402) ``` set_tpu_topology( serialized_tpu_topology ) ``` Sets the tpu topology info stored in this resolver. 
### `__enter__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L223-L224) ``` __enter__() ``` ### `__exit__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tpu/tpu_cluster_resolver.py#L226-L227) ``` __exit__( type, value, traceback ) ```
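Tying the methods above together, the typical TF2 initialization sequence (shown piecewise in the `connect` docs) looks like the sketch below. It assumes a reachable Cloud TPU; the empty `tpu=''` string triggers automatic address resolution:

```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
  # Variables created here are replicated across the TPU cores.
  v = tf.Variable(1.0)
```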
tensorflow tf.distribute.cluster_resolver.ClusterResolver tf.distribute.cluster\_resolver.ClusterResolver =============================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L57-L285) | Abstract class for all implementations of ClusterResolvers. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.cluster_resolver.ClusterResolver`](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver) This defines the skeleton for all implementations of ClusterResolvers. ClusterResolvers are a way for TensorFlow to communicate with various cluster management systems (e.g. GCE, AWS, etc...) and gives TensorFlow the necessary information to set up distributed training. By letting TensorFlow communicate with these systems, we will be able to automatically discover and resolve IP addresses for various TensorFlow workers. This will eventually allow us to automatically recover from underlying machine failures and scale TensorFlow worker clusters up and down. Note to implementors of [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver) subclasses: In addition to these abstract methods, when task\_type, task\_id, and rpc\_layer attributes are applicable, you should also implement them either as properties with getters or setters, or directly set the attributes `self._task_type`, `self._task_id`, or `self._rpc_layer` so the base class' getters and setters are used. See [`tf.distribute.cluster_resolver.SimpleClusterResolver.__init__`](simpleclusterresolver#__init__) for an example. In general, multi-client tf.distribute strategies such as [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](../experimental/multiworkermirroredstrategy) require task\_type and task\_id properties to be available in the `ClusterResolver` they are using. On the other hand, these concepts are not applicable in single-client strategies, such as [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy), because the program is only expected to be run on one task, so there should not be a need to have code branches according to task type and task id. * task\_type is the name of the server's current named job (e.g. 'worker', 'ps' in a distributed parameterized training job). * task\_id is the ordinal index of the server within the task type. * rpc\_layer is the protocol used by TensorFlow to communicate with other TensorFlow servers in a distributed environment. | Attributes | | `environment` | Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property. | | `task_id` | Returns the task id this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. 
This is useful when user needs to run specific code according to task index. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.cluster_resolver.TPUClusterResolver`](tpuclusterresolver). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class docstring. | | `task_type` | Returns the task type this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See [Multi-worker configuration](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#multi-worker_configuration) for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class doc. | Methods ------- ### `cluster_spec` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L94-L108) ``` @abc.abstractmethod cluster_spec() ``` Retrieve the current state of the cluster and return a [`tf.train.ClusterSpec`](../../train/clusterspec). | Returns | | A [`tf.train.ClusterSpec`](../../train/clusterspec) representing the state of the cluster at the moment this function is called. 
| Implementors of this function must take care in ensuring that the ClusterSpec returned is up-to-date at the time of calling this function. This usually means retrieving the information from the underlying cluster management system every time this function is invoked and reconstructing a cluster\_spec, rather than attempting to cache anything. ### `master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L110-L128) ``` @abc.abstractmethod master( task_type=None, task_id=None, rpc_layer=None ) ``` Retrieves the name or URL of the session master. > > **Note:** this is only useful for TensorFlow 1.x. > | Args | | `task_type` | (Optional) The type of the TensorFlow task of the master. | | `task_id` | (Optional) The index of the TensorFlow task of the master. | | `rpc_layer` | (Optional) The RPC protocol for the given cluster. | | Returns | | The name or URL of the session master. | Implementors of this function must take care in ensuring that the master returned is up-to-date at the time of calling this function. This usually means retrieving the master every time this function is invoked. ### `num_accelerators` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L130-L167) ``` num_accelerators( task_type=None, task_id=None, config_proto=None ) ``` Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, we allow callers to specify the task\_type and task\_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different. | Args | | `task_type` | (Optional) The type of the TensorFlow task of the machine we want to query. | | `task_id` | (Optional) The index of the TensorFlow task of the machine we want to query. | | `config_proto` | (Optional) Configuration for starting a new session to query how many accelerator cores it has. | | Returns | | A map of accelerator types to number of cores. | tensorflow tf.distribute.cluster_resolver.TFConfigClusterResolver tf.distribute.cluster\_resolver.TFConfigClusterResolver ======================================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tfconfig_cluster_resolver.py#L48-L200) | Implementation of a ClusterResolver which reads the TF\_CONFIG EnvVar. Inherits From: [`ClusterResolver`](clusterresolver) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver`](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/TFConfigClusterResolver) ``` tf.distribute.cluster_resolver.TFConfigClusterResolver( task_type=None, task_id=None, rpc_layer=None, environment=None ) ``` This is an implementation of cluster resolvers when using TF\_CONFIG to set information about the cluster. The cluster spec returned will be initialized from the TF\_CONFIG environment variable. 
An example to set TF\_CONFIG is: ``` os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0} }) ``` However, sometimes the container orchestration framework will set TF\_CONFIG for you. In this case, you can just create an instance without passing in any arguments. You can find an example of having Kubernetes set TF\_CONFIG for you here: https://github.com/tensorflow/ecosystem/tree/master/kubernetes. Then you can use it with [`tf.distribute.Strategy`](../strategy) as: ``` # `TFConfigClusterResolver` is already the default one in the following # strategy. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy( cluster_resolver=TFConfigClusterResolver()) ``` | Args | | `task_type` | (String, optional) Overrides the task type specified in the TF\_CONFIG environment variable. | | `task_id` | (Integer, optional) Overrides the task index specified in the TF\_CONFIG environment variable. | | `rpc_layer` | (String, optional) Overrides the rpc layer TensorFlow uses. | | `environment` | (String, optional) Overrides the environment TensorFlow operates in. | | Attributes | | `environment` | Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property. | | `rpc_layer` | | | `task_id` | Returns the task id this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.cluster_resolver.TPUClusterResolver`](tpuclusterresolver). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class docstring. | | `task_type` | Returns the task type this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task type. 
Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See [Multi-worker configuration](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#multi-worker_configuration) for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class doc. | Methods ------- ### `cluster_spec` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tfconfig_cluster_resolver.py#L149-L158) ``` cluster_spec() ``` Returns a ClusterSpec based on the TF\_CONFIG environment variable. | Returns | | A ClusterSpec with information from the TF\_CONFIG environment variable. | ### `master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tfconfig_cluster_resolver.py#L160-L200) ``` master( task_type=None, task_id=None, rpc_layer=None ) ``` Returns the master address to use when creating a TensorFlow session. > > **Note:** this is only useful for TensorFlow 1.x. > | Args | | `task_type` | (String, optional) Overrides and sets the task\_type of the master. | | `task_id` | (Integer, optional) Overrides and sets the task id of the master. | | `rpc_layer` | (String, optional) Overrides and sets the protocol over which TensorFlow nodes communicate with each other. | | Returns | | The address of the master. | | Raises | | `RuntimeError` | If the task\_type or task\_id is not specified and the `TF_CONFIG` environment variable does not contain a task section. | ### `num_accelerators` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/tfconfig_cluster_resolver.py#L140-L147) ``` num_accelerators( task_type=None, task_id=None, config_proto=None ) ``` Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, we allow callers to specify the task\_type and task\_id if they want to target a specific TensorFlow task to query the number of accelerators.
This is to support heterogeneous environments, where the number of accelerator cores per host is different. | Args | | `task_type` | (Optional) The type of the TensorFlow task of the machine we want to query. | | `task_id` | (Optional) The index of the TensorFlow task of the machine we want to query. | | `config_proto` | (Optional) Configuration for starting a new session to query how many accelerator cores it has. | | Returns | | A map of accelerator types to number of cores. | tensorflow tf.distribute.cluster_resolver.SlurmClusterResolver tf.distribute.cluster\_resolver.SlurmClusterResolver ==================================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/slurm_cluster_resolver.py#L164-L397) | ClusterResolver for systems with the Slurm workload manager. Inherits From: [`ClusterResolver`](clusterresolver) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver`](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/SlurmClusterResolver) ``` tf.distribute.cluster_resolver.SlurmClusterResolver( jobs=None, port_base=8888, gpus_per_node=None, gpus_per_task=None, tasks_per_node=None, auto_set_gpu=True, rpc_layer='grpc' ) ``` This is an implementation of ClusterResolver for Slurm clusters. This allows the specification of jobs and task counts, number of tasks per node, number of GPUs on each node and number of GPUs for each task. It retrieves system attributes from Slurm environment variables, resolves allocated computing node names, constructs a cluster and returns a ClusterResolver object which can be used for distributed TensorFlow. | Args | | `jobs` | Dictionary with job names as key and number of tasks in the job as value. Defaults to as many 'worker's as there are (Slurm) tasks. | | `port_base` | The first port number to start with for processes on a node. | | `gpus_per_node` | Number of GPUs available on each node. Defaults to the number of GPUs reported by nvidia-smi. | | `gpus_per_task` | Number of GPUs to be used for each task. Default is to evenly distribute the gpus\_per\_node to tasks\_per\_node. | | `tasks_per_node` | Number of tasks running on each node. Can be an integer if the number of tasks per node is constant or a dictionary mapping hostnames to number of tasks on that node. If not set the Slurm environment is queried for the correct mapping. | | `auto_set_gpu` | Set the visible CUDA devices automatically while resolving the cluster by setting CUDA\_VISIBLE\_DEVICES environment variable. Defaults to True. | | `rpc_layer` | The protocol TensorFlow uses to communicate between nodes. Defaults to 'grpc'. | | Raises | | `RuntimeError` | If more GPUs per node were requested than are available, more tasks were requested than the assigned tasks, or resolving missing values from the environment failed. | | Attributes | | `environment` | Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. 
Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property. | | `task_id` | Returns the task id this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.cluster_resolver.TPUClusterResolver`](tpuclusterresolver). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class docstring. | | `task_type` | Returns the task type this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See [Multi-worker configuration](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#multi-worker_configuration) for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class doc. 
| Methods ------- ### `cluster_spec` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/slurm_cluster_resolver.py#L300-L353) ``` cluster_spec() ``` Returns a ClusterSpec object based on the latest instance group info. This returns a ClusterSpec object for use based on information from the specified initialization parameters and Slurm environment variables. The cluster specification is resolved each time this function is called. The resolver extracts hostnames of nodes via scontrol and packs tasks in that order until a node has a number of tasks equal to the specification. GPUs on nodes are allocated to tasks according to the specification by setting the CUDA\_VISIBLE\_DEVICES environment variable. | Returns | | A ClusterSpec containing host information retrieved from Slurm's environment variables. | ### `get_task_info` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/slurm_cluster_resolver.py#L355-L367) ``` get_task_info() ``` Returns job name and task\_id for the process which calls this. This returns the job name and task index for the process which calls this function according to its rank and cluster specification. The job name and task index are set after a cluster is constructed by cluster\_spec; otherwise they default to None. | Returns | | A string specifying the job name the process belongs to and an integer specifying the task index the process belongs to in that job. | ### `master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/slurm_cluster_resolver.py#L369-L389) ``` master( task_type=None, task_id=None, rpc_layer=None ) ``` Returns the master string for connecting to a TensorFlow master. | Args | | `task_type` | (Optional) Overrides the default auto-selected task type. | | `task_id` | (Optional) Overrides the default auto-selected task index. | | `rpc_layer` | (Optional) Overrides the default RPC protocol TensorFlow uses to communicate across nodes. | | Returns | | A connection string for connecting to a TensorFlow master. | ### `num_accelerators` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/slurm_cluster_resolver.py#L391-L397) ``` num_accelerators( task_type=None, task_id=None, config_proto=None ) ``` Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, we allow callers to specify the task\_type and task\_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different. | Args | | `task_type` | (Optional) The type of the TensorFlow task of the machine we want to query. | | `task_id` | (Optional) The index of the TensorFlow task of the machine we want to query. | | `config_proto` | (Optional) Configuration for starting a new session to query how many accelerator cores it has. | | Returns | | A map of accelerator types to number of cores. |
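As a rough end-to-end sketch of the above, assuming the script is launched under Slurm (e.g. via `srun`) with an illustrative layout of four 'worker' tasks, two tasks per node, and one GPU per task:

```
import tensorflow as tf

# Illustrative job layout; the resolver fills in hostnames and ports from
# the Slurm environment variables of the current allocation.
resolver = tf.distribute.cluster_resolver.SlurmClusterResolver(
    jobs={'worker': 4},
    tasks_per_node=2,
    gpus_per_task=1,
    auto_set_gpu=True)

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    cluster_resolver=resolver)

# After the cluster spec has been resolved (the strategy above does this),
# get_task_info reports which task this process is.
job_name, task_id = resolver.get_task_info()
```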
tensorflow tf.distribute.cluster_resolver.UnionResolver tf.distribute.cluster\_resolver.UnionResolver ============================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L419-L624) | Performs a union on underlying ClusterResolvers. Inherits From: [`ClusterResolver`](clusterresolver) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.cluster_resolver.UnionResolver`](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/UnionResolver) ``` tf.distribute.cluster_resolver.UnionResolver( *args, **kwargs ) ``` This class performs a union given two or more existing ClusterResolvers. It merges the underlying ClusterResolvers, and returns one unified ClusterSpec when cluster\_spec is called. The details of the merge function are documented in the cluster\_spec function. For additional ClusterResolver properties such as task type, task index, rpc layer, environment, etc..., we will return the value from the first ClusterResolver in the union. An example to combine two cluster resolvers: ``` cluster_0 = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222", "worker1.example.com:2222"]}) cluster_resolver_0 = SimpleClusterResolver(cluster_0, task_type="worker", task_id=0, rpc_layer="grpc") cluster_1 = tf.train.ClusterSpec({"ps": ["ps0.example.com:2222", "ps1.example.com:2222"]}) cluster_resolver_1 = SimpleClusterResolver(cluster_1, task_type="ps", task_id=0, rpc_layer="grpc") # Its task type would be "worker". cluster_resolver = UnionResolver(cluster_resolver_0, cluster_resolver_1) ``` An example to override the number of GPUs in a TFConfigClusterResolver instance: ``` tf_config = TFConfigClusterResolver() gpu_override = SimpleClusterResolver(tf_config.cluster_spec(), num_accelerators={"GPU": 1}) cluster_resolver = UnionResolver(gpu_override, tf_config) ``` | Args | | `*args` | `ClusterResolver` objects to be unionized. | | `**kwargs` | rpc\_layer - (Optional) Override value for the RPC layer used by TensorFlow. task\_type - (Optional) Override value for the current task type. task\_id - (Optional) Override value for the current task index. | | Raises | | `TypeError` | If any argument is not a subclass of `ClusterResolver`. | | `ValueError` | If there are no arguments passed. | | Attributes | | `environment` | Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property. | | `rpc_layer` | | | `task_id` | Returns the task id this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. 
For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.cluster_resolver.TPUClusterResolver`](tpuclusterresolver). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class docstring. | | `task_type` | Returns the task type this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See [Multi-worker configuration](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#multi-worker_configuration) for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class doc. | Methods ------- ### `cluster_spec` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L495-L567) ``` cluster_spec() ``` Returns a union of all the ClusterSpecs from the ClusterResolvers. | Returns | | A ClusterSpec containing host information merged from all the underlying ClusterResolvers. | | Raises | | `KeyError` | If there are conflicting keys detected when merging two or more dictionaries, this exception is raised. 
| > > **Note:** If there are multiple ClusterResolvers exposing ClusterSpecs with the same job name, we will merge the list/dict of workers. > If *all* underlying ClusterSpecs expose the set of workers as lists, we will concatenate the lists of workers, starting with the list of workers from the first ClusterResolver passed into the constructor. If *any* of the ClusterSpecs expose the set of workers as a dict, we will treat all the sets of workers as dicts (even if they are returned as lists) and will only merge them into a dict if there are no conflicting keys. If there is a conflicting key, we will raise a `KeyError`. ### `master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L569-L589) ``` master( task_type=None, task_id=None, rpc_layer=None ) ``` Returns the master address to use when creating a session. This usually returns the master from the first ClusterResolver passed in, but you can override this by specifying the task\_type and task\_id. > > **Note:** this is only useful for TensorFlow 1.x. > | Args | | `task_type` | (Optional) The type of the TensorFlow task of the master. | | `task_id` | (Optional) The index of the TensorFlow task of the master. | | `rpc_layer` | (Optional) The RPC protocol for the given cluster. | | Returns | | The name or URL of the session master. | ### `num_accelerators` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L611-L616) ``` num_accelerators( task_type=None, task_id=None, config_proto=None ) ``` Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, we allow callers to specify the task\_type and task\_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different. | Args | | `task_type` | (Optional) The type of the TensorFlow task of the machine we want to query. | | `task_id` | (Optional) The index of the TensorFlow task of the machine we want to query. | | `config_proto` | (Optional) Configuration for starting a new session to query how many accelerator cores it has. | | Returns | | A map of accelerator types to number of cores. | tensorflow tf.distribute.cluster_resolver.GCEClusterResolver tf.distribute.cluster\_resolver.GCEClusterResolver ================================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/gce_cluster_resolver.py#L31-L207) | ClusterResolver for Google Compute Engine. Inherits From: [`ClusterResolver`](clusterresolver) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver`](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/GCEClusterResolver) ``` tf.distribute.cluster_resolver.GCEClusterResolver( project, zone, instance_group, port, task_type='worker', task_id=0, rpc_layer='grpc', credentials='default', service=None ) ``` This is an implementation of cluster resolvers for the Google Compute Engine instance group platform. 
By specifying a project, zone, and instance group, this will retrieve the IP address of all the instances within the instance group and return a ClusterResolver object suitable for use for distributed TensorFlow. > > **Note:** this cluster resolver cannot retrieve `task_type`, `task_id` or `rpc_layer`. To use it with some distribution strategies like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](../experimental/multiworkermirroredstrategy), you will need to specify `task_type` and `task_id` in the constructor. > Usage example with tf.distribute.Strategy: ``` # On worker 0 cluster_resolver = GCEClusterResolver("my-project", "us-west1", "my-instance-group", task_type="worker", task_id=0) strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy( cluster_resolver=cluster_resolver) # On worker 1 cluster_resolver = GCEClusterResolver("my-project", "us-west1", "my-instance-group", task_type="worker", task_id=1) strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy( cluster_resolver=cluster_resolver) ``` | Args | | `project` | Name of the GCE project. | | `zone` | Zone of the GCE instance group. | | `instance_group` | Name of the GCE instance group. | | `port` | Port of the listening TensorFlow server (default: 8470) | | `task_type` | Name of the TensorFlow job this GCE instance group of VM instances belong to. | | `task_id` | The task index for this particular VM, within the GCE instance group. In particular, every single instance should be assigned a unique ordinal index within an instance group manually so that they can be distinguished from each other. | | `rpc_layer` | The RPC layer TensorFlow should use to communicate across instances. | | `credentials` | GCE Credentials. If nothing is specified, this defaults to GoogleCredentials.get\_application\_default(). | | `service` | The GCE API object returned by the googleapiclient.discovery function. (Default: discovery.build('compute', 'v1')). If you specify a custom service object, then the credentials parameter will be ignored. | | Raises | | `ImportError` | If the googleapiclient is not installed. | | Attributes | | `environment` | Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property. | | `rpc_layer` | | | `task_id` | Returns the task id this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... 
if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.cluster_resolver.TPUClusterResolver`](tpuclusterresolver). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class docstring. | | `task_type` | Returns the task type this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See [Multi-worker configuration](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#multi-worker_configuration) for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class doc. | Methods ------- ### `cluster_spec` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/gce_cluster_resolver.py#L125-L168) ``` cluster_spec() ``` Returns a ClusterSpec object based on the latest instance group info. This returns a ClusterSpec object for use based on information from the specified instance group. We will retrieve the information from the GCE APIs every time this method is called. | Returns | | A ClusterSpec containing host information retrieved from GCE. | ### `master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/gce_cluster_resolver.py#L170-L181) ``` master( task_type=None, task_id=None, rpc_layer=None ) ``` Retrieves the name or URL of the session master. > > **Note:** this is only useful for TensorFlow 1.x. > | Args | | `task_type` | (Optional) The type of the TensorFlow task of the master. 
| | `task_id` | (Optional) The index of the TensorFlow task of the master. | | `rpc_layer` | (Optional) The RPC protocol for the given cluster. | | Returns | | The name or URL of the session master. | Implementors of this function must take care in ensuring that the master returned is up-to-date at the time of calling this function. This usually means retrieving the master every time this function is invoked. ### `num_accelerators` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L130-L167) ``` num_accelerators( task_type=None, task_id=None, config_proto=None ) ``` Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, we allow callers to specify the task\_type and task\_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different. | Args | | `task_type` | (Optional) The type of the TensorFlow task of the machine we want to query. | | `task_id` | (Optional) The index of the TensorFlow task of the machine we want to query. | | `config_proto` | (Optional) Configuration for starting a new session to query how many accelerator cores it has. | | Returns | | A map of accelerator types to number of cores. | tensorflow tf.distribute.cluster_resolver.KubernetesClusterResolver tf.distribute.cluster\_resolver.KubernetesClusterResolver ========================================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/kubernetes_cluster_resolver.py#L24-L181) | ClusterResolver for Kubernetes. Inherits From: [`ClusterResolver`](clusterresolver) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver`](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/KubernetesClusterResolver) ``` tf.distribute.cluster_resolver.KubernetesClusterResolver( job_to_label_mapping=None, tf_server_port=8470, rpc_layer='grpc', override_client=None ) ``` This is an implementation of cluster resolvers for Kubernetes. When given the Kubernetes namespace and label selector for pods, we will retrieve the pod IP addresses of all running pods matching the selector, and return a ClusterSpec based on that information. > > **Note:** it cannot retrieve `task_type`, `task_id` or `rpc_layer`. To use it with some distribution strategies like [`tf.distribute.experimental.MultiWorkerMirroredStrategy`](../experimental/multiworkermirroredstrategy), you will need to specify `task_type` and `task_id` by setting these attributes. 
> Usage example with tf.distribute.Strategy: ``` # On worker 0 cluster_resolver = KubernetesClusterResolver( {"worker": ["job-name=worker-cluster-a", "job-name=worker-cluster-b"]}) cluster_resolver.task_type = "worker" cluster_resolver.task_id = 0 strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy( cluster_resolver=cluster_resolver) # On worker 1 cluster_resolver = KubernetesClusterResolver( {"worker": ["job-name=worker-cluster-a", "job-name=worker-cluster-b"]}) cluster_resolver.task_type = "worker" cluster_resolver.task_id = 1 strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy( cluster_resolver=cluster_resolver) ``` | Args | | `job_to_label_mapping` | A mapping of TensorFlow jobs to label selectors. This allows users to specify many TensorFlow jobs in one Cluster Resolver, and each job can have pods belong with different label selectors. For example, a sample mapping might be ``` {'worker': ['job-name=worker-cluster-a', 'job-name=worker-cluster-b'], 'ps': ['job-name=ps-1', 'job-name=ps-2']} ``` | | `tf_server_port` | The port the TensorFlow server is listening on. | | `rpc_layer` | (Optional) The RPC layer TensorFlow should use to communicate between tasks in Kubernetes. Defaults to 'grpc'. | | `override_client` | The Kubernetes client (usually automatically retrieved using `from kubernetes import client as k8sclient`). If you pass this in, you are responsible for setting Kubernetes credentials manually. | | Raises | | `ImportError` | If the Kubernetes Python client is not installed and no `override_client` is passed in. | | `RuntimeError` | If autoresolve\_task is not a boolean or a callable. | | Attributes | | `environment` | Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property. | | `task_id` | Returns the task id this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. 
``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.cluster_resolver.TPUClusterResolver`](tpuclusterresolver). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class docstring. | | `task_type` | Returns the task type this `ClusterResolver` indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See [Multi-worker configuration](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#multi-worker_configuration) for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, ``` cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. ``` Returns `None` if such information is not available or is not applicable in the current distributed environment, such as training with [`tf.distribute.experimental.TPUStrategy`](../experimental/tpustrategy). For more information, please see [`tf.distribute.cluster_resolver.ClusterResolver`](clusterresolver)'s class doc. | Methods ------- ### `cluster_spec` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/kubernetes_cluster_resolver.py#L140-L181) ``` cluster_spec() ``` Returns a ClusterSpec object based on the latest info from Kubernetes. We retrieve the information from the Kubernetes master every time this method is called. | Returns | | A ClusterSpec containing host information returned from Kubernetes. | | Raises | | `RuntimeError` | If any of the pods returned by the master is not in the `Running` phase. | ### `master` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/kubernetes_cluster_resolver.py#L112-L138) ``` master( task_type=None, task_id=None, rpc_layer=None ) ``` Returns the master address to use when creating a session. You must have set the task\_type and task\_id object properties before calling this function, or pass in the `task_type` and `task_id` parameters when using this function. If you do both, the function parameters will override the object properties. > > **Note:** this is only useful for TensorFlow 1.x. > | Args | | `task_type` | (Optional) The type of the TensorFlow task of the master. | | `task_id` | (Optional) The index of the TensorFlow task of the master. | | `rpc_layer` | (Optional) The RPC protocol for the given cluster. 
| | Returns | | The name or URL of the session master. | ### `num_accelerators` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/distribute/cluster_resolver/cluster_resolver.py#L130-L167) ``` num_accelerators( task_type=None, task_id=None, config_proto=None ) ``` Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, we allow callers to specify the task\_type and task\_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different. | Args | | `task_type` | (Optional) The type of the TensorFlow task of the machine we want to query. | | `task_id` | (Optional) The index of the TensorFlow task of the machine we want to query. | | `config_proto` | (Optional) Configuration for starting a new session to query how many accelerator cores it has. | | Returns | | A map of accelerator types to number of cores. |
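A minimal sketch tying the pieces above together; the label selector and job name are placeholders, and running this requires the Kubernetes Python client and credentials:

```
import tensorflow as tf

# Placeholder label selector; adapt it to your deployment.
resolver = tf.distribute.cluster_resolver.KubernetesClusterResolver(
    {'worker': ['job-name=my-training-job']})
resolver.task_type = 'worker'
resolver.task_id = 0

spec = resolver.cluster_spec()              # queries the Kubernetes master
accelerators = resolver.num_accelerators()  # e.g. {'GPU': 1}
```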
tensorflow tf.audio.encode_wav tf.audio.encode\_wav ==================== Encode audio data using the WAV file format. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.audio.encode_wav`](https://www.tensorflow.org/api_docs/python/tf/audio/encode_wav) ``` tf.audio.encode_wav( audio, sample_rate, name=None ) ``` This operation will generate a string suitable to be saved out to create a .wav audio file. It will be encoded in the 16-bit PCM format. It takes in float values in the range -1.0f to 1.0f; any values outside that range will be clamped to it. `audio` is a 2-D float Tensor of shape `[length, channels]`. `sample_rate` is a scalar Tensor holding the rate to use (e.g. 44100). | Args | | `audio` | A `Tensor` of type `float32`. 2-D with shape `[length, channels]`. | | `sample_rate` | A `Tensor` of type `int32`. Scalar containing the sample frequency. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `string`. | tensorflow tf.audio.decode_wav tf.audio.decode\_wav ==================== Decode a 16-bit PCM WAV file to a float tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.audio.decode_wav`](https://www.tensorflow.org/api_docs/python/tf/audio/decode_wav) ``` tf.audio.decode_wav( contents, desired_channels=-1, desired_samples=-1, name=None ) ``` The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float. When desired\_channels is set, if the input contains fewer channels than this, the last channel is duplicated to give the requested number; if the input has more channels than requested, the additional channels are ignored. If desired\_samples is set, then the audio will be cropped or padded with zeroes to the requested length. The first output contains a Tensor with the content of the audio samples. The lowest (innermost) dimension will be the number of channels, and the second will be the number of samples. For example, a ten-sample-long stereo WAV file should give an output shape of [10, 2]. | Args | | `contents` | A `Tensor` of type `string`. The WAV-encoded audio, usually from a file. | | `desired_channels` | An optional `int`. Defaults to `-1`. Number of sample channels wanted. | | `desired_samples` | An optional `int`. Defaults to `-1`. Length of audio requested. | | `name` | A name for the operation (optional). | | Returns | | A tuple of `Tensor` objects (audio, sample\_rate). | | `audio` | A `Tensor` of type `float32`. | | `sample_rate` | A `Tensor` of type `int32`. | tensorflow tf.sets.size tf.sets.size ============ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sets_impl.py#L30-L57) | Compute number of unique elements along last dimension of `a`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sets.set_size`](https://www.tensorflow.org/api_docs/python/tf/sets/size), [`tf.compat.v1.sets.size`](https://www.tensorflow.org/api_docs/python/tf/sets/size) ``` tf.sets.size( a, validate_indices=True ) ``` | Args | | `a` | `SparseTensor`, with indices sorted in row-major order. | | `validate_indices` | Whether to validate the order and range of sparse indices in `a`. | | Returns | | `int32` `Tensor` of set sizes. 
For `a` ranked `n`, this is a `Tensor` with rank `n-1`, and the same first `n-1` dimensions as `a`. Each value is the number of unique elements in the corresponding `[0...n-1]` dimension of `a`. | | Raises | | `TypeError` | If `a` is an invalid type. | tensorflow tf.sets.union tf.sets.union ============= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sets_impl.py#L288-L364) | Compute set union of elements in last dimension of `a` and `b`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sets.set_union`](https://www.tensorflow.org/api_docs/python/tf/sets/union), [`tf.compat.v1.sets.union`](https://www.tensorflow.org/api_docs/python/tf/sets/union) ``` tf.sets.union( a, b, validate_indices=True ) ``` All but the last dimension of `a` and `b` must match. #### Example: ``` import tensorflow as tf import collections # [[{1, 2}, {3}], [{4}, {5, 6}]] a = collections.OrderedDict([ ((0, 0, 0), 1), ((0, 0, 1), 2), ((0, 1, 0), 3), ((1, 0, 0), 4), ((1, 1, 0), 5), ((1, 1, 1), 6), ]) a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()), dense_shape=[2, 2, 2]) # [[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]] b = collections.OrderedDict([ ((0, 0, 0), 1), ((0, 0, 1), 3), ((0, 1, 0), 2), ((1, 0, 0), 4), ((1, 0, 1), 5), ((1, 1, 0), 5), ((1, 1, 1), 6), ((1, 1, 2), 7), ((1, 1, 3), 8), ]) b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()), dense_shape=[2, 2, 4]) # `set_union` is applied to each aligned pair of sets. tf.sets.union(a, b) # The result will be equivalent to either of: # # np.array([[{1, 2, 3}, {2, 3}], [{4, 5}, {5, 6, 7, 8}]]) # # collections.OrderedDict([ # ((0, 0, 0), 1), # ((0, 0, 1), 2), # ((0, 0, 2), 3), # ((0, 1, 0), 2), # ((0, 1, 1), 3), # ((1, 0, 0), 4), # ((1, 0, 1), 5), # ((1, 1, 0), 5), # ((1, 1, 1), 6), # ((1, 1, 2), 7), # ((1, 1, 3), 8), # ]) ``` | Args | | `a` | `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices must be sorted in row-major order. | | `b` | `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices must be sorted in row-major order. | | `validate_indices` | Whether to validate the order and range of sparse indices in `a` and `b`. | | Returns | | A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but the last dimension the same. Elements along the last dimension contain the unions. | tensorflow tf.sets.difference tf.sets.difference ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sets_impl.py#L207-L285) | Compute set difference of elements in last dimension of `a` and `b`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sets.difference`](https://www.tensorflow.org/api_docs/python/tf/sets/difference), [`tf.compat.v1.sets.set_difference`](https://www.tensorflow.org/api_docs/python/tf/sets/difference) ``` tf.sets.difference( a, b, aminusb=True, validate_indices=True ) ``` All but the last dimension of `a` and `b` must match. 
#### Example: ``` import tensorflow as tf import collections # Represent the following array of sets as a sparse tensor: # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]]) a = collections.OrderedDict([ ((0, 0, 0), 1), ((0, 0, 1), 2), ((0, 1, 0), 3), ((1, 0, 0), 4), ((1, 1, 0), 5), ((1, 1, 1), 6), ]) a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()), dense_shape=[2, 2, 2]) # np.array([[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]]) b = collections.OrderedDict([ ((0, 0, 0), 1), ((0, 0, 1), 3), ((0, 1, 0), 2), ((1, 0, 0), 4), ((1, 0, 1), 5), ((1, 1, 0), 5), ((1, 1, 1), 6), ((1, 1, 2), 7), ((1, 1, 3), 8), ]) b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()), dense_shape=[2, 2, 4]) # `set_difference` is applied to each aligned pair of sets. tf.sets.difference(a, b) # The result will be equivalent to either of: # # np.array([[{2}, {3}], [{}, {}]]) # # collections.OrderedDict([ # ((0, 0, 0), 2), # ((0, 1, 0), 3), # ]) ``` | Args | | `a` | `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices must be sorted in row-major order. | | `b` | `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices must be sorted in row-major order. | | `aminusb` | Whether to subtract `b` from `a`, vs vice versa. | | `validate_indices` | Whether to validate the order and range of sparse indices in `a` and `b`. | | Returns | | A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but the last dimension the same. Elements along the last dimension contain the differences. | | Raises | | `TypeError` | If inputs are invalid types, or if `a` and `b` have different types. | | `ValueError` | If `a` is sparse and `b` is dense. | | `errors_impl.InvalidArgumentError` | If the shapes of `a` and `b` do not match in any dimension other than the last dimension. | tensorflow tf.sets.intersection tf.sets.intersection ==================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sets_impl.py#L136-L204) | Compute set intersection of elements in last dimension of `a` and `b`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sets.intersection`](https://www.tensorflow.org/api_docs/python/tf/sets/intersection), [`tf.compat.v1.sets.set_intersection`](https://www.tensorflow.org/api_docs/python/tf/sets/intersection) ``` tf.sets.intersection( a, b, validate_indices=True ) ``` All but the last dimension of `a` and `b` must match. #### Example: ``` import tensorflow as tf import collections # Represent the following array of sets as a sparse tensor: # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]]) a = collections.OrderedDict([ ((0, 0, 0), 1), ((0, 0, 1), 2), ((0, 1, 0), 3), ((1, 0, 0), 4), ((1, 1, 0), 5), ((1, 1, 1), 6), ]) a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()), dense_shape=[2,2,2]) # b = np.array([[{1}, {}], [{4}, {5, 6, 7, 8}]]) b = collections.OrderedDict([ ((0, 0, 0), 1), ((1, 0, 0), 4), ((1, 1, 0), 5), ((1, 1, 1), 6), ((1, 1, 2), 7), ((1, 1, 3), 8), ]) b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()), dense_shape=[2, 2, 4]) # `tf.sets.intersection` is applied to each aligned pair of sets. tf.sets.intersection(a, b) # The result will be equivalent to either of: # # np.array([[{1}, {}], [{4}, {5, 6}]]) # # collections.OrderedDict([ # ((0, 0, 0), 1), # ((1, 0, 0), 4), # ((1, 1, 0), 5), # ((1, 1, 1), 6), # ]) ``` | Args | | `a` | `Tensor` or `SparseTensor` of the same type as `b`. 
If sparse, indices must be sorted in row-major order. | | `b` | `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices must be sorted in row-major order. | | `validate_indices` | Whether to validate the order and range of sparse indices in `a` and `b`. | | Returns | | A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but the last dimension the same. Elements along the last dimension contain the intersections. | tensorflow tf.sysconfig.get_lib tf.sysconfig.get\_lib ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/sysconfig.py#L44-L52) | Get the directory containing the TensorFlow framework library. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sysconfig.get_lib`](https://www.tensorflow.org/api_docs/python/tf/sysconfig/get_lib) ``` tf.sysconfig.get_lib() ``` | Returns | | The directory as string. | tensorflow tf.sysconfig.get_build_info tf.sysconfig.get\_build\_info ============================= Get a dictionary describing TensorFlow's build environment. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sysconfig.get_build_info`](https://www.tensorflow.org/api_docs/python/tf/sysconfig/get_build_info) ``` tf.sysconfig.get_build_info() ``` Values are generated when TensorFlow is compiled, and are static for each TensorFlow package. The return value is a dictionary with string keys such as: * cuda\_version * cudnn\_version * is\_cuda\_build * is\_rocm\_build * msvcp\_dll\_names * nvcuda\_dll\_name * cudart\_dll\_name * cudnn\_dll\_name Note that the actual keys and values returned by this function are subject to change across different versions of TensorFlow or across platforms. | Returns | | A dictionary describing TensorFlow's build environment. | tensorflow tf.sysconfig.get_include tf.sysconfig.get\_include ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/sysconfig.py#L29-L41) | Get the directory containing the TensorFlow C++ header files. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sysconfig.get_include`](https://www.tensorflow.org/api_docs/python/tf/sysconfig/get_include) ``` tf.sysconfig.get_include() ``` | Returns | | The directory as string. | tensorflow tf.sysconfig.get_compile_flags tf.sysconfig.get\_compile\_flags ================================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/sysconfig.py#L55-L67) | Get the compilation flags for custom operators. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sysconfig.get_compile_flags`](https://www.tensorflow.org/api_docs/python/tf/sysconfig/get_compile_flags) ``` tf.sysconfig.get_compile_flags() ``` | Returns | | The compilation flags. | tensorflow tf.sysconfig.get_link_flags tf.sysconfig.get\_link\_flags ============================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/platform/sysconfig.py#L70-L86) | Get the link flags for custom operators. 
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sysconfig.get_link_flags`](https://www.tensorflow.org/api_docs/python/tf/sysconfig/get_link_flags) ``` tf.sysconfig.get_link_flags() ``` | Returns | | The link flags. | tensorflow tf.train.FeatureList tf.train.FeatureList ==================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | Mainly used as part of a [`tf.train.SequenceExample`](sequenceexample). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.FeatureList`](https://www.tensorflow.org/api_docs/python/tf/train/FeatureList) Contains a list of [`tf.train.Feature`](feature)s. The [`tf.train.SequenceExample`](sequenceexample) proto can be thought of as a proto implementation of the following python type: ``` # tf.train.Feature Feature = Union[List[bytes], List[int64], List[float]] # tf.train.FeatureList FeatureList = List[Feature] # tf.train.FeatureLists FeatureLists = Dict[str, FeatureList] class SequenceExample(typing.NamedTuple): context: Dict[str, Feature] feature_lists: FeatureLists ``` This proto implements the `List[Feature]` portion. | Attributes | | `feature` | `repeated Feature feature` | tensorflow tf.train.load_checkpoint tf.train.load\_checkpoint ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_utils.py#L42-L64) | Returns `CheckpointReader` for checkpoint found in `ckpt_dir_or_file`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.load_checkpoint`](https://www.tensorflow.org/api_docs/python/tf/train/load_checkpoint) ``` tf.train.load_checkpoint( ckpt_dir_or_file ) ``` If `ckpt_dir_or_file` resolves to a directory with multiple checkpoints, a reader for the latest checkpoint is returned. | Args | | `ckpt_dir_or_file` | Directory with checkpoints file or path to checkpoint file. | | Returns | | `CheckpointReader` object. | | Raises | | `ValueError` | If `ckpt_dir_or_file` resolves to a directory with no checkpoints. | tensorflow tf.train.JobDef tf.train.JobDef =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/protobuf/cluster.proto) | A ProtocolMessage #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.JobDef`](https://www.tensorflow.org/api_docs/python/tf/train/JobDef) | Attributes | | `name` | `string name` | | `tasks` | `repeated TasksEntry tasks` | Child Classes ------------- [`class TasksEntry`](jobdef/tasksentry) tensorflow tf.train.FeatureLists tf.train.FeatureLists ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | Mainly used as part of a [`tf.train.SequenceExample`](sequenceexample). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.FeatureLists`](https://www.tensorflow.org/api_docs/python/tf/train/FeatureLists) Contains the mapping from feature name to [`tf.train.FeatureList`](featurelist). 
The [`tf.train.SequenceExample`](sequenceexample) proto can be thought of as a proto implementation of the following python type: ``` # tf.train.Feature Feature = Union[List[bytes], List[int64], List[float]] # tf.train.FeatureList FeatureList = List[Feature] # tf.train.FeatureLists FeatureLists = Dict[str, FeatureList] class SequenceExample(typing.NamedTuple): context: Dict[str, Feature] feature_lists: FeatureLists ``` This proto implements the `Dict[str, FeatureList]` portion. | Attributes | | `feature_list` | `repeated FeatureListEntry feature_list` | Child Classes ------------- [`class FeatureListEntry`](featurelists/featurelistentry) tensorflow tf.train.CheckpointOptions tf.train.CheckpointOptions ========================== Options for constructing a Checkpoint. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.CheckpointOptions`](https://www.tensorflow.org/api_docs/python/tf/train/CheckpointOptions) ``` tf.train.CheckpointOptions( experimental_io_device=None, experimental_enable_async_checkpoint=False ) ``` Used as the `options` argument to either the [`tf.train.Checkpoint.save()`](checkpoint#save) or [`tf.train.Checkpoint.restore()`](checkpoint#restore) method to adjust how variables are saved/restored. Example: Run IO ops on "localhost" while saving a checkpoint: ``` step = tf.Variable(0, name="step") checkpoint = tf.train.Checkpoint(step=step) options = tf.train.CheckpointOptions(experimental_io_device="/job:localhost") checkpoint.save("/tmp/ckpt", options=options) ``` | Args | | `experimental_io_device` | string. Applies in a distributed setting. TensorFlow device to use to access the filesystem. If `None` (default) then for each variable the filesystem is accessed from the CPU:0 device of the host where that variable is assigned. If specified, the filesystem is instead accessed from that device for all variables. This is useful, for example, if you want to save to a local directory, such as "/tmp", when running in a distributed setting. In that case pass a device for the host where the "/tmp" directory is accessible. | | `experimental_enable_async_checkpoint` | bool. Indicates whether async checkpointing is enabled. Default is False, i.e., no async checkpoint. Async checkpointing moves the checkpoint file writing off the main thread, so that the model can continue to train while the checkpoint file writing runs in the background. Async checkpointing reduces TPU device idle cycles and speeds up the model training process, while memory consumption may increase. | | Attributes | | `experimental_enable_async_checkpoint` | | | `experimental_io_device` | |
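As a complement to the "localhost" example above, here is a minimal sketch (the checkpoint path is illustrative) that enables the second option, `experimental_enable_async_checkpoint`, so checkpoint file writing happens off the main thread:

```
# A minimal sketch: enable background checkpoint writing so training is
# not blocked on file IO. The path "/tmp/ckpt_async" is illustrative.
import tensorflow as tf

step = tf.Variable(0, name="step")
checkpoint = tf.train.Checkpoint(step=step)
options = tf.train.CheckpointOptions(experimental_enable_async_checkpoint=True)
checkpoint.save("/tmp/ckpt_async", options=options)
```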
tensorflow tf.train.ServerDef tf.train.ServerDef ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/protobuf/tensorflow_server.proto) | A ProtocolMessage #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.ServerDef`](https://www.tensorflow.org/api_docs/python/tf/train/ServerDef) | Attributes | | `cluster` | `ClusterDef cluster` | | `cluster_device_filters` | `ClusterDeviceFilters cluster_device_filters` | | `default_session_config` | `ConfigProto default_session_config` | | `job_name` | `string job_name` | | `port` | `int32 port` | | `protocol` | `string protocol` | | `task_index` | `int32 task_index` | tensorflow tf.train.load_variable tf.train.load\_variable ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_utils.py#L67-L82) | Returns the tensor value of the given variable in the checkpoint. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.load_variable`](https://www.tensorflow.org/api_docs/python/tf/train/load_variable) ``` tf.train.load_variable( ckpt_dir_or_file, name ) ``` | Args | | `ckpt_dir_or_file` | Directory with checkpoints file or path to checkpoint. | | `name` | Name of the variable to return. | | Returns | | A numpy `ndarray` with a copy of the value of this variable. | tensorflow tf.train.Coordinator tf.train.Coordinator ==================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L30-L403) | A coordinator for threads. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.Coordinator`](https://www.tensorflow.org/api_docs/python/tf/train/Coordinator) ``` tf.train.Coordinator( clean_stop_exception_types=None ) ``` This class implements a simple mechanism to coordinate the termination of a set of threads. #### Usage: ``` # Create a coordinator. coord = Coordinator() # Start a number of threads, passing the coordinator to each of them. ...start thread 1...(coord, ...) ...start thread N...(coord, ...) # Wait for all the threads to terminate. coord.join(threads) ``` Any of the threads can call `coord.request_stop()` to ask for all the threads to stop. To cooperate with the requests, each thread must check for `coord.should_stop()` on a regular basis. `coord.should_stop()` returns `True` as soon as `coord.request_stop()` has been called. A typical thread running with a coordinator will do something like: ``` while not coord.should_stop(): ...do some work... ``` #### Exception handling: A thread can report an exception to the coordinator as part of the `request_stop()` call. The exception will be re-raised from the `coord.join()` call. #### Thread code: ``` try: while not coord.should_stop(): ...do some work... except Exception as e: coord.request_stop(e) ``` #### Main code: ``` try: ... coord = Coordinator() # Start a number of threads, passing the coordinator to each of them. ...start thread 1...(coord, ...) ...start thread N...(coord, ...) # Wait for all the threads to terminate. 
coord.join(threads) except Exception as e: ...exception that was passed to coord.request_stop() ``` To simplify the thread implementation, the Coordinator provides a context handler `stop_on_exception()` that automatically requests a stop if an exception is raised. Using the context handler the thread code above can be written as: ``` with coord.stop_on_exception(): while not coord.should_stop(): ...do some work... ``` #### Grace period for stopping: After a thread has called `coord.request_stop()` the other threads have a fixed time to stop; this is called the 'stop grace period' and defaults to 2 minutes. If any of the threads is still alive after the grace period expires, `coord.join()` raises a RuntimeError reporting the laggards. ``` try: ... coord = Coordinator() # Start a number of threads, passing the coordinator to each of them. ...start thread 1...(coord, ...) ...start thread N...(coord, ...) # Wait for all the threads to terminate, give them 10s grace period coord.join(threads, stop_grace_period_secs=10) except RuntimeError: ...one of the threads took more than 10s to stop after request_stop() ...was called. except Exception: ...exception that was passed to coord.request_stop() ``` | Args | | `clean_stop_exception_types` | Optional tuple of Exception types that should cause a clean stop of the coordinator. If an exception of one of these types is reported to `request_stop(ex)` the coordinator will behave as if `request_stop(None)` was called. Defaults to `(tf.errors.OutOfRangeError,)` which is used by input queues to signal the end of input. When feeding training data from a Python iterator it is common to add `StopIteration` to this list. | | Attributes | | `joined` | | Methods ------- ### `clear_stop` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L242-L251) ``` clear_stop() ``` Clears the stop flag. After this is called, calls to `should_stop()` will return `False`. ### `join` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L318-L393) ``` join( threads=None, stop_grace_period_secs=120, ignore_live_threads=False ) ``` Wait for threads to terminate. This call blocks until a set of threads have terminated. The set of threads is the union of the threads passed in the `threads` argument and the list of threads that registered with the coordinator by calling [`Coordinator.register_thread()`](coordinator#register_thread). After the threads stop, if an `exc_info` was passed to `request_stop`, that exception is re-raised. Grace period handling: When `request_stop()` is called, threads are given 'stop\_grace\_period\_secs' seconds to terminate. If any of them is still alive after that period expires, a `RuntimeError` is raised. Note that if an `exc_info` was passed to `request_stop()` then it is raised instead of that `RuntimeError`. | Args | | `threads` | List of `threading.Threads`. The started threads to join in addition to the registered threads. | | `stop_grace_period_secs` | Number of seconds given to threads to stop after `request_stop()` has been called. | | `ignore_live_threads` | If `False`, raises an error if any of the threads are still alive after `stop_grace_period_secs`. | | Raises | | `RuntimeError` | If any thread is still alive after `request_stop()` is called and the grace period expires. 
| ### `raise_requested_exception` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L399-L403) ``` raise_requested_exception() ``` If an exception has been passed to `request_stop`, this raises it. ### `register_thread` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L309-L316) ``` register_thread( thread ) ``` Register a thread to join. | Args | | `thread` | A Python thread to join. | ### `request_stop` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L183-L240) ``` request_stop( ex=None ) ``` Request that the threads stop. After this is called, calls to `should_stop()` will return `True`. > > **Note:** If an exception is being passed in, it must be in the context of handling the exception (i.e. `try: ... except Exception as ex: ...`) and not a newly created one. > | Args | | `ex` | Optional `Exception`, or Python `exc_info` tuple as returned by `sys.exc_info()`. If this is the first call to `request_stop()` the corresponding exception is recorded and re-raised from `join()`. | ### `should_stop` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L253-L259) ``` should_stop() ``` Check if stop was requested. | Returns | | True if a stop was requested. | ### `stop_on_exception` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L261-L295) ``` @contextlib.contextmanager stop_on_exception() ``` Context manager to request stop when an Exception is raised. Code that uses a coordinator must catch exceptions and pass them to the `request_stop()` method to stop the other threads managed by the coordinator. This context handler simplifies the exception handling. Use it as follows: ``` with coord.stop_on_exception(): # Any exception raised in the body of the with # clause is reported to the coordinator before terminating # the execution of the body. ...body... ``` This is completely equivalent to the slightly longer code: ``` try: ...body... except: coord.request_stop(sys.exc_info()) ``` | Yields | | nothing. | ### `wait_for_stop` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/coordinator.py#L297-L307) ``` wait_for_stop( timeout=None ) ``` Wait until the Coordinator is told to stop. | Args | | `timeout` | Float. Sleep for up to that many seconds waiting for should\_stop() to become True. | | Returns | | True if the Coordinator is told to stop, False if the timeout expired. | tensorflow tf.train.latest_checkpoint tf.train.latest\_checkpoint =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_management.py#L324-L361) | Finds the filename of latest saved checkpoint file. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.latest_checkpoint`](https://www.tensorflow.org/api_docs/python/tf/train/latest_checkpoint) ``` tf.train.latest_checkpoint( checkpoint_dir, latest_filename=None ) ``` Gets the checkpoint state given the provided checkpoint\_dir and looks for a corresponding TensorFlow 2 (preferred) or TensorFlow 1.x checkpoint path. 
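A minimal usage sketch (the directory and the `checkpoint` object are illustrative): resume from the most recent checkpoint when one exists, and fall back to fresh initialization otherwise.

```
import tensorflow as tf

checkpoint = tf.train.Checkpoint(step=tf.Variable(0))
ckpt_path = tf.train.latest_checkpoint("/tmp/training_checkpoints")
if ckpt_path is not None:  # None means no checkpoint was found
    checkpoint.restore(ckpt_path)
```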
The latest\_filename argument is only applicable if you are saving checkpoints using [`v1.train.Saver.save`](../compat/v1/train/saver#save) See the [Training Checkpoints Guide](https://www.tensorflow.org/guide/checkpoint) for more details and examples. | Args | | `checkpoint_dir` | Directory where the variables were saved. | | `latest_filename` | Optional name for the protocol buffer file that contains the list of most recent checkpoint filenames. See the corresponding argument to [`v1.train.Saver.save`](../compat/v1/train/saver#save). | | Returns | | The full path to the latest checkpoint or `None` if no checkpoint was found. | tensorflow tf.train.SequenceExample tf.train.SequenceExample ======================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/example.proto) | A `SequenceExample` is a format for representing one or more sequences and some context. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.SequenceExample`](https://www.tensorflow.org/api_docs/python/tf/train/SequenceExample) It can be thought of as a proto-implementation of the following python type: ``` Feature = Union[List[bytes], List[int64], List[float]] class SequenceExample(typing.NamedTuple): context: Dict[str, Feature] feature_lists: Dict[str, List[Feature]] ``` To implement this as protos it's broken up into sub-messages as follows: ``` # tf.train.Feature Feature = Union[List[bytes], List[int64], List[float]] # tf.train.FeatureList FeatureList = List[Feature] # tf.train.FeatureLists FeatureLists = Dict[str, FeatureList] # tf.train.SequenceExample class SequenceExample(typing.NamedTuple): context: Dict[str, Feature] feature_lists: FeatureLists ``` To parse a `SequenceExample` in TensorFlow refer to the [`tf.io.parse_sequence_example`](../io/parse_sequence_example) function. The `context` contains features which apply to the entire example. The `feature_lists` contain a key, value map where each key is associated with a repeated set of [`tf.train.Features`](features) (a [`tf.train.FeatureList`](featurelist)). A `FeatureList` represents the values of a feature identified by its key over time / frames. Below is a `SequenceExample` for a movie recommendation application recording a sequence of ratings by a user. The time-independent features ("locale", "age", "favorites") describing the user are part of the context. The sequence of movies the user rated are part of the feature\_lists. For each movie in the sequence we have information on its name and actors and the user's rating. This information is recorded in three separate `feature_list`s. In the example below there are only two movies. All three `feature_list`s, namely "movie\_ratings", "movie\_names", and "actors" have a feature value for both movies. Note that "actors" is itself a `bytes_list` with multiple strings per movie. 
``` context: { feature: { key : "locale" value: { bytes_list: { value: [ "pt_BR" ] } } } feature: { key : "age" value: { float_list: { value: [ 19.0 ] } } } feature: { key : "favorites" value: { bytes_list: { value: [ "Majesty Rose", "Savannah Outen", "One Direction" ] } } } } feature_lists: { feature_list: { key : "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0 ] } } } } feature_list: { key : "movie_names" value: { feature: { bytes_list: { value: [ "The Shawshank Redemption" ] } } feature: { bytes_list: { value: [ "Fight Club" ] } } } } feature_list: { key : "actors" value: { feature: { bytes_list: { value: [ "Tim Robbins", "Morgan Freeman" ] } } feature: { bytes_list: { value: [ "Brad Pitt", "Edward Norton", "Helena Bonham Carter" ] } } } } } ``` A conformant `SequenceExample` data set obeys the following conventions: `context`: * All conformant context features `K` must obey the same conventions as a conformant Example's features (see above). `feature_lists`: * A `FeatureList L` may be missing in an example; it is up to the parser configuration to determine if this is allowed or considered an empty list (zero length). * If a `FeatureList L` exists, it may be empty (zero length). * If a `FeatureList L` is non-empty, all features within the `FeatureList` must have the same data type `T`. Even across `SequenceExample`s, the type `T` of the `FeatureList` identified by the same key must be the same. An entry without any values may serve as an empty feature. * If a `FeatureList L` is non-empty, it is up to the parser configuration to determine if all features within the `FeatureList` must have the same size. The same holds for this `FeatureList` across multiple examples. * For sequence modeling ([example](https://github.com/tensorflow/nmt)), the feature lists represent a sequence of frames. In this scenario, all `FeatureList`s in a `SequenceExample` have the same number of `Feature` messages, so that the i-th element in each `FeatureList` is part of the i-th frame (or time step). 
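To make the mapping from this proto text to the Python API concrete, here is a minimal sketch that builds a small `SequenceExample` with one context feature and one two-frame feature list (the names and values are illustrative); it can be serialized with `SerializeToString()` and parsed back with [`tf.io.parse_sequence_example`](../io/parse_sequence_example).

```
# A minimal sketch: one context feature plus a two-frame feature list.
import tensorflow as tf

seq = tf.train.SequenceExample(
    context=tf.train.Features(feature={
        "locale": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[b"pt_BR"])),
    }),
    feature_lists=tf.train.FeatureLists(feature_list={
        "movie_ratings": tf.train.FeatureList(feature=[
            tf.train.Feature(float_list=tf.train.FloatList(value=[4.5])),
            tf.train.Feature(float_list=tf.train.FloatList(value=[5.0])),
        ]),
    }))
serialized = seq.SerializeToString()
```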
**Examples of conformant and non-conformant examples' `FeatureLists`:** Conformant `FeatureLists`: ``` feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0 ] } } } } } ``` Non-conformant `FeatureLists` (mismatched types): ``` feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { int64_list: { value: [ 5 ] } } } } } ``` Conditionally conformant `FeatureLists`, the parser configuration determines if the feature sizes must match: ``` feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0, 6.0 ] } } } } } ``` **Examples of conformant and non-conformant `SequenceExample`s:** Conformant pair of `SequenceExample`s: ``` feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0 ] } } } } } feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0 ] } } feature: { float_list: { value: [ 2.0 ] } } } } } ``` Conformant pair of `SequenceExample`s: ``` feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0 ] } } } } } feature_lists: { feature_list: { key: "movie_ratings" value: { } } } ``` Conditionally conformant pair of `SequenceExample`s, the parser configuration determines if the second `feature_lists` is consistent (zero-length) or invalid (missing "movie\_ratings"): ``` feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0 ] } } } } } feature_lists: { } ``` Non-conformant pair of `SequenceExample`s (mismatched types): ``` feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0 ] } } } } } feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { int64_list: { value: [ 4 ] } } feature: { int64_list: { value: [ 5 ] } } feature: { int64_list: { value: [ 2 ] } } } } } ``` Conditionally conformant pair of `SequenceExample`s; the parser configuration determines if the feature sizes must match: ``` feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.5 ] } } feature: { float_list: { value: [ 5.0 ] } } } } } feature_lists: { feature_list: { key: "movie_ratings" value: { feature: { float_list: { value: [ 4.0 ] } } feature: { float_list: { value: [ 5.0, 3.0 ] } } } } } ``` | Attributes | | `context` | `Features context` | | `feature_lists` | `FeatureLists feature_lists` | tensorflow tf.train.Example tf.train.Example ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/example.proto) | An `Example` is a standard proto storing data for training and inference. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.train.Example`](https://www.tensorflow.org/api_docs/python/tf/train/Example) An `Example` proto is a representation of the following python type: ``` Dict[str, Union[List[bytes], List[int64], List[float]]] ``` It contains a key-value store [`Example.features`](example#features) where each key (string) maps to a [`tf.train.Feature`](feature) message which contains a fixed-type list. This flexible and compact format allows the storage of large amounts of typed data, but requires that the data shape and use be determined by the configuration files and parsers that are used to read and write this format (refer to [`tf.io.parse_example`](../io/parse_example) for details). ``` from google.protobuf import text_format example = text_format.Parse(''' features { feature {key: "my_feature" value {int64_list {value: [1, 2, 3, 4]} } } }''', tf.train.Example()) ``` Use [`tf.io.parse_example`](../io/parse_example) to extract tensors from a serialized `Example` proto: ``` tf.io.parse_example( example.SerializeToString(), features = {'my_feature': tf.io.RaggedFeature(dtype=tf.int64)}) {'my_feature': <tf.Tensor: shape=(4,), dtype=int64, numpy=array([1, 2, 3, 4])>} ``` While the list of keys, and the contents of each key, *could* be different for every `Example`, TensorFlow expects a fixed list of keys, each with a fixed `tf.dtype`. A conformant `Example` dataset obeys the following conventions: * If a Feature `K` exists in one example with data type `T`, it must be of type `T` in all other examples when present. It may be omitted. * The number of instances of Feature `K` list data may vary across examples, depending on the requirements of the model. * If a Feature `K` doesn't exist in an example, a `K`-specific default will be used, if configured. * If a Feature `K` exists in an example but contains no items, the intent is considered to be an empty tensor and no default will be used. | Attributes | | `features` | `Features features` |
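As a complement to the `text_format` example above, here is a minimal sketch that constructs the same proto directly from the Python wrapper classes (the feature name and values mirror the example above):

```
# A minimal sketch: build the Example programmatically rather than from
# text_format, then serialize it (e.g. for writing to a TFRecord file).
import tensorflow as tf

example = tf.train.Example(
    features=tf.train.Features(feature={
        "my_feature": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[1, 2, 3, 4])),
    }))
serialized = example.SerializeToString()
```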
tensorflow tf.train.ClusterSpec tf.train.ClusterSpec ==================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L243-L492) | Represents a cluster as a set of "tasks", organized into "jobs". #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.ClusterSpec`](https://www.tensorflow.org/api_docs/python/tf/train/ClusterSpec) ``` tf.train.ClusterSpec( cluster ) ``` A [`tf.train.ClusterSpec`](clusterspec) represents the set of processes that participate in a distributed TensorFlow computation. Every [`tf.distribute.Server`](../distribute/server) is constructed in a particular cluster. To create a cluster with two jobs and five tasks, you specify the mapping from job names to lists of network addresses (typically hostname-port pairs). ``` cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222", "worker1.example.com:2222", "worker2.example.com:2222"], "ps": ["ps0.example.com:2222", "ps1.example.com:2222"]}) ``` Each job may also be specified as a sparse mapping from task indices to network addresses. This enables a server to be configured without needing to know the identity of (for example) all other worker tasks: ``` cluster = tf.train.ClusterSpec({"worker": {1: "worker1.example.com:2222"}, "ps": ["ps0.example.com:2222", "ps1.example.com:2222"]}) ``` | Args | | `cluster` | A dictionary mapping one or more job names to (i) a list of network addresses, or (ii) a dictionary mapping integer task indices to network addresses; or a [`tf.train.ClusterDef`](clusterdef) protocol buffer. | | Raises | | `TypeError` | If `cluster` is not a dictionary mapping strings to lists of strings, and not a [`tf.train.ClusterDef`](clusterdef) protobuf. | | Attributes | | `jobs` | Returns a list of job names in this cluster. | Methods ------- ### `as_cluster_def` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L364-L366) ``` as_cluster_def() ``` Returns a [`tf.train.ClusterDef`](clusterdef) protocol buffer based on this cluster. ### `as_dict` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L337-L362) ``` as_dict() ``` Returns a dictionary from job names to their tasks. For each job, if the task index space is dense, the corresponding value will be a list of network addresses; otherwise it will be a dictionary mapping (sparse) task indices to the corresponding addresses. | Returns | | A dictionary mapping job names to lists or dictionaries describing the tasks in those jobs. | ### `job_tasks` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L438-L465) ``` job_tasks( job_name ) ``` Returns a mapping from task ID to address in the given job. > > **Note:** For backwards compatibility, this method returns a list. If the given job was defined with a sparse set of task indices, the length of this list may not reflect the number of tasks defined in this job. Use the [`tf.train.ClusterSpec.num_tasks`](clusterspec#num_tasks) method to find the number of tasks defined in a particular job. > | Args | | `job_name` | The string name of a job in this cluster. | | Returns | | A list of task addresses, where the index in the list corresponds to the task index of each task. The list may contain `None` if the job was defined with a sparse set of task indices. 
| | Raises | | `ValueError` | If `job_name` does not name a job in this cluster. | ### `num_tasks` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L377-L393) ``` num_tasks( job_name ) ``` Returns the number of tasks defined in the given job. | Args | | `job_name` | The string name of a job in this cluster. | | Returns | | The number of tasks defined in the given job. | | Raises | | `ValueError` | If `job_name` does not name a job in this cluster. | ### `task_address` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L414-L436) ``` task_address( job_name, task_index ) ``` Returns the address of the given task in the given job. | Args | | `job_name` | The string name of a job in this cluster. | | `task_index` | A non-negative integer. | | Returns | | The address of the given task in the given job. | | Raises | | `ValueError` | If `job_name` does not name a job in this cluster, or no task with index `task_index` is defined in that job. | ### `task_indices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L395-L412) ``` task_indices( job_name ) ``` Returns a list of valid task indices in the given job. | Args | | `job_name` | The string name of a job in this cluster. | | Returns | | A list of valid task indices in the given job. | | Raises | | `ValueError` | If `job_name` does not name a job in this cluster. | ### `__bool__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L318-L319) ``` __bool__() ``` ### `__eq__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L324-L325) ``` __eq__( other ) ``` Return self==value. ### `__ne__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L327-L328) ``` __ne__( other ) ``` Return self!=value. ### `__nonzero__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/server_lib.py#L318-L319) ``` __nonzero__() ``` tensorflow Module: tf.train.experimental Module: tf.train.experimental ============================= Public API for tf.train.experimental namespace. Classes ------- [`class PythonState`](experimental/pythonstate): A mixin for putting Python state in an object-based checkpoint. tensorflow tf.train.ExponentialMovingAverage tf.train.ExponentialMovingAverage ================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/moving_averages.py#L282-L685) | Maintains moving averages of variables by employing an exponential decay. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.ExponentialMovingAverage`](https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage) ``` tf.train.ExponentialMovingAverage( decay, num_updates=None, zero_debias=False, name='ExponentialMovingAverage' ) ``` When training a model, it is often beneficial to maintain moving averages of the trained parameters. Evaluations that use averaged parameters sometimes produce significantly better results than the final trained values. 
The `apply()` method adds shadow copies of trained variables the first time it is called, and maintains a moving average of the trained variables in their shadow copies at every additional invocation. It should generally be called immediately after creating the model weights, and then after each training step. The `average()` method gives access to the shadow variables. It allows you to use the moving averages in place of the last trained values for evaluations, by loading the moving averages into your model via `var.assign(ema.average(var))`. Additionally, although `ExponentialMovingAverage` objects are not directly trackable by checkpoints, `average()` returns the moving average variables for your model weights, which you can then checkpoint. (There is an example of this near the bottom of this docstring). So, `average()` is useful when building an evaluation model, or when restoring a model from a checkpoint file. The moving averages are computed using exponential decay. You specify the decay value (as a scalar float value, `Tensor`, or `Variable`) when creating the `ExponentialMovingAverage` object. The shadow variables are initialized with the same initial values as the trained variables. When you run `apply` to update the moving averages, each shadow variable is updated with the formula: `shadow_variable -= (1 - decay) * (shadow_variable - variable)` This is mathematically equivalent to the classic formula below, but the use of an `assign_sub` op (the `"-="` in the formula) allows concurrent lockless updates to the variables: `shadow_variable = decay * shadow_variable + (1 - decay) * variable` Reasonable values for `decay` are close to 1.0, typically in the multiple-nines range: 0.999, 0.9999, etc. To have fine-grained control over the value of the decay parameter during training, pass a scalar [`tf.Variable`](../variable) as the `decay` value to the constructor, and update the variable as needed. Example usage when creating a training model: ``` # Create variables. var0 = tf.Variable(...) var1 = tf.Variable(...) # ... use the variables to build a training model... # Create an ExponentialMovingAverage object ema = tf.train.ExponentialMovingAverage(decay=0.9999) # The first `apply` creates the shadow variables that hold the moving averages ema.apply([var0, var1]) # grab the moving averages for checkpointing purposes or to be able to # load the moving averages into the model weights averages = [ema.average(var0), ema.average(var1)] ... def train_step(...): ... # Apply the optimizer. opt.minimize(my_loss, [var0, var1]) # Update the moving averages # of var0 and var1 with additional calls to `apply` ema.apply([var0, var1]) ...train the model by running train_step multiple times... ``` There are several ways to use the moving averages for evaluations: 1. Assign the values of the shadow variables to your model variables with [`Variable.assign(...)`](../variable#assign) before evaluating your model. You can use the `average()` method to get the shadow variable for a given variable. To continue training after using this approach, make sure to record the unaveraged weights and restore them before continuing to train. You can see the tensorflow-addons' MovingAverage optimizer's `swap_weights` method for one example of how to swap variables efficiently in distributed settings: https://github.com/tensorflow/addons/blob/v0.13.0/tensorflow\_addons/optimizers/moving\_average.py#L151 2. Make sure to checkpoint out your moving average variables in your [`tf.train.Checkpoint`](checkpoint). 
At evaluation time, create your shadow variables and use [`tf.train.Checkpoint`](checkpoint) to restore the moving averages into the shadow variables. Then, load the moving averages into the actual model weights via `var.assign(moving_avg)`. 3. Checkpoint out your moving average variables in your [`tf.train.Checkpoint`](checkpoint). For evaluation, restore your model weights directly from the moving averages instead of from the non-averaged weights. Caution: If you choose this approach, include only the object-graph paths to the averaged variables in your checkpoint restore. If you point both the unaveraged and averaged paths in a checkpoint restore to the same variables, it is hard to reason about whether your model will restore the averaged or non-averaged variables. Example of saving out then restoring the shadow variable values: ``` # Create variables. var0 = tf.Variable(...) var1 = tf.Variable(...) # ... use the variables to build a training model... # Create an ExponentialMovingAverage object, create the shadow variables, # and grab the moving averages for checkpointing purposes. # (The ExponentialMovingAverage object itself is not checkpointable) ema = tf.train.ExponentialMovingAverage(decay=0.9999) ema.apply([var0, var1]) avg_var0 = ema.average(var0) avg_var1 = ema.average(var1) # Create a Checkpoint that will manage the model weights and the averages. checkpoint = tf.train.Checkpoint(model_weights=[var0, var1], averaged_weights=[avg_var0, avg_var1]) ... # Do training # Save out the checkpoint including the model weights and the moving averages checkpoint.save(...) ``` Restore option: restore all averaged & non-averaged weights, then load moving averages into the model via `var.assign()` ``` # Create variables. var0 = tf.Variable(...) var1 = tf.Variable(...) # ... use the variables to build a training model... # Create an ExponentialMovingAverage object, create the shadow variables, # and grab the moving averages for checkpoint restore purposes. # (The ExponentialMovingAverage object itself is not checkpointable) ema = tf.train.ExponentialMovingAverage(decay=0.9999) ema.apply([var0, var1]) avg_var0 = ema.average(var0) avg_var1 = ema.average(var1) # Create a Checkpoint that will manage the model weights and the averages. checkpoint = tf.train.Checkpoint(model_weights=[var0, var1], averaged_weights=[avg_var0, avg_var1]) checkpoint.restore(...) var0.assign(avg_var0) var1.assign(avg_var1) # var0 and var1 now hold the moving average values ``` Restore option: Directly restore the moving averages into the model weights. ``` # Create variables. var0 = tf.Variable(...) var1 = tf.Variable(...) # ... use the variables to build a training model... # Create a Checkpoint that will manage two objects with trackable state. checkpoint = tf.train.Checkpoint(averaged_weights=[var0, var1]) checkpoint.restore(...) # var0 and var1 now hold the moving average values ``` | Args | | `decay` | A scalar float value, `Tensor`, or `Variable`. The decay parameter. | | `num_updates` | Optional count of number of updates applied to variables. | | `zero_debias` | If `True`, zero debias moving-averages that are initialized with tensors. (Note: moving averages may not be initialized with non-variable tensors when eager execution is enabled). | | `name` | String. Optional prefix name to use for the name of ops added in `apply()`. | | Attributes | | `name` | The name of this ExponentialMovingAverage object. 
| Methods ------- ### `apply` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/moving_averages.py#L491-L588) ``` apply( var_list=None ) ``` Maintains moving averages of variables. `var_list` must be a list of `Variable` objects. This method creates shadow variables (holding the moving averages) for all elements of `var_list`, and updates the moving averages using the current `var_list` values. Shadow variables for `Variable` objects are initialized to the variable's initial value. Shadow variables are created with `trainable=False`. To access them you can use the EMA object's `average` method. Note that `EMA` objects are not trackable by checkpoints, so if you want to checkpoint or restore the moving variables you will need to manually grab the shadow variables via `average()` and assign them as [`tf.Module`](../module) properties or directly pass them to your [`tf.train.Checkpoint`](checkpoint). Note that `apply()` can be called multiple times. When eager execution is enabled each call to apply will update the variables once, so this needs to be called in a loop. In legacy TF 1.x graphs, this method returns an op that updates all shadow variables from the current value of their associated variables. In TF 1.x graphs without automatic control dependencies, this op needs to be run manually. | Args | | `var_list` | A list of Variable objects. The variables must be of types bfloat16, float16, float32, or float64. (In legacy TF 1.x graphs these may be tensors, but this is unsupported when eager execution is enabled.) | | Returns | | An Operation that updates the moving averages. | | Raises | | `TypeError` | If the arguments are not an allowed type. | ### `average` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/moving_averages.py#L590-L600) ``` average( var ) ``` Returns the `Variable` holding the average of `var`. | Args | | `var` | A `Variable` object. | | Returns | | A `Variable` object or `None` if the moving average of `var` is not maintained. | tensorflow tf.train.FloatList tf.train.FloatList ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | Used in [`tf.train.Example`](example) protos. Holds a list of floats. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.FloatList`](https://www.tensorflow.org/api_docs/python/tf/train/FloatList) An `Example` proto is a representation of the following python type: ``` Dict[str, Union[List[bytes], List[int64], List[float]]] ``` This proto implements the `List[float]` portion. ``` from google.protobuf import text_format example = text_format.Parse(''' features { feature {key: "my_feature" value {float_list {value: [1., 2., 3., 4. ]} } } }''', tf.train.Example()) example.features.feature['my_feature'].float_list.value [1.0, 2.0, 3.0, 4.0] ``` Use [`tf.io.parse_example`](../io/parse_example) to extract tensors from a serialized `Example` proto: ``` tf.io.parse_example( example.SerializeToString(), features = {'my_feature': tf.io.RaggedFeature(dtype=tf.float32)}) {'my_feature': <tf.Tensor: shape=(4,), dtype=float32, numpy=array([1., 2., 3., 4.], dtype=float32)>} ``` See the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample) guide for usage details. 
| Attributes | | `value` | `repeated float value` | tensorflow tf.train.ClusterDef tf.train.ClusterDef =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/protobuf/cluster.proto) | A ProtocolMessage #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.ClusterDef`](https://www.tensorflow.org/api_docs/python/tf/train/ClusterDef) | Attributes | | `job` | `repeated JobDef job` | tensorflow tf.train.CheckpointManager tf.train.CheckpointManager ========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_management.py#L515-L860) | Manages multiple checkpoints by keeping some and deleting unneeded ones. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.CheckpointManager`](https://www.tensorflow.org/api_docs/python/tf/train/CheckpointManager) ``` tf.train.CheckpointManager( checkpoint, directory, max_to_keep, keep_checkpoint_every_n_hours=None, checkpoint_name='ckpt', step_counter=None, checkpoint_interval=None, init_fn=None ) ``` #### Example usage: ``` import tensorflow as tf checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) manager = tf.train.CheckpointManager( checkpoint, directory="/tmp/model", max_to_keep=5) status = checkpoint.restore(manager.latest_checkpoint) while True: # train manager.save() ``` `CheckpointManager` preserves its own state across instantiations (see the `__init__` documentation for details). Only one should be active in a particular directory at a time. | Args | | `checkpoint` | The [`tf.train.Checkpoint`](checkpoint) instance to save and manage checkpoints for. | | `directory` | The path to a directory in which to write checkpoints. A special file named "checkpoint" is also written to this directory (in a human-readable text format) which contains the state of the `CheckpointManager`. | | `max_to_keep` | An integer, the number of checkpoints to keep. Unless preserved by `keep_checkpoint_every_n_hours`, checkpoints will be deleted from the active set, oldest first, until only `max_to_keep` checkpoints remain. If `None`, no checkpoints are deleted and everything stays in the active set. Note that `max_to_keep=None` will keep all checkpoint paths in memory and in the checkpoint state protocol buffer on disk. | | `keep_checkpoint_every_n_hours` | Upon removal from the active set, a checkpoint will be preserved if it has been at least `keep_checkpoint_every_n_hours` since the last preserved checkpoint. The default setting of `None` does not preserve any checkpoints in this way. | | `checkpoint_name` | Custom name for the checkpoint file. | | `step_counter` | A [`tf.Variable`](../variable) instance for checking the current step counter value, in case users want to save checkpoints every N steps. | | `checkpoint_interval` | An integer, indicates the minimum step interval between two checkpoints. | | `init_fn` | Callable. A function to do customized initialization if no checkpoints are in the directory. | | Raises | | `ValueError` | If `max_to_keep` is not a positive integer. | | Attributes | | `checkpoint` | Returns the [`tf.train.Checkpoint`](checkpoint) object. | | `checkpoint_interval` | | | `checkpoints` | A list of managed checkpoints. 
Note that checkpoints saved due to `keep_checkpoint_every_n_hours` will not show up in this list (to avoid ever-growing filename lists). | | `directory` | | | `latest_checkpoint` | The prefix of the most recent checkpoint in `directory`. Equivalent to [`tf.train.latest_checkpoint(directory)`](latest_checkpoint) where `directory` is the constructor argument to `CheckpointManager`. Suitable for passing to [`tf.train.Checkpoint.restore`](checkpoint#restore) to resume training. | Methods ------- ### `restore_or_initialize` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_management.py#L834-L860) ``` restore_or_initialize() ``` Restore items in `checkpoint` from the latest checkpoint file. This method will first try to restore from the most recent checkpoint in `directory`. If no checkpoints exist in `directory`, and `init_fn` is specified, this method will call `init_fn` to do customized initialization. This can be used to support initialization from pretrained models. Note that unlike [`tf.train.Checkpoint.restore()`](checkpoint#restore), this method doesn't return a load status object that users can run assertions on (e.g. assert\_consumed()). Thus to run assertions, users should use the [`tf.train.Checkpoint.restore()`](checkpoint#restore) method directly. | Returns | | The restored checkpoint path if the latest checkpoint is found and restored. Otherwise None. | ### `save` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_management.py#L752-L832) ``` save( checkpoint_number=None, check_interval=True, options=None ) ``` Creates a new checkpoint and manages it. | Args | | `checkpoint_number` | An optional integer, or an integer-dtype `Variable` or `Tensor`, used to number the checkpoint. If `None` (default), checkpoints are numbered using `checkpoint.save_counter`. Even if `checkpoint_number` is provided, `save_counter` is still incremented. A user-provided `checkpoint_number` is not incremented even if it is a `Variable`. | | `check_interval` | An optional boolean. The argument is only effective when `checkpoint_interval` is passed into the manager. If `True`, the manager will only save the checkpoint if the interval between checkpoints is larger than `checkpoint_interval`. Otherwise it will always save the checkpoint unless a checkpoint has already been saved for the current step. | | `options` | Optional [`tf.train.CheckpointOptions`](checkpointoptions) object. This argument only works with TF2 checkpoint objects. For example, options = tf.train.CheckpointOptions(experimental\_io\_device='/job:localhost') | | Returns | | The path to the new checkpoint. It is also recorded in the `checkpoints` and `latest_checkpoint` properties. `None` if no checkpoint is saved. |
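A minimal sketch of the `step_counter` and `checkpoint_interval` arguments described above (the directory and interval are illustrative): with these set, `save()` becomes a no-op unless at least `checkpoint_interval` steps have elapsed since the last saved checkpoint.

```
import tensorflow as tf

step = tf.Variable(0, dtype=tf.int64)
ckpt = tf.train.Checkpoint(step=step)
manager = tf.train.CheckpointManager(
    ckpt, directory="/tmp/model", max_to_keep=3,
    step_counter=step, checkpoint_interval=100)

for _ in range(1000):
    step.assign_add(1)
    # With check_interval=True (the default), this only writes a new
    # checkpoint once 100 steps have passed since the previous one.
    manager.save(check_interval=True)
```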
tensorflow tf.train.Checkpoint tf.train.Checkpoint =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/tracking/util.py#L1976-L2553) | Manages saving/restoring trackable values to disk. ``` tf.train.Checkpoint( root=None, **kwargs ) ``` TensorFlow objects may contain trackable state, such as [`tf.Variable`](../variable)s, [`tf.keras.optimizers.Optimizer`](../keras/optimizers/optimizer) implementations, [`tf.data.Dataset`](../data/dataset) iterators, `tf.keras.Layer` implementations, or [`tf.keras.Model`](../keras/model) implementations. These are called **trackable objects**. A `Checkpoint` object can be constructed to save either a single or group of trackable objects to a checkpoint file. It maintains a `save_counter` for numbering checkpoints. #### Example: ``` model = tf.keras.Model(...) checkpoint = tf.train.Checkpoint(model) # Save a checkpoint to /tmp/training_checkpoints-{save_counter}. Every time # checkpoint.save is called, the save counter is increased. save_path = checkpoint.save('/tmp/training_checkpoints') # Restore the checkpointed values to the `model` object. checkpoint.restore(save_path) ``` #### Example 2: ``` import tensorflow as tf import os checkpoint_directory = "/tmp/training_checkpoints" checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt") # Create a Checkpoint that will manage two objects with trackable state, # one we name "optimizer" and the other we name "model". checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory)) for _ in range(num_training_steps): optimizer.minimize( ... ) # Variables will be restored on creation. status.assert_consumed() # Optional sanity checks. checkpoint.save(file_prefix=checkpoint_prefix) ``` [`Checkpoint.save()`](checkpoint#save) and [`Checkpoint.restore()`](checkpoint#restore) write and read object-based checkpoints, in contrast to TensorFlow 1.x's [`tf.compat.v1.train.Saver`](../compat/v1/train/saver) which writes and reads `variable.name` based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (`Layer`s, `Optimizer`s, `Variable`s, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables. `Checkpoint` objects have dependencies on the objects passed as keyword arguments to their constructors, and each dependency is given a name that is identical to the name of the keyword argument for which it was created. TensorFlow classes like `Layer`s and `Optimizer`s will automatically add dependencies on their own variables (e.g. "kernel" and "bias" for [`tf.keras.layers.Dense`](../keras/layers/dense)). Inheriting from [`tf.keras.Model`](../keras/model) makes managing dependencies easy in user-defined classes, since `Model` hooks into attribute assignment. For example: ``` class Regress(tf.keras.Model): def __init__(self): super(Regress, self).__init__() self.input_transform = tf.keras.layers.Dense(10) # ... def call(self, inputs): x = self.input_transform(inputs) # ... ``` This `Model` has a dependency named "input\_transform" on its `Dense` layer, which in turn depends on its variables. As a result, saving an instance of `Regress` using [`tf.train.Checkpoint`](checkpoint) will also save all the variables created by the `Dense` layer. 
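The dependency graph described above is also what gives `Checkpoint` its restore-on-create behavior; a minimal sketch (paths and names are illustrative):

```
import tensorflow as tf

# The checkpoint records a named edge "dense" to the layer, and the layer
# records edges to its kernel and bias, so those variables are saved too.
dense = tf.keras.layers.Dense(2)
dense.build((None, 3))  # creates kernel and bias
path = tf.train.Checkpoint(dense=dense).save("/tmp/dep_demo/ckpt")

# Restoring into a fresh, unbuilt layer defers assignment until the
# variables actually exist.
fresh = tf.keras.layers.Dense(2)
status = tf.train.Checkpoint(dense=fresh).restore(path)
fresh.build((None, 3))  # variables created here, so values are assigned now
status.assert_consumed()
```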
When variables are assigned to multiple workers, each worker writes its own section of the checkpoint. These sections are then merged/re-indexed to behave as a single checkpoint. This avoids copying all variables to one worker, but does require that all workers see a common filesystem. This function differs slightly from the Keras Model `save_weights` function. [`tf.keras.Model.save_weights`](../keras/model#save_weights) creates a checkpoint file with the name specified in `filepath`, while [`tf.train.Checkpoint`](checkpoint) numbers the checkpoints, using `filepath` as the prefix for the checkpoint file names. Aside from this, `model.save_weights()` and `tf.train.Checkpoint(model).save()` are equivalent. See the [guide to training checkpoints](https://www.tensorflow.org/guide/checkpoint) for details. | Args | | `root` | The root object to checkpoint. `root` may be a trackable object or `WeakRef` of a trackable object. | | `**kwargs` | Keyword arguments are set as attributes of this object, and are saved with the checkpoint. All `kwargs` must be trackable objects, or a nested structure of trackable objects (`list`, `dict`, or `tuple`). | | Raises | | `ValueError` | If `root` or the objects in `kwargs` are not trackable. A `ValueError` is also raised if the `root` object tracks different objects from the ones listed in attributes in kwargs (e.g. `root.child = A` and [`tf.train.Checkpoint(root, child=B)`](checkpoint) are incompatible). | | Attributes | | `save_counter` | Incremented when `save()` is called. Used to number checkpoints. | Methods ------- ### `read` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/tracking/util.py#L2376-L2421) ``` read( save_path, options=None ) ``` Reads a training checkpoint written with `write`. Reads this `Checkpoint` and any objects it depends on. This method is just like `restore()` but does not expect the `save_counter` variable in the checkpoint. It only restores the objects that the checkpoint already depends on. The method is primarily intended for use by higher level checkpoint management utilities that use `write()` instead of `save()` and have their own mechanisms to number and track checkpoints. #### Example usage: ``` # Create a checkpoint with write() ckpt = tf.train.Checkpoint(v=tf.Variable(1.)) path = ckpt.write('/tmp/my_checkpoint') # Later, load the checkpoint with read() # With restore() assert_consumed() would have failed. ckpt.read(path).assert_consumed() # You can also pass options to read(). For example this # runs the IO ops on the localhost: options = tf.train.CheckpointOptions( experimental_io_device="/job:localhost") ckpt.read(path, options=options) ``` | Args | | `save_path` | The path to the checkpoint as returned by `write`. | | `options` | Optional [`tf.train.CheckpointOptions`](checkpointoptions) object. | | Returns | | A load status object, which can be used to make assertions about the status of a checkpoint restoration. See `restore` for details. | ### `restore` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/tracking/util.py#L2423-L2553) ``` restore( save_path, options=None ) ``` Restores a training checkpoint. Restores this `Checkpoint` and any objects it depends on. This method is intended to be used to load checkpoints created by `save()`. For checkpoints created by `write()` use the `read()` method which does not expect the `save_counter` variable added by `save()`. 
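The practical difference between the `save`/`restore` and `write`/`read` pairs shows up in the paths they produce; a minimal sketch (paths are illustrative):

```
import tensorflow as tf

ckpt = tf.train.Checkpoint(v=tf.Variable(1.))

# write() uses the prefix verbatim and does not touch save_counter.
p1 = ckpt.write("/tmp/rw_demo/manual")  # p1 == "/tmp/rw_demo/manual"
ckpt.read(p1).assert_consumed()

# save() appends the incremented save_counter to the prefix and updates
# the metadata used by tf.train.latest_checkpoint.
p2 = ckpt.save("/tmp/rw_demo/auto")  # p2 == "/tmp/rw_demo/auto-1"
ckpt.restore(p2).assert_consumed()
```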
`restore()` either assigns values immediately if variables to restore have been created already, or defers restoration until the variables are created. Dependencies added after this call will be matched if they have a corresponding object in the checkpoint (the restore request will queue in any trackable object waiting for the expected dependency to be added). ``` checkpoint = tf.train.Checkpoint( ... ) checkpoint.restore(path) # You can additionally pass options to restore(): options = tf.train.CheckpointOptions(experimental_io_device="/job:localhost") checkpoint.restore(path, options=options) ``` To ensure that loading is complete and no more deferred restorations will take place, use the `assert_consumed()` method of the status object returned by `restore()`: ``` checkpoint.restore(path, options=options).assert_consumed() ``` The assert will raise an error if any Python objects in the dependency graph were not found in the checkpoint, or if any checkpointed values do not have a matching Python object. Name-based [`tf.compat.v1.train.Saver`](../compat/v1/train/saver) checkpoints from TensorFlow 1.x can be loaded using this method. Names are used to match variables. Re-encode name-based checkpoints using [`tf.train.Checkpoint.save`](checkpoint#save) as soon as possible. **Loading from SavedModel checkpoints** To load values from a SavedModel, just pass the SavedModel directory to checkpoint.restore: ``` model = tf.keras.Model(...) tf.saved_model.save(model, path) # or model.save(path, save_format='tf') checkpoint = tf.train.Checkpoint(model) checkpoint.restore(path).expect_partial() ``` This example calls `expect_partial()` on the loaded status, since SavedModels saved from Keras often generate extra keys in the checkpoint. Otherwise, the program prints a lot of warnings about unused keys at exit time. | Args | | `save_path` | The path to the checkpoint, as returned by `save` or [`tf.train.latest_checkpoint`](latest_checkpoint). If the checkpoint was written by the name-based [`tf.compat.v1.train.Saver`](../compat/v1/train/saver), names are used to match variables. This path may also be a SavedModel directory. | | `options` | Optional [`tf.train.CheckpointOptions`](checkpointoptions) object. | | Returns | | A load status object, which can be used to make assertions about the status of a checkpoint restoration. The returned status object has the following methods:* `assert_consumed()`: Raises an exception if any variables are unmatched: either checkpointed values which don't have a matching Python object or Python objects in the dependency graph with no values in the checkpoint. This method returns the status object, and so may be chained with other assertions. * `assert_existing_objects_matched()`: Raises an exception if any existing Python objects in the dependency graph are unmatched. Unlike `assert_consumed`, this assertion will pass if values in the checkpoint have no corresponding Python objects. For example a `tf.keras.Layer` object which has not yet been built, and so has not created any variables, will pass this assertion but fail `assert_consumed`. Useful when loading part of a larger checkpoint into a new Python program, e.g. a training checkpoint with a [`tf.compat.v1.train.Optimizer`](../compat/v1/train/optimizer) was saved but only the state required for inference is being loaded. This method returns the status object, and so may be chained with other assertions. * `assert_nontrivial_match()`: Asserts that something aside from the root object was matched. 
This is a very weak assertion, but is useful for sanity checking in library code where objects may exist in the checkpoint which haven't been created in Python and some Python objects may not have a checkpointed value. * `expect_partial()`: Silence warnings about incomplete checkpoint restores. Warnings are otherwise printed for unused parts of the checkpoint file or object when the `Checkpoint` object is deleted (often at program shutdown). | | Raises | | `NotFoundError` | if a checkpoint or SavedModel cannot be found at `save_path`. | ### `save` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/tracking/util.py#L2276-L2374) ``` save( file_prefix, options=None ) ``` Saves a training checkpoint and provides basic checkpoint management. The saved checkpoint includes variables created by this object and any trackable objects it depends on at the time [`Checkpoint.save()`](checkpoint#save) is called. `save` is a basic convenience wrapper around the `write` method, sequentially numbering checkpoints using `save_counter` and updating the metadata used by [`tf.train.latest_checkpoint`](latest_checkpoint). More advanced checkpoint management, for example garbage collection and custom numbering, may be provided by other utilities which also wrap `write` and `read`. ([`tf.train.CheckpointManager`](checkpointmanager) for example). ``` step = tf.Variable(0, name="step") checkpoint = tf.train.Checkpoint(step=step) checkpoint.save("/tmp/ckpt") # Later, read the checkpoint with restore() checkpoint.restore("/tmp/ckpt-1") # You can also pass options to save() and restore(). For example this # runs the IO ops on the localhost: options = tf.train.CheckpointOptions(experimental_io_device="/job:localhost") checkpoint.save("/tmp/ckpt", options=options) # Later, read the checkpoint with restore() checkpoint.restore("/tmp/ckpt-1", options=options) ``` | Args | | `file_prefix` | A prefix to use for the checkpoint filenames (/path/to/directory/and\_a\_prefix). Names are generated based on this prefix and [`Checkpoint.save_counter`](checkpoint#save_counter). | | `options` | Optional [`tf.train.CheckpointOptions`](checkpointoptions) object. | | Returns | | The full path to the checkpoint. | ### `write` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/tracking/util.py#L2176-L2217) ``` write( file_prefix, options=None ) ``` Writes a training checkpoint. The checkpoint includes variables created by this object and any trackable objects it depends on at the time [`Checkpoint.write()`](checkpoint#write) is called. `write` does not number checkpoints, increment `save_counter`, or update the metadata used by [`tf.train.latest_checkpoint`](latest_checkpoint). It is primarily intended for use by higher level checkpoint management utilities. `save` provides a very basic implementation of these features. Checkpoints written with `write` must be read with `read`. #### Example usage: ``` step = tf.Variable(0, name="step") checkpoint = tf.train.Checkpoint(step=step) checkpoint.write("/tmp/ckpt") # Later, read the checkpoint with read() checkpoint.read("/tmp/ckpt") # You can also pass options to write() and read(). 
For example this # runs the IO ops on the localhost: options = tf.train.CheckpointOptions(experimental_io_device="/job:localhost") checkpoint.write("/tmp/ckpt", options=options) # Later, read the checkpoint with read() checkpoint.read("/tmp/ckpt", options=options) ``` | Args | | `file_prefix` | A prefix to use for the checkpoint filenames (/path/to/directory/and\_a\_prefix). | | `options` | Optional [`tf.train.CheckpointOptions`](checkpointoptions) object. | | Returns | | The full path to the checkpoint (i.e. `file_prefix`). | tensorflow tf.train.Int64List tf.train.Int64List ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | Used in [`tf.train.Example`](example) protos. Holds a list of Int64s. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.Int64List`](https://www.tensorflow.org/api_docs/python/tf/train/Int64List) An `Example` proto is a representation of the following python type: ``` Dict[str, Union[List[bytes], List[int64], List[float]]] ``` This proto implements the `List[int64]` portion. ``` from google.protobuf import text_format example = text_format.Parse(''' features { feature {key: "my_feature" value {int64_list {value: [1, 2, 3, 4]} } } }''', tf.train.Example()) example.features.feature['my_feature'].int64_list.value [1, 2, 3, 4] ``` Use [`tf.io.parse_example`](../io/parse_example) to extract tensors from a serialized `Example` proto: ``` tf.io.parse_example( example.SerializeToString(), features = {'my_feature': tf.io.RaggedFeature(dtype=tf.int64)}) {'my_feature': <tf.Tensor: shape=(4,), dtype=int64, numpy=array([1, 2, 3, 4])>} ``` See the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample) guide for usage details. | Attributes | | `value` | `repeated int64 value` | tensorflow tf.train.Features tf.train.Features ================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | Used in [`tf.train.Example`](example) protos. Contains the mapping from keys to `Feature`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.Features`](https://www.tensorflow.org/api_docs/python/tf/train/Features) An `Example` proto is a representation of the following python type: ``` Dict[str, Union[List[bytes], List[int64], List[float]]] ``` This proto implements the `Dict`. 
``` int_feature = tf.train.Feature( int64_list=tf.train.Int64List(value=[1, 2, 3, 4])) float_feature = tf.train.Feature( float_list=tf.train.FloatList(value=[1., 2., 3., 4.])) bytes_feature = tf.train.Feature( bytes_list=tf.train.BytesList(value=[b"abc", b"1234"])) example = tf.train.Example( features=tf.train.Features(feature={ 'my_ints': int_feature, 'my_floats': float_feature, 'my_bytes': bytes_feature, })) ``` Use [`tf.io.parse_example`](../io/parse_example) to extract tensors from a serialized `Example` proto: ``` tf.io.parse_example( example.SerializeToString(), features = { 'my_ints': tf.io.RaggedFeature(dtype=tf.int64), 'my_floats': tf.io.RaggedFeature(dtype=tf.float32), 'my_bytes': tf.io.RaggedFeature(dtype=tf.string)}) {'my_bytes': <tf.Tensor: shape=(2,), dtype=string, numpy=array([b'abc', b'1234'], dtype=object)>, 'my_floats': <tf.Tensor: shape=(4,), dtype=float32, numpy=array([1., 2., 3., 4.], dtype=float32)>, 'my_ints': <tf.Tensor: shape=(4,), dtype=int64, numpy=array([1, 2, 3, 4])>} ``` | Attributes | | `feature` | `repeated FeatureEntry feature` | Child Classes ------------- [`class FeatureEntry`](features/featureentry) tensorflow tf.train.get_checkpoint_state tf.train.get\_checkpoint\_state =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_management.py#L248-L302) | Returns CheckpointState proto from the "checkpoint" file. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.get_checkpoint_state`](https://www.tensorflow.org/api_docs/python/tf/train/get_checkpoint_state) ``` tf.train.get_checkpoint_state( checkpoint_dir, latest_filename=None ) ``` If the "checkpoint" file contains a valid CheckpointState proto, returns it. | Args | | `checkpoint_dir` | The directory of checkpoints. | | `latest_filename` | Optional name of the checkpoint file. Defaults to 'checkpoint'. | | Returns | | A CheckpointState if the state was available, None otherwise. | | Raises | | `ValueError` | if the checkpoint read doesn't have model\_checkpoint\_path set. | tensorflow tf.train.list_variables tf.train.list\_variables ======================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_utils.py#L85-L115) | Lists the checkpoint keys and shapes of variables in a checkpoint. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.list_variables`](https://www.tensorflow.org/api_docs/python/tf/train/list_variables) ``` tf.train.list_variables( ckpt_dir_or_file ) ``` Checkpoint keys are paths in a checkpoint graph. #### Example usage: ``` import tensorflow as tf import os ckpt_directory = "/tmp/training_checkpoints/ckpt" ckpt = tf.train.Checkpoint(optimizer=optimizer, model=model) manager = tf.train.CheckpointManager(ckpt, ckpt_directory, max_to_keep=3) train_and_checkpoint(model, manager) tf.train.list_variables(manager.latest_checkpoint) ``` | Args | | `ckpt_dir_or_file` | Directory with checkpoints file or path to checkpoint. | | Returns | | List of tuples `(key, shape)`. |
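A minimal sketch of the `(key, shape)` pairs returned for a small object-based checkpoint (paths are illustrative; the exact bookkeeping keys may vary by version):

```
import tensorflow as tf

ckpt = tf.train.Checkpoint(v=tf.Variable([[1., 2.], [3., 4.]]))
path = ckpt.save("/tmp/list_vars_demo/ckpt")

for key, shape in tf.train.list_variables(path):
    print(key, shape)
# Expect entries such as 'v/.ATTRIBUTES/VARIABLE_VALUE' with shape [2, 2],
# alongside bookkeeping entries like the save counter and the object graph.
```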
tensorflow tf.train.BytesList tf.train.BytesList ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | Used in [`tf.train.Example`](example) protos. Holds a list of byte-strings. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.BytesList`](https://www.tensorflow.org/api_docs/python/tf/train/BytesList) An `Example` proto is a representation of the following python type: ``` Dict[str, Union[List[bytes], List[int64], List[float]]] ``` This proto implements the `List[bytes]` portion. ``` from google.protobuf import text_format example = text_format.Parse(''' features { feature {key: "my_feature" value {bytes_list {value: ['abc', '12345' ]} } } }''', tf.train.Example()) example.features.feature['my_feature'].bytes_list.value [b'abc', b'12345'] ``` Use [`tf.io.parse_example`](../io/parse_example) to extract tensors from a serialized `Example` proto: ``` tf.io.parse_example( example.SerializeToString(), features = {'my_feature': tf.io.RaggedFeature(dtype=tf.string)}) {'my_feature': <tf.Tensor: shape=(2,), dtype=string, numpy=array([b'abc', b'12345'], dtype=object)>} ``` See the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample) guide for usage details. | Attributes | | `value` | `repeated bytes value` | tensorflow tf.train.Feature tf.train.Feature ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | Used in [`tf.train.Example`](example) protos. Contains a list of values. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.Feature`](https://www.tensorflow.org/api_docs/python/tf/train/Feature) An `Example` proto is a representation of the following python type: ``` Dict[str, Union[List[bytes], List[int64], List[float]]] ``` This proto implements the `Union`. 
The contained list can be one of three types: * [`tf.train.BytesList`](byteslist) * [`tf.train.FloatList`](floatlist) * [`tf.train.Int64List`](int64list) ``` int_feature = tf.train.Feature( int64_list=tf.train.Int64List(value=[1, 2, 3, 4])) float_feature = tf.train.Feature( float_list=tf.train.FloatList(value=[1., 2., 3., 4.])) bytes_feature = tf.train.Feature( bytes_list=tf.train.BytesList(value=[b"abc", b"1234"])) example = tf.train.Example( features=tf.train.Features(feature={ 'my_ints': int_feature, 'my_floats': float_feature, 'my_bytes': bytes_feature, })) ``` Use [`tf.io.parse_example`](../io/parse_example) to extract tensors from a serialized `Example` proto: ``` tf.io.parse_example( example.SerializeToString(), features = { 'my_ints': tf.io.RaggedFeature(dtype=tf.int64), 'my_floats': tf.io.RaggedFeature(dtype=tf.float32), 'my_bytes': tf.io.RaggedFeature(dtype=tf.string)}) {'my_bytes': <tf.Tensor: shape=(2,), dtype=string, numpy=array([b'abc', b'1234'], dtype=object)>, 'my_floats': <tf.Tensor: shape=(4,), dtype=float32, numpy=array([1., 2., 3., 4.], dtype=float32)>, 'my_ints': <tf.Tensor: shape=(4,), dtype=int64, numpy=array([1, 2, 3, 4])>} ``` | Attributes | | `bytes_list` | `BytesList bytes_list` | | `float_list` | `FloatList float_list` | | `int64_list` | `Int64List int64_list` | tensorflow tf.train.checkpoints_iterator tf.train.checkpoints\_iterator ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/checkpoint_utils.py#L149-L212) | Continuously yield new checkpoint files as they appear. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.checkpoints_iterator`](https://www.tensorflow.org/api_docs/python/tf/train/checkpoints_iterator) ``` tf.train.checkpoints_iterator( checkpoint_dir, min_interval_secs=0, timeout=None, timeout_fn=None ) ``` The iterator only checks for new checkpoints when control flow has been reverted to it. This means it can miss checkpoints if your code takes longer to run between iterations than `min_interval_secs` or the interval at which new checkpoints are written. The `timeout` argument is the maximum number of seconds to block waiting for a new checkpoint. It is used in combination with the `timeout_fn` as follows: * If the timeout expires and no `timeout_fn` was specified, the iterator stops yielding. * If a `timeout_fn` was specified, that function is called and if it returns a true boolean value the iterator stops yielding. * If the function returns a false boolean value then the iterator resumes the wait for new checkpoints. At this point the timeout logic applies again. This behavior gives control to callers on what to do if checkpoints do not come fast enough or stop being generated. For example, if callers have a way to detect that the training has stopped and know that no new checkpoints will be generated, they can provide a `timeout_fn` that returns `True` when the training has stopped. If they know that the training is still going on they return `False` instead. | Args | | `checkpoint_dir` | The directory in which checkpoints are saved. | | `min_interval_secs` | The minimum number of seconds between yielding checkpoints. | | `timeout` | The maximum number of seconds to wait between checkpoints. If left as `None`, then the process will wait indefinitely. | | `timeout_fn` | Optional function to call after a timeout. 
If the function returns True, then it means that no new checkpoints will be generated and the iterator will exit. The function is called with no arguments. | | Yields | | String paths to latest checkpoint files as they arrive. | tensorflow tf.train.JobDef.TasksEntry tf.train.JobDef.TasksEntry ========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/protobuf/cluster.proto) | A ProtocolMessage #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.JobDef.TasksEntry`](https://www.tensorflow.org/api_docs/python/tf/train/JobDef/TasksEntry) | Attributes | | `key` | `int32 key` | | `value` | `string value` | tensorflow tf.train.experimental.PythonState tf.train.experimental.PythonState ================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/tracking/python_state.py#L27-L88) | A mixin for putting Python state in an object-based checkpoint. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.experimental.PythonState`](https://www.tensorflow.org/api_docs/python/tf/train/experimental/PythonState) This is an abstract class which allows extensions to TensorFlow's object-based checkpointing (see [`tf.train.Checkpoint`](../checkpoint)). For example a wrapper for NumPy arrays: ``` import io import numpy import tensorflow as tf class NumpyWrapper(tf.train.experimental.PythonState): def __init__(self, array): self.array = array def serialize(self): string_file = io.BytesIO() try: numpy.save(string_file, self.array, allow_pickle=False) serialized = string_file.getvalue() finally: string_file.close() return serialized def deserialize(self, string_value): string_file = io.BytesIO(string_value) try: self.array = numpy.load(string_file, allow_pickle=False) finally: string_file.close() ``` Instances of `NumpyWrapper` are checkpointable objects, and will be saved and restored from checkpoints along with TensorFlow state like variables. ``` root = tf.train.Checkpoint(numpy=NumpyWrapper(numpy.array([1.]))) save_path = root.save('/tmp/numpy_ckpt') root.numpy.array *= 2. assert [2.] == root.numpy.array root.restore(save_path) assert [1.] == root.numpy.array ``` Methods ------- ### `deserialize` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/tracking/python_state.py#L77-L79) ``` @abc.abstractmethod deserialize( string_value ) ``` Callback to deserialize the object. ### `serialize` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/training/tracking/python_state.py#L73-L75) ``` @abc.abstractmethod serialize() ``` Callback to serialize the object. Returns a string. tensorflow tf.train.Features.FeatureEntry tf.train.Features.FeatureEntry ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | A ProtocolMessage #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.train.Features.FeatureEntry`](https://www.tensorflow.org/api_docs/python/tf/train/Features/FeatureEntry) | Attributes | | `key` | `string key` | | `value` | `Feature value` | tensorflow tf.train.FeatureLists.FeatureListEntry tf.train.FeatureLists.FeatureListEntry ====================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/core/example/feature.proto) | A ProtocolMessage #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.train.FeatureLists.FeatureListEntry`](https://www.tensorflow.org/api_docs/python/tf/train/FeatureLists/FeatureListEntry) | Attributes | | `key` | `string key` | | `value` | `FeatureList value` | tensorflow tf.queue.PaddingFIFOQueue tf.queue.PaddingFIFOQueue ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L851-L920) | A FIFOQueue that supports batching variable-sized tensors by padding. Inherits From: [`QueueBase`](queuebase) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.PaddingFIFOQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/PaddingFIFOQueue), [`tf.compat.v1.io.PaddingFIFOQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/PaddingFIFOQueue), [`tf.compat.v1.queue.PaddingFIFOQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/PaddingFIFOQueue) ``` tf.queue.PaddingFIFOQueue( capacity, dtypes, shapes, names=None, shared_name=None, name='padding_fifo_queue' ) ``` A `PaddingFIFOQueue` may contain components with dynamic shape, while also supporting `dequeue_many`. See the constructor for more details. See [`tf.queue.QueueBase`](queuebase) for a description of the methods on this class. | Args | | `capacity` | An integer. The upper bound on the number of elements that may be stored in this queue. | | `dtypes` | A list of `DType` objects. The length of `dtypes` must equal the number of tensors in each queue element. | | `shapes` | A list of `TensorShape` objects, with the same length as `dtypes`. Any dimension in the `TensorShape` containing value `None` is dynamic and allows values to be enqueued with variable size in that dimension. | | `names` | (Optional.) A list of string naming the components in the queue with the same length as `dtypes`, or `None`. If specified the dequeue methods return a dictionary with the names as keys. | | `shared_name` | (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions. | | `name` | Optional name for the queue operation. | | Raises | | `ValueError` | If shapes is not a list of shapes, or the lengths of dtypes and shapes do not match, or if names is specified and the lengths of dtypes and names do not match. | | Attributes | | `dtypes` | The list of dtypes for each component of a queue element. | | `name` | The name of the underlying queue. | | `names` | The list of names for each component of a queue element. | | `queue_ref` | The underlying queue reference. | | `shapes` | The list of shapes for each component of a queue element. | Methods ------- ### `close` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L545-L578) ``` close( cancel_pending_enqueues=False, name=None ) ``` Closes this queue. 
This operation signals that no more elements will be enqueued in the given queue. Subsequent `enqueue` and `enqueue_many` operations will fail. Subsequent `dequeue` and `dequeue_many` operations will continue to succeed if sufficient elements remain in the queue. Dequeue and dequeue\_many operations that would otherwise block waiting for more elements (if close hadn't been called) will now fail immediately. If `cancel_pending_enqueues` is `True`, all pending requests will also be canceled. | Args | | `cancel_pending_enqueues` | (Optional.) A boolean, defaulting to `False` (described above). | | `name` | A name for the operation (optional). | | Returns | | The operation that closes the queue. | ### `dequeue` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L423-L459) ``` dequeue( name=None ) ``` Dequeues one element from this queue. If the queue is empty when this operation executes, it will block until there is an element to dequeue. At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed, the queue is empty, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `name` | A name for the operation (optional). | | Returns | | The tuple of tensors that was dequeued. | ### `dequeue_many` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L461-L502) ``` dequeue_many( n, name=None ) ``` Dequeues and concatenates `n` elements from this queue. This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size `n` in the 0th dimension. If the queue is closed and there are fewer than `n` elements left, then an `OutOfRange` exception is raised. At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed, the queue contains fewer than `n` elements, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). | | Returns | | The list of concatenated tensors that was dequeued. | ### `dequeue_up_to` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L504-L543) ``` dequeue_up_to( n, name=None ) ``` Dequeues and concatenates `n` elements from this queue. > > **Note:** This operation is not supported by all queues. If a queue does not support DequeueUpTo, then a [`tf.errors.UnimplementedError`](../errors/unimplementederror) is raised. > This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. If the queue has not been closed, all of the components in the dequeued tuple will have size `n` in the 0th dimension. 
If the queue is closed and there are more than `0` but fewer than `n` elements remaining, then instead of raising a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) like `tf.QueueBase.dequeue_many`, less than `n` elements are returned immediately. If the queue is closed and there are `0` elements left in the queue, then a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) is raised just like in `dequeue_many`. Otherwise the behavior is identical to `dequeue_many`. | Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). | | Returns | | The tuple of concatenated tensors that was dequeued. | ### `enqueue` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L313-L350) ``` enqueue( vals, name=None ) ``` Enqueues one element to this queue. If the queue is full when this operation executes, it will block until the element has been enqueued. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary containing the values to enqueue. | | `name` | A name for the operation (optional). | | Returns | | The operation that enqueues a new tuple of tensors to the queue. | ### `enqueue_many` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L352-L398) ``` enqueue_many( vals, name=None ) ``` Enqueues zero or more elements to this queue. This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tensors in `vals` must have the same size in the 0th dimension. If the queue is full when this operation executes, it will block until all of the elements have been enqueued. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary from which the queue elements are taken. | | `name` | A name for the operation (optional). | | Returns | | The operation that enqueues a batch of tuples of tensors to the queue. | ### `from_list` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L186-L225) ``` @staticmethod from_list( index, queues ) ``` Create a queue using the queue reference from `queues[index]`. | Args | | `index` | An integer scalar tensor that determines the input that gets selected. | | `queues` | A list of `QueueBase` objects. | | Returns | | A `QueueBase` object. | | Raises | | `TypeError` | When `queues` is not a list of `QueueBase` objects, or when the data types of `queues` are not all the same. 
| ### `is_closed` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L580-L597) ``` is_closed( name=None ) ``` Returns true if the queue is closed. This operation returns true if the queue is closed and false if the queue is open. | Args | | `name` | A name for the operation (optional). | | Returns | | True if the queue is closed and false if the queue is open. | ### `size` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L599-L613) ``` size( name=None ) ``` Computes the number of elements in this queue. | Args | | `name` | A name for the operation (optional). | | Returns | | A scalar tensor containing the number of elements in this queue. |
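A minimal sketch of the padding behavior (run eagerly; names and values are illustrative): a single-component queue whose elements are vectors of dynamic length, batched by `dequeue_many` with zero-padding to the longest element:

```
import tensorflow as tf

# shape [None]: the dimension is dynamic, so elements may differ in length.
q = tf.queue.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[None]])
q.enqueue([tf.constant([1, 2])])
q.enqueue([tf.constant([3, 4, 5])])

# The batch is zero-padded to the longest element: shape [2, 3].
batch = q.dequeue_many(2)
print(batch.numpy())  # [[1 2 0] [3 4 5]]
```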
tensorflow tf.queue.FIFOQueue tf.queue.FIFOQueue ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L713-L767) | A queue implementation that dequeues elements in first-in first-out order. Inherits From: [`QueueBase`](queuebase) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.FIFOQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/FIFOQueue), [`tf.compat.v1.queue.FIFOQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/FIFOQueue) ``` tf.queue.FIFOQueue( capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue' ) ``` See [`tf.queue.QueueBase`](queuebase) for a description of the methods on this class. | Args | | `capacity` | An integer. The upper bound on the number of elements that may be stored in this queue. | | `dtypes` | A list of `DType` objects. The length of `dtypes` must equal the number of tensors in each queue element. | | `shapes` | (Optional.) A list of fully-defined `TensorShape` objects with the same length as `dtypes`, or `None`. | | `names` | (Optional.) A list of string naming the components in the queue with the same length as `dtypes`, or `None`. If specified the dequeue methods return a dictionary with the names as keys. | | `shared_name` | (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions. | | `name` | Optional name for the queue operation. | | Attributes | | `dtypes` | The list of dtypes for each component of a queue element. | | `name` | The name of the underlying queue. | | `names` | The list of names for each component of a queue element. | | `queue_ref` | The underlying queue reference. | | `shapes` | The list of shapes for each component of a queue element. | Methods ------- ### `close` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L545-L578) ``` close( cancel_pending_enqueues=False, name=None ) ``` Closes this queue. This operation signals that no more elements will be enqueued in the given queue. Subsequent `enqueue` and `enqueue_many` operations will fail. Subsequent `dequeue` and `dequeue_many` operations will continue to succeed if sufficient elements remain in the queue. Subsequently dequeue and dequeue\_many operations that would otherwise block waiting for more elements (if close hadn't been called) will now fail immediately. If `cancel_pending_enqueues` is `True`, all pending requests will also be canceled. | Args | | `cancel_pending_enqueues` | (Optional.) A boolean, defaulting to `False` (described above). | | `name` | A name for the operation (optional). | | Returns | | The operation that closes the queue. | ### `dequeue` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L423-L459) ``` dequeue( name=None ) ``` Dequeues one element from this queue. If the queue is empty when this operation executes, it will block until there is an element to dequeue. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed, the queue is empty, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. 
| Args | | `name` | A name for the operation (optional). | | Returns | | The tuple of tensors that was dequeued. | ### `dequeue_many` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L461-L502) ``` dequeue_many( n, name=None ) ``` Dequeues and concatenates `n` elements from this queue. This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size `n` in the 0th dimension. If the queue is closed and there are less than `n` elements left, then an `OutOfRange` exception is raised. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed, the queue contains fewer than `n` elements, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). | | Returns | | The list of concatenated tensors that was dequeued. | ### `dequeue_up_to` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L504-L543) ``` dequeue_up_to( n, name=None ) ``` Dequeues and concatenates `n` elements from this queue. > > **Note:** This operation is not supported by all queues. If a queue does not support DequeueUpTo, then a [`tf.errors.UnimplementedError`](../errors/unimplementederror) is raised. > This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. If the queue has not been closed, all of the components in the dequeued tuple will have size `n` in the 0th dimension. If the queue is closed and there are more than `0` but fewer than `n` elements remaining, then instead of raising a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) like `tf.QueueBase.dequeue_many`, less than `n` elements are returned immediately. If the queue is closed and there are `0` elements left in the queue, then a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) is raised just like in `dequeue_many`. Otherwise the behavior is identical to `dequeue_many`. | Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). | | Returns | | The tuple of concatenated tensors that was dequeued. | ### `enqueue` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L313-L350) ``` enqueue( vals, name=None ) ``` Enqueues one element to this queue. If the queue is full when this operation executes, it will block until the element has been enqueued. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary containing the values to enqueue. 
| | `name` | A name for the operation (optional). | | Returns | | The operation that enqueues a new tuple of tensors to the queue. | ### `enqueue_many` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L352-L398) ``` enqueue_many( vals, name=None ) ``` Enqueues zero or more elements to this queue. This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tensors in `vals` must have the same size in the 0th dimension. If the queue is full when this operation executes, it will block until all of the elements have been enqueued. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary from which the queue elements are taken. | | `name` | A name for the operation (optional). | | Returns | | The operation that enqueues a batch of tuples of tensors to the queue. | ### `from_list` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L186-L225) ``` @staticmethod from_list( index, queues ) ``` Create a queue using the queue reference from `queues[index]`. | Args | | `index` | An integer scalar tensor that determines the input that gets selected. | | `queues` | A list of `QueueBase` objects. | | Returns | | A `QueueBase` object. | | Raises | | `TypeError` | When `queues` is not a list of `QueueBase` objects, or when the data types of `queues` are not all the same. | ### `is_closed` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L580-L597) ``` is_closed( name=None ) ``` Returns true if queue is closed. This operation returns true if the queue is closed and false if the queue is open. | Args | | `name` | A name for the operation (optional). | | Returns | | True if the queue is closed and false if the queue is open. | ### `size` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L599-L613) ``` size( name=None ) ``` Compute the number of elements in this queue. | Args | | `name` | A name for the operation (optional). | | Returns | | A scalar tensor containing the number of elements in this queue. | tensorflow tf.queue.PriorityQueue tf.queue.PriorityQueue ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L926-L989) | A queue implementation that dequeues elements in prioritized order. Inherits From: [`QueueBase`](queuebase) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.PriorityQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/PriorityQueue), [`tf.compat.v1.io.PriorityQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/PriorityQueue), [`tf.compat.v1.queue.PriorityQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/PriorityQueue) ``` tf.queue.PriorityQueue( capacity, types, shapes=None, names=None, shared_name=None, name='priority_queue' ) ``` See [`tf.queue.QueueBase`](queuebase) for a description of the methods on this class. | Args | | `capacity` | An integer. The upper bound on the number of elements that may be stored in this queue. | | `types` | A list of `DType` objects. The length of `types` must equal the number of tensors in each queue element, except the first priority element. The first tensor in each element is the priority, which must be type int64. | | `shapes` | (Optional.) A list of fully-defined `TensorShape` objects, with the same length as `types`, or `None`. | | `names` | (Optional.) A list of strings naming the components in the queue with the same length as `dtypes`, or `None`. If specified, the dequeue methods return a dictionary with the names as keys. | | `shared_name` | (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions. | | `name` | Optional name for the queue operation. | | Attributes | | `dtypes` | The list of dtypes for each component of a queue element. | | `name` | The name of the underlying queue. | | `names` | The list of names for each component of a queue element. | | `queue_ref` | The underlying queue reference. | | `shapes` | The list of shapes for each component of a queue element. | Methods ------- ### `close` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L545-L578) ``` close( cancel_pending_enqueues=False, name=None ) ``` Closes this queue. This operation signals that no more elements will be enqueued in the given queue. Subsequent `enqueue` and `enqueue_many` operations will fail. Subsequent `dequeue` and `dequeue_many` operations will continue to succeed if sufficient elements remain in the queue. Subsequently dequeue and dequeue\_many operations that would otherwise block waiting for more elements (if close hadn't been called) will now fail immediately. If `cancel_pending_enqueues` is `True`, all pending requests will also be canceled. | Args | | `cancel_pending_enqueues` | (Optional.) A boolean, defaulting to `False` (described above). | | `name` | A name for the operation (optional). | | Returns | | The operation that closes the queue. | ### `dequeue` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L423-L459) ``` dequeue( name=None ) ``` Dequeues one element from this queue. If the queue is empty when this operation executes, it will block until there is an element to dequeue. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed, the queue is empty, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `name` | A name for the operation (optional). | | Returns | | The tuple of tensors that was dequeued. 
| ### `dequeue_many` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L461-L502) ``` dequeue_many( n, name=None ) ``` Dequeues and concatenates `n` elements from this queue. This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size `n` in the 0th dimension. If the queue is closed and there are less than `n` elements left, then an `OutOfRange` exception is raised. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed, the queue contains fewer than `n` elements, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). | | Returns | | The list of concatenated tensors that was dequeued. | ### `dequeue_up_to` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L504-L543) ``` dequeue_up_to( n, name=None ) ``` Dequeues and concatenates `n` elements from this queue. > > **Note:** This operation is not supported by all queues. If a queue does not support DequeueUpTo, then a [`tf.errors.UnimplementedError`](../errors/unimplementederror) is raised. > This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. If the queue has not been closed, all of the components in the dequeued tuple will have size `n` in the 0th dimension. If the queue is closed and there are more than `0` but fewer than `n` elements remaining, then instead of raising a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) like `tf.QueueBase.dequeue_many`, less than `n` elements are returned immediately. If the queue is closed and there are `0` elements left in the queue, then a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) is raised just like in `dequeue_many`. Otherwise the behavior is identical to `dequeue_many`. | Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). | | Returns | | The tuple of concatenated tensors that was dequeued. | ### `enqueue` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L313-L350) ``` enqueue( vals, name=None ) ``` Enqueues one element to this queue. If the queue is full when this operation executes, it will block until the element has been enqueued. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary containing the values to enqueue. | | `name` | A name for the operation (optional). | | Returns | | The operation that enqueues a new tuple of tensors to the queue. 
| ### `enqueue_many` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L352-L398) ``` enqueue_many( vals, name=None ) ``` Enqueues zero or more elements to this queue. This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tensors in `vals` must have the same size in the 0th dimension. If the queue is full when this operation executes, it will block until all of the elements have been enqueued. At runtime, this operation may raise an error if the queue is `tf.QueueBase.close` before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is `tf.Session.close`, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. | Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary from which the queue elements are taken. | | `name` | A name for the operation (optional). | | Returns | | The operation that enqueues a batch of tuples of tensors to the queue. | ### `from_list` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L186-L225) ``` @staticmethod from_list( index, queues ) ``` Create a queue using the queue reference from `queues[index]`. | Args | | `index` | An integer scalar tensor that determines the input that gets selected. | | `queues` | A list of `QueueBase` objects. | | Returns | | A `QueueBase` object. | | Raises | | `TypeError` | When `queues` is not a list of `QueueBase` objects, or when the data types of `queues` are not all the same. | ### `is_closed` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L580-L597) ``` is_closed( name=None ) ``` Returns true if queue is closed. This operation returns true if the queue is closed and false if the queue is open. | Args | | `name` | A name for the operation (optional). | | Returns | | True if the queue is closed and false if the queue is open. | ### `size` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L599-L613) ``` size( name=None ) ``` Compute the number of elements in this queue. | Args | | `name` | A name for the operation (optional). | | Returns | | A scalar tensor containing the number of elements in this queue. |
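A minimal sketch of the priority ordering (run eagerly; names and values are illustrative). The int64 priority is supplied as the first component of each enqueued element and returned again by `dequeue`; elements with smaller priority values come out first:

```
import tensorflow as tf

# `types` lists only the non-priority components; the priority is implicit.
q = tf.queue.PriorityQueue(capacity=10, types=[tf.string], shapes=[[]])
q.enqueue((tf.constant(2, tf.int64), tf.constant("world")))
q.enqueue((tf.constant(1, tf.int64), tf.constant("hello")))

priority, value = q.dequeue()
print(priority.numpy(), value.numpy())  # 1 b'hello'
```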
tensorflow tf.queue.RandomShuffleQueue

tf.queue.RandomShuffleQueue
===========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L627-L708)

| A queue implementation that dequeues elements in a random order.

Inherits From: [`QueueBase`](queuebase)

#### View aliases

**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.RandomShuffleQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/RandomShuffleQueue), [`tf.compat.v1.io.RandomShuffleQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/RandomShuffleQueue), [`tf.compat.v1.queue.RandomShuffleQueue`](https://www.tensorflow.org/api_docs/python/tf/queue/RandomShuffleQueue)

```
tf.queue.RandomShuffleQueue(
    capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None,
    shared_name=None, name='random_shuffle_queue'
)
```

See [`tf.queue.QueueBase`](queuebase) for a description of the methods on this class.

| Args | | `capacity` | An integer. The upper bound on the number of elements that may be stored in this queue. | | `min_after_dequeue` | An integer. The minimum number of elements to leave in the queue after a dequeue, used to ensure a minimum level of mixing of elements. | | `dtypes` | A list of `DType` objects. The length of `dtypes` must equal the number of tensors in each queue element. | | `shapes` | (Optional.) A list of fully-defined `TensorShape` objects with the same length as `dtypes`, or `None`. | | `names` | (Optional.) A list of strings naming the components in the queue with the same length as `dtypes`, or `None`. If specified, the dequeue methods return a dictionary with the names as keys. | | `seed` | A Python integer. Used to create a random seed. See [`tf.compat.v1.set_random_seed`](../compat/v1/set_random_seed) for behavior. | | `shared_name` | (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions. | | `name` | Optional name for the queue operation. |

| Attributes | | `dtypes` | The list of dtypes for each component of a queue element. | | `name` | The name of the underlying queue. | | `names` | The list of names for each component of a queue element. | | `queue_ref` | The underlying queue reference. | | `shapes` | The list of shapes for each component of a queue element. |

Methods
-------

### `close`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L545-L578)

```
close(
    cancel_pending_enqueues=False, name=None
)
```

Closes this queue.

This operation signals that no more elements will be enqueued in the given queue. Subsequent `enqueue` and `enqueue_many` operations will fail. Subsequent `dequeue` and `dequeue_many` operations will continue to succeed if sufficient elements remain in the queue. Subsequent `dequeue` and `dequeue_many` operations that would otherwise block waiting for more elements (if close hadn't been called) will now fail immediately.

If `cancel_pending_enqueues` is `True`, all pending requests will also be canceled.

| Args | | `cancel_pending_enqueues` | (Optional.) A boolean, defaulting to `False` (described above). | | `name` | A name for the operation (optional). |

| Returns | | The operation that closes the queue. |

### `dequeue`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L423-L459)

```
dequeue(
    name=None
)
```

Dequeues one element from this queue.

If the queue is empty when this operation executes, it will block until there is an element to dequeue.

At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed, the queue is empty, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised.

| Args | | `name` | A name for the operation (optional). |

| Returns | | The tuple of tensors that was dequeued. |
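A minimal sketch of the shuffling behavior, assuming eager execution in TF 2.x; the seed value is arbitrary and the dequeued order shown in the comment is only one possibility:

```
import tensorflow as tf

q = tf.queue.RandomShuffleQueue(
    capacity=10, min_after_dequeue=2, dtypes=[tf.int32], seed=42)
q.enqueue_many(([1, 2, 3, 4, 5, 6],))
# Elements come out in a random order; while the queue is open it will not
# dip below min_after_dequeue=2 elements, so dequeue_many(4) is fulfillable.
print(q.dequeue_many(4).numpy())  # e.g. [3 6 1 5], depending on the seed
q.close()
# After close(), the remaining elements can be drained below
# min_after_dequeue.
print(q.dequeue_many(2).numpy())
```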
### `dequeue_many`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L461-L502)

```
dequeue_many(
    n, name=None
)
```

Dequeues and concatenates `n` elements from this queue.

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size `n` in the 0th dimension.

If the queue is closed and there are fewer than `n` elements left, then an `OutOfRange` exception is raised.

At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed, the queue contains fewer than `n` elements, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised.

| Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). |

| Returns | | The list of concatenated tensors that was dequeued. |

### `dequeue_up_to`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L504-L543)

```
dequeue_up_to(
    n, name=None
)
```

Dequeues and concatenates `n` elements from this queue.

> **Note:** This operation is not supported by all queues. If a queue does not support DequeueUpTo, then a [`tf.errors.UnimplementedError`](../errors/unimplementederror) is raised.

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. If the queue has not been closed, all of the components in the dequeued tuple will have size `n` in the 0th dimension.

If the queue is closed and there are more than `0` but fewer than `n` elements remaining, then instead of raising a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) like `tf.QueueBase.dequeue_many`, fewer than `n` elements are returned immediately. If the queue is closed and there are `0` elements left in the queue, then a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) is raised just like in `dequeue_many`. Otherwise the behavior is identical to `dequeue_many`.

| Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). |

| Returns | | The tuple of concatenated tensors that was dequeued. |
### `enqueue`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L313-L350)

```
enqueue(
    vals, name=None
)
```

Enqueues one element to this queue.

If the queue is full when this operation executes, it will block until the element has been enqueued.

At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised.

| Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary containing the values to enqueue. | | `name` | A name for the operation (optional). |

| Returns | | The operation that enqueues a new tuple of tensors to the queue. |

### `enqueue_many`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L352-L398)

```
enqueue_many(
    vals, name=None
)
```

Enqueues zero or more elements to this queue.

This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tensors in `vals` must have the same size in the 0th dimension.

If the queue is full when this operation executes, it will block until all of the elements have been enqueued.

At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised.

| Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary from which the queue elements are taken. | | `name` | A name for the operation (optional). |

| Returns | | The operation that enqueues a batch of tuples of tensors to the queue. |

### `from_list`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L186-L225)

```
@staticmethod
from_list(
    index, queues
)
```

Create a queue using the queue reference from `queues[index]`.

| Args | | `index` | An integer scalar tensor that determines the input that gets selected. | | `queues` | A list of `QueueBase` objects. |

| Returns | | A `QueueBase` object. |

| Raises | | `TypeError` | When `queues` is not a list of `QueueBase` objects, or when the data types of `queues` are not all the same. |

### `is_closed`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L580-L597)

```
is_closed(
    name=None
)
```

Returns true if the queue is closed.

This operation returns true if the queue is closed and false if the queue is open.

| Args | | `name` | A name for the operation (optional). |

| Returns | | True if the queue is closed and false if the queue is open. |

### `size`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L599-L613)

```
size(
    name=None
)
```

Computes the number of elements in this queue.

| Args | | `name` | A name for the operation (optional). |

| Returns | | A scalar tensor containing the number of elements in this queue. |

tensorflow tf.queue.QueueBase

tf.queue.QueueBase
==================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L117-L613)

| Base class for queue implementations.

#### View aliases

**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.QueueBase`](https://www.tensorflow.org/api_docs/python/tf/queue/QueueBase), [`tf.compat.v1.io.QueueBase`](https://www.tensorflow.org/api_docs/python/tf/queue/QueueBase), [`tf.compat.v1.queue.QueueBase`](https://www.tensorflow.org/api_docs/python/tf/queue/QueueBase)

```
tf.queue.QueueBase(
    dtypes, shapes, names, queue_ref
)
```

A queue is a TensorFlow data structure that stores tensors across multiple steps, and exposes operations that enqueue and dequeue tensors.

Each queue element is a tuple of one or more tensors, where each tuple component has a static dtype, and may have a static shape.

The queue implementations support versions of enqueue and dequeue that handle single elements, and versions that enqueue and dequeue a batch of elements at once.

See [`tf.queue.FIFOQueue`](fifoqueue) and [`tf.queue.RandomShuffleQueue`](randomshufflequeue) for concrete implementations of this class, and instructions on how to create them.

| Args | | `dtypes` | A list of types. The length of dtypes must equal the number of tensors in each element. | | `shapes` | Constraints on the shapes of tensors in an element: a list of shape tuples or None. This list is the same length as dtypes. If the shape of any tensor in the element is constrained, all must be; shapes can be None if the shapes should not be constrained. | | `names` | Optional list of names. If provided, the `enqueue()` and `dequeue()` methods will use dictionaries with these names as keys. Must be None or a list or tuple of the same length as `dtypes`. | | `queue_ref` | The queue reference, i.e. the output of the queue op. |

| Raises | | `ValueError` | If one of the arguments is invalid. |

| Attributes | | `dtypes` | The list of dtypes for each component of a queue element. | | `name` | The name of the underlying queue. | | `names` | The list of names for each component of a queue element. | | `queue_ref` | The underlying queue reference. | | `shapes` | The list of shapes for each component of a queue element. |

Methods
-------

### `close`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L545-L578)

```
close(
    cancel_pending_enqueues=False, name=None
)
```

Closes this queue.

This operation signals that no more elements will be enqueued in the given queue. Subsequent `enqueue` and `enqueue_many` operations will fail. Subsequent `dequeue` and `dequeue_many` operations will continue to succeed if sufficient elements remain in the queue. Subsequent `dequeue` and `dequeue_many` operations that would otherwise block waiting for more elements (if close hadn't been called) will now fail immediately.

If `cancel_pending_enqueues` is `True`, all pending requests will also be canceled.

| Args | | `cancel_pending_enqueues` | (Optional.) A boolean, defaulting to `False` (described above). | | `name` | A name for the operation (optional). |

| Returns | | The operation that closes the queue. |
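A short sketch of the close semantics described above, assuming eager execution and a `tf.queue.FIFOQueue` as the concrete implementation:

```
import tensorflow as tf

q = tf.queue.FIFOQueue(capacity=4, dtypes=[tf.float32])
q.enqueue_many(([1.0, 2.0],))
q.close()
print(q.is_closed().numpy())  # True
print(q.dequeue().numpy())    # 1.0 -- dequeues still succeed while
print(q.dequeue().numpy())    # 2.0    elements remain
# A further q.enqueue(3.0) would raise tf.errors.CancelledError, and a
# further q.dequeue() would raise tf.errors.OutOfRangeError.
```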
### `dequeue`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L423-L459)

```
dequeue(
    name=None
)
```

Dequeues one element from this queue.

If the queue is empty when this operation executes, it will block until there is an element to dequeue.

At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed, the queue is empty, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised.

| Args | | `name` | A name for the operation (optional). |

| Returns | | The tuple of tensors that was dequeued. |

### `dequeue_many`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L461-L502)

```
dequeue_many(
    n, name=None
)
```

Dequeues and concatenates `n` elements from this queue.

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size `n` in the 0th dimension.

If the queue is closed and there are fewer than `n` elements left, then an `OutOfRange` exception is raised.

At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed, the queue contains fewer than `n` elements, and there are no pending enqueue operations that can fulfill this request, [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) will be raised. If the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised.

| Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). |

| Returns | | The list of concatenated tensors that was dequeued. |

### `dequeue_up_to`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L504-L543)

```
dequeue_up_to(
    n, name=None
)
```

Dequeues and concatenates `n` elements from this queue.

> **Note:** This operation is not supported by all queues. If a queue does not support DequeueUpTo, then a [`tf.errors.UnimplementedError`](../errors/unimplementederror) is raised.

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. If the queue has not been closed, all of the components in the dequeued tuple will have size `n` in the 0th dimension.

If the queue is closed and there are more than `0` but fewer than `n` elements remaining, then instead of raising a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) like `tf.QueueBase.dequeue_many`, fewer than `n` elements are returned immediately. If the queue is closed and there are `0` elements left in the queue, then a [`tf.errors.OutOfRangeError`](../errors/outofrangeerror) is raised just like in `dequeue_many`. Otherwise the behavior is identical to `dequeue_many`.

| Args | | `n` | A scalar `Tensor` containing the number of elements to dequeue. | | `name` | A name for the operation (optional). |

| Returns | | The tuple of concatenated tensors that was dequeued. |

### `enqueue`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L313-L350)

```
enqueue(
    vals, name=None
)
```

Enqueues one element to this queue.

If the queue is full when this operation executes, it will block until the element has been enqueued.

At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised.

| Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary containing the values to enqueue. | | `name` | A name for the operation (optional). |

| Returns | | The operation that enqueues a new tuple of tensors to the queue. |
### `enqueue_many`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L352-L398)

```
enqueue_many(
    vals, name=None
)
```

Enqueues zero or more elements to this queue.

This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tensors in `vals` must have the same size in the 0th dimension.

If the queue is full when this operation executes, it will block until all of the elements have been enqueued.

At runtime, this operation may raise an error if the queue is closed (via `tf.QueueBase.close`) before or during its execution. If the queue is closed before this operation runs, [`tf.errors.CancelledError`](../errors/cancellederror) will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is closed (via `tf.Session.close`), [`tf.errors.CancelledError`](../errors/cancellederror) will be raised.

| Args | | `vals` | A tensor, a list or tuple of tensors, or a dictionary from which the queue elements are taken. | | `name` | A name for the operation (optional). |

| Returns | | The operation that enqueues a batch of tuples of tensors to the queue. |

### `from_list`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L186-L225)

```
@staticmethod
from_list(
    index, queues
)
```

Create a queue using the queue reference from `queues[index]`.

| Args | | `index` | An integer scalar tensor that determines the input that gets selected. | | `queues` | A list of `QueueBase` objects. |

| Returns | | A `QueueBase` object. |

| Raises | | `TypeError` | When `queues` is not a list of `QueueBase` objects, or when the data types of `queues` are not all the same. |

### `is_closed`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L580-L597)

```
is_closed(
    name=None
)
```

Returns true if the queue is closed.

This operation returns true if the queue is closed and false if the queue is open.

| Args | | `name` | A name for the operation (optional). |

| Returns | | True if the queue is closed and false if the queue is open. |

### `size`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/data_flow_ops.py#L599-L613)

```
size(
    name=None
)
```

Computes the number of elements in this queue.

| Args | | `name` | A name for the operation (optional). |

| Returns | | A scalar tensor containing the number of elements in this queue. |
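Since the `names` argument changes the calling convention of `enqueue` and `dequeue` to dictionaries, a brief sketch may help (eager execution assumed; the component names here are arbitrary):

```
import tensorflow as tf

q = tf.queue.FIFOQueue(
    capacity=3, dtypes=[tf.int32, tf.float32], names=['id', 'score'])
# With names set, enqueue takes a dictionary and dequeue returns one.
q.enqueue({'id': 7, 'score': 0.5})
element = q.dequeue()
print(element['id'].numpy(), element['score'].numpy())  # 7 0.5
```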
tensorflow Module: tf.profiler.experimental

Module: tf.profiler.experimental
================================

Public API for tf.profiler.experimental namespace.

Modules
-------

[`client`](experimental/client) module: Public API for tf.profiler.experimental.client namespace.

[`server`](experimental/server) module: Public API for tf.profiler.experimental.server namespace.

Classes
-------

[`class Profile`](experimental/profile): Context-manager profile API.

[`class ProfilerOptions`](experimental/profileroptions): Options for finer control over the profiler.

[`class Trace`](experimental/trace): Context manager that generates a trace event in the profiler.

Functions
---------

[`start(...)`](experimental/start): Start profiling TensorFlow performance.

[`stop(...)`](experimental/stop): Stops the current profiling session.

tensorflow tf.profiler.experimental.Profile

tf.profiler.experimental.Profile
================================

Context-manager profile API.

```
tf.profiler.experimental.Profile(
    logdir, options=None
)
```

Profiling will start when entering the scope, and will stop and save the results to the logdir when exiting the scope. Open the TensorBoard profile tab to view the results.

#### Example usage:

```
with tf.profiler.experimental.Profile("/path/to/logdir"):
    pass  # do some work
```

| Args | | `logdir` | Profile data will be saved to this directory. | | `options` | An optional [`tf.profiler.experimental.ProfilerOptions`](profileroptions) can be provided to fine-tune the profiler's behavior. |

Methods
-------

### `__enter__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/profiler/profiler_v2.py#L209-L210)

```
__enter__()
```

### `__exit__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/profiler/profiler_v2.py#L212-L213)

```
__exit__(
    typ, value, tb
)
```

tensorflow tf.profiler.experimental.ProfilerOptions

tf.profiler.experimental.ProfilerOptions
========================================

Options for finer control over the profiler.

```
tf.profiler.experimental.ProfilerOptions(
    host_tracer_level=2, python_tracer_level=0, device_tracer_level=1,
    delay_ms=None
)
```

Use [`tf.profiler.experimental.ProfilerOptions`](profileroptions) to control [`tf.profiler`](../../profiler) behavior.

#### Fields:

* **`host_tracer_level`**: Adjust CPU tracing level. Values are: 1 - critical info only, 2 - info, 3 - verbose. [default value is 2]
* **`python_tracer_level`**: Toggle tracing of Python function calls. Values are: 1 - enabled, 0 - disabled. [default value is 0]
* **`device_tracer_level`**: Adjust device (TPU/GPU) tracing level. Values are: 1 - enabled, 0 - disabled. [default value is 1]
* **`delay_ms`**: Requests that all hosts start profiling at a timestamp that is `delay_ms` away from the current time. `delay_ms` is in milliseconds. If zero, each host will start profiling immediately upon receiving the request. Default value is None, allowing the profiler to guess the best value.

| Attributes | | `host_tracer_level` | A `namedtuple` alias for field number 0 | | `python_tracer_level` | A `namedtuple` alias for field number 1 | | `device_tracer_level` | A `namedtuple` alias for field number 2 | | `delay_ms` | A `namedtuple` alias for field number 3 |

tensorflow tf.profiler.experimental.stop

tf.profiler.experimental.stop
=============================

Stops the current profiling session.

```
tf.profiler.experimental.stop(
    save=True
)
```

The profiler session will be stopped and profile results can be saved.
| Args | | `save` | An optional boolean specifying whether to save the results to TensorBoard. Defaults to `True`. |

| Raises | | `UnavailableError` | If there is no active profiling session. |

tensorflow Module: tf.profiler.experimental.client

Module: tf.profiler.experimental.client
=======================================

Public API for tf.profiler.experimental.client namespace.

Functions
---------

[`monitor(...)`](client/monitor): Sends gRPC requests to the profiler server to perform on-demand monitoring.

[`trace(...)`](client/trace): Sends gRPC requests to one or more profiler servers to perform on-demand profiling.

tensorflow tf.profiler.experimental.start

tf.profiler.experimental.start
==============================

Start profiling TensorFlow performance.

```
tf.profiler.experimental.start(
    logdir, options=None
)
```

| Args | | `logdir` | Profiling results log directory. | | `options` | `ProfilerOptions` namedtuple to specify miscellaneous profiler options. See example usage below. |

| Raises | | `AlreadyExistsError` | If a profiling session is already running. |

#### Example usage:

```
options = tf.profiler.experimental.ProfilerOptions(host_tracer_level = 3,
                                                   python_tracer_level = 1,
                                                   device_tracer_level = 1)
tf.profiler.experimental.start('logdir_path', options = options)
# Training code here
tf.profiler.experimental.stop()
```

To view the profiling results, launch TensorBoard and point it to `logdir`. Open your browser and go to `localhost:6006/#profile` to view profiling results.

tensorflow Module: tf.profiler.experimental.server

Module: tf.profiler.experimental.server
=======================================

Public API for tf.profiler.experimental.server namespace.

Functions
---------

[`start(...)`](server/start): Start a profiler gRPC server that listens on the given port.

tensorflow tf.profiler.experimental.Trace

tf.profiler.experimental.Trace
==============================

Context manager that generates a trace event in the profiler.

```
tf.profiler.experimental.Trace(
    name, **kwargs
)
```

A trace event will start when entering the context, and stop and save the result to the profiler when exiting the context. Open the TensorBoard Profile tab and choose the trace viewer to view the trace event in the timeline.

Trace events are created only when the profiler is enabled. More information on how to use the profiler can be found at <https://tensorflow.org/guide/profiler>

#### Example usage:

```
tf.profiler.experimental.start('logdir')
for step in range(num_steps):
  # Creates a trace event for each training step with the step number.
  with tf.profiler.experimental.Trace("Train", step_num=step, _r=1):
    train_fn()
tf.profiler.experimental.stop()
```

| Args | | `name` | The name of the trace event. | | `**kwargs` | Keyword arguments added to the trace event. Both the key and value are of types that can be converted to strings, which will be interpreted by the profiler according to the traceme name. Example usage: ``` tf.profiler.experimental.start('logdir') for step in range(num_steps): # Creates a trace event for each training step with the # step number. with tf.profiler.experimental.Trace("Train", step_num=step): train_fn() tf.profiler.experimental.stop() ``` The example above uses the keyword argument "step\_num" to specify the training step being traced. |

Methods
-------

### `set_metadata`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/profiler/trace.py#L87-L119)

```
set_metadata(
    **kwargs
)
```

Sets metadata in this trace event.

| Args | | `**kwargs` | metadata in key-value pairs. |
This method enables setting metadata in a trace event after it is created.

#### Example usage:

```
def call(function):
  with tf.profiler.experimental.Trace("call",
                                      function_name=function.name) as tm:
    binary, in_cache = jit_compile(function)
    tm.set_metadata(in_cache=in_cache)
    execute(binary)
```

In this example, we want to trace how much time is spent calling a function, which includes compilation and execution. The compilation can be either getting a cached copy of the binary or actually generating the binary, which is indicated by the boolean `in_cache` returned by `jit_compile()`. We need to use `set_metadata()` to pass `in_cache` because we did not know the `in_cache` value when the trace was created (and we cannot create the trace after `jit_compile()`, because we want to measure the entire duration of `call()`).

### `__enter__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/profiler/trace.py#L83-L85)

```
__enter__()
```

### `__exit__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/profiler/trace.py#L121-L123)

```
__exit__(
    exc_type, exc_val, exc_tb
)
```

tensorflow tf.profiler.experimental.server.start

tf.profiler.experimental.server.start
=====================================

Start a profiler gRPC server that listens on the given port.

```
tf.profiler.experimental.server.start(
    port
)
```

The profiler server will exit when the process finishes. The service is defined in tensorflow/core/profiler/profiler_service.proto.

| Args | | `port` | The port the profiler server listens on. |

Example usage:

```python
tf.profiler.experimental.server.start(6009)
# do your training here.
```

tensorflow tf.profiler.experimental.client.monitor

tf.profiler.experimental.client.monitor
=======================================

Sends gRPC requests to the profiler server to perform on-demand monitoring.

```
tf.profiler.experimental.client.monitor(
    service_addr, duration_ms, level=1
)
```

The monitoring result is a lightweight performance summary of your model execution. This method will block the caller thread until it receives the monitoring result. This method currently supports Cloud TPU only.

| Args | | `service_addr` | gRPC address of profiler service e.g. grpc://10.0.0.2:8466. | | `duration_ms` | Duration of monitoring in ms. | | `level` | Choose a monitoring level between 1 and 2 to monitor your job. Level 2 is more verbose than level 1 and shows more metrics. |

| Returns | | A string of monitoring output. |

#### Example usage:

```
# Continuously send gRPC requests to the Cloud TPU to monitor the model
# execution.

for query in range(0, 100):
  print(
      tf.profiler.experimental.client.monitor('grpc://10.0.0.2:8466', 1000))
```
tensorflow tf.profiler.experimental.client.trace

tf.profiler.experimental.client.trace
=====================================

Sends gRPC requests to one or more profiler servers to perform on-demand profiling.

```
tf.profiler.experimental.client.trace(
    service_addr, logdir, duration_ms, worker_list='', num_tracing_attempts=3,
    options=None
)
```

This method will block the calling thread until it receives responses from all servers or until deadline expiration. Both single host and multiple host profiling are supported on CPU, GPU, and TPU. The profiled results will be saved by each server to the specified TensorBoard log directory (i.e., the directory where you save your model checkpoints). Use the TensorBoard profile plugin to view the visualization and analysis results.

| Args | | `service_addr` | A comma-delimited string of gRPC addresses of the workers to profile. e.g. service\_addr='grpc://localhost:6009' service\_addr='grpc://10.0.0.2:8466,grpc://10.0.0.3:8466' service\_addr='grpc://localhost:12345,grpc://localhost:23456' | | `logdir` | Path to save profile data to, typically a TensorBoard log directory. This path must be accessible to both the client and server. e.g. logdir='gs://your\_tb\_dir' | | `duration_ms` | Duration of tracing or monitoring in milliseconds. Must be greater than zero. | | `worker_list` | An optional TPU-only configuration. The list of workers to profile in the current session. | | `num_tracing_attempts` | Optional. Automatically retry N times when no trace event is collected (default 3). | | `options` | profiler.experimental.ProfilerOptions namedtuple for miscellaneous profiler options. |

| Raises | | `InvalidArgumentError` | For when arguments fail validation checks. | | `UnavailableError` | If no trace event was collected. |

Example usage (CPU/GPU):

```
# Start a profiler server before your model runs.
tf.profiler.experimental.server.start(6009)
# (Model code goes here).
# Send gRPC request to the profiler server to collect a trace of your model.
tf.profiler.experimental.client.trace('grpc://localhost:6009',
                                      '/nfs/tb_log', 2000)
```

Example usage (Multiple GPUs):

```
# E.g. your worker IP addresses are 10.0.0.2, 10.0.0.3, 10.0.0.4, and you
# would like to schedule start of profiling 1 second from now, for a
# duration of 2 seconds.
options = tf.profiler.experimental.ProfilerOptions(delay_ms=1000)
tf.profiler.experimental.client.trace(
    'grpc://10.0.0.2:8466,grpc://10.0.0.3:8466,grpc://10.0.0.4:8466',
    'gs://your_tb_dir',
    2000,
    options=options)
```

Example usage (TPU):

```
# Send gRPC request to a TPU worker to collect a trace of your model. A
# profiler service has been started in the TPU worker at port 8466.
# E.g. your TPU IP address is 10.0.0.2 and you want to profile for 2 seconds.
tf.profiler.experimental.client.trace('grpc://10.0.0.2:8466',
                                      'gs://your_tb_dir', 2000)
```

Example usage (Multiple TPUs):

```
# Send gRPC request to a TPU pod to collect a trace of your model on
# multiple TPUs. A profiler service has been started in all the TPU workers
# at the port 8466.
# E.g. your TPU IP addresses are 10.0.0.2, 10.0.0.3, 10.0.0.4, and you want
# to profile for 2 seconds.
tf.profiler.experimental.client.trace(
    'grpc://10.0.0.2:8466',
    'gs://your_tb_dir',
    2000,
    '10.0.0.2:8466,10.0.0.3:8466,10.0.0.4:8466')
```

Launch TensorBoard and point it to the same logdir you provided to this API.

```
# logdir can be gs://your_tb_dir as in the above examples.
$ tensorboard --logdir=/tmp/tb_log
```

Open your browser and go to localhost:6006/#profile to view profiling results.

tensorflow tf.sparse.to_indicator

tf.sparse.to\_indicator
=======================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L1720-L1782)

| Converts a `SparseTensor` of ids into a dense bool indicator tensor.

#### View aliases

**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.sparse.to_indicator`](https://www.tensorflow.org/api_docs/python/tf/sparse/to_indicator), [`tf.compat.v1.sparse_to_indicator`](https://www.tensorflow.org/api_docs/python/tf/sparse/to_indicator)

```
tf.sparse.to_indicator(
    sp_input, vocab_size, name=None
)
```

The last dimension of `sp_input.indices` is discarded and replaced with the values of `sp_input`.
If `sp_input.dense_shape = [D0, D1, ..., Dn, K]`, then `output.shape = [D0, D1, ..., Dn, vocab_size]`, where ``` output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True ``` and False elsewhere in `output`. For example, if `sp_input.dense_shape = [2, 3, 4]` with non-empty values: ``` [0, 0, 0]: 0 [0, 1, 0]: 10 [1, 0, 3]: 103 [1, 1, 1]: 150 [1, 1, 2]: 149 [1, 1, 3]: 150 [1, 2, 1]: 121 ``` and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool tensor with False everywhere except at positions ``` (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150), (1, 2, 121). ``` Note that repeats are allowed in the input SparseTensor. This op is useful for converting `SparseTensor`s into dense formats for compatibility with ops that expect dense tensors. The input `SparseTensor` must be in row-major order. | Args | | `sp_input` | A `SparseTensor` with `values` property of type `int32` or `int64`. | | `vocab_size` | A scalar int64 Tensor (or Python int) containing the new size of the last dimension, `all(0 <= sp_input.values < vocab_size)`. | | `name` | A name prefix for the returned tensors (optional) | | Returns | | A dense bool indicator tensor representing the indices with specified value. | | Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. | tensorflow tf.sparse.maximum tf.sparse.maximum ================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2709-L2752) | Returns the element-wise max of two SparseTensors. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.maximum`](https://www.tensorflow.org/api_docs/python/tf/sparse/maximum), [`tf.compat.v1.sparse_maximum`](https://www.tensorflow.org/api_docs/python/tf/sparse/maximum) ``` tf.sparse.maximum( sp_a, sp_b, name=None ) ``` Assumes the two SparseTensors have the same shape, i.e., no broadcasting. #### Example: ``` sp_zero = tf.sparse.SparseTensor([[0]], [0], [7]) sp_one = tf.sparse.SparseTensor([[1]], [1], [7]) res = tf.sparse.maximum(sp_zero, sp_one) res.indices <tf.Tensor: shape=(2, 1), dtype=int64, numpy= array([[0], [1]])> res.values <tf.Tensor: shape=(2,), dtype=int32, numpy=array([0, 1], dtype=int32)> res.dense_shape <tf.Tensor: shape=(1,), dtype=int64, numpy=array([7])> ``` The reduction version of this elementwise operation is [`tf.sparse.reduce_max`](reduce_max) | Args | | `sp_a` | a `SparseTensor` operand whose dtype is real, and indices lexicographically ordered. | | `sp_b` | the other `SparseTensor` operand with the same requirements (and the same shape). | | `name` | optional name of the operation. | | Returns | | `output` | the output SparseTensor. | tensorflow tf.sparse.cross tf.sparse.cross =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L599-L651) | Generates sparse cross from a list of sparse and dense tensors. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.sparse.cross`](https://www.tensorflow.org/api_docs/python/tf/sparse/cross)

```
tf.sparse.cross(
    inputs, name=None, separator=None
)
```

For example, if the inputs are

```
* inputs[0]: SparseTensor with shape = [2, 2]
  [0, 0]: "a"
  [1, 0]: "b"
  [1, 1]: "c"
* inputs[1]: SparseTensor with shape = [2, 1]
  [0, 0]: "d"
  [1, 0]: "e"
* inputs[2]: Tensor [["f"], ["g"]]
```

then the output will be:

```
shape = [2, 2]
[0, 0]: "a_X_d_X_f"
[1, 0]: "b_X_e_X_g"
[1, 1]: "c_X_e_X_g"
```

Customized separator `"_Y_"`:

```
inp_0 = tf.constant([['a'], ['b']])
inp_1 = tf.constant([['c'], ['d']])
output = tf.sparse.cross([inp_0, inp_1], separator='_Y_')
output.values
<tf.Tensor: shape=(2,), dtype=string, numpy=array([b'a_Y_c', b'b_Y_d'],
dtype=object)>
```

| Args | | `inputs` | An iterable of `Tensor` or `SparseTensor`. | | `name` | Optional name for the op. | | `separator` | A string added between each string being joined. Defaults to `'_X_'`. |

| Returns | | A `SparseTensor` of type `string`. |

tensorflow tf.sparse.reorder

tf.sparse.reorder
=================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L810-L858)

| Reorders a `SparseTensor` into the canonical, row-major ordering.

#### View aliases

**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.sparse.reorder`](https://www.tensorflow.org/api_docs/python/tf/sparse/reorder), [`tf.compat.v1.sparse_reorder`](https://www.tensorflow.org/api_docs/python/tf/sparse/reorder)

```
tf.sparse.reorder(
    sp_input, name=None
)
```

Note that by convention, all sparse ops preserve the canonical ordering along increasing dimension number. The only time ordering can be violated is during manual manipulation of the indices and values to add entries. Reordering does not affect the shape of the `SparseTensor`.

For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:

```
[0, 3]: b
[0, 1]: a
[3, 1]: d
[2, 0]: c
```

then the output will be a `SparseTensor` of shape `[4, 5]` and `indices` / `values`:

```
[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d
```

| Args | | `sp_input` | The input `SparseTensor`. | | `name` | A name prefix for the returned tensors (optional) |

| Returns | | A `SparseTensor` with the same shape and non-empty values, but in canonical ordering. |

| Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. |
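A runnable version of the example above, assuming eager execution:

```
import tensorflow as tf

sp = tf.sparse.SparseTensor(
    indices=[[0, 3], [0, 1], [3, 1], [2, 0]],
    values=[b'b', b'a', b'd', b'c'],
    dense_shape=[4, 5])
ordered = tf.sparse.reorder(sp)
print(ordered.indices.numpy())  # [[0 1] [0 3] [2 0] [3 1]]
print(ordered.values.numpy())   # [b'a' b'b' b'c' b'd']
```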
tensorflow tf.sparse.reduce_max

tf.sparse.reduce\_max
=====================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L1229-L1321)

| Computes [`tf.sparse.maximum`](maximum) of elements across dimensions of a SparseTensor.

```
tf.sparse.reduce_max(
    sp_input, axis=None, keepdims=None, output_is_sparse=False, name=None
)
```

This is the reduction operation for the elementwise [`tf.sparse.maximum`](maximum) op.

This Op takes a SparseTensor and is the sparse counterpart to [`tf.reduce_max()`](../math/reduce_max). In particular, this Op also returns a dense `Tensor` if `output_is_sparse` is `False`, or a `SparseTensor` if `output_is_sparse` is `True`.

> **Note:** A gradient is not defined for this function, so it can't be used in training models that need gradient descent.

Reduces `sp_input` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, similar to the indexing rules in Python.

The values not defined in `sp_input` don't participate in the reduce max, as opposed to being implicitly assumed to be 0 -- hence this op can return negative values for sparse rows. But, in case there are no values at all in a reduced row, it will reduce to 0. See the second example below.

#### For example:

```
# 'x' represents [[1, ?, 2]
#                 [?, 3, ?]]
# where ? is implicitly-zero.
x = tf.sparse.SparseTensor([[0, 0], [0, 2], [1, 1]], [1, 2, 3], [2, 3])
tf.sparse.reduce_max(x)
<tf.Tensor: shape=(), dtype=int32, numpy=3>
tf.sparse.reduce_max(x, 0)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 3, 2], dtype=int32)>
tf.sparse.reduce_max(x, 1)
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([2, 3], dtype=int32)>
tf.sparse.reduce_max(x, 1, keepdims=True)
<tf.Tensor: shape=(2, 1), dtype=int32, numpy=
array([[2],
       [3]], dtype=int32)>
tf.sparse.reduce_max(x, [0, 1])
<tf.Tensor: shape=(), dtype=int32, numpy=3>
```

```
# 'y' represents [[-7, ?]
#                 [ 4, 3]
#                 [ ?, ?]]
y = tf.sparse.SparseTensor([[0, 0,], [1, 0], [1, 1]], [-7, 4, 3], [3, 2])
tf.sparse.reduce_max(y, 1)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([-7, 4, 0], dtype=int32)>
```

| Args | | `sp_input` | The SparseTensor to reduce. Should have numeric type. | | `axis` | The dimensions to reduce; list or scalar. If `None` (the default), reduces all dimensions. | | `keepdims` | If true, retain reduced dimensions with length 1. | | `output_is_sparse` | If true, returns a `SparseTensor` instead of a dense `Tensor` (the default). | | `name` | A name for the operation (optional). |

| Returns | | The reduced Tensor or the reduced SparseTensor if `output_is_sparse` is True. |

tensorflow tf.sparse.split

tf.sparse.split
===============

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L1051-L1116)

| Split a `SparseTensor` into `num_split` tensors along `axis`.

```
tf.sparse.split(
    sp_input=None, num_split=None, axis=None, name=None
)
```

If `sp_input.dense_shape[axis]` is not an integer multiple of `num_split`, the slices `0:shape[axis] % num_split` each get one extra element along `axis`.
For example: ``` indices = [[0, 2], [0, 4], [0, 5], [1, 0], [1, 1]] values = [1, 2, 3, 4, 5] t = tf.SparseTensor(indices=indices, values=values, dense_shape=[2, 7]) tf.sparse.to_dense(t) <tf.Tensor: shape=(2, 7), dtype=int32, numpy= array([[0, 0, 1, 0, 2, 3, 0], [4, 5, 0, 0, 0, 0, 0]], dtype=int32)> ``` ``` output = tf.sparse.split(sp_input=t, num_split=2, axis=1) tf.sparse.to_dense(output[0]) <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[0, 0, 1, 0], [4, 5, 0, 0]], dtype=int32)> tf.sparse.to_dense(output[1]) <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[2, 3, 0], [0, 0, 0]], dtype=int32)> ``` ``` output = tf.sparse.split(sp_input=t, num_split=2, axis=0) tf.sparse.to_dense(output[0]) <tf.Tensor: shape=(1, 7), dtype=int32, numpy=array([[0, 0, 1, 0, 2, 3, 0]], dtype=int32)> tf.sparse.to_dense(output[1]) <tf.Tensor: shape=(1, 7), dtype=int32, numpy=array([[4, 5, 0, 0, 0, 0, 0]], dtype=int32)> ``` ``` output = tf.sparse.split(sp_input=t, num_split=2, axis=-1) tf.sparse.to_dense(output[0]) <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[0, 0, 1, 0], [4, 5, 0, 0]], dtype=int32)> tf.sparse.to_dense(output[1]) <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[2, 3, 0], [0, 0, 0]], dtype=int32)> ``` | Args | | `sp_input` | The `SparseTensor` to split. | | `num_split` | A Python integer. The number of ways to split. | | `axis` | A 0-D `int32` `Tensor`. The dimension along which to split. Must be in range [-rank, rank), where rank is the number of dimensions in the input `SparseTensor`. | | `name` | A name for the operation (optional). | | Returns | | `num_split` `SparseTensor` objects resulting from splitting `value`. | | Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. | tensorflow tf.sparse.eye tf.sparse.eye ============= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L240-L271) | Creates a two-dimensional sparse tensor with ones along the diagonal. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.eye`](https://www.tensorflow.org/api_docs/python/tf/sparse/eye) ``` tf.sparse.eye( num_rows, num_columns=None, dtype=tf.dtypes.float32, name=None ) ``` | Args | | `num_rows` | Non-negative integer or `int32` scalar `tensor` giving the number of rows in the resulting matrix. | | `num_columns` | Optional non-negative integer or `int32` scalar `tensor` giving the number of columns in the resulting matrix. Defaults to `num_rows`. | | `dtype` | The type of element in the resulting `Tensor`. | | `name` | A name for this `Op`. Defaults to "eye". | | Returns | | A `SparseTensor` of shape [num\_rows, num\_columns] with ones along the diagonal. | tensorflow tf.sparse.segment_sum tf.sparse.segment\_sum ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L4735-L4800) | Computes the sum along sparse segments of a tensor. ``` tf.sparse.segment_sum( data, indices, segment_ids, num_segments=None, name=None ) ``` Read [the section on segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) for an explanation of segments. Like [`tf.math.segment_sum`](../math/segment_sum), but `segment_ids` can have rank less than `data`'s first dimension, selecting a subset of dimension 0, specified by `indices`. 
`segment_ids` is allowed to have missing ids, in which case the output will be zeros at those indices. In those cases `num_segments` is used to determine the size of the output.

#### For example:

```
c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])

# Select two rows, one segment.
tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
# => [[0 0 0 0]]

# Select two rows, two segments.
tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
# => [[ 1  2  3  4]
#     [-1 -2 -3 -4]]

# With missing segment ids.
tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 2]),
                      num_segments=4)
# => [[ 1  2  3  4]
#     [ 0  0  0  0]
#     [-1 -2 -3 -4]
#     [ 0  0  0  0]]

# Select all rows, two segments.
tf.sparse.segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
# => [[0 0 0 0]
#     [5 6 7 8]]

# Which is equivalent to:
tf.math.segment_sum(c, tf.constant([0, 0, 1]))
```

| Args | | `data` | A `Tensor` with data that will be assembled in the output. | | `indices` | A 1-D `Tensor` with indices into `data`. Has same rank as `segment_ids`. | | `segment_ids` | A 1-D `Tensor` with indices into the output `Tensor`. Values should be sorted and can be repeated. | | `num_segments` | An optional int32 scalar. Indicates the size of the output `Tensor`. | | `name` | A name for the operation (optional). |

| Returns | | A `tensor` of the same shape as `data`, except for dimension 0 which has size `k`, the number of segments specified via `num_segments` or inferred from the last element in `segment_ids`. |

tensorflow tf.sparse.add

tf.sparse.add
=============

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L515-L596)

| Adds two tensors, at least one of which is a `SparseTensor`.

```
tf.sparse.add(
    a, b, threshold=0
)
```

If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order of arguments does not matter. Use vanilla [`tf.add()`](../math/add) for adding two dense `Tensor`s.

The shapes of the two operands must match: broadcasting is not supported.

The indices of any input `SparseTensor` are assumed ordered in standard lexicographic order. If this is not the case, before this step run [`tf.sparse.reorder`](reorder) to restore index ordering.

If both arguments are sparse, we perform "clipping" as follows. By default, if two values sum to zero at some index, the output `SparseTensor` would still include that particular location in its index, storing a zero in the corresponding value slot. To override this, callers can specify `threshold`, indicating that if the sum has a magnitude strictly smaller than `threshold`, its corresponding value and index would then not be included. In particular, `threshold == 0.0` (default) means everything is kept and actual thresholding happens only for a positive `threshold`.

For example, suppose the logical sum of two sparse operands is (densified):

```
[       2]
[.1     0]
[ 6   -.2]
```

Then,

* `threshold == 0` (the default): all 5 index/value pairs will be returned.
* `threshold == 0.11`: only .1 and 0 will vanish, and the remaining three index/value pairs will be returned.
* `threshold == 0.21`: .1, 0, and -.2 will vanish.

| Args | | `a` | The first operand; `SparseTensor` or `Tensor`. | | `b` | The second operand; `SparseTensor` or `Tensor`. At least one operand must be sparse. | | `threshold` | A 0-D `Tensor`. The magnitude threshold that determines if an output value/index pair takes space. Its dtype should match that of the values if they are real; if the latter are complex64/complex128, then the dtype should be float32/float64, correspondingly. |

| Returns | | A `SparseTensor` or a `Tensor`, representing the sum. |

| Raises | | `TypeError` | If both `a` and `b` are `Tensor`s. Use [`tf.add()`](../math/add) instead. |
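A small runnable sketch of the `threshold` behavior, assuming eager execution; the values are chosen so that one index sums to exactly zero:

```
import tensorflow as tf

a = tf.sparse.SparseTensor([[0], [1]], [0.1, 2.0], [3])
b = tf.sparse.SparseTensor([[0], [2]], [-0.1, 3.0], [3])
# The logical sum is [0.0, 2.0, 3.0]; index 0 sums to exactly zero.
print(tf.sparse.add(a, b).values.numpy())                  # [0. 2. 3.]
print(tf.sparse.add(a, b, threshold=0.05).values.numpy())  # [2. 3.]
```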
tensorflow tf.sparse.expand_dims

tf.sparse.expand\_dims
======================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L134-L237)

| Returns a tensor with a length-1 axis inserted at index `axis`.

#### View aliases

**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.sparse.expand_dims`](https://www.tensorflow.org/api_docs/python/tf/sparse/expand_dims)

```
tf.sparse.expand_dims(
    sp_input, axis=None, name=None
)
```

Given a tensor `input`, this operation inserts a dimension of length 1 at the dimension index `axis` of `input`'s shape. The dimension index follows python indexing rules: it's zero-based, and a negative index is counted backward from the end.

This operation is useful to:

* Add an outer "batch" dimension to a single element.
* Align axes for broadcasting.
* Add an inner vector length axis to a tensor of scalars.

#### For example:

If you have a sparse tensor with shape `[height, width, depth]`:

```
sp = tf.sparse.SparseTensor(indices=[[3,4,1]], values=[7,],
                            dense_shape=[10,10,3])
```

You can add an outer `batch` axis by passing `axis=0`:

```
tf.sparse.expand_dims(sp, axis=0).shape.as_list()
[1, 10, 10, 3]
```

The new axis location matches Python `list.insert(axis, 1)`:

```
tf.sparse.expand_dims(sp, axis=1).shape.as_list()
[10, 1, 10, 3]
```

Following standard python indexing rules, a negative `axis` counts from the end so `axis=-1` adds an innermost dimension:

```
tf.sparse.expand_dims(sp, axis=-1).shape.as_list()
[10, 10, 3, 1]
```

> **Note:** Unlike [`tf.expand_dims`](../expand_dims) this function includes a default value for the `axis`: `-1`. So if `axis` is not specified, an innermost dimension is added.

```
sp.shape.as_list()
[10, 10, 3]
tf.sparse.expand_dims(sp).shape.as_list()
[10, 10, 3, 1]
```

This operation requires that `axis` is a valid index for `input.shape`, following python indexing rules:

```
-1-tf.rank(input) <= axis <= tf.rank(input)
```

This operation is related to:

* [`tf.expand_dims`](../expand_dims), which provides this functionality for dense tensors.
* [`tf.squeeze`](../squeeze), which removes dimensions of size 1, from dense tensors.
* [`tf.sparse.reshape`](reshape), which provides more flexible reshaping capability.

| Args | | `sp_input` | A `SparseTensor`. | | `axis` | 0-D (scalar). Specifies the dimension index at which to expand the shape of `input`. Must be in the range `[-rank(sp_input) - 1, rank(sp_input)]`. Defaults to `-1`. | | `name` | The name of the output `SparseTensor`. |

| Returns | | A `SparseTensor` with the same data as `sp_input`, but its shape has an additional dimension of size 1 added. |

tensorflow tf.sparse.fill_empty_rows

tf.sparse.fill\_empty\_rows
===========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2091-L2155)

| Fills empty rows in the input 2-D `SparseTensor` with a default value.

#### View aliases

**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.sparse.fill_empty_rows`](https://www.tensorflow.org/api_docs/python/tf/sparse/fill_empty_rows), [`tf.compat.v1.sparse_fill_empty_rows`](https://www.tensorflow.org/api_docs/python/tf/sparse/fill_empty_rows)

```
tf.sparse.fill_empty_rows(
    sp_input, default_value, name=None
)
```

This op adds entries with the specified `default_value` at index `[row, 0]` for any row in the input that does not already have a value.

For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:

```
[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d
```

Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:

```
[0, 1]: a
[0, 3]: b
[1, 0]: default_value
[2, 0]: c
[3, 1]: d
[4, 0]: default_value
```

Note that the input may have empty columns at the end, with no effect on this op.

The output `SparseTensor` will be in row-major order and will have the same shape as the input.

This op also returns an indicator vector such that

```
empty_row_indicator[i] = True iff row i was an empty row.
```

| Args | | `sp_input` | A `SparseTensor` with shape `[N, M]`. | | `default_value` | The value to fill for empty rows, with the same type as `sp_input`. | | `name` | A name prefix for the returned tensors (optional) |

| Returns | | `sp_ordered_output` | A `SparseTensor` with shape `[N, M]`, and with all empty rows filled in with `default_value`. | | `empty_row_indicator` | A bool vector of length `N` indicating whether each input row was empty. |

| Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. |
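A runnable version of the example above, assuming eager execution and string values:

```
import tensorflow as tf

sp = tf.sparse.SparseTensor(
    indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
    values=[b'a', b'b', b'c', b'd'],
    dense_shape=[5, 6])
filled, empty_rows = tf.sparse.fill_empty_rows(sp, b'z')
print(empty_rows.numpy())  # [False  True False False  True]
print(filled.indices.numpy().tolist())
# [[0, 1], [0, 3], [1, 0], [2, 0], [3, 1], [4, 0]]
```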
tensorflow tf.sparse.mask

tf.sparse.mask
==============

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/array_ops.py#L1981-L2022)

| Masks elements of `IndexedSlices`.

#### View aliases

**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.sparse.mask`](https://www.tensorflow.org/api_docs/python/tf/sparse/mask), [`tf.compat.v1.sparse_mask`](https://www.tensorflow.org/api_docs/python/tf/sparse/mask)

```
tf.sparse.mask(
    a, mask_indices, name=None
)
```

Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that contains a subset of the slices of `a`. Only the slices at indices not specified in `mask_indices` are returned.

This is useful when you need to extract a subset of slices in an `IndexedSlices` object.

#### For example:

```
# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
# with shape [1000, 10]
a.indices
# [12, 26, 37, 45]
tf.shape(a.values)
# [4, 10]

# `b` will be the subset of `a` slices at its second and third indices, so
# we want to mask its first and last indices (which are at absolute
# indices 12, 45)
b = tf.sparse.mask(a, [12, 45])
b.indices
# [26, 37]
tf.shape(b.values)
# [2, 10]
```

| Args | | `a` | An `IndexedSlices` instance. | | `mask_indices` | Indices of elements to mask. | | `name` | A name for the operation (optional). |

| Returns | | The masked `IndexedSlices` instance. |

tensorflow tf.sparse.softmax

tf.sparse.softmax
=================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2653-L2706)

| Applies softmax to a batched N-D `SparseTensor`.

#### View aliases

**Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.sparse.softmax`](https://www.tensorflow.org/api_docs/python/tf/sparse/softmax), [`tf.compat.v1.sparse_softmax`](https://www.tensorflow.org/api_docs/python/tf/sparse/softmax)

```
tf.sparse.softmax(
    sp_input, name=None
)
```

The inputs represent an N-D SparseTensor with logical shape `[..., B, C]` (where `N >= 2`), and with indices sorted in the canonical lexicographic order.

This op is equivalent to applying the normal [`tf.nn.softmax()`](../nn/softmax) to each innermost logical submatrix with shape `[B, C]`, but with the catch that *the implicitly zero elements do not participate*. Specifically, the algorithm is equivalent to: (1) Applies [`tf.nn.softmax()`](../nn/softmax) to a densified view of each innermost submatrix with shape `[B, C]`, along the size-C dimension; (2) Masks out the original implicitly-zero locations; (3) Renormalizes the remaining elements.

Hence, the `SparseTensor` result has exactly the same non-zero indices and shape.

#### Example:

```
# First batch:
# [?   e.]
# [1.  ? ]
# Second batch:
# [e   ? ]
# [e   e ]
shape = [2, 2, 2]  # 3-D SparseTensor
values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])
indices = np.vstack(np.where(values)).astype(np.int64).T

result = tf.sparse.softmax(tf.sparse.SparseTensor(indices, values, shape))
# ...returning a 3-D SparseTensor, equivalent to:
# [?   1.]     [1    ?]
# [1.  ? ] and [.5  .5]
# where ? means implicitly zero.
```

| Args | | `sp_input` | N-D `SparseTensor`, where `N >= 2`. | | `name` | optional name of the operation. |

| Returns | | `output` | N-D `SparseTensor` representing the results. |
[`tf.compat.v1.sparse.softmax`](https://www.tensorflow.org/api_docs/python/tf/sparse/softmax), [`tf.compat.v1.sparse_softmax`](https://www.tensorflow.org/api_docs/python/tf/sparse/softmax) ``` tf.sparse.softmax( sp_input, name=None ) ``` The inputs represent an N-D SparseTensor with logical shape `[..., B, C]` (where `N >= 2`), and with indices sorted in the canonical lexicographic order. This op is equivalent to applying the normal [`tf.nn.softmax()`](../nn/softmax) to each innermost logical submatrix with shape `[B, C]`, but with the catch that *the implicitly zero elements do not participate*. Specifically, the algorithm is equivalent to: (1) Applies [`tf.nn.softmax()`](../nn/softmax) to a densified view of each innermost submatrix with shape `[B, C]`, along the size-C dimension; (2) Masks out the original implicitly-zero locations; (3) Renormalizes the remaining elements. Hence, the `SparseTensor` result has exactly the same non-zero indices and shape. #### Example: ``` # First batch: # [? e.] # [1. ? ] # Second batch: # [e ? ] # [e e ] shape = [2, 2, 2] # 3-D SparseTensor values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]]) indices = np.vstack(np.where(values)).astype(np.int64).T result = tf.sparse.softmax(tf.sparse.SparseTensor(indices, values, shape)) # ...returning a 3-D SparseTensor, equivalent to: # [? 1.] [1 ?] # [1. ? ] and [.5 .5] # where ? means implicitly zero. ``` | Args | | `sp_input` | N-D `SparseTensor`, where `N >= 2`. | | `name` | optional name of the operation. | | Returns | | `output` | N-D `SparseTensor` representing the results. | tensorflow tf.sparse.sparse_dense_matmul tf.sparse.sparse\_dense\_matmul =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2412-L2650) | Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix (or SparseTensor) "B". #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.matmul`](https://www.tensorflow.org/api_docs/python/tf/sparse/sparse_dense_matmul), [`tf.compat.v1.sparse.sparse_dense_matmul`](https://www.tensorflow.org/api_docs/python/tf/sparse/sparse_dense_matmul), [`tf.compat.v1.sparse_tensor_dense_matmul`](https://www.tensorflow.org/api_docs/python/tf/sparse/sparse_dense_matmul) ``` tf.sparse.sparse_dense_matmul( sp_a, b, adjoint_a=False, adjoint_b=False, name=None ) ``` Please note that one and only one of the inputs MUST be a SparseTensor and the other MUST be a dense matrix. The following input format is recommended (but not required) for optimal performance: * If `adjoint_a == false`: `A` should be sorted in lexicographically increasing order. Use [`sparse.reorder`](reorder) if you're not sure. * If `adjoint_a == true`: `A` should be sorted in order of increasing dimension 1 (i.e., "column major" order instead of "row major" order). | Args | | `sp_a` | SparseTensor (or dense Matrix) A, of rank 2. | | `b` | dense Matrix (or SparseTensor) B, with the same dtype as sp\_a. | | `adjoint_a` | Use the adjoint of A in the matrix multiply. If A is complex, this is transpose(conj(A)). Otherwise it's transpose(A). | | `adjoint_b` | Use the adjoint of B in the matrix multiply. If B is complex, this is transpose(conj(B)). Otherwise it's transpose(B). 
| | `name` | A name prefix for the returned tensors (optional) | | Returns | | A dense matrix (pseudo-code in dense np.matrix notation): `A = A.H if adjoint_a else A` `B = B.H if adjoint_b else B` `return A*B` | #### Notes: Using [`tf.nn.embedding_lookup_sparse`](../nn/embedding_lookup_sparse) for sparse multiplication: It's not obvious but you can consider `embedding_lookup_sparse` as another sparse and dense multiplication. In some situations, you may prefer to use `embedding_lookup_sparse` even though you're not dealing with embeddings. There are two questions to ask in the decision process: Do you need gradients computed as sparse too? Is your sparse data represented as two `SparseTensor`s: ids and values? There is more explanation about data format below. If you answer yes to any of these questions, consider using [`tf.nn.embedding_lookup_sparse`](../nn/embedding_lookup_sparse). The following explains the differences between the expected SparseTensors: For example, if the dense form of your sparse data has shape `[3, 5]` and values: ``` [[ a ] [b c] [ d ]] ``` `SparseTensor` format expected by `sparse_tensor_dense_matmul`: `sp_a` (indices, values): ``` [0, 1]: a [1, 0]: b [1, 4]: c [2, 2]: d ``` `SparseTensor` format expected by `embedding_lookup_sparse`: `sp_ids` `sp_weights` ``` [0, 0]: 1 [0, 0]: a [1, 0]: 0 [1, 0]: b [1, 1]: 4 [1, 1]: c [2, 0]: 2 [2, 0]: d ``` Deciding when to use `sparse_tensor_dense_matmul` vs. `matmul`(a\_is\_sparse=True): There are a number of questions to ask in the decision process, including: * Will the SparseTensor `A` fit in memory if densified? * Is the column count of the product large (>> 1)? * Is the density of `A` larger than approximately 15%? If the answer to several of these questions is yes, consider converting the `SparseTensor` to a dense one and using [`tf.matmul`](../linalg/matmul) with `a_is_sparse=True`. This operation tends to perform well when `A` is more sparse, if the column size of the product is small (e.g. matrix-vector multiplication), or if `sp_a.dense_shape` takes on large values. Below is a rough speed comparison between `sparse_tensor_dense_matmul`, labeled 'sparse', and `matmul`(a\_is\_sparse=True), labeled 'dense'. For purposes of the comparison, the time spent converting from a `SparseTensor` to a dense `Tensor` is not included, so it is overly conservative with respect to the time ratio. 
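To make the call signature concrete before the benchmark, here is a minimal sketch (an illustration added here, not part of the original page; the tensor names are made up) of multiplying a sparse `[3, 4]` matrix by a dense `[4, 2]` matrix:

```
import tensorflow as tf

# A sparse 3x4 matrix "A" with three non-zero entries, with indices
# already in lexicographically increasing order (recommended when
# adjoint_a=False).
sp_a = tf.sparse.SparseTensor(
    indices=[[0, 1], [1, 0], [2, 3]],
    values=[2.0, 3.0, 4.0],
    dense_shape=[3, 4])

# A dense 4x2 matrix "B" with the same dtype as sp_a's values.
b = tf.ones([4, 2], dtype=tf.float32)

# The product is an ordinary dense Tensor of shape [3, 2].
c = tf.sparse.sparse_dense_matmul(sp_a, b)
print(c.shape)  # (3, 2)
```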
#### Benchmark system: CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB GPU: NVidia Tesla k40c #### Compiled with: `-c opt --config=cuda --copt=-mavx` ``` tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks A sparse [m, k] with % nonzero values between 1% and 80% B dense [k, n] % nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense) 0.01 1 True 100 100 0.000221166 0.00010154 0.459112 0.01 1 True 100 1000 0.00033858 0.000109275 0.322745 0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385 0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669 0.01 1 False 100 100 0.000208085 0.000107603 0.51711 0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762 0.01 1 False 1000 100 0.000308222 0.00010345 0.335635 0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124 0.01 10 True 100 100 0.000218522 0.000105537 0.482958 0.01 10 True 100 1000 0.000340882 0.000111641 0.327506 0.01 10 True 1000 100 0.000315472 0.000117376 0.372064 0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128 0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354 0.01 10 False 100 1000 0.000330552 0.000112615 0.340687 0.01 10 False 1000 100 0.000341277 0.000114097 0.334324 0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549 0.01 25 True 100 100 0.000207806 0.000105977 0.509981 0.01 25 True 100 1000 0.000322879 0.00012921 0.400181 0.01 25 True 1000 100 0.00038262 0.00014158 0.370035 0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504 0.01 25 False 100 100 0.000209401 0.000104696 0.499979 0.01 25 False 100 1000 0.000321161 0.000130737 0.407076 0.01 25 False 1000 100 0.000377012 0.000136801 0.362856 0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413 0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833 0.2 1 True 100 1000 0.000348674 0.000147475 0.422959 0.2 1 True 1000 100 0.000336908 0.00010122 0.300439 0.2 1 True 1000 1000 0.001022 0.000203274 0.198898 0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746 0.2 1 False 100 1000 0.000356127 0.000146824 0.41228 0.2 1 False 1000 100 0.000322664 0.000100918 0.312764 0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648 0.2 10 True 100 100 0.000211692 0.000109903 0.519165 0.2 10 True 100 1000 0.000372819 0.000164321 0.440753 0.2 10 True 1000 100 0.000338651 0.000144806 0.427596 0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064 0.2 10 False 100 100 0.000215727 0.000110502 0.512231 0.2 10 False 100 1000 0.000375419 0.0001613 0.429653 0.2 10 False 1000 100 0.000336999 0.000145628 0.432132 0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618 0.2 25 True 100 100 0.000218705 0.000129913 0.594009 0.2 25 True 100 1000 0.000394794 0.00029428 0.745402 0.2 25 True 1000 100 0.000404483 0.0002693 0.665788 0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052 0.2 25 False 100 100 0.000221494 0.0001306 0.589632 0.2 25 False 100 1000 0.000396436 0.000297204 0.74969 0.2 25 False 1000 100 0.000409346 0.000270068 0.659754 0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046 0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836 0.5 1 True 100 1000 0.000415328 0.000223073 0.537101 0.5 1 True 1000 100 0.000358324 0.00011269 0.314492 0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851 0.5 1 False 100 100 0.000224196 0.000101423 0.452386 0.5 1 False 100 1000 0.000400987 0.000223286 0.556841 0.5 1 False 1000 100 0.000368825 0.00011224 0.304318 0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563 0.5 10 True 100 100 0.000222125 0.000112308 0.505608 0.5 10 True 100 1000 0.000461088 0.00032357 0.701753 0.5 10 True 1000 100 0.000394624 0.000225497 0.571422 
0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801 0.5 10 False 100 100 0.000232083 0.000114978 0.495418 0.5 10 False 100 1000 0.000454574 0.000324632 0.714146 0.5 10 False 1000 100 0.000379097 0.000227768 0.600817 0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638 0.5 25 True 100 100 0.00023429 0.000151703 0.647501 0.5 25 True 100 1000 0.000497462 0.000598873 1.20386 0.5 25 True 1000 100 0.000460778 0.000557038 1.20891 0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845 0.5 25 False 100 100 0.000228981 0.000155334 0.678371 0.5 25 False 100 1000 0.000496139 0.000620789 1.25124 0.5 25 False 1000 100 0.00045473 0.000551528 1.21287 0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927 0.8 1 True 100 100 0.000222037 0.000105301 0.47425 0.8 1 True 100 1000 0.000410804 0.000329327 0.801664 0.8 1 True 1000 100 0.000349735 0.000131225 0.375212 0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633 0.8 1 False 100 100 0.000214079 0.000107486 0.502085 0.8 1 False 100 1000 0.000413746 0.000323244 0.781261 0.8 1 False 1000 100 0.000348983 0.000131983 0.378193 0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282 0.8 10 True 100 100 0.000229159 0.00011825 0.516017 0.8 10 True 100 1000 0.000498845 0.000532618 1.0677 0.8 10 True 1000 100 0.000383126 0.00029935 0.781336 0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689 0.8 10 False 100 100 0.000230783 0.000124958 0.541452 0.8 10 False 100 1000 0.000493393 0.000550654 1.11606 0.8 10 False 1000 100 0.000377167 0.000298581 0.791642 0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024 0.8 25 True 100 100 0.000233496 0.000175241 0.75051 0.8 25 True 100 1000 0.00055654 0.00102658 1.84458 0.8 25 True 1000 100 0.000463814 0.000783267 1.68875 0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132 0.8 25 False 100 100 0.000240243 0.000175047 0.728625 0.8 25 False 100 1000 0.000578102 0.00104499 1.80763 0.8 25 False 1000 100 0.000485113 0.000776849 1.60138 0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992 ```
tensorflow tf.sparse.bincount tf.sparse.bincount ================== Count the number of times an integer value appears in a tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.bincount`](https://www.tensorflow.org/api_docs/python/tf/sparse/bincount) ``` tf.sparse.bincount( values, weights=None, axis=0, minlength=None, maxlength=None, binary_output=False, name=None ) ``` This op takes an N-dimensional `Tensor`, `RaggedTensor`, or `SparseTensor`, and returns an N-dimensional int64 SparseTensor where element `[i0...i[axis], j]` contains the number of times the value `j` appears in slice `[i0...i[axis], :]` of the input tensor. Currently, only `axis=0` and `axis=-1` are supported. | Args | | `values` | A Tensor, RaggedTensor, or SparseTensor whose values should be counted. These tensors must have a rank of 2 if `axis=-1`. | | `weights` | If non-None, must be the same shape as `values`. For each value in `values`, the bin will be incremented by the corresponding weight instead of 1. | | `axis` | The axis to slice over. Axes at and below `axis` will be flattened before bin counting. Currently, only `0` and `-1` are supported. If None, all axes will be flattened (identical to passing `0`). | | `minlength` | If given, ensures the output has length at least `minlength`, padding with zeros at the end if necessary. | | `maxlength` | If given, skips values in `values` that are equal to or greater than `maxlength`, ensuring that the output has length at most `maxlength`. | | `binary_output` | If True, this op will output 1 instead of the number of times a token appears (equivalent to one\_hot + reduce\_any instead of one\_hot + reduce\_add). Defaults to False. | | `name` | A name for this op. | | Returns | | A SparseTensor with `output.shape = values.shape[:axis] + [N]`, where `N` is * `maxlength` (if set); * `minlength` (if set, and `minlength > reduce_max(values)`); * `0` (if `values` is empty); * `reduce_max(values) + 1` otherwise. | #### Examples: **Bin-counting every item in individual batches** This example takes an input (which could be a Tensor, RaggedTensor, or SparseTensor) and returns a SparseTensor where the value of (i,j) is the number of times value j appears in batch i. ``` data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64) output = tf.sparse.bincount(data, axis=-1) print(output) SparseTensor(indices=tf.Tensor( [[ 0 10] [ 0 20] [ 0 30] [ 1 11] [ 1 101] [ 1 10001]], shape=(6, 2), dtype=int64), values=tf.Tensor([1 2 1 2 1 1], shape=(6,), dtype=int64), dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64)) ``` **Bin-counting with defined output shape** This example takes an input (which could be a Tensor, RaggedTensor, or SparseTensor) and returns a SparseTensor where the value of (i,j) is the number of times value j appears in batch i. However, all values of j above 'maxlength' are ignored. The dense\_shape of the output sparse tensor is set to 'minlength'. Note that, while the input is identical to the example above, the value '10001' in batch item 2 is dropped, and the dense shape is [2, 500] instead of [2,10002] or [2, 102]. 
``` minlength = maxlength = 500 data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64) output = tf.sparse.bincount( data, axis=-1, minlength=minlength, maxlength=maxlength) print(output) SparseTensor(indices=tf.Tensor( [[ 0 10] [ 0 20] [ 0 30] [ 1 11] [ 1 101]], shape=(5, 2), dtype=int64), values=tf.Tensor([1 2 1 2 1], shape=(5,), dtype=int64), dense_shape=tf.Tensor([ 2 500], shape=(2,), dtype=int64)) ``` **Binary bin-counting** This example takes an input (which could be a Tensor, RaggedTensor, or SparseTensor) and returns a SparseTensor where (i,j) is 1 if the value j appears in batch i at least once and is 0 otherwise. Note that, even though some values (like 20 in batch 1 and 11 in batch 2) appear more than once, the 'values' tensor is all 1s. ``` data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64) output = tf.sparse.bincount(data, binary_output=True, axis=-1) print(output) SparseTensor(indices=tf.Tensor( [[ 0 10] [ 0 20] [ 0 30] [ 1 11] [ 1 101] [ 1 10001]], shape=(6, 2), dtype=int64), values=tf.Tensor([1 1 1 1 1 1], shape=(6,), dtype=int64), dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64)) ``` **Weighted bin-counting** This example takes two inputs - a values tensor and a weights tensor. These tensors must be identically shaped, and have the same row splits or indices in the case of RaggedTensors or SparseTensors. When performing a weighted count, the op will output a SparseTensor where the value of (i, j) is the sum of the values in the weight tensor's batch i in the locations where the values tensor has the value j. In this case, the output dtype is the same as the dtype of the weights tensor. ``` data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64) weights = [[2, 0.25, 15, 0.5], [2, 17, 3, 0.9]] output = tf.sparse.bincount(data, weights=weights, axis=-1) print(output) SparseTensor(indices=tf.Tensor( [[ 0 10] [ 0 20] [ 0 30] [ 1 11] [ 1 101] [ 1 10001]], shape=(6, 2), dtype=int64), values=tf.Tensor([2. 0.75 15. 5. 17. 0.9], shape=(6,), dtype=float32), dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64)) ``` tensorflow tf.sparse.cross_hashed tf.sparse.cross\_hashed ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L657-L701) | Generates hashed sparse cross from a list of sparse and dense tensors. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.cross_hashed`](https://www.tensorflow.org/api_docs/python/tf/sparse/cross_hashed) ``` tf.sparse.cross_hashed( inputs, num_buckets=0, hash_key=None, name=None ) ``` For example, if the inputs are ``` * inputs[0]: SparseTensor with shape = [2, 2] [0, 0]: "a" [1, 0]: "b" [1, 1]: "c" * inputs[1]: SparseTensor with shape = [2, 1] [0, 0]: "d" [1, 0]: "e" * inputs[2]: Tensor [["f"], ["g"]] ``` then the output will be: ``` shape = [2, 2] [0, 0]: FingerprintCat64( Fingerprint64("f"), FingerprintCat64( Fingerprint64("d"), Fingerprint64("a"))) [1, 0]: FingerprintCat64( Fingerprint64("g"), FingerprintCat64( Fingerprint64("e"), Fingerprint64("b"))) [1, 1]: FingerprintCat64( Fingerprint64("g"), FingerprintCat64( Fingerprint64("e"), Fingerprint64("c"))) ``` | Args | | `inputs` | An iterable of `Tensor` or `SparseTensor`. | | `num_buckets` | An `int` that is `>= 0`. output = hashed\_value%num\_buckets if num\_buckets > 0 else hashed\_value. 
| | `hash_key` | Integer hash\_key that will be used by the `FingerprintCat64` function. If not given, will use a default key. | | `name` | Optional name for the op. | | Returns | | A `SparseTensor` of type `int64`. | tensorflow tf.sparse.minimum tf.sparse.minimum ================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2755-L2796) | Returns the element-wise min of two SparseTensors. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.minimum`](https://www.tensorflow.org/api_docs/python/tf/sparse/minimum), [`tf.compat.v1.sparse_minimum`](https://www.tensorflow.org/api_docs/python/tf/sparse/minimum) ``` tf.sparse.minimum( sp_a, sp_b, name=None ) ``` Assumes the two SparseTensors have the same shape, i.e., no broadcasting. #### Example: ``` sp_zero = tf.sparse.SparseTensor([[0]], [0], [7]) sp_one = tf.sparse.SparseTensor([[1]], [1], [7]) res = tf.sparse.minimum(sp_zero, sp_one) res.indices <tf.Tensor: shape=(2, 1), dtype=int64, numpy= array([[0], [1]])> res.values <tf.Tensor: shape=(2,), dtype=int32, numpy=array([0, 0], dtype=int32)> res.dense_shape <tf.Tensor: shape=(1,), dtype=int64, numpy=array([7])> ``` | Args | | `sp_a` | a `SparseTensor` operand whose dtype is real, and indices lexicographically ordered. | | `sp_b` | the other `SparseTensor` operand with the same requirements (and the same shape). | | `name` | optional name of the operation. | | Returns | | `output` | the output SparseTensor. | tensorflow tf.sparse.slice tf.sparse.slice =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L1119-L1166) | Slice a `SparseTensor` based on the `start` and `size`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.slice`](https://www.tensorflow.org/api_docs/python/tf/sparse/slice), [`tf.compat.v1.sparse_slice`](https://www.tensorflow.org/api_docs/python/tf/sparse/slice) ``` tf.sparse.slice( sp_input, start, size, name=None ) ``` For example, if the input is ``` input_tensor = shape = [2, 7] [ a d e ] [b c ] ``` Graphically the output tensors are: ``` sparse.slice([0, 0], [2, 4]) = shape = [2, 4] [ a ] [b c ] sparse.slice([0, 4], [2, 3]) = shape = [2, 3] [ d e ] [ ] ``` | Args | | `sp_input` | The `SparseTensor` to split. | | `start` | 1-D tensor representing the start of the slice. | | `size` | 1-D tensor representing the size of the slice. | | `name` | A name for the operation (optional). | | Returns | | A `SparseTensor` object resulting from slicing. | | Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. | tensorflow tf.sparse.transpose tf.sparse.transpose =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L2799-L2858) | Transposes a `SparseTensor`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.transpose`](https://www.tensorflow.org/api_docs/python/tf/sparse/transpose), [`tf.compat.v1.sparse_transpose`](https://www.tensorflow.org/api_docs/python/tf/sparse/transpose) ``` tf.sparse.transpose( sp_input, perm=None, name=None ) ``` The returned tensor's dimension i will correspond to the input dimension `perm[i]`. 
If `perm` is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors. For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`: ``` [0, 3]: b [0, 1]: a [3, 1]: d [2, 0]: c ``` then the output will be a `SparseTensor` of shape `[5, 4]` and `indices` / `values`: ``` [0, 2]: c [1, 0]: a [1, 3]: d [3, 0]: b ``` | Args | | `sp_input` | The input `SparseTensor`. | | `perm` | A permutation of the dimensions of `sp_input`. | | `name` | A name prefix for the returned tensors (optional) | | Returns | | A transposed `SparseTensor`. | | Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. | tensorflow tf.sparse.SparseTensor tf.sparse.SparseTensor ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/sparse_tensor.py#L43-L282) | Represents a sparse tensor. #### View aliases **Main aliases** [`tf.SparseTensor`](https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.SparseTensor`](https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor), [`tf.compat.v1.sparse.SparseTensor`](https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor) ``` tf.sparse.SparseTensor( indices, values, dense_shape ) ``` TensorFlow represents a sparse tensor as three separate dense tensors: `indices`, `values`, and `dense_shape`. In Python, the three tensors are collected into a `SparseTensor` class for ease of use. If you have separate `indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor` object before passing to the ops below. Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)` comprises the following components, where `N` and `ndims` are the number of values and number of dimensions in the `SparseTensor`, respectively: * `indices`: A 2-D int64 tensor of shape `[N, ndims]`, which specifies the indices of the elements in the sparse tensor that contain nonzero values (elements are zero-indexed). For example, `indices=[[1,3], [2,4]]` specifies that the elements with indexes of [1,3] and [2,4] have nonzero values. * `values`: A 1-D tensor of any type and shape `[N]`, which supplies the values for each element in `indices`. For example, given `indices=[[1,3], [2,4]]`, the parameter `values=[18, 3.6]` specifies that element [1,3] of the sparse tensor has a value of 18, and element [2,4] of the tensor has a value of 3.6. * `dense_shape`: A 1-D int64 tensor of shape `[ndims]`, which specifies the dense\_shape of the sparse tensor. Takes a list indicating the number of elements in each dimension. For example, `dense_shape=[3,6]` specifies a two-dimensional 3x6 tensor, `dense_shape=[2,3,4]` specifies a three-dimensional 2x3x4 tensor, and `dense_shape=[9]` specifies a one-dimensional tensor with 9 elements. The corresponding dense tensor satisfies: ``` dense.shape = dense_shape dense[tuple(indices[i])] = values[i] ``` By convention, `indices` should be sorted in row-major order (or equivalently lexicographic order on the tuples `indices[i]`). This is not enforced when `SparseTensor` objects are constructed, but most ops assume correct ordering. If the ordering of sparse tensor `st` is wrong, a fixed version can be obtained by calling [`tf.sparse.reorder(st)`](reorder). 
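As a quick illustration of that last point (a sketch added here for clarity, not part of the original class docs), canonical ordering can be restored with [`tf.sparse.reorder`](reorder):

```
import tensorflow as tf

# Indices given out of row-major order.
st = tf.sparse.SparseTensor(
    indices=[[1, 2], [0, 0]], values=[20, 10], dense_shape=[2, 3])

# Most sparse ops assume canonical ordering; reorder produces a
# fixed copy with indices sorted and values permuted to match.
st_canonical = tf.sparse.reorder(st)
print(st_canonical.indices.numpy())  # [[0 0] [1 2]]
print(st_canonical.values.numpy())   # [10 20]
```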
Example: The sparse tensor ``` SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]) ``` represents the dense tensor ``` [[1, 0, 0, 0] [0, 0, 2, 0] [0, 0, 0, 0]] ``` | Args | | `indices` | A 2-D int64 tensor of shape `[N, ndims]`. | | `values` | A 1-D tensor of any type and shape `[N]`. | | `dense_shape` | A 1-D int64 tensor of shape `[ndims]`. | | Raises | | `ValueError` | When building an eager SparseTensor if `dense_shape` is unknown or contains unknown elements (None or -1). | | Attributes | | `dense_shape` | A 1-D Tensor of int64 representing the shape of the dense tensor. | | `dtype` | The `DType` of elements in this tensor. | | `graph` | The `Graph` that contains the index, value, and dense\_shape tensors. | | `indices` | The indices of non-zero values in the represented dense tensor. | | `op` | The `Operation` that produces `values` as an output. | | `shape` | Get the `TensorShape` representing the shape of the dense tensor. | | `values` | The non-zero values in the represented dense tensor. | Methods ------- ### `consumers` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/sparse_tensor.py#L281-L282) ``` consumers() ``` ### `eval` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/sparse_tensor.py#L235-L258) ``` eval( feed_dict=None, session=None ) ``` Evaluates this sparse tensor in a `Session`. Calling this method will execute all preceding operations that produce the inputs needed for the operation that produces this tensor. > > **Note:** Before invoking [`SparseTensor.eval()`](sparsetensor#eval), its graph must have been launched in a session, and either a default session must be available, or `session` must be specified explicitly. > | Args | | `feed_dict` | A dictionary that maps `Tensor` objects to feed values. See `tf.Session.run` for a description of the valid feed values. | | `session` | (Optional.) The `Session` to be used to evaluate this sparse tensor. If none, the default session will be used. | | Returns | | A `SparseTensorValue` object. | ### `from_value` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/sparse_tensor.py#L102-L110) ``` @classmethod from_value( sparse_tensor_value ) ``` ### `get_shape` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/sparse_tensor.py#L150-L156) ``` get_shape() ``` Get the `TensorShape` representing the shape of the dense tensor. | Returns | | A `TensorShape` object. | ### `with_values` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/sparse_tensor.py#L177-L200) ``` with_values( new_values ) ``` Returns a copy of `self` with `values` replaced by `new_values`. This method produces a new `SparseTensor` that has the same nonzero `indices` and same `dense_shape`, but updated values. | Args | | `new_values` | The values of the new `SparseTensor`. Needs to have the same shape as the current `.values` `Tensor`. May have a different type than the current `values`. | | Returns | | A `SparseTensor` with identical indices and shape but updated values. 
| #### Example usage: ``` st = tf.sparse.from_dense([[1, 0, 2, 0], [3, 0, 0, 4]]) tf.sparse.to_dense(st.with_values([10, 20, 30, 40])) # 4 nonzero values <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[10, 0, 20, 0], [30, 0, 0, 40]], dtype=int32)> ``` ### `__div__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1426-L1433) ``` __div__( y ) ``` Component-wise divides a SparseTensor by a dense Tensor. *Limitation*: this Op only broadcasts the dense side to the sparse side, but not the other direction. | Args | | `sp_indices` | A `Tensor` of type `int64`. 2-D. `N x R` matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering. | | `sp_values` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. 1-D. `N` non-empty values corresponding to `sp_indices`. | | `sp_shape` | A `Tensor` of type `int64`. 1-D. Shape of the input SparseTensor. | | `dense` | A `Tensor`. Must have the same type as `sp_values`. `R`-D. The dense Tensor operand. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `sp_values`. | ### `__mul__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1426-L1433) ``` __mul__( y ) ``` Component-wise multiplies a SparseTensor by a dense Tensor. The output locations corresponding to the implicitly zero elements in the sparse tensor will be zero (i.e., will not take up storage space), regardless of the contents of the dense tensor (even if it's +/-INF and that INF\*0 == NaN). *Limitation*: this Op only broadcasts the dense side to the sparse side, but not the other direction. | Args | | `sp_indices` | A `Tensor` of type `int64`. 2-D. `N x R` matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering. | | `sp_values` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. 1-D. `N` non-empty values corresponding to `sp_indices`. | | `sp_shape` | A `Tensor` of type `int64`. 1-D. Shape of the input SparseTensor. | | `dense` | A `Tensor`. Must have the same type as `sp_values`. `R`-D. The dense Tensor operand. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `sp_values`. | ### `__truediv__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L1426-L1433) ``` __truediv__( y ) ``` Internal helper function for 'sp\_t / dense\_t'.
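As a brief illustration of these operator overloads (a sketch for illustration, not from the original page), the dense operand, here a scalar, is broadcast onto the sparse side, and the implicit zeros stay implicit:

```
import tensorflow as tf

st = tf.sparse.from_dense([[1.0, 0.0], [0.0, 2.0]])

# __mul__: only the stored values are scaled; the implicit zeros are
# untouched regardless of the dense operand.
scaled = st * tf.constant(10.0)
print(tf.sparse.to_dense(scaled).numpy())  # [[10. 0.] [0. 20.]]

# __truediv__: likewise divides the stored values component-wise.
halved = st / tf.constant(2.0)
print(tf.sparse.to_dense(halved).numpy())  # [[0.5 0.] [0. 1.]]
```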
tensorflow tf.sparse.map_values tf.sparse.map\_values ===================== Applies `op` to the `.values` tensor of one or more `SparseTensor`s. ``` tf.sparse.map_values( op, *args, **kwargs ) ``` Replaces any `SparseTensor` in `args` or `kwargs` with its `values` tensor (which contains the non-default values for the SparseTensor), and then calls `op`. Returns a `SparseTensor` that is constructed from the input `SparseTensor`s' `indices`, `dense_shape`, and the value returned by the `op`. If the input arguments contain multiple `SparseTensor`s, then they must have equal `indices` and dense shapes. #### Examples: ``` s = tf.sparse.from_dense([[1, 2, 0], [0, 4, 0], [1, 0, 0]]) tf.sparse.to_dense(tf.sparse.map_values(tf.ones_like, s)).numpy() array([[1, 1, 0], [0, 1, 0], [1, 0, 0]], dtype=int32) ``` ``` tf.sparse.to_dense(tf.sparse.map_values(tf.multiply, s, s)).numpy() array([[ 1, 4, 0], [ 0, 16, 0], [ 1, 0, 0]], dtype=int32) ``` ``` tf.sparse.to_dense(tf.sparse.map_values(tf.add, s, 5)).numpy() array([[6, 7, 0], [0, 9, 0], [6, 0, 0]], dtype=int32) ``` > > **Note:** even though `tf.add(0, 5) != 0`, implicit zeros will remain unchanged. However, if the sparse tensor contains any explicit zeros, these will be affected by the mapping! > | Args | | `op` | The operation that should be applied to the SparseTensor `values`. `op` is typically an element-wise operation (such as math\_ops.add), but any operation that preserves the shape can be used. | | `*args` | Arguments for `op`. | | `**kwargs` | Keyword arguments for `op`. | | Returns | | A `SparseTensor` whose `indices` and `dense_shape` match the `indices` and `dense_shape` of all input `SparseTensor`s. | | Raises | | `ValueError` | If args contains no `SparseTensor`, or if the `indices` or `dense_shape`s of the input `SparseTensor`s are not equal. | tensorflow tf.sparse.segment_sqrt_n tf.sparse.segment\_sqrt\_n ========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L4926-L4957) | Computes the sum along sparse segments of a tensor divided by the sqrt(N). ``` tf.sparse.segment_sqrt_n( data, indices, segment_ids, num_segments=None, name=None ) ``` Read [the section on segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) for an explanation of segments. Like [`tf.sparse.segment_mean`](segment_mean), but divides by `sqrt(N)` instead of the segment size `N`. | Args | | `data` | A `Tensor` with data that will be assembled in the output. | | `indices` | A 1-D `Tensor` with indices into `data`. Has same rank as `segment_ids`. | | `segment_ids` | A 1-D `Tensor` with indices into the output `Tensor`. Values should be sorted and can be repeated. | | `num_segments` | An optional int32 scalar. Indicates the size of the output `Tensor`. | | `name` | A name for the operation (optional). | | Returns | | A `tensor` of the same shape as `data`, except for dimension 0 which has size `k`, the number of segments specified via `num_segments` or inferred from the last element in `segment_ids`. | tensorflow tf.sparse.retain tf.sparse.retain ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L1939-L1983) | Retains specified non-empty values within a `SparseTensor`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.sparse.retain`](https://www.tensorflow.org/api_docs/python/tf/sparse/retain), [`tf.compat.v1.sparse_retain`](https://www.tensorflow.org/api_docs/python/tf/sparse/retain) ``` tf.sparse.retain( sp_input, to_retain ) ``` For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values: ``` [0, 1]: a [0, 3]: b [2, 0]: c [3, 1]: d ``` and `to_retain = [True, False, False, True]`, then the output will be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values: ``` [0, 1]: a [3, 1]: d ``` | Args | | `sp_input` | The input `SparseTensor` with `N` non-empty elements. | | `to_retain` | A bool vector of length `N` with `M` true values. | | Returns | | A `SparseTensor` with the same shape as the input and `M` non-empty elements corresponding to the true positions in `to_retain`. | | Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. | tensorflow tf.sparse.reduce_sum tf.sparse.reduce\_sum ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L1463-L1539) | Computes [`tf.sparse.add`](add) of elements across dimensions of a SparseTensor. ``` tf.sparse.reduce_sum( sp_input, axis=None, keepdims=None, output_is_sparse=False, name=None ) ``` This is the reduction operation for the elementwise [`tf.sparse.add`](add) op. This Op takes a SparseTensor and is the sparse counterpart to [`tf.reduce_sum()`](../math/reduce_sum). In particular, this Op also returns a dense `Tensor` if `output_is_sparse` is `False`, or a `SparseTensor` if `output_is_sparse` is `True`. > > **Note:** if `output_is_sparse` is True, a gradient is not defined for this function, so it can't be used in training models that need gradient descent. > Reduces `sp_input` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, similar to the indexing rules in Python. #### For example: 'x' represents `[[1, ?, 1], [?, 1, ?]]`, where `?` is implicitly-zero. ``` x = tf.sparse.SparseTensor([[0, 0], [0, 2], [1, 1]], [1, 1, 1], [2, 3]) tf.sparse.reduce_sum(x) <tf.Tensor: shape=(), dtype=int32, numpy=3> tf.sparse.reduce_sum(x, 0) <tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 1, 1], dtype=int32)> tf.sparse.reduce_sum(x, 1) # Can also use -1 as the axis <tf.Tensor: shape=(2,), dtype=int32, numpy=array([2, 1], dtype=int32)> tf.sparse.reduce_sum(x, 1, keepdims=True) <tf.Tensor: shape=(2, 1), dtype=int32, numpy= array([[2], [1]], dtype=int32)> tf.sparse.reduce_sum(x, [0, 1]) <tf.Tensor: shape=(), dtype=int32, numpy=3> ``` | Args | | `sp_input` | The SparseTensor to reduce. Should have numeric type. | | `axis` | The dimensions to reduce; list or scalar. If `None` (the default), reduces all dimensions. | | `keepdims` | If true, retain reduced dimensions with length 1. | | `output_is_sparse` | If true, returns a `SparseTensor` instead of a dense `Tensor` (the default). | | `name` | A name for the operation (optional). | | Returns | | The reduced Tensor or the reduced SparseTensor if `output_is_sparse` is True. 
| tensorflow tf.sparse.reshape tf.sparse.reshape ================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L861-L964) | Reshapes a `SparseTensor` to represent values in a new dense shape. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.reshape`](https://www.tensorflow.org/api_docs/python/tf/sparse/reshape), [`tf.compat.v1.sparse_reshape`](https://www.tensorflow.org/api_docs/python/tf/sparse/reshape) ``` tf.sparse.reshape( sp_input, shape, name=None ) ``` This operation has the same semantics as `reshape` on the represented dense tensor. The indices of non-empty values in `sp_input` are recomputed based on the new dense shape, and a new `SparseTensor` is returned containing the new indices and new shape. The order of non-empty values in `sp_input` is unchanged. If one component of `shape` is the special value -1, the size of that dimension is computed so that the total dense size remains constant. At most one component of `shape` can be -1. The number of dense elements implied by `shape` must be the same as the number of dense elements originally represented by `sp_input`. For example, if `sp_input` has shape `[2, 3, 6]` and `indices` / `values`: ``` [0, 0, 0]: a [0, 0, 1]: b [0, 1, 0]: c [1, 0, 0]: d [1, 2, 3]: e ``` and `shape` is `[9, -1]`, then the output will be a `SparseTensor` of shape `[9, 4]` and `indices` / `values`: ``` [0, 0]: a [0, 1]: b [1, 2]: c [4, 2]: d [8, 1]: e ``` | Args | | `sp_input` | The input `SparseTensor`. | | `shape` | A 1-D (vector) int64 `Tensor` specifying the new dense shape of the represented `SparseTensor`. | | `name` | A name prefix for the returned tensors (optional) | | Returns | | A `SparseTensor` with the same non-empty values but with indices calculated by the new dense shape. | | Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. | | `ValueError` | If argument `shape` requests a `SparseTensor` with a different number of elements than `sp_input`. | | `ValueError` | If `shape` has more than one inferred (== -1) dimension. | tensorflow tf.sparse.from_dense tf.sparse.from\_dense ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L102-L131) | Converts a dense tensor into a sparse tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.from_dense`](https://www.tensorflow.org/api_docs/python/tf/sparse/from_dense) ``` tf.sparse.from_dense( tensor, name=None ) ``` Only elements not equal to zero will be present in the result. The resulting `SparseTensor` has the same dtype and shape as the input. ``` sp = tf.sparse.from_dense([0, 0, 3, 0, 1]) sp.shape.as_list() [5] sp.values.numpy() array([3, 1], dtype=int32) sp.indices.numpy() array([[2], [4]]) ``` | Args | | `tensor` | A dense `Tensor` to be converted to a `SparseTensor`. | | `name` | Optional name for the op. | | Returns | | The `SparseTensor`. | tensorflow tf.sparse.concat tf.sparse.concat ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L391-L442) | Concatenates a list of `SparseTensor` along the specified dimension. 
(deprecated arguments) ``` tf.sparse.concat( axis, sp_inputs, expand_nonconcat_dims=False, name=None ) ``` Concatenation is with respect to the dense versions of each sparse input. It is assumed that each input is a `SparseTensor` whose elements are ordered along increasing dimension number. If expand\_nonconcat\_dim is False, all inputs' shapes must match, except for the concat dimension. If expand\_nonconcat\_dim is True, then inputs' shapes are allowed to vary among all inputs. The `indices`, `values`, and `shapes` lists must have the same length. If expand\_nonconcat\_dim is False, then the output shape is identical to the inputs', except along the concat dimension, where it is the sum of the inputs' sizes along that dimension. If expand\_nonconcat\_dim is True, then the output shape along the non-concat dimensions will be expanded to be the largest among all inputs, and it is the sum of the inputs' sizes along the concat dimension. The output elements will be resorted to preserve the sort order along increasing dimension number. This op runs in `O(M log M)` time, where `M` is the total number of non-empty values across all inputs. This is due to the need for an internal sort in order to concatenate efficiently across an arbitrary dimension. For example, if `axis = 1` and the inputs are ``` sp_inputs[0]: shape = [2, 3] [0, 2]: "a" [1, 0]: "b" [1, 1]: "c" sp_inputs[1]: shape = [2, 4] [0, 1]: "d" [0, 2]: "e" ``` then the output will be ``` shape = [2, 7] [0, 2]: "a" [0, 4]: "d" [0, 5]: "e" [1, 0]: "b" [1, 1]: "c" ``` Graphically this is equivalent to doing ``` [ a] concat [ d e ] = [ a d e ] [b c ] [ ] [b c ] ``` As another example, if `axis = 1` and the inputs are ``` sp_inputs[0]: shape = [3, 3] [0, 2]: "a" [1, 0]: "b" [2, 1]: "c" sp_inputs[1]: shape = [2, 4] [0, 1]: "d" [0, 2]: "e" ``` if expand\_nonconcat\_dim = False, this will result in an error. But if expand\_nonconcat\_dim = True, this will result in: ``` shape = [3, 7] [0, 2]: "a" [0, 4]: "d" [0, 5]: "e" [1, 0]: "b" [2, 1]: "c" ``` Graphically this is equivalent to doing ``` [ a] concat [ d e ] = [ a d e ] [b ] [ ] [b ] [ c ] [ c ] ``` | Args | | `axis` | Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input `SparseTensor`. | | `sp_inputs` | List of `SparseTensor` to concatenate. | | `name` | A name prefix for the returned tensors (optional). | | `expand_nonconcat_dim` | Whether to allow the expansion in the non-concat dimensions. Defaults to False. | | `concat_dim` | The old (deprecated) name for axis. | | `expand_nonconcat_dims` | alias for expand\_nonconcat\_dim | | Returns | | A `SparseTensor` with the concatenated output. | | Raises | | `TypeError` | If `sp_inputs` is not a list of `SparseTensor`. | tensorflow tf.sparse.segment_mean tf.sparse.segment\_mean ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/math_ops.py#L4850-L4885) | Computes the mean along sparse segments of a tensor. ``` tf.sparse.segment_mean( data, indices, segment_ids, num_segments=None, name=None ) ``` Read [the section on segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) for an explanation of segments. Like [`tf.math.segment_mean`](../math/segment_mean), but `segment_ids` can have rank less than `data`'s first dimension, selecting a subset of dimension 0, specified by `indices`. 
`segment_ids` is allowed to have missing ids, in which case the output will be zeros at those indices. In those cases `num_segments` is used to determine the size of the output. | Args | | `data` | A `Tensor` with data that will be assembled in the output. | | `indices` | A 1-D `Tensor` with indices into `data`. Has same rank as `segment_ids`. | | `segment_ids` | A 1-D `Tensor` with indices into the output `Tensor`. Values should be sorted and can be repeated. | | `num_segments` | An optional int32 scalar. Indicates the size of the output `Tensor`. | | `name` | A name for the operation (optional). | | Returns | | A `tensor` of the same shape as `data`, except for dimension 0 which has size `k`, the number of segments specified via `num_segments` or inferred from the last element in `segment_ids`. | tensorflow tf.sparse.to_dense tf.sparse.to\_dense =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L1664-L1717) | Converts a `SparseTensor` into a dense tensor. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.to_dense`](https://www.tensorflow.org/api_docs/python/tf/sparse/to_dense), [`tf.compat.v1.sparse_tensor_to_dense`](https://www.tensorflow.org/api_docs/python/tf/sparse/to_dense) ``` tf.sparse.to_dense( sp_input, default_value=None, validate_indices=True, name=None ) ``` For this sparse tensor with three non-empty values: ``` sp_input = tf.SparseTensor( dense_shape=[3, 5], values=[7, 8, 9], indices =[[0, 1], [0, 3], [2, 0]]) ``` The output will be a dense `[3, 5]` tensor with values: ``` tf.sparse.to_dense(sp_input).numpy() array([[0, 7, 0, 8, 0], [0, 0, 0, 0, 0], [9, 0, 0, 0, 0]], dtype=int32) ``` > > **Note:** Indices must be without repeats. This is only tested if `validate_indices` is `True`. > | Args | | `sp_input` | The input `SparseTensor`. | | `default_value` | Scalar value to set for indices not specified in `sp_input`. Defaults to zero. | | `validate_indices` | A boolean value. If `True`, indices are checked to make sure they are sorted in lexicographic order and that there are no repeats. | | `name` | A name prefix for the returned tensors (optional). | | Returns | | A dense tensor with shape `sp_input.dense_shape` and values specified by the non-empty values in `sp_input`. Indices not in `sp_input` are assigned `default_value`. | | Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. | tensorflow tf.sparse.reset_shape tf.sparse.reset\_shape ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/sparse_ops.py#L1986-L2088) | Resets the shape of a `SparseTensor` with indices and values unchanged. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.sparse.reset_shape`](https://www.tensorflow.org/api_docs/python/tf/sparse/reset_shape), [`tf.compat.v1.sparse_reset_shape`](https://www.tensorflow.org/api_docs/python/tf/sparse/reset_shape) ``` tf.sparse.reset_shape( sp_input, new_shape=None ) ``` If `new_shape` is None, returns a copy of `sp_input` with its shape reset to the tight bounding box of `sp_input`. This will be a shape consisting of all zeros if sp\_input has no values. If `new_shape` is provided, then it must be larger or equal in all dimensions compared to the shape of `sp_input`. 
When this condition is met, the returned SparseTensor will have its shape reset to `new_shape` and its indices and values unchanged from that of `sp_input`. #### For example: Consider a `sp_input` with shape [2, 3, 5]: * It is an error to set `new_shape` as [3, 7] since this represents a rank-2 tensor while `sp_input` is rank-3. This is either a ValueError during graph construction (if both shapes are known) or an OpError during run time. * Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or equal in every dimension compared to the original shape [2, 3, 5]. * On the other hand, setting new\_shape as [2, 3, 4] is also an error: The third dimension is smaller than the original shape [2, 3, 5] (and an `InvalidArgumentError` will be raised). * If `new_shape` is None, the returned SparseTensor will have a shape [2, 3, 4], which is the tight bounding box of `sp_input`. | Args | | `sp_input` | The input `SparseTensor`. | | `new_shape` | None or a vector representing the new shape for the returned `SparseTensor`. | | Returns | | A `SparseTensor` with indices and values unchanged from `sp_input`. Its shape is `new_shape` if that is set. Otherwise it is the tight bounding box of `sp_input`. | | Raises | | `TypeError` | If `sp_input` is not a `SparseTensor`. | | `ValueError` | If `new_shape` represents a tensor with a different rank from that of `sp_input` (if shapes are known when graph is constructed). | | `ValueError` | If `new_shape` is determined during graph build to have dimension sizes that are too small. | | `OpError` | * If `new_shape` has dimension sizes that are too small. * If shapes are not known during graph construction time, and during run time it is found out that the ranks do not match. | tensorflow Module: tf.tpu.experimental Module: tf.tpu.experimental =========================== Public API for tf.tpu.experimental namespace. Modules ------- [`embedding`](experimental/embedding) module: Public API for tf.tpu.experimental.embedding namespace. Classes ------- [`class DeviceAssignment`](experimental/deviceassignment): Mapping from logical cores in a computation to the physical TPU topology. [`class HardwareFeature`](experimental/hardwarefeature): Class that holds all the feature info about the TPU. [`class TPUSystemMetadata`](experimental/tpusystemmetadata): Describes some metadata about the TPU system. [`class Topology`](experimental/topology): Describes a set of TPU devices. Functions --------- [`initialize_tpu_system(...)`](experimental/initialize_tpu_system): Initialize the TPU devices. [`shutdown_tpu_system(...)`](experimental/shutdown_tpu_system): Shuts down the TPU devices.
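For orientation, a typical end-to-end flow through this namespace looks roughly like the sketch below (an illustrative addition, not from the module page; it assumes a reachable TPU runtime such as a Cloud TPU VM or Colab TPU):

```
import tensorflow as tf

# Resolve and connect to the TPU cluster; tpu='' picks up a locally
# attached or environment-configured TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)

# Initialize the devices; returns a tf.tpu.experimental.Topology.
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
print(topology.mesh_shape)

# Run replicated computation through a distribution strategy.
strategy = tf.distribute.TPUStrategy(resolver)

# ... build and train a model under strategy.scope() ...

# Shut down and clear caches (including the compilation cache).
tf.tpu.experimental.shutdown_tpu_system(resolver)
```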
tensorflow tf.tpu.XLAOptions tf.tpu.XLAOptions ================= XLA compilation options. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.XLAOptions`](https://www.tensorflow.org/api_docs/python/tf/tpu/XLAOptions) ``` tf.tpu.XLAOptions( use_spmd_for_xla_partitioning=True, enable_xla_dynamic_padder=True ) ``` | Attributes | | `use_spmd_for_xla_partitioning` | Boolean. Whether to use XLA's SPMD partitioner instead of MPMD partitioner when compiler partitioning is requested. | | `enable_xla_dynamic_padder` | Boolean. Whether to enable XLA dynamic padder infrastructure to handle dynamic shape inputs inside XLA. True by default. Disabling this may cause correctness issues with dynamic shape inputs, as XLA will simply assume the inputs have padded shapes. However, users can optionally set it to False to improve device time if masking is already handled on the user side. | tensorflow tf.tpu.experimental.Topology tf.tpu.experimental.Topology ============================ Describes a set of TPU devices. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.Topology`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/Topology) ``` tf.tpu.experimental.Topology( serialized=None, mesh_shape=None, device_coordinates=None ) ``` Represents both the shape of the physical mesh and the mapping from TensorFlow TPU devices to physical mesh coordinates. | Args | | `serialized` | A serialized `TopologyProto`, or `None`. If not `None`, the serialized proto is parsed to discover the topology. | | `mesh_shape` | A sequence of 4 positive integers, or `None`. If not `None`, the shape of the TPU topology, in number of cores. Ignored if `serialized` is not `None`. | | `device_coordinates` | A rank 4 numpy array that describes the mapping from TensorFlow TPU devices to TPU fabric coordinates, or `None`. Ignored if `serialized` is not `None`. | | Raises | | `ValueError` | If `serialized` does not describe a well-formed topology. | | `ValueError` | If `serialized` is `None` and `mesh_shape` is not a sequence of 4 positive integers. | | `ValueError` | If `serialized` is `None` and `device_coordinates` is not a rank 4 numpy int32 array that describes a valid coordinate mapping. | | Attributes | | `device_coordinates` | Describes the mapping from TPU devices to topology coordinates. | | `mesh_rank` | Returns the number of dimensions in the mesh. | | `mesh_shape` | A rank 1 int32 array describing the shape of the TPU topology. | | `missing_devices` | Array of indices of missing devices. | | `num_tasks` | Returns the number of TensorFlow tasks in the TPU slice. | | `num_tpus_per_task` | Returns the number of TPU devices per task in the TPU slice. | Methods ------- ### `cpu_device_name_at_coordinates` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/topology.py#L202-L205) ``` cpu_device_name_at_coordinates( device_coordinates, job=None ) ``` Returns the CPU device attached to a logical core. ### `serialized` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/topology.py#L223-L233) ``` serialized() ``` Returns the serialized form of the topology. 
### `task_ordinal_at_coordinates` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/topology.py#L176-L187) ``` task_ordinal_at_coordinates( device_coordinates ) ``` Returns the TensorFlow task number attached to `device_coordinates`. | Args | | `device_coordinates` | An integer sequence describing a device's physical coordinates in the TPU fabric. | | Returns | | Returns the TensorFlow task number that contains the TPU device with those physical coordinates. | ### `tpu_device_name_at_coordinates` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/topology.py#L207-L211) ``` tpu_device_name_at_coordinates( device_coordinates, job=None ) ``` Returns the name of the TPU device assigned to a logical core. ### `tpu_device_ordinal_at_coordinates` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/topology.py#L189-L200) ``` tpu_device_ordinal_at_coordinates( device_coordinates ) ``` Returns the TensorFlow device number at `device_coordinates`. | Args | | `device_coordinates` | An integer sequence describing a device's physical coordinates in the TPU fabric. | | Returns | | Returns the TensorFlow device number within the task that is attached to the device with those physical coordinates. | tensorflow tf.tpu.experimental.DeviceAssignment tf.tpu.experimental.DeviceAssignment ==================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/device_assignment.py#L57-L179) | Mapping from logical cores in a computation to the physical TPU topology. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.DeviceAssignment`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/DeviceAssignment) ``` tf.tpu.experimental.DeviceAssignment( topology: tf.tpu.experimental.Topology, core_assignment: np.ndarray ) ``` Prefer to use the [`DeviceAssignment.build()`](deviceassignment#build) helper to construct a `DeviceAssignment`; it is easier, if less flexible, than constructing a `DeviceAssignment` directly. | Args | | `topology` | A `Topology` object that describes the physical TPU topology. | | `core_assignment` | A logical to physical core mapping, represented as a rank 3 numpy array. See the description of the `core_assignment` property for more details. | | Raises | | `ValueError` | If `topology` is not a `Topology` object. | | `ValueError` | If `core_assignment` is not a rank 3 numpy array. | | Attributes | | `core_assignment` | The logical to physical core mapping. | | `num_cores_per_replica` | The number of cores per replica. | | `num_replicas` | The number of replicas of the computation. | | `topology` | A `Topology` that describes the TPU topology. | Methods ------- ### `build` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/device_assignment.py#L173-L179) ``` @staticmethod build( topology: tf.tpu.experimental.Topology, computation_shape: Optional[np.ndarray] = None, computation_stride: Optional[np.ndarray] = None, num_replicas: int = 1 ) -> 'DeviceAssignment' ``` ### `coordinates` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/device_assignment.py#L128-L130) ``` coordinates( replica: int, logical_core: int ) -> Tuple ``` Returns the physical topology coordinates of a logical core. 
### `host_device` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/device_assignment.py#L157-L163) ``` host_device( replica: int = 0, logical_core: int = 0, job: Optional[Text] = None ) -> Text ``` Returns the CPU device attached to a logical core. ### `lookup_replicas` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/device_assignment.py#L132-L150) ``` lookup_replicas( task_id: int, logical_core: int ) -> List[int] ``` Lookup replica ids by task number and logical core. | Args | | `task_id` | TensorFlow task number. | | `logical_core` | An integer, identifying a logical core. | | Returns | | A sorted list of the replicas that are attached to that task and logical\_core. | | Raises | | `ValueError` | If no replica exists in the task which contains the logical core. | ### `tpu_device` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/device_assignment.py#L165-L171) ``` tpu_device( replica: int = 0, logical_core: int = 0, job: Optional[Text] = None ) -> Text ``` Returns the name of the TPU device assigned to a logical core. ### `tpu_ordinal` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/device_assignment.py#L152-L155) ``` tpu_ordinal( replica: int = 0, logical_core: int = 0 ) -> int ``` Returns the ordinal of the TPU device assigned to a logical core. tensorflow tf.tpu.experimental.TPUSystemMetadata tf.tpu.experimental.TPUSystemMetadata ===================================== Describes some metadata about the TPU system. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.TPUSystemMetadata`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/TPUSystemMetadata) ``` tf.tpu.experimental.TPUSystemMetadata( num_cores, num_hosts, num_of_cores_per_host, topology, devices ) ``` | Attributes | | `num_cores` | integer. Total number of TPU cores in the TPU system. | | `num_hosts` | integer. Total number of hosts (TPU workers) in the TPU system. | | `num_of_cores_per_host` | integer. Number of TPU cores per host (TPU worker). | | `topology` | an instance of [`tf.tpu.experimental.Topology`](topology), which describes the physical topology of the TPU system. | | `devices` | a tuple of strings, which describes all the TPU devices in the system. | tensorflow tf.tpu.experimental.HardwareFeature tf.tpu.experimental.HardwareFeature =================================== Class that holds all the feature info about the TPU. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.HardwareFeature`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/HardwareFeature) ``` tf.tpu.experimental.HardwareFeature( tpu_hardware_feature_proto ) ``` | Args | | `tpu_hardware_feature_proto` | protobuf which describes the TPU hardware feature. | | Attributes | | `embedding_feature` | TPU embedding feature. | Child Classes ------------- [`class EmbeddingFeature`](hardwarefeature/embeddingfeature) tensorflow tf.tpu.experimental.shutdown_tpu_system tf.tpu.experimental.shutdown\_tpu\_system ========================================= Shuts down the TPU devices. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.tpu.experimental.shutdown_tpu_system`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/shutdown_tpu_system) ``` tf.tpu.experimental.shutdown_tpu_system( cluster_resolver=None ) ``` This will clear all caches, even those that are maintained through sequential calls to tf.tpu.experimental.initialize\_tpu\_system, such as the compilation cache. | Args | | `cluster_resolver` | A tf.distribute.cluster\_resolver.TPUClusterResolver, which provides information about the TPU cluster. | | Raises | | `RuntimeError` | If no TPU devices are found for eager execution or if run in a tf.function. | tensorflow Module: tf.tpu.experimental.embedding Module: tf.tpu.experimental.embedding ===================================== Public API for tf.tpu.experimental.embedding namespace. Classes ------- [`class Adagrad`](embedding/adagrad): Optimization parameters for Adagrad with TPU embeddings. [`class AdagradMomentum`](embedding/adagradmomentum): Optimization parameters for Adagrad + Momentum with TPU embeddings. [`class Adam`](embedding/adam): Optimization parameters for Adam with TPU embeddings. [`class FTRL`](embedding/ftrl): Optimization parameters for FTRL with TPU embeddings. [`class FeatureConfig`](embedding/featureconfig): Configuration data for one embedding feature. [`class SGD`](embedding/sgd): Optimization parameters for stochastic gradient descent for TPU embeddings. [`class TPUEmbedding`](embedding/tpuembedding): The TPUEmbedding mid level API. [`class TPUEmbeddingForServing`](embedding/tpuembeddingforserving): The TPUEmbedding mid level API running on CPU for serving. [`class TPUEmbeddingV0`](embedding/tpuembeddingv0): The TPUEmbedding mid level API running on TPU without Embedding accelerator. [`class TableConfig`](embedding/tableconfig): Configuration data for one embedding table. Functions --------- [`serving_embedding_lookup(...)`](embedding/serving_embedding_lookup): Apply standard lookup ops with [`tf.tpu.experimental.embedding`](embedding) configs. tensorflow tf.tpu.experimental.initialize_tpu_system tf.tpu.experimental.initialize\_tpu\_system =========================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_strategy_util.py#L38-L147) | Initialize the TPU devices. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.initialize_tpu_system`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/initialize_tpu_system) ``` tf.tpu.experimental.initialize_tpu_system( cluster_resolver=None ) ``` | Args | | `cluster_resolver` | A tf.distribute.cluster\_resolver.TPUClusterResolver, which provides information about the TPU cluster. | | Returns | | The tf.tpu.Topology object for the topology of the TPU cluster. If called inside tf.function, it returns the serialized topology object instead. | | Raises | | `RuntimeError` | If running inside a tf.function. | | `NotFoundError` | If no TPU devices are found in eager mode. | tensorflow tf.tpu.experimental.embedding.serving_embedding_lookup tf.tpu.experimental.embedding.serving\_embedding\_lookup ======================================================== Apply standard lookup ops with [`tf.tpu.experimental.embedding`](../embedding) configs. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.tpu.experimental.embedding.serving_embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/serving_embedding_lookup) ``` tf.tpu.experimental.embedding.serving_embedding_lookup( inputs: Any, weights: Optional[Any], tables: Dict[tf.tpu.experimental.embedding.TableConfig, tf.Variable], feature_config: Union[tf.tpu.experimental.embedding.FeatureConfig, Iterable] ) -> Any ``` This function is a utility which allows using the [`tf.tpu.experimental.embedding`](../embedding) config objects with standard lookup functions. This can be used when exporting a model which uses [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) for serving on CPU. In particular [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) only supports lookups on TPUs and should not be part of your serving graph. Note that TPU-specific options (such as `max_sequence_length`) in the configuration objects will be ignored. In the following example we take a trained model (see the documentation for [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) for the context) and create a saved model with a serving function that will perform the embedding lookup and pass the results to your model: ``` model = model_fn(...) embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, batch_size=1024, optimizer=tf.tpu.experimental.embedding.SGD(0.1)) checkpoint = tf.train.Checkpoint(model=model, embedding=embedding) checkpoint.restore(...) @tf.function(input_signature=[{'feature_one': tf.TensorSpec(...), 'feature_two': tf.TensorSpec(...), 'feature_three': tf.TensorSpec(...)}]) def serve_tensors(embedding_features): embedded_features = tf.tpu.experimental.embedding.serving_embedding_lookup( embedding_features, None, embedding.embedding_tables, feature_config) return model(embedded_features) model.embedding_api = embedding tf.saved_model.save(model, export_dir=..., signatures={'serving_default': serve_tensors}) ``` > > **Note:** It's important to assign the embedding API object to a member of your model as [`tf.saved_model.save`](../../../saved_model/save) only supports saving variables as one `Trackable` object. Since the model's weights are in `model` and the embedding tables are managed by `embedding`, we assign `embedding` to an attribute of `model` so that tf.saved\_model.save can find the embedding variables. > > > **Note:** The same `serve_tensors` function and [`tf.saved_model.save`](../../../saved_model/save) call will work directly from training. > | Args | | `inputs` | a nested structure of Tensors, SparseTensors or RaggedTensors. | | `weights` | a nested structure of Tensors, SparseTensors or RaggedTensors or None for no weights. If not None, structure must match that of inputs, but entries are allowed to be None. | | `tables` | a dict mapping TableConfig objects to Variables. | | `feature_config` | a nested structure of FeatureConfig objects with the same structure as inputs. | | Returns | | A nested structure of Tensors with the same structure as inputs. | tensorflow tf.tpu.experimental.embedding.SGD tf.tpu.experimental.embedding.SGD ================================= Optimization parameters for stochastic gradient descent for TPU embeddings. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.tpu.experimental.embedding.SGD`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/SGD) ``` tf.tpu.experimental.embedding.SGD( learning_rate: Union[float, Callable[[], float]] = 0.01, use_gradient_accumulation: bool = True, clip_weight_min: Optional[float] = None, clip_weight_max: Optional[float] = None, weight_decay_factor: Optional[float] = None, multiply_weight_decay_factor_by_learning_rate: bool = None, clipvalue: Optional[ClipValueType] = None ) ``` Pass this to [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) via the `optimizer` argument to set the global optimizer and its parameters: ``` embedding = tf.tpu.experimental.embedding.TPUEmbedding( ... optimizer=tf.tpu.experimental.embedding.SGD(0.1)) ``` This can also be used in a [`tf.tpu.experimental.embedding.TableConfig`](tableconfig) as the optimizer parameter to set a table-specific optimizer. This will override the optimizer and parameters for the global embedding optimizer defined above: ``` table_one = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=..., optimizer=tf.tpu.experimental.embedding.SGD(0.2)) table_two = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) feature_config = ( tf.tpu.experimental.embedding.FeatureConfig( table=table_one), tf.tpu.experimental.embedding.FeatureConfig( table=table_two)) embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, batch_size=..., optimizer=tf.tpu.experimental.embedding.SGD(0.1)) ``` In the above example, the first feature will be looked up in a table that has a learning rate of 0.2 while the second feature will be looked up in a table that has a learning rate of 0.1. See 'tensorflow/core/protobuf/tpu/optimization\_parameters.proto' for a complete description of these parameters and their impacts on the optimizer algorithm. | Args | | `learning_rate` | The learning rate. It should be a floating point value or a callable taking no arguments for a dynamic learning rate. | | `use_gradient_accumulation` | setting this to `False` makes embedding gradients calculation less accurate but faster. | | `clip_weight_min` | the minimum value to clip by; None means -infinity. | | `clip_weight_max` | the maximum value to clip by; None means +infinity. | | `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. Weights are decayed by multiplying the weight by this factor each step. | | `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. | | `clipvalue` | Controls clipping of the gradient. Set to either a single positive scalar value to get clipping or a tuple of scalar values (min, max) to set a separate maximum or minimum. If one of the two entries is None, then there will be no clipping in that direction. Note if this is set, you may see a decrease in performance as gradient accumulation will be enabled (it is normally off for SGD as it has no effect on accuracy). See 'tensorflow/core/protobuf/tpu/optimization\_parameters.proto' for more information on gradient accumulation and its impact on TPU embeddings. |
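For example, a minimal sketch of constructing `SGD` with a dynamic learning rate and weight clipping (the schedule variable `lr` below is hypothetical; any zero-argument callable works, per the `learning_rate` description above):

```
import tensorflow as tf

# Hypothetical non-trainable variable that a training loop would update
# to implement a learning-rate decay schedule.
lr = tf.Variable(0.1, trainable=False)

def learning_rate_fn():
  # Returns the current value of the schedule variable.
  return lr

optimizer = tf.tpu.experimental.embedding.SGD(
    learning_rate=learning_rate_fn,
    clip_weight_min=-1.0,  # clip table weights from below
    clip_weight_max=1.0)   # and from above
```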
tensorflow tf.tpu.experimental.embedding.FeatureConfig tf.tpu.experimental.embedding.FeatureConfig =========================================== Configuration data for one embedding feature. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.embedding.FeatureConfig`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/FeatureConfig) ``` tf.tpu.experimental.embedding.FeatureConfig( table: tf.tpu.experimental.embedding.TableConfig, max_sequence_length: int = 0, validate_weights_and_indices: bool = True, output_shape: Optional[Union[List[int], tf.TensorShape]] = None, name: Optional[Text] = None ) ``` This class holds the configuration data for a single embedding feature. The main use is to assign features to [`tf.tpu.experimental.embedding.TableConfig`](tableconfig)s via the table parameter: ``` table_config_one = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) table_config_two = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) feature_config = { 'feature_one': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_one), 'feature_two': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_one), 'feature_three': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_two)} embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, batch_size=..., optimizer=tf.tpu.experimental.embedding.Adam(0.1)) ``` The above configuration has two tables and three features. The first two features will be looked up in the first table and the third feature will be looked up in the second table. You can also specify the output shape for each feature. The output shape should be the expected activation shape excluding the table dimension. For dense and sparse tensors, the output shape should be the same as the input shape excluding the last dimension. For ragged tensors, the output shape can mismatch the input shape. > > **Note:** The `max_sequence_length` will only be used when the input tensor has rank 2 and the `output_shape` is not set in the feature config. > When feeding features into `embedding.enqueue` they can be [`tf.Tensor`](../../../tensor)s, [`tf.SparseTensor`](../../../sparse/sparsetensor)s or [`tf.RaggedTensor`](../../../raggedtensor)s. When the argument `max_sequence_length` is 0, the default, you should expect an output of `embedding.dequeue` for this feature of shape `(batch_size, dim)`. If `max_sequence_length` is greater than 0, the feature is embedded as a sequence and padded up to the given length. The shape of the output for this feature will be `(batch_size, max_sequence_length, dim)`. | Args | | `table` | An instance of [`tf.tpu.experimental.embedding.TableConfig`](tableconfig), describing the table in which this feature should be looked up. | | `max_sequence_length` | If positive, the feature is a sequence feature with the corresponding maximum sequence length. If the sequence is longer than this, it will be truncated. If 0, the feature is not a sequence feature. | | `validate_weights_and_indices` | If true, uses safe\_embedding\_lookup during serving which ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. | | `output_shape` | Optional argument to configure the output shape of the feature activation. 
If provided, the feature fed to `embedding.enqueue` has to match the shape (for a ragged tensor, the input shape and output shape can mismatch). If not provided, the shape can either be provided to `embedding.build` or auto-detected at runtime. | | `name` | An optional name for the feature, useful for debugging. | | Raises | | `ValueError` | if `table` is not an instance of [`tf.tpu.experimental.embedding.TableConfig`](tableconfig). | | `ValueError` | if `max_sequence_length` is not an integer or is negative. | tensorflow tf.tpu.experimental.embedding.Adagrad tf.tpu.experimental.embedding.Adagrad ===================================== Optimization parameters for Adagrad with TPU embeddings. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.embedding.Adagrad`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/Adagrad) ``` tf.tpu.experimental.embedding.Adagrad( learning_rate: Union[float, Callable[[], float]] = 0.001, initial_accumulator_value: float = 0.1, use_gradient_accumulation: bool = True, clip_weight_min: Optional[float] = None, clip_weight_max: Optional[float] = None, weight_decay_factor: Optional[float] = None, multiply_weight_decay_factor_by_learning_rate: bool = None, slot_variable_creation_fn: Optional[SlotVarCreationFnType] = None, clipvalue: Optional[ClipValueType] = None ) ``` Pass this to [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) via the `optimizer` argument to set the global optimizer and its parameters: ``` embedding = tf.tpu.experimental.embedding.TPUEmbedding( ... optimizer=tf.tpu.experimental.embedding.Adagrad(0.1)) ``` This can also be used in a [`tf.tpu.experimental.embedding.TableConfig`](tableconfig) as the optimizer parameter to set a table-specific optimizer. This will override the optimizer and parameters for the global embedding optimizer defined above: ``` table_one = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=..., optimizer=tf.tpu.experimental.embedding.Adagrad(0.2)) table_two = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) feature_config = ( tf.tpu.experimental.embedding.FeatureConfig( table=table_one), tf.tpu.experimental.embedding.FeatureConfig( table=table_two)) embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, batch_size=..., optimizer=tf.tpu.experimental.embedding.Adagrad(0.1)) ``` In the above example, the first feature will be looked up in a table that has a learning rate of 0.2 while the second feature will be looked up in a table that has a learning rate of 0.1. See 'tensorflow/core/protobuf/tpu/optimization\_parameters.proto' for a complete description of these parameters and their impacts on the optimizer algorithm. | Args | | `learning_rate` | The learning rate. It should be a floating point value or a callable taking no arguments for a dynamic learning rate. | | `initial_accumulator_value` | initial accumulator for Adagrad. | | `use_gradient_accumulation` | setting this to `False` makes embedding gradients calculation less accurate but faster. | | `clip_weight_min` | the minimum value to clip by; None means -infinity. | | `clip_weight_max` | the maximum value to clip by; None means +infinity. | | `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. 
| | `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. | | `slot_variable_creation_fn` | If you wish to directly control the creation of the slot variables, set this to a callable taking three parameters: a table variable, a list of slot names to create for it, and a list of initializers. This function should return a dict with the slot names as keys and the created variables as values with types matching the table variable. When set to None (the default), uses the built-in variable creation. | | `clipvalue` | Controls clipping of the gradient. Set to either a single positive scalar value to get clipping or a tuple of scalar values (min, max) to set a separate maximum or minimum. If one of the two entries is None, then there will be no clipping in that direction. | tensorflow tf.tpu.experimental.embedding.TableConfig tf.tpu.experimental.embedding.TableConfig ========================================= Configuration data for one embedding table. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.embedding.TableConfig`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/TableConfig) ``` tf.tpu.experimental.embedding.TableConfig( vocabulary_size: int, dim: int, initializer: Optional[Callable[[Any], None]] = None, optimizer: Optional[_Optimizer] = None, combiner: Text = 'mean', name: Optional[Text] = None ) ``` This class holds the configuration data for a single embedding table. It is used as the `table` parameter of a [`tf.tpu.experimental.embedding.FeatureConfig`](featureconfig). Multiple [`tf.tpu.experimental.embedding.FeatureConfig`](featureconfig) objects can use the same [`tf.tpu.experimental.embedding.TableConfig`](tableconfig) object. In this case a shared table will be created for those feature lookups. ``` table_config_one = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) table_config_two = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) feature_config = { 'feature_one': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_one), 'feature_two': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_one), 'feature_three': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_two)} embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, batch_size=..., optimizer=tf.tpu.experimental.embedding.Adam(0.1)) ``` The above configuration has two tables and three features. The first two features will be looked up in the first table and the third feature will be looked up in the second table. | Args | | `vocabulary_size` | Size of the table's vocabulary (number of rows). | | `dim` | The embedding dimension (width) of the table. | | `initializer` | A callable initializer taking one parameter, the shape of the variable that will be initialized. Will be called once per task, to initialize that task's shard of the embedding table. If not specified, defaults to `truncated_normal_initializer` with mean `0.0` and standard deviation `1/sqrt(dim)`. | | `optimizer` | An optional instance of an optimizer parameters class, instance of one of [`tf.tpu.experimental.embedding.SGD`](sgd), [`tf.tpu.experimental.embedding.Adagrad`](adagrad) or [`tf.tpu.experimental.embedding.Adam`](adam). If set, it will override the global optimizer passed to [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding). 
| | `combiner` | A string specifying how to reduce if there are multiple entries in a single row. Currently 'mean', 'sqrtn', 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. For more information, see [`tf.nn.embedding_lookup_sparse`](../../../nn/embedding_lookup_sparse). | | `name` | An optional string used to name the table. Useful for debugging. | | Raises | | `ValueError` | if `vocabulary_size` is not a positive integer. | | `ValueError` | if `dim` is not a positive integer. | | `ValueError` | if `initializer` is specified and is not callable. | | `ValueError` | if `combiner` is not supported. | tensorflow tf.tpu.experimental.embedding.AdagradMomentum tf.tpu.experimental.embedding.AdagradMomentum ============================================= Optimization parameters for Adagrad + Momentum with TPU embeddings. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.embedding.AdagradMomentum`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/AdagradMomentum) ``` tf.tpu.experimental.embedding.AdagradMomentum( learning_rate: Union[float, Callable[[], float]] = 0.001, momentum: float = 0.0, use_nesterov: bool = False, exponent: float = 2, beta2: float = 1, epsilon: float = 1e-10, use_gradient_accumulation: bool = True, clip_weight_min: Optional[float] = None, clip_weight_max: Optional[float] = None, weight_decay_factor: Optional[float] = None, multiply_weight_decay_factor_by_learning_rate: bool = None, slot_variable_creation_fn: Optional[SlotVarCreationFnType] = None, clipvalue: Optional[ClipValueType] = None ) ``` Pass this to [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) via the `optimizer` argument to set the global optimizer and its parameters: ``` embedding = tf.tpu.experimental.embedding.TPUEmbedding( ... optimizer=tf.tpu.experimental.embedding.AdagradMomentum(0.1)) ``` This can also be used in a [`tf.tpu.experimental.embedding.TableConfig`](tableconfig) as the optimizer parameter to set a table-specific optimizer. This will override the optimizer and parameters for the global embedding optimizer defined above: ``` table_one = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=..., optimizer=tf.tpu.experimental.embedding.AdagradMomentum(0.2)) table_two = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) feature_config = ( tf.tpu.experimental.embedding.FeatureConfig( table=table_one), tf.tpu.experimental.embedding.FeatureConfig( table=table_two)) embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, batch_size=..., optimizer=tf.tpu.experimental.embedding.AdagradMomentum(0.1)) ``` In the above example, the first feature will be looked up in a table that has a learning rate of 0.2 while the second feature will be looked up in a table that has a learning rate of 0.1. See 'tensorflow/core/protobuf/tpu/optimization\_parameters.proto' for a complete description of these parameters and their impacts on the optimizer algorithm. | Args | | `learning_rate` | The learning rate. It should be a floating point value or a callable taking no arguments for a dynamic learning rate. | | `momentum` | Moving average parameter for the momentum accumulator. | | `use_nesterov` | Whether to use the Nesterov variant of momentum. See Sutskever et al., 2013. | | `exponent` | Exponent for the Adagrad accumulator. 
| | `beta2` | Moving average parameter for the Adagrad accumulator. | | `epsilon` | Initial accumulator value for the Adagrad accumulator. | | `use_gradient_accumulation` | setting this to `False` makes embedding gradients calculation less accurate but faster. | | `clip_weight_min` | the minimum value to clip by; None means -infinity. | | `clip_weight_max` | the maximum value to clip by; None means +infinity. | | `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. | | `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. | | `slot_variable_creation_fn` | If you wish to directly control the creation of the slot variables, set this to a callable taking three parameters: a table variable, a list of slot names to create for it, and a list of initializers. This function should return a dict with the slot names as keys and the created variables as values with types matching the table variable. When set to None (the default), uses the built-in variable creation. | | `clipvalue` | Controls clipping of the gradient. Set to either a single positive scalar value to get clipping or a tuple of scalar values (min, max) to set a separate maximum or minimum. If one of the two entries is None, then there will be no clipping in that direction. | tensorflow tf.tpu.experimental.embedding.FTRL tf.tpu.experimental.embedding.FTRL ================================== Optimization parameters for FTRL with TPU embeddings. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.embedding.FTRL`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/FTRL) ``` tf.tpu.experimental.embedding.FTRL( learning_rate: Union[float, Callable[[], float]] = 0.001, learning_rate_power: float = -0.5, l1_regularization_strength: float = 0.0, l2_regularization_strength: float = 0.0, beta: float = 0.0, initial_accumulator_value: float = 0.1, use_gradient_accumulation: bool = True, clip_weight_min: Optional[float] = None, clip_weight_max: Optional[float] = None, weight_decay_factor: Optional[float] = None, multiply_weight_decay_factor_by_learning_rate: bool = None, slot_variable_creation_fn: Optional[SlotVarCreationFnType] = None, clipvalue: Optional[ClipValueType] = None, multiply_linear_by_learning_rate: bool = False, allow_zero_accumulator: bool = False ) ``` See Algorithm 1 of this [paper](https://research.google.com/pubs/archive/41159.pdf). Pass this to [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) via the `optimizer` argument to set the global optimizer and its parameters: ``` embedding = tf.tpu.experimental.embedding.TPUEmbedding( ... optimizer=tf.tpu.experimental.embedding.FTRL(0.1)) ``` This can also be used in a [`tf.tpu.experimental.embedding.TableConfig`](tableconfig) as the optimizer parameter to set a table-specific optimizer. This will override the optimizer and parameters for the global embedding optimizer defined above: ``` table_one = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=..., optimizer=tf.tpu.experimental.embedding.FTRL(0.2)) table_two = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) 
feature_config = ( tf.tpu.experimental.embedding.FeatureConfig( table=table_one), tf.tpu.experimental.embedding.FeatureConfig( table=table_two)) embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, batch_size=..., optimizer=tf.tpu.experimental.embedding.FTRL(0.1)) ``` In the above example, the first feature will be looked up in a table that has a learning rate of 0.2 while the second feature will be looked up in a table that has a learning rate of 0.1. See 'tensorflow/core/protobuf/tpu/optimization\_parameters.proto' for a complete description of these parameters and their impacts on the optimizer algorithm. | Args | | `learning_rate` | The learning rate. It should be a floating point value or a callable taking no arguments for a dynamic learning rate. | | `learning_rate_power` | A float value, must be less than or equal to zero. Controls how the learning rate decreases during training. Use zero for a fixed learning rate. | | `l1_regularization_strength` | A float value, must be greater than or equal to zero. | | `l2_regularization_strength` | A float value, must be greater than or equal to zero. | | `beta` | A float value, representing the beta value from the paper. | | `initial_accumulator_value` | The starting value for accumulators. Only zero or positive values are allowed. | | `use_gradient_accumulation` | setting this to `False` makes embedding gradients calculation less accurate but faster. | | `clip_weight_min` | the minimum value to clip by; None means -infinity. | | `clip_weight_max` | the maximum value to clip by; None means +infinity. | | `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. | | `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. | | `slot_variable_creation_fn` | If you wish to directly control the creation of the slot variables, set this to a callable taking three parameters: a table variable, a list of slot names to create for it, and a list of initializers. This function should return a dict with the slot names as keys and the created variables as values with types matching the table variable. When set to None (the default), uses the built-in variable creation. | | `clipvalue` | Controls clipping of the gradient. Set to either a single positive scalar value to get clipping or a tuple of scalar values (min, max) to set a separate maximum or minimum. If one of the two entries is None, then there will be no clipping in that direction. | | `multiply_linear_by_learning_rate` | If set to True, a modified formula is used for FTRL that treats the "linear" accumulator as being pre-multiplied by the learning rate (i.e., the accumulator named "linear" actually stores "linear \* learning\_rate"). Other than checkpoint compatibility, this is mathematically equivalent for a static learning rate; for a dynamic learning rate, it is nearly the same as long as the learning rate does not change quickly. The benefit of this is that the modified formula handles zero and near-zero learning rates without producing NaNs, improving flexibility for learning rate ramp-up. | | `allow_zero_accumulator` | If set to True, changes some internal formulas to allow zero and near-zero accumulator values at the cost of some performance; this only needs to be set if you are using an initial accumulator value of zero, which is uncommon. |
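As a concrete illustration of the `clipvalue` argument described above (a hedged sketch; the numbers are arbitrary):

```
import tensorflow as tf

# Clip gradients into [-0.5, 0.5] before they are applied to the table.
# A None entry on either side would leave that side unclipped.
optimizer = tf.tpu.experimental.embedding.FTRL(
    learning_rate=0.001,
    l1_regularization_strength=0.001,  # L1 regularization encourages sparsity
    clipvalue=(-0.5, 0.5))
```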
tensorflow tf.tpu.experimental.embedding.Adam tf.tpu.experimental.embedding.Adam ================================== Optimization parameters for Adam with TPU embeddings. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.embedding.Adam`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/Adam) ``` tf.tpu.experimental.embedding.Adam( learning_rate: Union[float, Callable[[], float]] = 0.001, beta_1: float = 0.9, beta_2: float = 0.999, epsilon: float = 1e-07, lazy_adam: bool = True, sum_inside_sqrt: bool = True, use_gradient_accumulation: bool = True, clip_weight_min: Optional[float] = None, clip_weight_max: Optional[float] = None, weight_decay_factor: Optional[float] = None, multiply_weight_decay_factor_by_learning_rate: bool = None, slot_variable_creation_fn: Optional[SlotVarCreationFnType] = None, clipvalue: Optional[ClipValueType] = None ) ``` Pass this to [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) via the `optimizer` argument to set the global optimizer and its parameters: > > **Note:** By default this optimizer is lazy, i.e. it will not apply the gradient update of zero to rows that were not looked up. You can change this behavior by setting `lazy_adam` to `False`. > ``` embedding = tf.tpu.experimental.embedding.TPUEmbedding( ... optimizer=tf.tpu.experimental.embedding.Adam(0.1)) ``` This can also be used in a [`tf.tpu.experimental.embedding.TableConfig`](tableconfig) as the optimizer parameter to set a table-specific optimizer. This will override the optimizer and parameters for the global embedding optimizer defined above: ``` table_one = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=..., optimizer=tf.tpu.experimental.embedding.Adam(0.2)) table_two = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) feature_config = ( tf.tpu.experimental.embedding.FeatureConfig( table=table_one), tf.tpu.experimental.embedding.FeatureConfig( table=table_two)) embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, batch_size=..., optimizer=tf.tpu.experimental.embedding.Adam(0.1)) ``` In the above example, the first feature will be looked up in a table that has a learning rate of 0.2 while the second feature will be looked up in a table that has a learning rate of 0.1. See 'tensorflow/core/protobuf/tpu/optimization\_parameters.proto' for a complete description of these parameters and their impacts on the optimizer algorithm. | Args | | `learning_rate` | The learning rate. It should be a floating point value or a callable taking no arguments for a dynamic learning rate. | | `beta_1` | A float value. The exponential decay rate for the 1st moment estimates. | | `beta_2` | A float value. The exponential decay rate for the 2nd moment estimates. | | `epsilon` | A small constant for numerical stability. | | `lazy_adam` | Use lazy Adam instead of Adam. Lazy Adam trains faster. | | `sum_inside_sqrt` | When this is true, the Adam update formula is changed from `m / (sqrt(v) + epsilon)` to `m / sqrt(v + epsilon**2)`. This option improves the performance of TPU training and is not expected to harm model quality. | | `use_gradient_accumulation` | Setting this to `False` makes embedding gradients calculation less accurate but faster. | | `clip_weight_min` | the minimum value to clip by; None means -infinity. | | `clip_weight_max` | the maximum value to clip by; None means +infinity. 
| | `weight_decay_factor` | amount of weight decay to apply; None means that the weights are not decayed. | | `multiply_weight_decay_factor_by_learning_rate` | if true, `weight_decay_factor` is multiplied by the current learning rate. | | `slot_variable_creation_fn` | If you wish to directly control the creation of the slot variables, set this to a callable taking three parameters: a table variable, a list of slot names to create for it, and a list of initializers. This function should return a dict with the slot names as keys and the created variables as values with types matching the table variable. When set to None (the default), uses the built-in variable creation. | | `clipvalue` | Controls clipping of the gradient. Set to either a single positive scalar value to get clipping or a tuple of scalar values (min, max) to set a separate maximum or minimum. If one of the two entries is None, then there will be no clipping in that direction. | tensorflow tf.tpu.experimental.embedding.TPUEmbedding tf.tpu.experimental.embedding.TPUEmbedding ========================================== The TPUEmbedding mid level API. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.tpu.experimental.embedding.TPUEmbedding`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/TPUEmbedding) ``` tf.tpu.experimental.embedding.TPUEmbedding( feature_config: Union[tf.tpu.experimental.embedding.FeatureConfig, Iterable], optimizer: Optional[tpu_embedding_v2_utils._Optimizer], pipeline_execution_with_tensor_core: bool = False ) ``` > > **Note:** When instantiated under a TPUStrategy, this class can only be created once per call to [`tf.tpu.experimental.initialize_tpu_system`](../initialize_tpu_system). If you wish to re-initialize the embedding engine you must re-initialize the TPU as well. Doing this will clear any variables from TPU, so ensure you have checkpointed before you do this. If further instances of the class are needed, set the `initialize_tpu_embedding` argument to `False`. > This class can be used to support training large embeddings on TPU. When creating an instance of this class, you must specify the complete set of tables and features you expect to look up in those tables. See the documentation of [`tf.tpu.experimental.embedding.TableConfig`](tableconfig) and [`tf.tpu.experimental.embedding.FeatureConfig`](featureconfig) for more details on the complete set of options. We will cover the basic usage here. > > **Note:** multiple `FeatureConfig` objects can use the same `TableConfig` object, allowing different features to share the same table: > ``` table_config_one = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) table_config_two = tf.tpu.experimental.embedding.TableConfig( vocabulary_size=..., dim=...) feature_config = { 'feature_one': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_one), 'feature_two': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_one), 'feature_three': tf.tpu.experimental.embedding.FeatureConfig( table=table_config_two)} ``` There are two modes under which the `TPUEmbedding` class can be used. This depends on whether the class was created under a `TPUStrategy` scope or not. Under `TPUStrategy`, we allow access to the methods `enqueue`, `dequeue` and `apply_gradients`. We will show examples below of how to use these to train and evaluate your model. 
Under CPU, we only have access to the `embedding_tables` property, which allows access to the embedding tables so that you can use them to run model evaluation/prediction on CPU. First let's look at the `TPUStrategy` mode. Initial setup looks like: ``` strategy = tf.distribute.TPUStrategy(...) with strategy.scope(): embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, optimizer=tf.tpu.experimental.embedding.SGD(0.1)) ``` When creating a distributed dataset that is to be passed to the enqueue operation, a special input option must be specified: ``` distributed_dataset = ( strategy.distribute_datasets_from_function( dataset_fn=..., options=tf.distribute.InputOptions( experimental_fetch_to_device=False))) dataset_iterator = iter(distributed_dataset) ``` Different feature inputs can have different shapes. For dense and sparse tensors, rank 2 and above is supported. For ragged tensors, although only rank 2 is supported, you can specify the output shape to be rank 2 and above. The output shape specified in the FeatureConfig has the highest priority, the input shape passed to the build method has second priority, and the input shapes auto-detected from the input features have the lowest priority. The latter two will be converted to output shapes by omitting the last dimension. If a lower-priority source has output shapes that don't match the higher-priority one, a ValueError will be raised; a lower-priority source can only override when the higher-priority one has undefined output shapes. > > **Note:** All batches passed to the layer can have different input shapes. But these input shapes need to match the output shapes set by either `FeatureConfig` or the build method, except for ragged tensors. Only a 2D ragged tensor with an output shape set to higher dimensions is allowed, as long as the total number of elements matches. All subsequent calls must have the same input shapes. In the event that the input shapes cannot be automatically determined by the enqueue method, you must call the build method with the input shapes or provide output shapes in the `FeatureConfig` to initialize the layer. > To use this API on TPU you should use a custom training loop. Below is an example of a training and evaluation step: ``` @tf.function def training_step(dataset_iterator, num_steps): def tpu_step(tpu_features): with tf.GradientTape() as tape: activations = embedding.dequeue() tape.watch(activations) model_output = model(activations) loss = ... # some function of labels and model_output embedding_gradients = tape.gradient(loss, activations) embedding.apply_gradients(embedding_gradients) # Insert your model gradient and optimizer application here for _ in tf.range(num_steps): embedding_features, tpu_features = next(dataset_iterator) embedding.enqueue(embedding_features, training=True) strategy.run(tpu_step, args=(tpu_features, )) @tf.function def evaluation_step(dataset_iterator, num_steps): def tpu_step(tpu_features): activations = embedding.dequeue() model_output = model(activations) # Insert your evaluation code here. for _ in tf.range(num_steps): embedding_features, tpu_features = next(dataset_iterator) embedding.enqueue(embedding_features, training=False) strategy.run(tpu_step, args=(tpu_features, )) ``` > > **Note:** The calls to `enqueue` have `training` set to `True` when `embedding.apply_gradients` is used and set to `False` when `embedding.apply_gradients` is not present in the function. If you don't follow this pattern you may cause an error to be raised or the TPU may deadlock. 
> In the above examples, we assume that the user has a dataset which returns a tuple where the first element of the tuple matches the structure of what was passed as the `feature_config` argument to the object initializer. We also utilize [`tf.range`](../../../range) to get a [`tf.while_loop`](../../../while_loop) in order to increase performance. When checkpointing your model, you should include your [`tf.tpu.experimental.embedding.TPUEmbedding`](tpuembedding) object in the checkpoint. It is a trackable object and saving it will save the embedding tables and their optimizer slot variables: ``` checkpoint = tf.train.Checkpoint(model=model, embedding=embedding) checkpoint.save(...) ``` On CPU, only the `embedding_tables` property is usable. This will allow you to restore a checkpoint to the object and have access to the table variables: ``` model = model_fn(...) embedding = tf.tpu.experimental.embedding.TPUEmbedding( feature_config=feature_config, optimizer=tf.tpu.experimental.embedding.SGD(0.1)) checkpoint = tf.train.Checkpoint(model=model, embedding=embedding) checkpoint.restore(...) tables = embedding.embedding_tables ``` You can now use these tables in functions like [`tf.nn.embedding_lookup`](../../../nn/embedding_lookup) to perform your embedding lookup and pass the results to your model. | Args | | `feature_config` | A nested structure of [`tf.tpu.experimental.embedding.FeatureConfig`](featureconfig) configs. | | `optimizer` | An instance of one of [`tf.tpu.experimental.embedding.SGD`](sgd), [`tf.tpu.experimental.embedding.Adagrad`](adagrad) or [`tf.tpu.experimental.embedding.Adam`](adam). When not created under TPUStrategy, this may be set to None to avoid the creation of the optimizer slot variables, useful for optimizing memory consumption when exporting the model for serving where slot variables aren't needed. | | `pipeline_execution_with_tensor_core` | If True, the TPU embedding computations will overlap with the TensorCore computations (and hence will be one step old). Set to True for improved performance. | | Raises | | `ValueError` | If optimizer is not one of tf.tpu.experimental.embedding.(SGD, Adam or Adagrad) or None when created under a TPUStrategy. | | Attributes | | `embedding_tables` | Returns a dict of embedding tables, keyed by `TableConfig`. This property only works when the `TPUEmbedding` object is created under a non-TPU strategy. This is intended to be used for CPU-based lookup when creating a serving checkpoint. | Methods ------- ### `apply_gradients` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_v2.py#L614-L715) ``` apply_gradients( gradients, name: Optional[Text] = None ) ``` Applies the gradient update to the embedding tables. If a gradient of `None` is passed in any position of the nested structure, then a gradient update with a zero gradient is applied for that feature. For optimizers like SGD or Adagrad, this is the same as applying no update at all. For lazy Adam and other sparsely applied optimizers with decay, ensure you understand the effect of applying a zero gradient. ``` strategy = tf.distribute.TPUStrategy(...) with strategy.scope(): embedding = tf.tpu.experimental.embedding.TPUEmbedding(...) 
distributed_dataset = ( strategy.distribute_datasets_from_function( dataset_fn=..., options=tf.distribute.InputOptions( experimental_fetch_to_device=False))) dataset_iterator = iter(distributed_dataset) @tf.function def training_step(): def tpu_step(tpu_features): with tf.GradientTape() as tape: activations = embedding.dequeue() tape.watch(activations) loss = ... # some computation involving activations embedding_gradients = tape.gradient(loss, activations) embedding.apply_gradients(embedding_gradients) embedding_features, tpu_features = next(dataset_iterator) embedding.enqueue(embedding_features, training=True) strategy.run(tpu_step, args=(tpu_features, )) training_step() ``` | Args | | `gradients` | A nested structure of gradients, with structure matching the `feature_config` passed to this object. | | `name` | A name for the underlying op. | | Raises | | `RuntimeError` | If called when the object wasn't created under a `TPUStrategy` or if not built (either by manually calling build or calling enqueue). | | `ValueError` | If a non-[`tf.Tensor`](../../../tensor) non-`None` gradient is passed in, or a [`tf.Tensor`](../../../tensor) of the incorrect shape is passed in. Also if the size of any sequence in `gradients` does not match the corresponding sequence in `feature_config`. | | `TypeError` | If the type of any sequence in `gradients` does not match the corresponding sequence in `feature_config`. | ### `build` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_v2.py#L341-L409) ``` build( per_replica_input_shapes=None, per_replica_batch_size=None ) ``` Creates the underlying variables and initializes the TPU for embeddings. This method creates the underlying variables (including slot variables). If created under a TPUStrategy, this will also initialize the TPU for embeddings. This function will automatically get called by enqueue, which will try to determine your output shapes. If this fails, you must manually call this method before you call enqueue. | Args | | `per_replica_input_shapes` | A nested structure of the per-replica input shapes that matches the structure of the feature config. The input shapes should be the same as the input shape of the feature (except for ragged tensors). Note that it is fixed and the same per-replica input shapes must be used for both training and evaluation. If you want to calculate this from the global input shapes, you can use the `num_replicas_in_sync` property of your strategy object. May be set to None if not created under a TPUStrategy. | | `per_replica_batch_size` | (Deprecated) The per-replica batch size that you intend to use. Note that it is fixed and the same batch size must be used for both training and evaluation. If you want to calculate this from the global batch size, you can use the `num_replicas_in_sync` property of your strategy object. May be set to None if not created under a TPUStrategy. | | Raises | | `ValueError` | If per\_replica\_input\_shapes is inconsistent with the output shapes stored in the feature config or the output shapes derived from the input shapes are not fully defined. | | `RuntimeError` | If the TPU embedding is already initialized on TPU. | ### `dequeue` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_v2.py#L717-L792) ``` dequeue( name: Optional[Text] = None ) ``` Get the embedding results. Returns a nested structure of [`tf.Tensor`](../../../tensor) objects, matching the structure of the `feature_config` argument to the `TPUEmbedding` class. 
The output shape of the tensors is `(*output_shape, dim)`, where `dim` is the dimension of the corresponding `TableConfig`. The output shape can be set in three places: 1. The `FeatureConfig` provided in the `__init__` function. 2. The per\_replica\_output\_shapes, by directly calling the build method after initializing the TPU embedding class. 3. Auto-detected from the shapes of the input features. The priority of these places is exactly that order. ``` strategy = tf.distribute.TPUStrategy(...) with strategy.scope(): embedding = tf.tpu.experimental.embedding.TPUEmbedding(...) distributed_dataset = ( strategy.distribute_datasets_from_function( dataset_fn=..., options=tf.distribute.InputOptions( experimental_fetch_to_device=False))) dataset_iterator = iter(distributed_dataset) @tf.function def training_step(): def tpu_step(tpu_features): with tf.GradientTape() as tape: activations = embedding.dequeue() tape.watch(activations) loss = ... # some computation involving activations embedding_gradients = tape.gradient(loss, activations) embedding.apply_gradients(embedding_gradients) embedding_features, tpu_features = next(dataset_iterator) embedding.enqueue(embedding_features, training=True) strategy.run(tpu_step, args=(tpu_features, )) training_step() ``` | Args | | `name` | A name for the underlying op. | | Returns | | A nested structure of tensors, with the same structure as `feature_config` passed to this instance of the `TPUEmbedding` object. | | Raises | | `RuntimeError` | If called when the object wasn't created under a `TPUStrategy` or if not built (either by manually calling build or calling enqueue). | ### `enqueue` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_v2.py#L1107-L1347) ``` enqueue( features, weights=None, training: bool = True, name: Optional[Text] = None, device: Optional[Text] = None ) ``` Enqueues id tensors for embedding lookup. This function enqueues a structure of features to be looked up in the embedding tables. We expect that the input shapes of each of the tensors in features match the output shapes set via FeatureConfig or the build method (if any); otherwise, the output shapes will be auto-detected based on the input shapes with the max\_sequence\_length or output shape setting in the FeatureConfig. Note that the output shapes are based on the per-replica batch size. If your input dataset is batched to the global batch size and you use [`tf.distribute.TPUStrategy`](../../../distribute/tpustrategy)'s `experimental_distribute_dataset` or if you use `distribute_datasets_from_function` and batch to the per core batch size computed by the context passed to your input function, the output shapes should match automatically. The output shapes are auto-detected as follows: 1. For a dense tensor of rank 2 or above, make sure the tensor has a last dimension of 1. The output shape will be the input shape excluding the last dimension. 2. For a sparse tensor, make sure the tensor has rank 2 or above. a. If the feature config has max\_sequence\_length equal to 0 or an output shape set (the max\_sequence\_length setting will be ignored), the output shape will be the input shape excluding the last dimension. b. Otherwise, if the tensor is rank 2, the output shape will be the input shape with the last dimension set to max\_sequence\_length. If the tensor is above rank 2, the output shape will be the input shape excluding the last dimension and the last dimension of the output shape will be set to max\_sequence\_length. 3. For a ragged tensor, make sure the tensor has rank 2. a. 
If the feature config has max\_sequence\_length equal to 0 or an output shape set (the max\_sequence\_length setting will be ignored), the output shape will be the input shape excluding the last dimension. b. Otherwise, the output shape will be the input shape excluding the last dimension and the last dimension of the output shape will be set to max\_sequence\_length. ``` strategy = tf.distribute.TPUStrategy(...) with strategy.scope(): embedding = tf.tpu.experimental.embedding.TPUEmbedding(...) distributed_dataset = ( strategy.distribute_datasets_from_function( dataset_fn=..., options=tf.distribute.InputOptions( experimental_fetch_to_device=False))) dataset_iterator = iter(distributed_dataset) @tf.function def training_step(): def tpu_step(tpu_features): with tf.GradientTape() as tape: activations = embedding.dequeue() tape.watch(activations) loss = ... # some computation involving activations embedding_gradients = tape.gradient(loss, activations) embedding.apply_gradients(embedding_gradients) embedding_features, tpu_features = next(dataset_iterator) embedding.enqueue(embedding_features, training=True) strategy.run(tpu_step, args=(tpu_features,)) training_step() ``` > > **Note:** You should specify `training=True` when using `embedding.apply_gradients` as above and `training=False` when not using `embedding.apply_gradients` (e.g. for frozen embeddings or when doing evaluation). > For finer-grained control, in the above example the line ``` embedding.enqueue(embedding_features, training=True) ``` may be replaced with ``` per_core_embedding_features = strategy.experimental_local_results( embedding_features) def per_core_enqueue(ctx): core_id = ctx.replica_id_in_sync_group device = strategy.extended.worker_devices[core_id] embedding.enqueue(per_core_embedding_features[core_id], device=device) strategy.experimental_distribute_values_from_function( per_core_enqueue) ``` | Args | | `features` | A nested structure of [`tf.Tensor`](../../../tensor)s, [`tf.SparseTensor`](../../../sparse/sparsetensor)s or [`tf.RaggedTensor`](../../../raggedtensor)s, with the same structure as `feature_config`. Inputs will be downcast to [`tf.int32`](../../../../tf#int32). Only one type out of [`tf.SparseTensor`](../../../sparse/sparsetensor) or [`tf.RaggedTensor`](../../../raggedtensor) is supported per call. | | `weights` | If not `None`, a nested structure of [`tf.Tensor`](../../../tensor)s, [`tf.SparseTensor`](../../../sparse/sparsetensor)s or [`tf.RaggedTensor`](../../../raggedtensor)s, matching the above, except that the tensors should be of float type (and they will be downcast to [`tf.float32`](../../../../tf#float32)). For [`tf.SparseTensor`](../../../sparse/sparsetensor)s we assume the `indices` are the same for the parallel entries from `features` and similarly for [`tf.RaggedTensor`](../../../raggedtensor)s we assume the row\_splits are the same. | | `training` | Defaults to `True`. If `False`, enqueue the batch as an inference batch (forward pass only). Do not call `apply_gradients` when this is `False` as this may lead to a deadlock. | | `name` | A name for the underlying op. | | `device` | The device name (e.g. '/task:0/device:TPU:2') where this batch should be enqueued. This should be set if and only if features is not a [`tf.distribute.DistributedValues`](../../../distribute/distributedvalues) and enqueue is not being called inside a TPU context (e.g. inside [`TPUStrategy.run`](../../../distribute/tpustrategy#run)). 
| Raises |
| `ValueError` | When called inside a strategy.run call and the input is not taken directly from the args of the `strategy.run` call. Also if the size of any sequence in `features` does not match the corresponding sequence in `feature_config`; similarly for `weights`, if not `None`. Also if the input shapes of `features` are unequal or differ from a previous call. |
| `RuntimeError` | When called inside a strategy.run call and inside XLA control flow, or if the batch size cannot be determined and `build` was not called. |
| `TypeError` | If the type of any sequence in `features` does not match the corresponding sequence in `feature_config`; similarly for `weights`, if not `None`. |
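For reference, here is a minimal, hedged sketch of the first two ways to set the output shape described under `dequeue` above; the table and feature names are illustrative, and the `build` arguments are elided rather than part of any contract:

```
import tensorflow as tf

table = tf.tpu.experimental.embedding.TableConfig(
    vocabulary_size=1024, dim=8, name='video')

# 1. Set output_shape on the FeatureConfig passed to __init__; this takes
#    precedence over shapes passed to build and over auto detection.
feature_config = {
    'watched': tf.tpu.experimental.embedding.FeatureConfig(
        table=table, output_shape=[16])}

strategy = tf.distribute.TPUStrategy(...)
with strategy.scope():
  embedding = tf.tpu.experimental.embedding.TPUEmbedding(
      feature_config=feature_config,
      optimizer=tf.tpu.experimental.embedding.SGD(0.1))
  # 2. Alternatively, fix the per-replica shapes up front by calling build
  #    directly; otherwise they are auto detected from the first enqueued
  #    batch.
  embedding.build(...)
```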
tensorflow tf.tpu.experimental.embedding.TPUEmbeddingV0

tf.tpu.experimental.embedding.TPUEmbeddingV0
============================================

The TPUEmbedding mid level API running on TPU without an embedding accelerator.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.tpu.experimental.embedding.TPUEmbeddingV0`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/TPUEmbeddingV0)

```
tf.tpu.experimental.embedding.TPUEmbeddingV0(
    feature_config: Union[tf.tpu.experimental.embedding.FeatureConfig, Iterable],
    optimizer: Optional[tpu_embedding_v2_utils._Optimizer]
)
```

> **Note:** This mid level API is not intended for large embedding table lookup. Embedding tables will be replicated across devices rather than sharded across them. To do large embedding table lookup, please use the [`tpu.experimental.embedding.TPUEmbedding`](tpuembedding) class. This class is an alternative way to do embedding lookups when the TPU doesn't support any version of embedding feature. See `tpu.experimental.tpu_hardware_feature.embedding_feature` for a detailed explanation.

This class has to be created under a `TPUStrategy`; otherwise a `RuntimeError` will be raised.

```
strategy = tf.distribute.TPUStrategy(...)
with strategy.scope():
  embedding = tf.tpu.experimental.embedding.TPUEmbeddingV0(
      feature_config=feature_config,
      optimizer=tf.tpu.experimental.embedding.SGD(0.1))
```

When creating a distributed dataset that is to be passed to the lookup operation, a special input option must be specified:

```
distributed_dataset = (
    strategy.distribute_datasets_from_function(
        dataset_fn=...,
        options=tf.distribute.InputOptions(
            experimental_fetch_to_device=False)))
dataset_iterator = iter(distributed_dataset)
```

Below is an example of a training and evaluation step:

```
optimizer = tf.keras.optimizers.SGD(0.1)

@tf.function
def training_step(dataset_iterator, num_steps):
  def tpu_step(embedding_features):
    with tf.GradientTape() as tape:
      tape.watch(embedding.embedding_tables.values())
      activations = embedding(embedding_features)
      model_output = model(activations)
      loss = ...  # some function of labels and model_output

    embedding_gradients = tape.gradient(
        loss, embedding.embedding_tables.values())
    optimizer.apply_gradients(
        list(zip(embedding_gradients, embedding.embedding_tables.values())))
    # Insert your model gradient and optimizer application here

  for _ in tf.range(num_steps):
    strategy.run(tpu_step, args=(next(dataset_iterator), ))

@tf.function
def evaluation_step(dataset_iterator, num_steps):
  def tpu_step(embedding_features):
    activations = embedding(embedding_features)
    model_output = model(activations)
    # Insert your evaluation code here.

  for _ in tf.range(num_steps):
    strategy.run(tpu_step, args=(next(dataset_iterator), ))
```

> **Note:** The optimizer used here is a Keras optimizer. In order to make the slot variable creation stay consistent between Keras optimizers and embedding optimizers, the `slot_variable_creation_fn` argument of the embedding optimizers has to be passed with the Keras `add_slot` function. Also note that the slot names might be slightly different between them.
```
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.1)

def slot_variable_creation_fn(table, slot_names, slot_initializers):
  slots = {}
  for slot, initializer in zip(slot_names, slot_initializers):
    slots[slot] = optimizer.add_slot(table, slot, initializer)
  return slots

embedding_optimizer = tf.tpu.experimental.embedding.Adagrad(
    learning_rate=0.1,
    slot_variable_creation_fn=slot_variable_creation_fn)

# Use the embedding optimizer to create the mid level API and the Keras
# optimizer to apply gradients.
```

| Attributes |
| `embedding_tables` | Returns a dict of embedding tables, keyed by `TableConfig`. |

Methods
-------

### `build`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_base.py#L140-L145)

```
build()
```

Create variables and slot variables for TPU embeddings.

### `embedding_lookup`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_v1.py#L239-L304)

```
embedding_lookup(
    features: Any, weights: Optional[Any] = None
) -> Any
```

Apply embedding lookup on TPUs using the TensorCore.

Note that all the sparse and ragged tensors will be converted to dense tensors on CPU and then passed to the TPU to do embedding look up. Large embedding lookup is not supported by this API; use the `TPUEmbedding` mid level API instead.

| Args |
| `features` | A nested structure of Tensors, SparseTensors or RaggedTensors. |
| `weights` | A nested structure of Tensors, SparseTensors or RaggedTensors, or None for no weights. If not None, the structure must match that of the inputs, but entries are allowed to be None. |

| Returns |
| A nested structure of Tensors with the same structure as the inputs. |

### `__call__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_base.py#L147-L151)

```
__call__(
    features: Any, weights: Optional[Any] = None
) -> Any
```

Call the mid level API to do embedding lookup.

tensorflow tf.tpu.experimental.embedding.TPUEmbeddingForServing

tf.tpu.experimental.embedding.TPUEmbeddingForServing
====================================================

The TPUEmbedding mid level API running on CPU for serving.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.tpu.experimental.embedding.TPUEmbeddingForServing`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/TPUEmbeddingForServing)

```
tf.tpu.experimental.embedding.TPUEmbeddingForServing(
    feature_config: Union[tf.tpu.experimental.embedding.FeatureConfig, Iterable],
    optimizer: Optional[tpu_embedding_v2_utils._Optimizer]
)
```

> **Note:** This class is intended to be used for embedding tables that are trained on TPU and to be served on CPU. Therefore the class should only be initialized under a non-TPU strategy; otherwise an error will be raised.

You can first train your model using the `TPUEmbedding` class and save the checkpoint, then use this class to restore the checkpoint for serving.

First train a model and save the checkpoint.

```
model = model_fn(...)
strategy = tf.distribute.TPUStrategy(...)
with strategy.scope():
  embedding = tf.tpu.experimental.embedding.TPUEmbedding(
      feature_config=feature_config,
      optimizer=tf.tpu.experimental.embedding.SGD(0.1))

# Your custom training code.

checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)
checkpoint.save(...)
```

Then restore the checkpoint and do serving.

```
# Restore the model on CPU.
model = model_fn(...)
embedding = tf.tpu.experimental.embedding.TPUEmbeddingForServing(
    feature_config=feature_config,
    optimizer=tf.tpu.experimental.embedding.SGD(0.1))
checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)
checkpoint.restore(...)

result = embedding(...)
table = embedding.embedding_tables
```

> **Note:** This class can also be used to do embedding training on CPU. But it requires conversion between Keras optimizers and embedding optimizers so that the slot variables can stay consistent between them.

| Args |
| `feature_config` | A nested structure of [`tf.tpu.experimental.embedding.FeatureConfig`](featureconfig) configs. |
| `optimizer` | An instance of one of [`tf.tpu.experimental.embedding.SGD`](sgd), [`tf.tpu.experimental.embedding.Adagrad`](adagrad) or [`tf.tpu.experimental.embedding.Adam`](adam). When not created under a TPUStrategy, this may be set to None to avoid creating the optimizer slot variables; this is useful for reducing memory consumption when exporting the model for serving, where slot variables aren't needed. |

| Raises |
| `RuntimeError` | If created under TPUStrategy. |

| Attributes |
| `embedding_tables` | Returns a dict of embedding tables, keyed by `TableConfig`. |

Methods
-------

### `build`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_base.py#L140-L145)

```
build()
```

Create variables and slot variables for TPU embeddings.

### `embedding_lookup`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_for_serving.py#L152-L173)

```
embedding_lookup(
    features: Any, weights: Optional[Any] = None
) -> Any
```

Apply standard lookup ops on CPU.

| Args |
| `features` | A nested structure of [`tf.Tensor`](../../../tensor)s, [`tf.SparseTensor`](../../../sparse/sparsetensor)s or [`tf.RaggedTensor`](../../../raggedtensor)s, with the same structure as `feature_config`. Inputs will be downcast to [`tf.int32`](../../../../tf#int32). Only one type out of [`tf.SparseTensor`](../../../sparse/sparsetensor) or [`tf.RaggedTensor`](../../../raggedtensor) is supported per call. |
| `weights` | If not `None`, a nested structure of [`tf.Tensor`](../../../tensor)s, [`tf.SparseTensor`](../../../sparse/sparsetensor)s or [`tf.RaggedTensor`](../../../raggedtensor)s, matching the above, except that the tensors should be of float type (and they will be downcast to [`tf.float32`](../../../../tf#float32)). For [`tf.SparseTensor`](../../../sparse/sparsetensor)s we assume the `indices` are the same for the parallel entries from `features`, and similarly for [`tf.RaggedTensor`](../../../raggedtensor)s we assume the row\_splits are the same. |

| Returns |
| A nested structure of Tensors with the same structure as the input features. |

### `__call__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/tpu/tpu_embedding_base.py#L147-L151)

```
__call__(
    features: Any, weights: Optional[Any] = None
) -> Any
```

Call the mid level API to do embedding lookup.

tensorflow tf.tpu.experimental.HardwareFeature.EmbeddingFeature

tf.tpu.experimental.HardwareFeature.EmbeddingFeature
====================================================

Embedding feature flag strings.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.tpu.experimental.HardwareFeature.EmbeddingFeature`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/HardwareFeature/EmbeddingFeature)

UNSUPPORTED: No embedding lookup accelerator available on the TPU.

V1: Embedding lookup accelerator V1. The embedding lookup operation can only be placed at the beginning of the computation. Only one instance of the embedding lookup layer is allowed.

V2: Embedding lookup accelerator V2. The embedding lookup operation can be placed anywhere in the computation. Multiple instances of the embedding lookup layer are allowed.

| Class Variables |
| UNSUPPORTED | `<EmbeddingFeature.UNSUPPORTED: 'UNSUPPORTED'>` |
| V1 | `<EmbeddingFeature.V1: 'V1'>` |
| V2 | `<EmbeddingFeature.V2: 'V2'>` |

tensorflow tf.errors.AlreadyExistsError

tf.errors.AlreadyExistsError
============================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L312-L326)

Raised when an entity that we attempted to create already exists.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.AlreadyExistsError`](https://www.tensorflow.org/api_docs/python/tf/errors/AlreadyExistsError)

```
tf.errors.AlreadyExistsError(
    node_def, op, message, *args
)
```

For example, running an operation that saves a file (e.g. `tf.train.Saver.save`) could potentially raise this exception if an explicit filename for an existing file was passed.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.DeadlineExceededError

tf.errors.DeadlineExceededError
===============================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L280-L291)

Raised when a deadline expires before an operation could complete.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.DeadlineExceededError`](https://www.tensorflow.org/api_docs/python/tf/errors/DeadlineExceededError)

```
tf.errors.DeadlineExceededError(
    node_def, op, message, *args
)
```

This exception is not currently used.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |
tensorflow tf.errors.UnknownError

tf.errors.UnknownError
======================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L240-L254)

Unknown error.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.UnknownError`](https://www.tensorflow.org/api_docs/python/tf/errors/UnknownError)

```
tf.errors.UnknownError(
    node_def, op, message, *args
)
```

An example of where this error may be returned is if a Status value received from another address space belongs to an error-space that is not known to this address space. Also, errors raised by APIs that do not return enough error information may be converted to this error.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.PermissionDeniedError

tf.errors.PermissionDeniedError
===============================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L330-L344)

Raised when the caller does not have permission to run an operation.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.PermissionDeniedError`](https://www.tensorflow.org/api_docs/python/tf/errors/PermissionDeniedError)

```
tf.errors.PermissionDeniedError(
    node_def, op, message, *args
)
```

For example, running the `tf.WholeFileReader.read` operation could raise `PermissionDeniedError` if it receives the name of a file for which the user does not have permission to read.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.NotFoundError

tf.errors.NotFoundError
=======================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L295-L308)

Raised when a requested entity (e.g., a file or directory) was not found.
Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.NotFoundError`](https://www.tensorflow.org/api_docs/python/tf/errors/NotFoundError)

```
tf.errors.NotFoundError(
    node_def, op, message, *args
)
```

For example, running the `tf.WholeFileReader.read` operation could raise `NotFoundError` if it receives the name of a file that does not exist.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.OpError

tf.errors.OpError
=================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L46-L170)

The base class for TensorFlow exceptions.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError), [`tf.compat.v1.errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError)

```
tf.errors.OpError(
    node_def, op, message, error_code, *args
)
```

Usually, TensorFlow will raise a more specific subclass of `OpError` from the [`tf.errors`](../errors) module.

| Args |
| `node_def` | The `node_def_pb2.NodeDef` proto representing the op that failed, if known; otherwise None. |
| `op` | The `ops.Operation` that failed, if known; otherwise None. During eager execution, this field is always `None`. |
| `message` | The message string describing the failure. |
| `error_code` | The `error_codes_pb2.Code` describing the error. |
| `*args` | If not empty, it should contain a dictionary describing details about the error. This argument is inspired by Abseil payloads: https://github.com/abseil/abseil-cpp/blob/master/absl/status/status.h |

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |
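For example, a minimal sketch of catching one of the concrete subclasses eagerly and reading the attributes described above (the shape mismatch is just a convenient trigger for an `InvalidArgumentError`):

```
import tensorflow as tf

try:
  tf.reshape([1, 2, 3], (2,))  # 3 elements cannot be reshaped to (2,)
except tf.errors.InvalidArgumentError as e:
  print(e.message)     # human-readable description of the failure
  print(e.error_code)  # integer code from error_codes_pb2.Code
```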
tensorflow tf.errors.ResourceExhaustedError

tf.errors.ResourceExhaustedError
================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L363-L375)

Some resource has been exhausted.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.ResourceExhaustedError`](https://www.tensorflow.org/api_docs/python/tf/errors/ResourceExhaustedError)

```
tf.errors.ResourceExhaustedError(
    node_def, op, message, *args
)
```

For example, this error might be raised if a per-user quota is exhausted, or perhaps the entire file system is out of space.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.CancelledError

tf.errors.CancelledError
========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L216-L233)

Raised when an operation or step is cancelled.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.CancelledError`](https://www.tensorflow.org/api_docs/python/tf/errors/CancelledError)

```
tf.errors.CancelledError(
    node_def, op, message, *args
)
```

For example, a long-running operation (e.g. `tf.QueueBase.enqueue`) may be cancelled by running another operation (e.g. `tf.QueueBase.close`) or by `tf.Session.close`. A step that is running such a long-running operation will fail by raising `CancelledError`.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.InvalidArgumentError

tf.errors.InvalidArgumentError
==============================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L258-L276)

Raised when an operation receives an invalid argument.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.errors.InvalidArgumentError`](https://www.tensorflow.org/api_docs/python/tf/errors/InvalidArgumentError) ``` tf.errors.InvalidArgumentError( node_def, op, message, *args ) ``` This error is typically raised when an op receives mismatched arguments. #### Example: ``` tf.reshape([1, 2, 3], (2,)) Traceback (most recent call last): InvalidArgumentError: ... ``` | Attributes | | `error_code` | The integer error code that describes the error. | | `experimental_payloads` | A dictionary describing the details of the error. | | `message` | The error message that describes the error. | | `node_def` | The `NodeDef` proto representing the op that failed. | | `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. | tensorflow tf.errors.InternalError tf.errors.InternalError ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L452-L463) | Raised when the system experiences an internal error. Inherits From: [`OpError`](operror) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.errors.InternalError`](https://www.tensorflow.org/api_docs/python/tf/errors/InternalError) ``` tf.errors.InternalError( node_def, op, message, *args ) ``` This exception is raised when some invariant expected by the runtime has been broken. Catching this exception is not recommended. | Attributes | | `error_code` | The integer error code that describes the error. | | `experimental_payloads` | A dictionary describing the details of the error. | | `message` | The error message that describes the error. | | `node_def` | The `NodeDef` proto representing the op that failed. | | `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. | tensorflow tf.errors.OutOfRangeError tf.errors.OutOfRangeError ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L414-L429) | Raised when an operation iterates past the valid input range. Inherits From: [`OpError`](operror) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.errors.OutOfRangeError`](https://www.tensorflow.org/api_docs/python/tf/errors/OutOfRangeError) ``` tf.errors.OutOfRangeError( node_def, op, message, *args ) ``` This exception is raised in "end-of-file" conditions, such as when a `tf.QueueBase.dequeue` operation is blocked on an empty queue, and a `tf.QueueBase.close` operation executes. | Attributes | | `error_code` | The integer error code that describes the error. | | `experimental_payloads` | A dictionary describing the details of the error. | | `message` | The error message that describes the error. | | `node_def` | The `NodeDef` proto representing the op that failed. 
| | `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.DataLossError

tf.errors.DataLossError
=======================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L482-L494)

Raised when unrecoverable data loss or corruption is encountered.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.DataLossError`](https://www.tensorflow.org/api_docs/python/tf/errors/DataLossError)

```
tf.errors.DataLossError(
    node_def, op, message, *args
)
```

For example, this may be raised by running a `tf.WholeFileReader.read` operation, if the file is truncated while it is being read.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.FailedPreconditionError

tf.errors.FailedPreconditionError
=================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L379-L392)

Operation was rejected because the system is not in a state to execute it.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.FailedPreconditionError`](https://www.tensorflow.org/api_docs/python/tf/errors/FailedPreconditionError)

```
tf.errors.FailedPreconditionError(
    node_def, op, message, *args
)
```

This exception is most commonly raised when running an operation that reads a [`tf.Variable`](../variable) before it has been initialized.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.OperatorNotAllowedInGraphError

tf.errors.OperatorNotAllowedInGraphError
========================================

An error raised for an unsupported operator in Graph execution.
``` tf.errors.OperatorNotAllowedInGraphError( *args, **kwargs ) ``` For example, using a [`tf.Tensor`](../tensor) as a Python `bool` in Graph execution is not allowed. tensorflow tf.errors.AbortedError tf.errors.AbortedError ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L396-L410) | The operation was aborted, typically due to a concurrent action. Inherits From: [`OpError`](operror) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.errors.AbortedError`](https://www.tensorflow.org/api_docs/python/tf/errors/AbortedError) ``` tf.errors.AbortedError( node_def, op, message, *args ) ``` For example, running a `tf.QueueBase.enqueue` operation may raise `AbortedError` if a `tf.QueueBase.close` operation previously ran. | Attributes | | `error_code` | The integer error code that describes the error. | | `experimental_payloads` | A dictionary describing the details of the error. | | `message` | The error message that describes the error. | | `node_def` | The `NodeDef` proto representing the op that failed. | | `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. | tensorflow tf.errors.UnavailableError tf.errors.UnavailableError ========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L467-L478) | Raised when the runtime is currently unavailable. Inherits From: [`OpError`](operror) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.errors.UnavailableError`](https://www.tensorflow.org/api_docs/python/tf/errors/UnavailableError) ``` tf.errors.UnavailableError( node_def, op, message, *args ) ``` This exception is not currently used. | Attributes | | `error_code` | The integer error code that describes the error. | | `experimental_payloads` | A dictionary describing the details of the error. | | `message` | The error message that describes the error. | | `node_def` | The `NodeDef` proto representing the op that failed. | | `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. | tensorflow tf.errors.UnauthenticatedError tf.errors.UnauthenticatedError ============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L348-L359) | The request does not have valid authentication credentials. Inherits From: [`OpError`](operror) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.errors.UnauthenticatedError`](https://www.tensorflow.org/api_docs/python/tf/errors/UnauthenticatedError)

```
tf.errors.UnauthenticatedError(
    node_def, op, message, *args
)
```

This exception is not currently used.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.errors.UnimplementedError

tf.errors.UnimplementedError
============================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/errors_impl.py#L433-L448)

Raised when an operation has not been implemented.

Inherits From: [`OpError`](operror)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.errors.UnimplementedError`](https://www.tensorflow.org/api_docs/python/tf/errors/UnimplementedError)

```
tf.errors.UnimplementedError(
    node_def, op, message, *args
)
```

An operation may raise this error when passed otherwise-valid arguments that it does not currently support. For example, running the [`tf.nn.max_pool2d`](../nn/max_pool2d) operation would raise this error if pooling was requested on the batch dimension, because this is not yet supported.

| Attributes |
| `error_code` | The integer error code that describes the error. |
| `experimental_payloads` | A dictionary describing the details of the error. |
| `message` | The error message that describes the error. |
| `node_def` | The `NodeDef` proto representing the op that failed. |
| `op` | The operation that failed, if known. **Note:** If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding [`tf.Operation`](../operation) object. In that case, this will return `None`, and you should instead use the [`tf.errors.OpError.node_def`](operror#node_def) to discover information about the op. |

tensorflow tf.bitwise.bitwise_and

tf.bitwise.bitwise\_and
=======================

Elementwise computes the bitwise AND of `x` and `y`.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.bitwise.bitwise_and`](https://www.tensorflow.org/api_docs/python/tf/bitwise/bitwise_and)

```
tf.bitwise.bitwise_and(
    x, y, name=None
)
```

The result will have those bits set that are set in both `x` and `y`. The computation is performed on the underlying representations of `x` and `y`.

#### For example:

```
import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([0, 0, 3, 10], dtype=tf.float32)

  res = bitwise_ops.bitwise_and(lhs, rhs)
  tf.assert_equal(tf.cast(res, tf.float32), exp)  # TRUE
```

| Args |
| `x` | A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. |
| `y` | A `Tensor`. Must have the same type as `x`. |
| `name` | A name for the operation (optional). |

| Returns |
| A `Tensor`. Has the same type as `x`. |

tensorflow tf.bitwise.bitwise_xor

tf.bitwise.bitwise\_xor
=======================

Elementwise computes the bitwise XOR of `x` and `y`.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.bitwise.bitwise_xor`](https://www.tensorflow.org/api_docs/python/tf/bitwise/bitwise_xor)

```
tf.bitwise.bitwise_xor(
    x, y, name=None
)
```

The result will have those bits set that are different in `x` and `y`. The computation is performed on the underlying representations of `x` and `y`.

#### For example:

```
import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([5, 5, 4, 5], dtype=tf.float32)

  res = bitwise_ops.bitwise_xor(lhs, rhs)
  tf.assert_equal(tf.cast(res, tf.float32), exp)  # TRUE
```

| Args |
| `x` | A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. |
| `y` | A `Tensor`. Must have the same type as `x`. |
| `name` | A name for the operation (optional). |

| Returns |
| A `Tensor`. Has the same type as `x`. |

tensorflow tf.bitwise.bitwise_or

tf.bitwise.bitwise\_or
======================

Elementwise computes the bitwise OR of `x` and `y`.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.bitwise.bitwise_or`](https://www.tensorflow.org/api_docs/python/tf/bitwise/bitwise_or)

```
tf.bitwise.bitwise_or(
    x, y, name=None
)
```

The result will have those bits set that are set in `x`, `y` or both. The computation is performed on the underlying representations of `x` and `y`.

#### For example:

```
import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([5, 5, 7, 15], dtype=tf.float32)

  res = bitwise_ops.bitwise_or(lhs, rhs)
  tf.assert_equal(tf.cast(res, tf.float32), exp)  # TRUE
```

| Args |
| `x` | A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. |
| `y` | A `Tensor`. Must have the same type as `x`. |
| `name` | A name for the operation (optional). |

| Returns |
| A `Tensor`. Has the same type as `x`. |
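As a small combined illustration (the flag values below are arbitrary), the three ops compose naturally for bit-mask manipulation on integer tensors:

```
import tensorflow as tf

flags = tf.constant([0b0101, 0b0011], dtype=tf.int32)
mask = tf.constant(0b0100, dtype=tf.int32)

# Test a bit: nonzero where the masked bit is set.
tf.bitwise.bitwise_and(flags, mask)   # [4, 0]
# Set a bit unconditionally.
tf.bitwise.bitwise_or(flags, mask)    # [5, 7]
# Flip a bit.
tf.bitwise.bitwise_xor(flags, mask)   # [1, 7]
```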
tensorflow tf.bitwise.left_shift

tf.bitwise.left\_shift
======================

Elementwise computes the bitwise left-shift of `x` and `y`.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.bitwise.left_shift`](https://www.tensorflow.org/api_docs/python/tf/bitwise/left_shift)

```
tf.bitwise.left_shift(
    x, y, name=None
)
```

If `y` is negative, or greater than or equal to the width of `x` in bits, the result is implementation defined.

#### Example:

```
import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np

dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]

for dtype in dtype_list:
  lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)

  left_shift_result = bitwise_ops.left_shift(lhs, rhs)

  print(left_shift_result)

# This will print:
# tf.Tensor([ -32   -5 -128    0], shape=(4,), dtype=int8)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int16)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int32)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int64)

lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.left_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2,  64, 101,  32], dtype=int8)>
```

| Args |
| `x` | A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. |
| `y` | A `Tensor`. Must have the same type as `x`. |
| `name` | A name for the operation (optional). |

| Returns |
| A `Tensor`. Has the same type as `x`. |

tensorflow tf.bitwise.right_shift

tf.bitwise.right\_shift
=======================

Elementwise computes the bitwise right-shift of `x` and `y`.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.bitwise.right_shift`](https://www.tensorflow.org/api_docs/python/tf/bitwise/right_shift)

```
tf.bitwise.right_shift(
    x, y, name=None
)
```

Performs a logical shift for unsigned integer types, and an arithmetic shift for signed integer types. If `y` is negative, or greater than or equal to the width of `x` in bits, the result is implementation defined.

#### Example:

```
import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np

dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]

for dtype in dtype_list:
  lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)

  right_shift_result = bitwise_ops.right_shift(lhs, rhs)

  print(right_shift_result)

# This will print:
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64)

lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.right_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2,  64, 101,  32], dtype=int8)>
```

| Args |
| `x` | A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. |
| `y` | A `Tensor`. Must have the same type as `x`. |
| `name` | A name for the operation (optional). |

| Returns |
| A `Tensor`. Has the same type as `x`. |
tensorflow tf.bitwise.invert

tf.bitwise.invert
=================

Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.bitwise.invert`](https://www.tensorflow.org/api_docs/python/tf/bitwise/invert)

```
tf.bitwise.invert(
    x, name=None
)
```

Flip each bit of supported types. For example, type `int8` (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101. This operation is performed on each element of the tensor argument `x`.

#### Example:

```
import tensorflow as tf
from tensorflow.python.ops import bitwise_ops

# flip 2 (00000010) to -3 (11111101)
tf.assert_equal(-3, bitwise_ops.invert(2))

dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]
inputs = [0, 5, 3, 14]
for dtype in dtype_list:
  # Because of issues with negative numbers, let's test this indirectly.
  # 1. invert(a) and a = 0
  # 2. invert(a) or a = invert(0)
  input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype)
  not_a_and_a, not_a_or_a, not_0 = [
      bitwise_ops.bitwise_and(input_tensor, bitwise_ops.invert(input_tensor)),
      bitwise_ops.bitwise_or(input_tensor, bitwise_ops.invert(input_tensor)),
      bitwise_ops.invert(tf.constant(0, dtype=dtype))]

  expected = tf.constant([0, 0, 0, 0], dtype=tf.float32)
  tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected)

  expected = tf.cast([not_0] * 4, tf.float32)
  tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected)

  # For unsigned dtypes let's also check the result directly.
  if dtype.is_unsigned:
    inverted = bitwise_ops.invert(input_tensor)
    expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32)
    tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32))
```

| Args |
| `x` | A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. |
| `name` | A name for the operation (optional). |

| Returns |
| A `Tensor`. Has the same type as `x`. |

tensorflow tf.random.stateless_normal

tf.random.stateless\_normal
===========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateless_random_ops.py#L680-L724)

Outputs deterministic pseudorandom values from a normal distribution.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.stateless_normal`](https://www.tensorflow.org/api_docs/python/tf/random/stateless_normal)

```
tf.random.stateless_normal(
    shape,
    seed,
    mean=0.0,
    stddev=1.0,
    dtype=tf.dtypes.float32,
    name=None,
    alg='auto_select'
)
```

This is a stateless version of [`tf.random.normal`](normal): if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware.

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) |
| `mean` | A 0-D Tensor or Python value of type `dtype`. The mean of the normal distribution. |
| `stddev` | A 0-D Tensor or Python value of type `dtype`.
The standard deviation of the normal distribution. | | `dtype` | The float type of the output: `float16`, `bfloat16`, `float32`, `float64`. Defaults to `float32`. | | `name` | A name for the operation (optional). | | `alg` | The RNG algorithm used to generate the random numbers. See [`tf.random.stateless_uniform`](stateless_uniform) for a detailed explanation. | | Returns | | A tensor of the specified shape filled with random normal values. | tensorflow tf.random.Algorithm tf.random.Algorithm =================== An enumeration. #### View aliases **Main aliases** [`tf.random.experimental.Algorithm`](https://www.tensorflow.org/api_docs/python/tf/random/Algorithm) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.random.Algorithm`](https://www.tensorflow.org/api_docs/python/tf/random/Algorithm), [`tf.compat.v1.random.experimental.Algorithm`](https://www.tensorflow.org/api_docs/python/tf/random/Algorithm) | Class Variables | | AUTO\_SELECT | `<Algorithm.AUTO_SELECT: 3>` | | PHILOX | `<Algorithm.PHILOX: 1>` | | THREEFRY | `<Algorithm.THREEFRY: 2>` | tensorflow tf.random.categorical tf.random.categorical ===================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/random_ops.py#L500-L527) | Draws samples from a categorical distribution. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.random.categorical`](https://www.tensorflow.org/api_docs/python/tf/random/categorical) ``` tf.random.categorical( logits, num_samples, dtype=None, seed=None, name=None ) ``` #### Example: ``` # samples has shape [1, 5], where each value is either 0 or 1 with equal # probability. samples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5) ``` | Args | | `logits` | 2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` represents the unnormalized log-probabilities for all classes. | | `num_samples` | 0-D. Number of independent samples to draw for each row slice. | | `dtype` | The integer type of the output: `int32` or `int64`. Defaults to `int64`. | | `seed` | A Python integer. Used to create a random seed for the distribution. See [`tf.random.set_seed`](set_seed) for behavior. | | `name` | Optional name for the operation. | | Returns | | The drawn samples of shape `[batch_size, num_samples]`. | tensorflow tf.random.stateless_binomial tf.random.stateless\_binomial ============================= Outputs deterministic pseudorandom values from a binomial distribution. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.random.stateless_binomial`](https://www.tensorflow.org/api_docs/python/tf/random/stateless_binomial) ``` tf.random.stateless_binomial( shape, seed, counts, probs, output_dtype=tf.dtypes.int32, name=None ) ``` The generated values follow a binomial distribution with specified count and probability of success parameters. This is a stateless version of [`tf.random.Generator.binomial`](generator#binomial): if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. #### Example: ``` counts = [10., 20.] # Probability of success. 
probs = [0.8]

binomial_samples = tf.random.stateless_binomial(
    shape=[2], seed=[123, 456], counts=counts, probs=probs)

counts = ...  # Shape [3, 1, 2]
probs = ...   # Shape [1, 4, 2]
shape = [3, 4, 3, 4, 2]

# Sample shape will be [3, 4, 3, 4, 2]
binomial_samples = tf.random.stateless_binomial(
    shape=shape, seed=[123, 456], counts=counts, probs=probs)
```

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) |
| `counts` | Tensor. The counts of the binomial distribution. Must be broadcastable with `probs`, and broadcastable with the rightmost dimensions of `shape`. |
| `probs` | Tensor. The probability of success for the binomial distribution. Must be broadcastable with `counts` and broadcastable with the rightmost dimensions of `shape`. |
| `output_dtype` | The type of the output. Default: tf.int32 |
| `name` | A name for the operation (optional). |

| Returns |
| `samples` | A Tensor of the specified shape filled with random binomial values. For each i, each samples[..., i] is an independent draw from the binomial distribution on counts[i] trials with probability of success probs[i]. |

tensorflow tf.random.set_seed

tf.random.set\_seed
===================

Sets the global random seed.

```
tf.random.set_seed(
    seed
)
```

Operations that rely on a random seed actually derive it from two seeds: the global and operation-level seeds. This sets the global seed. Its interactions with operation-level seeds are as follows:

1. If neither the global seed nor the operation seed is set: A randomly picked seed is used for this op.
2. If the global seed is set, but the operation seed is not: The system deterministically picks an operation seed in conjunction with the global seed so that it gets a unique random sequence. Within the same version of TensorFlow and user code, this sequence is deterministic. However across different versions, this sequence might change. If the code depends on particular seeds to work, specify both global and operation-level seeds explicitly.
3. If the operation seed is set, but the global seed is not set: A default global seed and the specified operation seed are used to determine the random sequence.
4. If both the global and the operation seed are set: Both seeds are used in conjunction to determine the random sequence.

To illustrate the user-visible effects, consider these examples:

If neither the global seed nor the operation seed is set, we get different results for every call to the random op and every re-run of the program:

```
print(tf.random.uniform([1]))  # generates 'A1'
print(tf.random.uniform([1]))  # generates 'A2'
```

(now close the program and run it again)

```
print(tf.random.uniform([1]))  # generates 'A3'
print(tf.random.uniform([1]))  # generates 'A4'
```

If the global seed is set but the operation seed is not set, we get different results for every call to the random op, but the same sequence for every re-run of the program:

```
tf.random.set_seed(1234)
print(tf.random.uniform([1]))  # generates 'A1'
print(tf.random.uniform([1]))  # generates 'A2'
```

(now close the program and run it again)

```
tf.random.set_seed(1234)
print(tf.random.uniform([1]))  # generates 'A1'
print(tf.random.uniform([1]))  # generates 'A2'
```

The reason we get 'A2' instead of 'A1' on the second call of [`tf.random.uniform`](uniform) above is because the second call uses a different operation seed.
Note that [`tf.function`](../function) acts like a re-run of a program in this case. When the global seed is set but operation seeds are not set, the sequence of random numbers is the same for each [`tf.function`](../function). For example:

```
tf.random.set_seed(1234)

@tf.function
def f():
  a = tf.random.uniform([1])
  b = tf.random.uniform([1])
  return a, b

@tf.function
def g():
  a = tf.random.uniform([1])
  b = tf.random.uniform([1])
  return a, b

print(f())  # prints '(A1, A2)'
print(g())  # prints '(A1, A2)'
```

If the operation seed is set, we get different results for every call to the random op, but the same sequence for every re-run of the program:

```
print(tf.random.uniform([1], seed=1))  # generates 'A1'
print(tf.random.uniform([1], seed=1))  # generates 'A2'
```

(now close the program and run it again)

```
print(tf.random.uniform([1], seed=1))  # generates 'A1'
print(tf.random.uniform([1], seed=1))  # generates 'A2'
```

The reason we get 'A2' instead of 'A1' on the second call of [`tf.random.uniform`](uniform) above is because the same [`tf.random.uniform`](uniform) kernel (i.e. internal representation) is used by TensorFlow for all calls of it with the same arguments, and the kernel maintains an internal counter which is incremented every time it is executed, generating different results. Calling [`tf.random.set_seed`](set_seed) will reset any such counters:

```
tf.random.set_seed(1234)
print(tf.random.uniform([1], seed=1))  # generates 'A1'
print(tf.random.uniform([1], seed=1))  # generates 'A2'
tf.random.set_seed(1234)
print(tf.random.uniform([1], seed=1))  # generates 'A1'
print(tf.random.uniform([1], seed=1))  # generates 'A2'
```

When multiple identical random ops are wrapped in a [`tf.function`](../function), their behaviors change because the ops no longer share the same counter. For example:

```
@tf.function
def foo():
  a = tf.random.uniform([1], seed=1)
  b = tf.random.uniform([1], seed=1)
  return a, b
print(foo())  # prints '(A1, A1)'
print(foo())  # prints '(A2, A2)'

@tf.function
def bar():
  a = tf.random.uniform([1])
  b = tf.random.uniform([1])
  return a, b
print(bar())  # prints '(A1, A2)'
print(bar())  # prints '(A3, A4)'
```

The second call of `foo` returns '(A2, A2)' instead of '(A1, A1)' because [`tf.random.uniform`](uniform) maintains an internal counter. If you want `foo` to return '(A1, A1)' every time, use the stateless random ops such as [`tf.random.stateless_uniform`](stateless_uniform). Also see [`tf.random.experimental.Generator`](generator) for a new set of stateful random ops that use external variables to manage their states.

| Args |
| `seed` | integer. |

tensorflow tf.random.shuffle

tf.random.shuffle
=================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/random_ops.py#L327-L357)

Randomly shuffles a tensor along its first dimension.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.shuffle`](https://www.tensorflow.org/api_docs/python/tf/random/shuffle), [`tf.compat.v1.random_shuffle`](https://www.tensorflow.org/api_docs/python/tf/random/shuffle)

```
tf.random.shuffle(
    value, seed=None, name=None
)
```

The tensor is shuffled along dimension 0, such that each `value[j]` is mapped to one and only one `output[i]`. For example, a mapping that might occur for a 3x2 tensor is:

```
[[1, 2],       [[5, 6],
 [3, 4],  ==>   [1, 2],
 [5, 6]]        [3, 4]]
```

| Args |
| `value` | A Tensor to be shuffled. |
| `seed` | A Python integer.
Used to create a random seed for the distribution. See [`tf.random.set_seed`](set_seed) for behavior. |
| `name` | A name for the operation (optional). |

| Returns |
| A tensor of the same shape and type as `value`, shuffled along its first dimension. |

tensorflow tf.random.get_global_generator

tf.random.get\_global\_generator
================================

Retrieves the global generator.

#### View aliases

**Main aliases**

[`tf.random.experimental.get_global_generator`](https://www.tensorflow.org/api_docs/python/tf/random/get_global_generator)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.experimental.get_global_generator`](https://www.tensorflow.org/api_docs/python/tf/random/get_global_generator), [`tf.compat.v1.random.get_global_generator`](https://www.tensorflow.org/api_docs/python/tf/random/get_global_generator)

```
tf.random.get_global_generator()
```

This function will create the global generator the first time it is called, and the generator will be placed on the default device at that time, so one needs to be careful about where this function is first called: a generator placed on a less-ideal device will incur a performance penalty.

| Returns |
| The global [`tf.random.Generator`](generator) object. |

tensorflow tf.random.stateless_truncated_normal

tf.random.stateless\_truncated\_normal
======================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateless_random_ops.py#L727-L774) |

Outputs deterministic pseudorandom values, truncated normally distributed.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.stateless_truncated_normal`](https://www.tensorflow.org/api_docs/python/tf/random/stateless_truncated_normal)

```
tf.random.stateless_truncated_normal(
    shape, seed, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None,
    alg='auto_select'
)
```

This is a stateless version of [`tf.random.truncated_normal`](truncated_normal): if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware.

The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) |
| `mean` | A 0-D Tensor or Python value of type `dtype`. The mean of the truncated normal distribution. |
| `stddev` | A 0-D Tensor or Python value of type `dtype`. The standard deviation of the normal distribution, before truncation. |
| `dtype` | The type of the output. |
| `name` | A name for the operation (optional). |
| `alg` | The RNG algorithm used to generate the random numbers. See [`tf.random.stateless_uniform`](stateless_uniform) for a detailed explanation. |

| Returns |
| A tensor of the specified shape filled with random truncated normal values. |
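For example, a minimal sketch (the seed, shapes and distribution parameters here are arbitrary) showing determinism in the seed and the two-standard-deviation truncation:

```
a = tf.random.stateless_truncated_normal(shape=[2, 3], seed=[1, 2])
b = tf.random.stateless_truncated_normal(shape=[2, 3], seed=[1, 2])
# Same seed and shape => identical pseudorandom values (on the same hardware).
assert tf.reduce_all(a == b)

c = tf.random.stateless_truncated_normal(
    shape=[1000], seed=[1, 2], mean=5.0, stddev=2.0)
# Values more than 2 standard deviations from the mean are re-drawn.
assert tf.reduce_all(tf.abs(c - 5.0) <= 2.0 * 2.0)
```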
tensorflow tf.random.uniform

tf.random.uniform
=================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/random_ops.py#L212-L321) |

Outputs random values from a uniform distribution.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.uniform`](https://www.tensorflow.org/api_docs/python/tf/random/uniform), [`tf.compat.v1.random_uniform`](https://www.tensorflow.org/api_docs/python/tf/random/uniform)

```
tf.random.uniform(
    shape, minval=0, maxval=None, dtype=tf.dtypes.float32, seed=None, name=None
)
```

The generated values follow a uniform distribution in the range `[minval, maxval)`. The lower bound `minval` is included in the range, while the upper bound `maxval` is excluded.

For floats, the default range is `[0, 1)`. For ints, at least `maxval` must be specified explicitly.

In the integer case, the random integers are slightly biased unless `maxval - minval` is an exact power of two. The bias is small for values of `maxval - minval` significantly smaller than the range of the output (either `2**32` or `2**64`).

#### Examples:

```
tf.random.uniform(shape=[2])
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([..., ...], dtype=float32)>
tf.random.uniform(shape=[], minval=-1., maxval=0.)
<tf.Tensor: shape=(), dtype=float32, numpy=-...>
tf.random.uniform(shape=[], minval=5, maxval=10, dtype=tf.int64)
<tf.Tensor: shape=(), dtype=int64, numpy=...>
```

The `seed` argument produces a deterministic sequence of tensors across multiple calls. To repeat that sequence, use [`tf.random.set_seed`](set_seed):

```
tf.random.set_seed(5)
tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)
<tf.Tensor: shape=(), dtype=int32, numpy=2>
tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)
<tf.Tensor: shape=(), dtype=int32, numpy=0>
tf.random.set_seed(5)
tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)
<tf.Tensor: shape=(), dtype=int32, numpy=2>
tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)
<tf.Tensor: shape=(), dtype=int32, numpy=0>
```

If [`tf.random.set_seed`](set_seed) is not called but a `seed` argument is specified, small changes to function graphs or previously executed operations will change the returned value. See [`tf.random.set_seed`](set_seed) for details.

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| `minval` | A Tensor or Python value of type `dtype`, broadcastable with `shape` (for integer types, broadcasting is not supported, so it needs to be a scalar). The lower bound on the range of random values to generate (inclusive). Defaults to 0. |
| `maxval` | A Tensor or Python value of type `dtype`, broadcastable with `shape` (for integer types, broadcasting is not supported, so it needs to be a scalar). The upper bound on the range of random values to generate (exclusive). Defaults to 1 if `dtype` is floating point. |
| `dtype` | The type of the output: `float16`, `bfloat16`, `float32`, `float64`, `int32`, or `int64`. Defaults to `float32`. |
| `seed` | A Python integer. Used in combination with [`tf.random.set_seed`](set_seed) to create a reproducible sequence of tensors across multiple calls. |
| `name` | A name for the operation (optional). |

| Returns |
| A tensor of the specified shape filled with random uniform values. |

| Raises |
| `ValueError` | If `dtype` is integral and `maxval` is not specified.
| tensorflow tf.random.stateless_poisson tf.random.stateless\_poisson ============================ Outputs deterministic pseudorandom values from a Poisson distribution. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.random.stateless_poisson`](https://www.tensorflow.org/api_docs/python/tf/random/stateless_poisson) ``` tf.random.stateless_poisson( shape, seed, lam, dtype=tf.dtypes.int32, name=None ) ``` The generated values follow a Poisson distribution with specified rate parameter. This is a stateless version of [`tf.random.poisson`](poisson): if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware, but may change between versions of TensorFlow or on non-CPU/GPU hardware. A slight difference exists in the interpretation of the `shape` parameter between `stateless_poisson` and `poisson`: in `poisson`, the `shape` is always prepended to the shape of `lam`; whereas in `stateless_poisson` the shape of `lam` must match the trailing dimensions of `shape`. #### Example: ``` samples = tf.random.stateless_poisson([10, 2], seed=[12, 34], lam=[5, 15]) # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents # the samples drawn from each distribution samples = tf.random.stateless_poisson([7, 5, 2], seed=[12, 34], lam=[5, 15]) # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] # represents the 7x5 samples drawn from each of the two distributions rate = tf.constant([[1.], [3.], [5.]]) samples = tf.random.stateless_poisson([30, 3, 1], seed=[12, 34], lam=rate) # samples has shape [30, 3, 1], with 30 samples each of 3x1 distributions. ``` | Args | | `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | `lam` | Tensor. The rate parameter "lambda" of the Poisson distribution. Shape must match the rightmost dimensions of `shape`. | | `dtype` | Dtype of the samples (int or float dtypes are permissible, as samples are discrete). Default: int32. | | `name` | A name for the operation (optional). | | Returns | | `samples` | A Tensor of the specified shape filled with random Poisson values. For each i, each `samples[..., i]` is an independent draw from the Poisson distribution with rate `lam[i]`. | tensorflow tf.random.gamma tf.random.gamma =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/random_ops.py#L557-L648) | Draws `shape` samples from each of the given Gamma distribution(s). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.random.gamma`](https://www.tensorflow.org/api_docs/python/tf/random/gamma), [`tf.compat.v1.random_gamma`](https://www.tensorflow.org/api_docs/python/tf/random/gamma) ``` tf.random.gamma( shape, alpha, beta=None, dtype=tf.dtypes.float32, seed=None, name=None ) ``` `alpha` is the shape parameter describing the distribution(s), and `beta` is the inverse scale parameter(s). > > **Note:** Because internal calculations are done using `float64` and casting has `floor` semantics, we must manually map zero outcomes to the smallest possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. 
This means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise should. This bias can only happen for small values of `alpha`, i.e., `alpha << 1` or large values of `beta`, i.e., `beta >> 1`.
>

The samples are differentiable w.r.t. alpha and beta. The derivatives are computed using the approach described in (Figurnov et al., 2018).

#### Example:

```
samples = tf.random.gamma([10], [0.5, 1.5])
# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
# the samples drawn from each distribution

samples = tf.random.gamma([7, 5], [0.5, 1.5])
# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
# represents the 7x5 samples drawn from each of the two distributions

alpha = tf.constant([[1.],[3.],[5.]])
beta = tf.constant([[3., 4.]])
samples = tf.random.gamma([30], alpha=alpha, beta=beta)
# samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.

loss = tf.reduce_mean(tf.square(samples))
dloss_dalpha, dloss_dbeta = tf.gradients(loss, [alpha, beta])
# unbiased stochastic derivatives of the loss function
alpha.shape == dloss_dalpha.shape  # True
beta.shape == dloss_dbeta.shape  # True
```

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output samples to be drawn per alpha/beta-parameterized distribution. |
| `alpha` | A Tensor or Python value or N-D array of type `dtype`. `alpha` provides the shape parameter(s) describing the gamma distribution(s) to sample. Must be broadcastable with `beta`. |
| `beta` | A Tensor or Python value or N-D array of type `dtype`. Defaults to 1. `beta` provides the inverse scale parameter(s) of the gamma distribution(s) to sample. Must be broadcastable with `alpha`. |
| `dtype` | The type of alpha, beta, and the output: `float16`, `float32`, or `float64`. |
| `seed` | A Python integer. Used to create a random seed for the distributions. See [`tf.random.set_seed`](set_seed) for behavior. |
| `name` | Optional name for the operation. |

| Returns |
| `samples` | a `Tensor` of shape `tf.concat([shape, tf.shape(alpha + beta)], axis=0)` with values of type `dtype`. |

#### References:

Implicit Reparameterization Gradients: [Figurnov et al., 2018](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))

tensorflow tf.random.stateless_parameterized_truncated_normal

tf.random.stateless\_parameterized\_truncated\_normal
=====================================================

Outputs random values from a truncated normal distribution.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.stateless_parameterized_truncated_normal`](https://www.tensorflow.org/api_docs/python/tf/random/stateless_parameterized_truncated_normal)

```
tf.random.stateless_parameterized_truncated_normal(
    shape, seed, means=0.0, stddevs=1.0, minvals=-2.0, maxvals=2.0, name=None
)
```

The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

#### Examples:

Sample from a truncated normal, with differing shape parameters that broadcast:

```
means = 0.
stddevs = tf.math.exp(tf.random.uniform(shape=[2, 3]))
minvals = [-1., -2., -1000.]
maxvals = [[10000.], [1.]]
y = tf.random.stateless_parameterized_truncated_normal(
    shape=[10, 2, 3], seed=[7, 17], means=means, stddevs=stddevs,
    minvals=minvals, maxvals=maxvals)
y.shape
TensorShape([10, 2, 3])
```

| Args |
| `shape` | A 1-D integer `Tensor` or Python array. The shape of the output tensor. |
| `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) |
| `means` | A `Tensor` or Python value of type `dtype`. The mean of the truncated normal distribution. This must broadcast with `stddevs`, `minvals` and `maxvals`, and the broadcasted shape must be dominated by `shape`. |
| `stddevs` | A `Tensor` or Python value of type `dtype`. The standard deviation of the truncated normal distribution. This must broadcast with `means`, `minvals` and `maxvals`, and the broadcasted shape must be dominated by `shape`. |
| `minvals` | A `Tensor` or Python value of type `dtype`. The minimum value of the truncated normal distribution. This must broadcast with `means`, `stddevs` and `maxvals`, and the broadcasted shape must be dominated by `shape`. |
| `maxvals` | A `Tensor` or Python value of type `dtype`. The maximum value of the truncated normal distribution. This must broadcast with `means`, `stddevs` and `minvals`, and the broadcasted shape must be dominated by `shape`. |
| `name` | A name for the operation (optional). |

| Returns |
| A tensor of the specified shape filled with random truncated normal values. |

tensorflow tf.random.stateless_categorical

tf.random.stateless\_categorical
================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateless_random_ops.py#L821-L861) |

Draws deterministic pseudorandom samples from a categorical distribution.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.stateless_categorical`](https://www.tensorflow.org/api_docs/python/tf/random/stateless_categorical)

```
tf.random.stateless_categorical(
    logits, num_samples, seed, dtype=tf.dtypes.int64, name=None
)
```

This is a stateless version of [`tf.random.categorical`](categorical): if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware.

#### Example:

```
# samples has shape [1, 5], where each value is either 0 or 1 with equal
# probability.
samples = tf.random.stateless_categorical(
    tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17])
```

| Args |
| `logits` | 2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` represents the unnormalized log-probabilities for all classes. |
| `num_samples` | 0-D. Number of independent samples to draw for each row slice. |
| `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) |
| `dtype` | The integer type of the output: `int32` or `int64`. Defaults to `int64`. |
| `name` | Optional name for the operation. |

| Returns |
| The drawn samples of shape `[batch_size, num_samples]`. |
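As with the other stateless ops, the draws are a deterministic function of the seed; a minimal sketch (the logits and seed here are arbitrary):

```
logits = tf.math.log([[0.1, 0.6, 0.3]])
s1 = tf.random.stateless_categorical(logits, 4, seed=[3, 5])
s2 = tf.random.stateless_categorical(logits, 4, seed=[3, 5])
# Same logits, sample count and seed => identical draws.
assert tf.reduce_all(s1 == s2)
# s1 has shape [1, 4] and dtype int64, with values in {0, 1, 2}.
```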
tensorflow tf.random.truncated_normal

tf.random.truncated\_normal
===========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/random_ops.py#L156-L205) |

Outputs random values from a truncated normal distribution.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.truncated_normal`](https://www.tensorflow.org/api_docs/python/tf/random/truncated_normal), [`tf.compat.v1.truncated_normal`](https://www.tensorflow.org/api_docs/python/tf/random/truncated_normal)

```
tf.random.truncated_normal(
    shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, seed=None, name=None
)
```

The values are drawn from a normal distribution with specified mean and standard deviation, discarding and re-drawing any samples that are more than two standard deviations from the mean.

#### Examples:

```
tf.random.truncated_normal(shape=[2])
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([..., ...], dtype=float32)>
```

```
tf.random.truncated_normal(shape=[2], mean=3, stddev=1, dtype=tf.float32)
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([..., ...], dtype=float32)>
```

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| `mean` | A 0-D Tensor or Python value of type `dtype`. The mean of the truncated normal distribution. |
| `stddev` | A 0-D Tensor or Python value of type `dtype`. The standard deviation of the normal distribution, before truncation. |
| `dtype` | The type of the output. Restricted to floating-point types: [`tf.half`](../../tf#half), `tf.float32`, [`tf.double`](../../tf#double), etc. |
| `seed` | A Python integer. Used to create a random seed for the distribution. See [`tf.random.set_seed`](set_seed) for more information. |
| `name` | A name for the operation (optional). |

| Returns |
| A tensor of the specified shape filled with random truncated normal values. |

tensorflow tf.random.log_uniform_candidate_sampler

tf.random.log\_uniform\_candidate\_sampler
==========================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/candidate_sampling_ops.py#L87-L150) |

Samples a set of classes using a log-uniform (Zipfian) base distribution.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.nn.log_uniform_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/log_uniform_candidate_sampler), [`tf.compat.v1.random.log_uniform_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/log_uniform_candidate_sampler)

```
tf.random.log_uniform_candidate_sampler(
    true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None
)
```

This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.

The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.

The base distribution for this operation is an approximately log-uniform or Zipfian distribution:

`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`

This sampler is useful when the target classes approximately follow such a distribution - for example, if the classes represent words in a lexicon sorted in decreasing order of frequency.
If your classes are not ordered by decreasing frequency, do not use this op.

In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately.

| Args |
| `true_classes` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. |
| `num_true` | An `int`. The number of target classes per training example. |
| `num_sampled` | An `int`. The number of classes to randomly sample. |
| `unique` | A `bool`. Determines whether all sampled classes in a batch are unique. |
| `range_max` | An `int`. The number of possible classes. |
| `seed` | An `int`. An operation-specific seed. Default is 0. |
| `name` | A name for the operation (optional). |

| Returns |
| `sampled_candidates` | A tensor of type `int64` and shape `[num_sampled]`. The sampled classes. |
| `true_expected_count` | A tensor of type `float`. Same shape as `true_classes`. The expected counts under the sampling distribution of each of `true_classes`. |
| `sampled_expected_count` | A tensor of type `float`. Same shape as `sampled_candidates`. The expected counts under the sampling distribution of each of `sampled_candidates`. |

tensorflow tf.random.all_candidate_sampler

tf.random.all\_candidate\_sampler
=================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/candidate_sampling_ops.py#L308-L341) |

Generate the set of all classes.

#### View aliases

**Main aliases**

[`tf.nn.all_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/all_candidate_sampler)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.nn.all_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/all_candidate_sampler), [`tf.compat.v1.random.all_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/all_candidate_sampler)

```
tf.random.all_candidate_sampler(
    true_classes, num_true, num_sampled, unique, seed=None, name=None
)
```

Deterministically generates and returns the set of all possible classes. This op is intended for testing purposes; there is no need to use it in practice, since you might as well use full softmax or full logistic regression.

| Args |
| `true_classes` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. |
| `num_true` | An `int`. The number of target classes per training example. |
| `num_sampled` | An `int`. The number of possible classes. |
| `unique` | A `bool`. Ignored. |
| `seed` | An `int`. An operation-specific seed. Default is 0. |
| `name` | A name for the operation (optional). |

| Returns |
| `sampled_candidates` | A tensor of type `int64` and shape `[num_sampled]`. This operation deterministically returns the entire range `[0, num_sampled)`. |
| `true_expected_count` | A tensor of type `float`. Same shape as `true_classes`. The expected counts under the sampling distribution of each of `true_classes`. All returned values are 1.0. |
| `sampled_expected_count` | A tensor of type `float`. Same shape as `sampled_candidates`.
The expected counts under the sampling distribution of each of `sampled_candidates`. All returned values are 1.0. |
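All of the candidate-sampler ops above share the same three-tensor return interface; a minimal sketch using the log-uniform sampler (the class IDs, sizes and sampler choice here are arbitrary):

```
# One training example with two target classes, out of 10 possible classes.
true_classes = tf.constant([[0, 4]], dtype=tf.int64)
sampled, true_expected, sampled_expected = tf.random.log_uniform_candidate_sampler(
    true_classes=true_classes, num_true=2, num_sampled=3, unique=True,
    range_max=10)
# sampled: shape [3], int64 class IDs drawn from the Zipfian base distribution.
# true_expected: shape [1, 2], expected counts for the target classes.
# sampled_expected: shape [3], expected counts for the sampled classes.
```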
tensorflow tf.random.learned_unigram_candidate_sampler tf.random.learned\_unigram\_candidate\_sampler ============================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/candidate_sampling_ops.py#L153-L211) | Samples a set of classes from a distribution learned during training. #### View aliases **Main aliases** [`tf.nn.learned_unigram_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/learned_unigram_candidate_sampler) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.nn.learned_unigram_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/learned_unigram_candidate_sampler), [`tf.compat.v1.random.learned_unigram_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/learned_unigram_candidate_sampler) ``` tf.random.learned_unigram_candidate_sampler( true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None ) ``` This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`. The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution. The base distribution for this operation is constructed on the fly during training. It is a unigram distribution over the target classes seen so far during training. Every integer in `[0, range_max)` begins with a weight of 1, and is incremented by 1 each time it is seen as a target class. The base distribution is not saved to checkpoints, so it is reset when the model is reloaded. In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately. | Args | | `true_classes` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. | | `num_true` | An `int`. The number of target classes per training example. | | `num_sampled` | An `int`. The number of classes to randomly sample. | | `unique` | A `bool`. Determines whether all sampled classes in a batch are unique. | | `range_max` | An `int`. The number of possible classes. | | `seed` | An `int`. An operation-specific seed. Default is 0. | | `name` | A name for the operation (optional). | | Returns | | `sampled_candidates` | A tensor of type `int64` and shape `[num_sampled]`. The sampled classes. | | `true_expected_count` | A tensor of type `float`. Same shape as `true_classes`. The expected counts under the sampling distribution of each of `true_classes`. | | `sampled_expected_count` | A tensor of type `float`. Same shape as `sampled_candidates`. The expected counts under the sampling distribution of each of `sampled_candidates`. | tensorflow tf.random.stateless_gamma tf.random.stateless\_gamma ========================== Outputs deterministic pseudorandom values from a gamma distribution. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.random.stateless_gamma`](https://www.tensorflow.org/api_docs/python/tf/random/stateless_gamma) ``` tf.random.stateless_gamma( shape, seed, alpha, beta=None, dtype=tf.dtypes.float32, name=None ) ``` The generated values follow a gamma distribution with specified concentration (`alpha`) and inverse scale (`beta`) parameters. This is a stateless version of [`tf.random.gamma`](gamma): if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. A slight difference exists in the interpretation of the `shape` parameter between `stateless_gamma` and `gamma`: in `gamma`, the `shape` is always prepended to the shape of the broadcast of `alpha` with `beta`; whereas in `stateless_gamma` the `shape` parameter must always encompass the shapes of each of `alpha` and `beta` (which must broadcast together to match the trailing dimensions of `shape`). > > **Note:** Because internal calculations are done using `float64` and casting has `floor` semantics, we must manually map zero outcomes to the smallest possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise should. This bias can only happen for small values of `alpha`, i.e., `alpha << 1` or large values of `beta`, i.e., `beta >> 1`. > The samples are differentiable w.r.t. alpha and beta. The derivatives are computed using the approach described in (Figurnov et al., 2018). #### Example: ``` samples = tf.random.stateless_gamma([10, 2], seed=[12, 34], alpha=[0.5, 1.5]) # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents # the samples drawn from each distribution samples = tf.random.stateless_gamma([7, 5, 2], seed=[12, 34], alpha=[.5, 1.5]) # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] # represents the 7x5 samples drawn from each of the two distributions alpha = tf.constant([[1.], [3.], [5.]]) beta = tf.constant([[3., 4.]]) samples = tf.random.stateless_gamma( [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta) # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions. with tf.GradientTape() as tape: tape.watch([alpha, beta]) loss = tf.reduce_mean(tf.square(tf.random.stateless_gamma( [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta))) dloss_dalpha, dloss_dbeta = tape.gradient(loss, [alpha, beta]) # unbiased stochastic derivatives of the loss function alpha.shape == dloss_dalpha.shape # True beta.shape == dloss_dbeta.shape # True ``` | Args | | `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. | | `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) | | `alpha` | Tensor. The concentration parameter of the gamma distribution. Must be broadcastable with `beta`, and broadcastable with the rightmost dimensions of `shape`. | | `beta` | Tensor. The inverse scale parameter of the gamma distribution. Must be broadcastable with `alpha` and broadcastable with the rightmost dimensions of `shape`. | | `dtype` | Floating point dtype of `alpha`, `beta`, and the output. | | `name` | A name for the operation (optional). | | Returns | | `samples` | A Tensor of the specified shape filled with random gamma values. 
For each i, each `samples[..., i]` is an independent draw from the gamma distribution with concentration `alpha[i]` and scale `beta[i]`. |

tensorflow tf.random.poisson

tf.random.poisson
=================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/random_ops.py#L692-L735) |

Draws `shape` samples from each of the given Poisson distribution(s).

```
tf.random.poisson(
    shape, lam, dtype=tf.dtypes.float32, seed=None, name=None
)
```

`lam` is the rate parameter describing the distribution(s).

#### Example:

```
samples = tf.random.poisson([10], [0.5, 1.5])
# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
# the samples drawn from each distribution

samples = tf.random.poisson([7, 5], [12.2, 3.3])
# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
# represents the 7x5 samples drawn from each of the two distributions
```

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output samples to be drawn per "rate"-parameterized distribution. |
| `lam` | A Tensor or Python value or N-D array of type `dtype`. `lam` provides the rate parameter(s) describing the Poisson distribution(s) to sample. |
| `dtype` | The type of the output: `float16`, `float32`, `float64`, `int32` or `int64`. |
| `seed` | A Python integer. Used to create a random seed for the distributions. See [`tf.random.set_seed`](set_seed) for behavior. |
| `name` | Optional name for the operation. |

| Returns |
| `samples` | a `Tensor` of shape `tf.concat([shape, tf.shape(lam)], axis=0)` with values of type `dtype`. |

tensorflow Module: tf.random.experimental

Module: tf.random.experimental
==============================

Public API for tf.random.experimental namespace.

Classes
-------

[`class Algorithm`](algorithm): An enumeration.

[`class Generator`](generator): Random-number generator.

Functions
---------

[`create_rng_state(...)`](create_rng_state): Creates a RNG state from an integer or a vector.

[`get_global_generator(...)`](get_global_generator): Retrieves the global generator.

[`index_shuffle(...)`](experimental/index_shuffle): Outputs the position of `index` in a permutation of [0, ..., max\_index].

[`set_global_generator(...)`](set_global_generator): Replaces the global generator with another `Generator` object.

[`stateless_fold_in(...)`](experimental/stateless_fold_in): Folds in data to an RNG seed to form a new RNG seed.

[`stateless_split(...)`](experimental/stateless_split): Splits an RNG seed into `num` new seeds by adding a leading axis.

tensorflow tf.random.create_rng_state

tf.random.create\_rng\_state
============================

Creates a RNG state from an integer or a vector.

#### View aliases

**Main aliases**

[`tf.random.experimental.create_rng_state`](https://www.tensorflow.org/api_docs/python/tf/random/create_rng_state)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.create_rng_state`](https://www.tensorflow.org/api_docs/python/tf/random/create_rng_state), [`tf.compat.v1.random.experimental.create_rng_state`](https://www.tensorflow.org/api_docs/python/tf/random/create_rng_state)

```
tf.random.create_rng_state(
    seed, alg
)
```

#### Example:

```
tf.random.create_rng_state(
    1234, "philox")
<tf.Tensor: shape=(3,), dtype=int64, numpy=array([1234, 0, 0])>
tf.random.create_rng_state(
    [12, 34], "threefry")
<tf.Tensor: shape=(2,), dtype=int64, numpy=array([12, 34])>
```

| Args |
| `seed` | an integer or 1-D numpy array. |
| `alg` | the RNG algorithm. Can be a string, an `Algorithm` or an integer. |

| Returns |
| a 1-D numpy array whose size depends on the algorithm. |

tensorflow tf.random.stateless_uniform

tf.random.stateless\_uniform
============================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateless_random_ops.py#L324-L444) |

Outputs deterministic pseudorandom values from a uniform distribution.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.stateless_uniform`](https://www.tensorflow.org/api_docs/python/tf/random/stateless_uniform)

```
tf.random.stateless_uniform(
    shape, seed, minval=0, maxval=None, dtype=tf.dtypes.float32, name=None,
    alg='auto_select'
)
```

This is a stateless version of [`tf.random.uniform`](uniform): if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware.

The generated values follow a uniform distribution in the range `[minval, maxval)`. The lower bound `minval` is included in the range, while the upper bound `maxval` is excluded.

For floats, the default range is `[0, 1)`. For ints, at least `maxval` must be specified explicitly.

In the integer case, the random integers are slightly biased unless `maxval - minval` is an exact power of two. The bias is small for values of `maxval - minval` significantly smaller than the range of the output (either `2**32` or `2**64`).

For full-range (i.e. inclusive of both max and min) random integers, pass `minval=None` and `maxval=None` with an integer `dtype`. For an integer dtype either both `minval` and `maxval` must be `None` or neither may be `None`. For example:

```
ints = tf.random.stateless_uniform(
    [10], seed=(2, 3), minval=None, maxval=None, dtype=tf.int32)
```

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| `seed` | A shape [2] Tensor, the seed to the random number generator. Must have dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.) |
| `minval` | A Tensor or Python value of type `dtype`, broadcastable with `shape` (for integer types, broadcasting is not supported, so it needs to be a scalar). The lower bound on the range of random values to generate. Pass `None` for full-range integers. Defaults to 0. |
| `maxval` | A Tensor or Python value of type `dtype`, broadcastable with `shape` (for integer types, broadcasting is not supported, so it needs to be a scalar). The upper bound on the range of random values to generate. Defaults to 1 if `dtype` is floating point. Pass `None` for full-range integers. |
| `dtype` | The type of the output: `float16`, `bfloat16`, `float32`, `float64`, `int32`, or `int64`. For unbounded uniform ints (`minval`, `maxval` both `None`), `uint32` and `uint64` may be used. Defaults to `float32`. |
| `name` | A name for the operation (optional). |
| `alg` | The RNG algorithm used to generate the random numbers. Valid choices are `"philox"` for [the Philox algorithm](https://www.thesalmons.org/john/random123/papers/random123sc11.pdf), `"threefry"` for [the ThreeFry algorithm](https://www.thesalmons.org/john/random123/papers/random123sc11.pdf), and `"auto_select"` (default) for the system to automatically select an algorithm based on the device type.
Values of [`tf.random.Algorithm`](algorithm) can also be used. Note that with `"auto_select"`, the outputs of this function may change when it is running on a different device. | | Returns | | A tensor of the specified shape filled with random uniform values. | | Raises | | `ValueError` | If `dtype` is integral and only one of `minval` or `maxval` is specified. | tensorflow tf.random.uniform_candidate_sampler tf.random.uniform\_candidate\_sampler ===================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/candidate_sampling_ops.py#L27-L84) | Samples a set of classes using a uniform base distribution. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.nn.uniform_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/uniform_candidate_sampler), [`tf.compat.v1.random.uniform_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/uniform_candidate_sampler) ``` tf.random.uniform_candidate_sampler( true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None ) ``` This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`. The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution. The base distribution for this operation is the uniform distribution over the range of integers `[0, range_max)`. In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately. | Args | | `true_classes` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. | | `num_true` | An `int`. The number of target classes per training example. | | `num_sampled` | An `int`. The number of classes to randomly sample. The `sampled_candidates` return value will have shape `[num_sampled]`. If `unique=True`, `num_sampled` must be less than or equal to `range_max`. | | `unique` | A `bool`. Determines whether all sampled classes in a batch are unique. | | `range_max` | An `int`. The number of possible classes. | | `seed` | An `int`. An operation-specific seed. Default is 0. | | `name` | A name for the operation (optional). | | Returns | | `sampled_candidates` | A tensor of type `int64` and shape `[num_sampled]`. The sampled classes, either with possible duplicates (`unique=False`) or all unique (`unique=True`). In either case, `sampled_candidates` is independent of the true classes. | | `true_expected_count` | A tensor of type `float`. Same shape as `true_classes`. The expected counts under the sampling distribution of each of `true_classes`. | | `sampled_expected_count` | A tensor of type `float`. Same shape as `sampled_candidates`. The expected counts under the sampling distribution of each of `sampled_candidates`. | tensorflow tf.random.set_global_generator tf.random.set\_global\_generator ================================ Replaces the global generator with another `Generator` object. 
#### View aliases

**Main aliases**

[`tf.random.experimental.set_global_generator`](https://www.tensorflow.org/api_docs/python/tf/random/set_global_generator)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.experimental.set_global_generator`](https://www.tensorflow.org/api_docs/python/tf/random/set_global_generator), [`tf.compat.v1.random.set_global_generator`](https://www.tensorflow.org/api_docs/python/tf/random/set_global_generator)

```
tf.random.set_global_generator(
    generator
)
```

This function replaces the global generator with the provided `generator` object. A random number generator utilizes a [`tf.Variable`](../variable) object to store its state. The user should be aware of caveats in how `set_global_generator` interacts with [`tf.function`](../function):

* tf.function puts restrictions on Variable creation, thus one cannot freely create a new random generator instance inside [`tf.function`](../function). To call `set_global_generator` inside [`tf.function`](../function), the generator instance must have already been created eagerly.
* tf.function captures the Variable during trace-compilation, thus a compiled [`tf.function`](../function) will not be affected by `set_global_generator`, as demonstrated by random\_test.py/RandomTest.testResetGlobalGeneratorBadWithDefun.

For most use cases, avoid calling `set_global_generator` after program initialization, and prefer to reset the state of the existing global generator instead, for example:

```
rng = tf.random.get_global_generator()
rng.reset_from_seed(30)
```

| Args |
| `generator` | the new `Generator` object. |

tensorflow tf.random.normal

tf.random.normal
================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/random_ops.py#L40-L96) |

Outputs random values from a normal distribution.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.normal`](https://www.tensorflow.org/api_docs/python/tf/random/normal), [`tf.compat.v1.random_normal`](https://www.tensorflow.org/api_docs/python/tf/random/normal)

```
tf.random.normal(
    shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, seed=None, name=None
)
```

Example that generates a new set of random values every time:

```
tf.random.set_seed(5);
tf.random.normal([4], 0, 1, tf.float32)
<tf.Tensor: shape=(4,), dtype=float32, numpy=..., dtype=float32)>
```

Example that outputs a reproducible result:

```
tf.random.set_seed(5);
tf.random.normal([2,2], 0, 1, tf.float32, seed=1)
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[-1.3768897 , -0.01258316],
       [-0.169515  ,  1.0824056 ]], dtype=float32)>
```

In this case, we are setting both the global and operation-level seed to ensure this result is reproducible. See [`tf.random.set_seed`](set_seed) for more information.

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| `mean` | A Tensor or Python value of type `dtype`, broadcastable with `stddev`. The mean of the normal distribution. |
| `stddev` | A Tensor or Python value of type `dtype`, broadcastable with `mean`. The standard deviation of the normal distribution. |
| `dtype` | The float type of the output: `float16`, `bfloat16`, `float32`, `float64`. Defaults to `float32`. |
| `seed` | A Python integer. Used to create a random seed for the distribution. See [`tf.random.set_seed`](set_seed) for behavior.
| | `name` | A name for the operation (optional). | | Returns | | A tensor of the specified shape filled with random normal values. |
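A small sketch of the broadcastable `mean`/`stddev` arguments described above (the values here are arbitrary; the parameters broadcast against a standard-normal draw of the requested `shape`):

```
# Per-row means (shape [3, 1]) and per-column standard deviations (shape [2])
# broadcast against the requested output shape [3, 2].
means = [[0.], [10.], [100.]]
stddevs = [0.1, 1.0]
samples = tf.random.normal([3, 2], mean=means, stddev=stddevs)
# samples[i, j] is drawn from a normal with mean means[i][0] and
# standard deviation stddevs[j].
```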
tensorflow tf.random.fixed_unigram_candidate_sampler tf.random.fixed\_unigram\_candidate\_sampler ============================================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/candidate_sampling_ops.py#L214-L305) | Samples a set of classes using the provided (fixed) base distribution. #### View aliases **Main aliases** [`tf.nn.fixed_unigram_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/fixed_unigram_candidate_sampler) **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.nn.fixed_unigram_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/fixed_unigram_candidate_sampler), [`tf.compat.v1.random.fixed_unigram_candidate_sampler`](https://www.tensorflow.org/api_docs/python/tf/random/fixed_unigram_candidate_sampler) ``` tf.random.fixed_unigram_candidate_sampler( true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=1.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=(), seed=None, name=None ) ``` This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`. The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution. The base distribution is read from a file or passed in as an in-memory array. There is also an option to skew the distribution by applying a distortion power to the weights. In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in [this document](http://www.tensorflow.org/extras/candidate_sampling.pdf). If `unique=True`, then these are post-rejection probabilities and we compute them approximately. | Args | | `true_classes` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. | | `num_true` | An `int`. The number of target classes per training example. | | `num_sampled` | An `int`. The number of classes to randomly sample. | | `unique` | A `bool`. Determines whether all sampled classes in a batch are unique. | | `range_max` | An `int`. The number of possible classes. | | `vocab_file` | Each valid line in this file (which should have a CSV-like format) corresponds to a valid word ID. IDs are in sequential order, starting from num\_reserved\_ids. The last entry in each line is expected to be a value corresponding to the count or relative probability. Exactly one of `vocab_file` and `unigrams` needs to be passed to this operation. | | `distortion` | The distortion is used to skew the unigram probability distribution. Each weight is first raised to the distortion's power before adding to the internal unigram distribution. As a result, `distortion = 1.0` gives regular unigram sampling (as defined by the vocab file), and `distortion = 0.0` gives a uniform distribution. | | `num_reserved_ids` | Optionally some reserved IDs can be added in the range `[0, num_reserved_ids)` by the users. One use case is that a special unknown word token is used as ID 0. These IDs will have a sampling probability of 0. 
| `num_shards` | A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with `shard`) indicates the number of partitions that are being used in the overall computation. |
| `shard` | A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with `num_shards`) indicates the particular partition number of the operation, when partitioning is being used. |
| `unigrams` | A list of unigram counts or probabilities, one per ID in sequential order. Exactly one of `vocab_file` and `unigrams` should be passed to this operation. |
| `seed` | An `int`. An operation-specific seed. Default is 0. |
| `name` | A name for the operation (optional). |

| Returns |
| `sampled_candidates` | A tensor of type `int64` and shape `[num_sampled]`. The sampled classes. |
| `true_expected_count` | A tensor of type `float`. Same shape as `true_classes`. The expected counts under the sampling distribution of each of `true_classes`. |
| `sampled_expected_count` | A tensor of type `float`. Same shape as `sampled_candidates`. The expected counts under the sampling distribution of each of `sampled_candidates`. |

tensorflow tf.random.Generator

tf.random.Generator
===================

Random-number generator.

#### View aliases

**Main aliases**

[`tf.random.experimental.Generator`](https://www.tensorflow.org/api_docs/python/tf/random/Generator)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.random.Generator`](https://www.tensorflow.org/api_docs/python/tf/random/Generator), [`tf.compat.v1.random.experimental.Generator`](https://www.tensorflow.org/api_docs/python/tf/random/Generator)

```
tf.random.Generator(
    copy_from=None, state=None, alg=None
)
```

#### Example:

Creating a generator from a seed:

```
g = tf.random.Generator.from_seed(1234)
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.9356609 ,  1.0854305 , -0.93788373],
       [-0.5061547 ,  1.3169702 ,  0.7137579 ]], dtype=float32)>
```

Creating a generator from a non-deterministic state:

```
g = tf.random.Generator.from_non_deterministic_state()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>
```

All the constructors allow explicitly choosing a Random-Number-Generation (RNG) algorithm. Supported algorithms are `"philox"` and `"threefry"`. For example:

```
g = tf.random.Generator.from_seed(123, alg="philox")
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.8673864 , -0.29899067, -0.9310337 ],
       [-1.5828488 ,  1.2481191 , -0.6770643 ]], dtype=float32)>
```

CPU, GPU and TPU with the same algorithm and seed will generate the same integer random numbers. Floating-point results (such as the output of `normal`) may have small numerical discrepancies between different devices.

This class uses a [`tf.Variable`](../variable) to manage its internal state. Every time random numbers are generated, the state of the generator will change. For example:

```
g = tf.random.Generator.from_seed(1234)
g.state
<tf.Variable ... numpy=array([1234, 0, 0])>
g.normal(shape=(2, 3))
<...>
g.state
<tf.Variable ... numpy=array([2770, 0, 0])>
```

The shape of the state is algorithm-specific.
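For instance (a quick check; the seeds here are arbitrary), a Philox state has three elements while a ThreeFry state has two, matching the counter/key layout described under the `key` attribute below:

```
tf.random.Generator.from_seed(1, alg="philox").state.shape
TensorShape([3])
tf.random.Generator.from_seed(1, alg="threefry").state.shape
TensorShape([2])
```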
There is also a global generator:

```
g = tf.random.get_global_generator()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>
```

When creating a generator inside a [`tf.distribute.Strategy`](../distribute/strategy) scope, each replica will get a different stream of random numbers. For example, in this code:

```
strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
  g = tf.random.Generator.from_seed(1)
  def f():
    return g.normal([])
  results = strat.run(f).values
```

`results[0]` and `results[1]` will have different values.

If the generator is seeded (e.g. created via [`Generator.from_seed`](generator#from_seed)), the random numbers will be determined by the seed, even though different replicas get different numbers. One can think of a random number generated on a replica as a hash of the replica ID and a "master" random number that may be common to all replicas. Hence, the whole system is still deterministic. (Note that the random numbers on different replicas are not correlated, even if they are deterministically determined by the same seed. They are not correlated in the sense that no matter what statistics one calculates on them, there won't be any discernible correlation.)

Generators can be freely saved and restored using [`tf.train.Checkpoint`](../train/checkpoint). The checkpoint can be restored in a distribution strategy with a different number of replicas than the original strategy. If a replica ID is present in both the original and the new distribution strategy, its state will be properly restored (i.e. the random-number stream from the restored point will be the same as that from the saving point) unless the replicas have already diverged in their RNG call traces before saving (e.g. one replica has made one RNG call while another has made two RNG calls). We don't have such a guarantee if the generator is saved in a strategy scope and restored outside of any strategy scope, or vice versa.

When a generator is created within the scope of [`tf.distribute.experimental.ParameterServerStrategy`](../distribute/experimental/parameterserverstrategy), the workers will share the generator's state (placed on one of the parameter servers). In this way the workers will still get different random-number streams, as stated above. (This is similar to replicas in a [`tf.distribute.MirroredStrategy`](../distribute/mirroredstrategy) sequentially accessing a generator created outside the strategy.) Each RNG call on a worker will incur a round-trip to a parameter server, which may have performance impacts. When creating a [`tf.distribute.experimental.ParameterServerStrategy`](../distribute/experimental/parameterserverstrategy), please make sure that the `variable_partitioner` argument won't shard small variables of shape `[2]` or `[3]` (because generator states must not be sharded). Ways to avoid sharding small variables include setting `variable_partitioner` to `None` or to [`tf.distribute.experimental.partitioners.MinSizePartitioner`](../distribute/experimental/partitioners/minsizepartitioner) with a large enough `min_shard_bytes` (see [`tf.distribute.experimental.ParameterServerStrategy`](../distribute/experimental/parameterserverstrategy)'s documentation for more details).

| Args |
| `copy_from` | a generator to be copied from. |
| `state` | a vector of dtype STATE\_TYPE representing the initial state of the RNG, whose length and semantics are algorithm-specific. If it's a variable, the generator will reuse it instead of creating a new variable.
|
| `alg` | the RNG algorithm. Possible values are [`tf.random.Algorithm.PHILOX`](algorithm#PHILOX) for the Philox algorithm and [`tf.random.Algorithm.THREEFRY`](algorithm#THREEFRY) for the ThreeFry algorithm (see paper 'Parallel Random Numbers: As Easy as 1, 2, 3' [https://www.thesalmons.org/john/random123/papers/random123sc11.pdf]). The string names `"philox"` and `"threefry"` can also be used. Note `PHILOX` guarantees the same numbers are produced (given the same random state) across all architectures (CPU, GPU, XLA etc). |

| Attributes |
| `algorithm` | The RNG algorithm id (a Python integer or scalar integer Tensor). |
| `key` | The 'key' part of the state of a counter-based RNG. For a counter-based RNG algorithm such as Philox and ThreeFry (as described in paper 'Parallel Random Numbers: As Easy as 1, 2, 3' [<https://www.thesalmons.org/john/random123/papers/random123sc11.pdf>]), the RNG state consists of two parts: counter and key. The output is generated via the formula: output=hash(key, counter), i.e. a hashing of the counter parametrized by the key. Two RNGs with two different keys can be thought of as generating two independent random-number streams (a stream is formed by increasing the counter). |
| `state` | The internal state of the RNG. |

Methods
-------

### `binomial`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L811-L867)

```
binomial(
    shape, counts, probs, dtype=tf.dtypes.int32, name=None
)
```

Outputs random values from a binomial distribution.

The generated values follow a binomial distribution with specified count and probability of success parameters.

#### Example:

```
counts = [10., 20.]
# Probability of success.
probs = [0.8]

rng = tf.random.Generator.from_seed(seed=234)
binomial_samples = rng.binomial(shape=[2], counts=counts, probs=probs)

counts = ...  # Shape [3, 1, 2]
probs = ...  # Shape [1, 4, 2]
shape = [3, 4, 3, 4, 2]
rng = tf.random.Generator.from_seed(seed=1717)
# Sample shape will be [3, 4, 3, 4, 2]
binomial_samples = rng.binomial(shape=shape, counts=counts, probs=probs)
```

| Args |
| `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| `counts` | Tensor. The counts of the binomial distribution. Must be broadcastable with `probs`, and broadcastable with the rightmost dimensions of `shape`. |
| `probs` | Tensor. The probability of success for the binomial distribution. Must be broadcastable with `counts` and broadcastable with the rightmost dimensions of `shape`. |
| `dtype` | The type of the output. Default: tf.int32 |
| `name` | A name for the operation (optional). |

| Returns |
| `samples` | A Tensor of the specified shape filled with random binomial values. For each i, each `samples[i, ...]` is an independent draw from the binomial distribution on `counts[i]` trials with probability of success `probs[i]`. |

### `from_key_counter`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L385-L409)

```
@classmethod
from_key_counter(
    key, counter, alg
)
```

Creates a generator from a key and a counter.

This constructor only applies if the algorithm is a counter-based algorithm. See method `key` for the meaning of "key" and "counter".

| Args |
| `key` | the key for the RNG, a scalar of type STATE\_TYPE. |
| `counter` | a vector of dtype STATE\_TYPE representing the initial counter for the RNG, whose length is algorithm-specific. |
| `alg` | the RNG algorithm. If None, it will be auto-selected.
See `__init__` for its possible values. | | Returns | | The new generator. | ### `from_non_deterministic_state` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L361-L383) ``` @classmethod from_non_deterministic_state( alg=None ) ``` Creates a generator by non-deterministically initializing its state. The source of the non-determinism will be platform- and time-dependent. | Args | | `alg` | (optional) the RNG algorithm. If None, it will be auto-selected. See `__init__` for its possible values. | | Returns | | The new generator. | ### `from_seed` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L335-L359) ``` @classmethod from_seed( seed, alg=None ) ``` Creates a generator from a seed. A seed is a 1024-bit unsigned integer represented either as a Python integer or a vector of integers. Seeds shorter than 1024 bits will be padded. The padding, the internal structure of a seed and the way a seed is converted to a state are all opaque (unspecified). The only specified semantics of seeds is that two different seeds are likely to produce two independent generators (but there is no guarantee). | Args | | `seed` | the seed for the RNG. | | `alg` | (optional) the RNG algorithm. If None, it will be auto-selected. See `__init__` for its possible values. | | Returns | | The new generator. | ### `from_state` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L320-L333) ``` @classmethod from_state( state, alg ) ``` Creates a generator from a state. See `__init__` for description of `state` and `alg`. | Args | | `state` | the new state. | | `alg` | the RNG algorithm. | | Returns | | The new generator. | ### `make_seeds` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L878-L909) ``` make_seeds( count=1 ) ``` Generates seeds for stateless random ops. #### For example: ``` seeds = get_global_generator().make_seeds(count=10) for i in range(10): seed = seeds[:, i] numbers = stateless_random_normal(shape=[2, 3], seed=seed) ... ``` | Args | | `count` | the number of seed pairs (note that stateless random ops need a pair of seeds to invoke). | | Returns | | A tensor of shape [2, count] and dtype int64. | ### `normal` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L641-L663) ``` normal( shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None ) ``` Outputs random values from a normal distribution. | Args | | `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. | | `mean` | A 0-D Tensor or Python value of type `dtype`. The mean of the normal distribution. | | `stddev` | A 0-D Tensor or Python value of type `dtype`. The standard deviation of the normal distribution. | | `dtype` | The type of the output. | | `name` | A name for the operation (optional). | | Returns | | A tensor of the specified shape filled with random normal values. | ### `reset` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L489-L499) ``` reset( state ) ``` Resets the generator to a new state. See `__init__` for the meaning of "state". | Args | | `state` | the new state. |
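As a small sketch of `reset` restoring a saved stream (the `saved` name is illustrative; this assumes the `state` attribute exposes the state variable, as documented above):

```
import tensorflow as tf

g = tf.random.Generator.from_seed(3)
saved = tf.identity(g.state)  # snapshot the current state as a tensor

first = g.normal(shape=[2])   # advances the generator's state
g.reset(saved)                # rewind to the snapshot

# The stream replays from the saved point.
print(tf.reduce_all(first == g.normal(shape=[2])))  # True
```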
### `reset_from_key_counter` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L512-L528) ``` reset_from_key_counter( key, counter ) ``` Resets the generator to a new key-counter pair. See `from_key_counter` for the meaning of "key" and "counter". | Args | | `key` | the new key. | | `counter` | the new counter. | ### `reset_from_seed` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L501-L510) ``` reset_from_seed( seed ) ``` Resets the generator to a new seed. See `from_seed` for the meaning of "seed". | Args | | `seed` | the new seed. | ### `skip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L578-L613) ``` skip( delta ) ``` Advances the counter of a counter-based RNG. | Args | | `delta` | the amount of advancement. The state of the RNG after `skip(n)` will be the same as that after `normal([n])` (or any other distribution). The actual increment added to the counter is an unspecified implementation detail. | | Returns | | A `Tensor` of type `int64`. | ### `split` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L911-L962) ``` split( count=1 ) ``` Returns a list of independent `Generator` objects. Two generators are independent of each other in the sense that the random-number streams they generate don't have statistically detectable correlations. The new generators are also independent of the old one. The old generator's state will be changed (like other random-number generating methods), so two calls of `split` will return different new generators. #### For example: ``` gens = get_global_generator().split(count=10) for gen in gens: numbers = gen.normal(shape=[2, 3]) # ... gens2 = get_global_generator().split(count=10) # gens2 will be different from gens ``` The new generators will be put on the current device (possibly different from the old generator's), for example: ``` with tf.device("/device:CPU:0"): gen = Generator.from_seed(1234) # gen is on CPU with tf.device("/device:GPU:0"): gens = gen.split(count=10) # gens are on GPU ``` | Args | | `count` | the number of generators to return. | | Returns | | A list (length `count`) of `Generator` objects independent of each other. The new generators have the same RNG algorithm as the old one. | ### `truncated_normal` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L670-L702) ``` truncated_normal( shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None ) ``` Outputs random values from a truncated normal distribution. The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked. | Args | | `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. | | `mean` | A 0-D Tensor or Python value of type `dtype`. The mean of the truncated normal distribution. | | `stddev` | A 0-D Tensor or Python value of type `dtype`. The standard deviation of the normal distribution, before truncation. | | `dtype` | The type of the output. | | `name` | A name for the operation (optional). | | Returns | | A tensor of the specified shape filled with random truncated normal values. |
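The contract stated for `skip` above (the state after `skip(n)` equals the state after `normal([n])`) can be checked directly; a minimal sketch, assuming the default Philox algorithm:

```
import tensorflow as tf

g1 = tf.random.Generator.from_seed(7)
g2 = tf.random.Generator.from_seed(7)

_ = g1.normal(shape=[3])  # consume three samples
g2.skip(3)                # advance the counter by the same amount

# Per the documented contract, the two streams should now coincide.
print(tf.reduce_all(g1.normal(shape=[2]) == g2.normal(shape=[2])))  # True
```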
### `uniform` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L719-L789) ``` uniform( shape, minval=0, maxval=None, dtype=tf.dtypes.float32, name=None ) ``` Outputs random values from a uniform distribution. The generated values follow a uniform distribution in the range `[minval, maxval)`. The lower bound `minval` is included in the range, while the upper bound `maxval` is excluded. (For floating-point numbers, especially low-precision types like bfloat16, the result may occasionally include `maxval` due to rounding.) For floats, the default range is `[0, 1)`. For ints, at least `maxval` must be specified explicitly. In the integer case, the random integers are slightly biased unless `maxval - minval` is an exact power of two. The bias is small for values of `maxval - minval` significantly smaller than the range of the output (either `2**32` or `2**64`). For full-range random integers, pass `minval=None` and `maxval=None` with an integer `dtype` (for integer dtypes, `minval` and `maxval` must be both `None` or both not `None`). | Args | | `shape` | A 1-D integer Tensor or Python array. The shape of the output tensor. | | `minval` | A Tensor or Python value of type `dtype`, broadcastable with `shape` (for integer types, broadcasting is not supported, so it needs to be a scalar). The lower bound (included) on the range of random values to generate. Pass `None` for full-range integers. Defaults to 0. | | `maxval` | A Tensor or Python value of type `dtype`, broadcastable with `shape` (for integer types, broadcasting is not supported, so it needs to be a scalar). The upper bound (excluded) on the range of random values to generate. Pass `None` for full-range integers. Defaults to 1 if `dtype` is floating point. | | `dtype` | The type of the output. | | `name` | A name for the operation (optional). | | Returns | | A tensor of the specified shape filled with random uniform values. | | Raises | | `ValueError` | If `dtype` is integral and `maxval` is not specified. | ### `uniform_full_int` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/stateful_random_ops.py#L791-L809) ``` uniform_full_int( shape, dtype=tf.dtypes.uint64, name=None ) ``` Uniform distribution on an integer type's entire range. This method is the same as setting `minval` and `maxval` to `None` in the `uniform` method. | Args | | `shape` | the shape of the output. | | `dtype` | (optional) the integer type, defaults to `uint64`. | | `name` | (optional) the name of the node. | | Returns | | A tensor of random numbers of the required shape. |
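To make the integer cases of `uniform` and `uniform_full_int` concrete, here is a small sketch (the printed values depend on the seed):

```
import tensorflow as tf

g = tf.random.Generator.from_seed(1)

# Bounded integers: maxval is mandatory for integer dtypes.
print(g.uniform(shape=[4], minval=0, maxval=100, dtype=tf.int32))

# Full-range integers over the whole dtype range; the same as
# uniform(shape=[4], minval=None, maxval=None, dtype=tf.int32).
print(g.uniform_full_int(shape=[4], dtype=tf.int32))
```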
tensorflow tf.random.experimental.stateless_fold_in tf.random.experimental.stateless\_fold\_in ========================================== Folds in data to an RNG seed to form a new RNG seed. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.random.experimental.stateless_fold_in`](https://www.tensorflow.org/api_docs/python/tf/random/experimental/stateless_fold_in) ``` tf.random.experimental.stateless_fold_in( seed, data, alg='auto_select' ) ``` For example, in a distributed-training setting, suppose we have a master seed and a replica ID. We want to fold the replica ID into the master seed to form a "replica seed" to be used by that replica later on, so that different replicas will generate different random numbers but the reproducibility of the whole system can still be controlled by the master seed: ``` master_seed = [1, 2] replica_id = 3 replica_seed = tf.random.experimental.stateless_fold_in( master_seed, replica_id) print(replica_seed) tf.Tensor([1105988140 3], shape=(2,), dtype=int32) tf.random.stateless_normal(shape=[3], seed=replica_seed) <tf.Tensor: shape=(3,), dtype=float32, numpy=array([0.03197195, 0.8979765 , 0.13253039], dtype=float32)> ``` | Args | | `seed` | an RNG seed (a tensor with shape [2] and dtype `int32` or `int64`). (When using XLA, only `int32` is allowed.) | | `data` | an `int32` or `int64` scalar representing data to be folded into the seed. | | `alg` | The RNG algorithm used to generate the random numbers. See [`tf.random.stateless_uniform`](../stateless_uniform) for a detailed explanation. | | Returns | | A new RNG seed that is a deterministic function of the inputs and is statistically safe for producing a stream of new pseudo-random values. It will have the same dtype as `data` (if `data` doesn't have an explicit dtype, the dtype will be determined by [`tf.convert_to_tensor`](../../convert_to_tensor)). | tensorflow tf.random.experimental.index_shuffle tf.random.experimental.index\_shuffle ===================================== Outputs the position of `index` in a permutation of [0, ..., max\_index]. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.random.experimental.index_shuffle`](https://www.tensorflow.org/api_docs/python/tf/random/experimental/index_shuffle) ``` tf.random.experimental.index_shuffle( index, seed, max_index ) ``` For each possible `seed` and `max_index` there is one pseudorandom permutation of the sequence S=[0, ..., max\_index]. Instead of materializing the full array, we can compute the new position of any single element in S. This can be useful for very large values of `max_index`. The input `index` and output can be used as indices to shuffle a vector. For example: ``` vector = tf.constant(['e0', 'e1', 'e2', 'e3']) indices = tf.random.experimental.index_shuffle(tf.range(4), [5, 9], 3) shuffled_vector = tf.gather(vector, indices) print(shuffled_vector) tf.Tensor([b'e2' b'e0' b'e1' b'e3'], shape=(4,), dtype=string) ``` More usefully, it can be used in a streaming (aka online) scenario such as [`tf.data`](../../data), where each element of `vector` is processed individually and the whole `vector` is never materialized in memory. 
``` dataset = tf.data.Dataset.range(10) dataset = dataset.map( lambda idx: tf.random.experimental.index_shuffle(idx, [5, 8], 9)) print(list(dataset.as_numpy_iterator())) [3, 8, 0, 1, 2, 7, 6, 9, 4, 5] ``` This operation is stateless (like other `tf.random.stateless_*` functions), meaning the output is fully determined by the `seed` (other inputs being equal). Each `seed` choice corresponds to one permutation, so when calling this function multiple times for the same shuffling, please make sure to use the same `seed`. For example: ``` seed = [5, 9] idx0 = tf.random.experimental.index_shuffle(0, seed, 3) idx1 = tf.random.experimental.index_shuffle(1, seed, 3) idx2 = tf.random.experimental.index_shuffle(2, seed, 3) idx3 = tf.random.experimental.index_shuffle(3, seed, 3) shuffled_vector = tf.gather(vector, [idx0, idx1, idx2, idx3]) print(shuffled_vector) tf.Tensor([b'e2' b'e0' b'e1' b'e3'], shape=(4,), dtype=string) ``` | Args | | `index` | An integer scalar tensor or vector with values in [0, `max_index`]. It can be seen as either a value `v` in the sequence `S`=[0, ..., `max_index`] to be permuted, or as an index of an element `e` in a shuffled vector. | | `seed` | A tensor of shape [2] or [n, 2] with dtype int32/uint32/int64/uint64. The RNG seed. If the rank is unknown during graph building, it must be 1 at runtime. | | `max_index` | A non-negative tensor with the same shape and dtype as `index`. The upper bound (inclusive). | | Returns | | If all inputs were scalar (shape [2] for `seed`) the output will be a scalar with the same dtype as `index`. The output can be seen as the new position of `v` in `S`, or as the index of `e` in the vector before shuffling. If one or multiple inputs were vectors (shape [n, 2] for `seed`), then the output will be a vector of the same size, with each element shuffled independently. Scalar values are broadcast in this case. | tensorflow tf.random.experimental.stateless_split tf.random.experimental.stateless\_split ======================================= Splits an RNG seed into `num` new seeds by adding a leading axis. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.random.experimental.stateless_split`](https://www.tensorflow.org/api_docs/python/tf/random/experimental/stateless_split) ``` tf.random.experimental.stateless_split( seed, num=2, alg='auto_select' ) ``` #### Example: ``` seed = [1, 2] new_seeds = tf.random.experimental.stateless_split(seed, num=3) print(new_seeds) tf.Tensor( [[1105988140 1738052849] [-335576002 370444179] [ 10670227 -246211131]], shape=(3, 2), dtype=int32) tf.random.stateless_normal(shape=[3], seed=new_seeds[0, :]) <tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.59835213, -0.9578608 , 0.9002807 ], dtype=float32)> ``` | Args | | `seed` | an RNG seed (a tensor with shape [2] and dtype `int32` or `int64`). (When using XLA, only `int32` is allowed.) | | `num` | optional, a positive integer or scalar tensor indicating the number of seeds to produce (default 2). | | `alg` | The RNG algorithm used to generate the random numbers. See [`tf.random.stateless_uniform`](../stateless_uniform) for a detailed explanation. | | Returns | | A tensor with shape [num, 2] representing `num` new seeds. It will have the same dtype as `seed` (if `seed` doesn't have an explicit dtype, the dtype will be determined by [`tf.convert_to_tensor`](../../convert_to_tensor)). 
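As a usage sketch combining `stateless_split` with the stateless sampling ops (the `base_seed` name and the per-step loop are illustrative, not part of the API):

```
import tensorflow as tf

base_seed = tf.constant([1, 2], dtype=tf.int32)

# Derive one independent seed per step; no mutable RNG state is involved,
# so the whole sequence is reproducible from base_seed alone.
step_seeds = tf.random.experimental.stateless_split(base_seed, num=4)
for step in range(4):
    noise = tf.random.stateless_normal(shape=[3], seed=step_seeds[step])
    print(step, noise.numpy())
```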
| tensorflow tf.data.Dataset tf.data.Dataset =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L134-L3471) | Represents a potentially large set of elements. ``` tf.data.Dataset( variant_tensor ) ``` The [`tf.data.Dataset`](dataset) API supports writing descriptive and efficient input pipelines. `Dataset` usage follows a common pattern: 1. Create a source dataset from your input data. 2. Apply dataset transformations to preprocess the data. 3. Iterate over the dataset and process the elements. Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory. #### Source Datasets: The simplest way to create a dataset is to create it from a python `list`: ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) ``` To process lines from files, use [`tf.data.TextLineDataset`](textlinedataset): ``` dataset = tf.data.TextLineDataset(["file1.txt", "file2.txt"]) ``` To process records written in the `TFRecord` format, use `TFRecordDataset`: ``` dataset = tf.data.TFRecordDataset(["file1.tfrecords", "file2.tfrecords"]) ``` To create a dataset of all files matching a pattern, use [`tf.data.Dataset.list_files`](dataset#list_files): ``` dataset = tf.data.Dataset.list_files("/path/*.txt") ``` See [`tf.data.FixedLengthRecordDataset`](fixedlengthrecorddataset) and [`tf.data.Dataset.from_generator`](dataset#from_generator) for more ways to create datasets. #### Transformations: Once you have a dataset, you can apply transformations to prepare the data for your model: ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.map(lambda x: x*2) list(dataset.as_numpy_iterator()) [2, 4, 6] ``` #### Common Terms: **Element**: A single output from calling `next()` on a dataset iterator. Elements may be nested structures containing multiple components. For example, the element `(1, (3, "apple"))` has one tuple nested in another tuple. The components are `1`, `3`, and `"apple"`. **Component**: The leaf in the nested structure of an element. #### Supported types: Elements can be nested structures of tuples, named tuples, and dictionaries. Note that Python lists are *not* treated as nested structures of components. Instead, lists are converted to tensors and treated as components. For example, the element `(1, [1, 2, 3])` has only two components; the tensor `1` and the tensor `[1, 2, 3]`. Element components can be of any type representable by [`tf.TypeSpec`](../typespec), including [`tf.Tensor`](../tensor), [`tf.data.Dataset`](dataset), [`tf.sparse.SparseTensor`](../sparse/sparsetensor), [`tf.RaggedTensor`](../raggedtensor), and [`tf.TensorArray`](../tensorarray). ``` a = 1 # Integer element b = 2.0 # Float element c = (1, 2) # Tuple element with 2 components d = {"a": (2, 2), "b": 3} # Dict element with 3 components Point = collections.namedtuple("Point", ["x", "y"]) e = Point(1, 2) # Named tuple f = tf.data.Dataset.range(10) # Dataset element ``` For more information, read [this guide](https://www.tensorflow.org/guide/data). | Args | | `variant_tensor` | A DT\_VARIANT tensor that represents the dataset. | | Attributes | | `element_spec` | The type specification of an element of this dataset. 
``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) ``` For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). | Methods ------- ### `apply` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276) ``` apply( transformation_func ) ``` Applies a transformation function to this dataset. `apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`. ``` dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. | | Returns | | `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. | ### `as_numpy_iterator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620) ``` as_numpy_iterator() ``` Returns an iterator which converts all elements of the dataset to numpy. Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) ``` This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 ``` ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] ``` `as_numpy_iterator()` will preserve the nested structure of dataset elements. ``` dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True ``` | Returns | | An iterable over the elements of the dataset, with their tensors converted to numpy arrays. | | Raises | | `TypeError` | if an element contains a non-`Tensor` value. | | `RuntimeError` | if eager execution is not enabled. | ### `batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754) ``` batch( batch_size, drop_remainder=False, num_parallel_calls=None, deterministic=None, name=None ) ``` Combines consecutive elements of this dataset into batches. ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] ``` ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] ``` The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). 
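The possibility of a smaller final batch is visible in the element spec: the batch dimension is unknown unless the remainder is dropped. A small sketch:

```
import tensorflow as tf

# Without drop_remainder the leading (batch) dimension is unknown ...
print(tf.data.Dataset.range(8).batch(3).element_spec)
# TensorSpec(shape=(None,), dtype=tf.int64, name=None)

# ... with drop_remainder=True it is statically 3.
print(tf.data.Dataset.range(8).batch(3, drop_remainder=True).element_spec)
# TensorSpec(shape=(3,), dtype=tf.int64, name=None)
```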
If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. > > **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch. > | Args | | `batch_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `bucket_by_sequence_length` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971) ``` bucket_by_sequence_length( element_length_func, bucket_boundaries, bucket_batch_sizes, padded_shapes=None, padding_values=None, pad_to_bucket_boundary=False, no_padding=False, drop_remainder=False, name=None ) ``` A transformation that buckets elements in a `Dataset` by length. Elements of the `Dataset` are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch, which increases training step efficiency. Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2. ``` elements = [ [0], [1, 2, 3, 4], [5, 6, 7], [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]] dataset = tf.data.Dataset.from_generator( lambda: elements, tf.int64, output_shapes=[None]) dataset = dataset.bucket_by_sequence_length( element_length_func=lambda elem: tf.shape(elem)[0], bucket_boundaries=[3, 5], bucket_batch_sizes=[2, 2, 2]) for elem in dataset.as_numpy_iterator(): print(elem) [[1 2 3 4] [5 6 7 0]] [[ 7 8 9 10 11 0] [13 14 15 16 19 20]] [[ 0 0] [21 22]] ``` | Args | | `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../tf#int32), determines the length of the element, which will determine the bucket it goes into. | | `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. | | `bucket_batch_sizes` | `list<int>`, batch size per bucket. 
Length should be `len(bucket_boundaries) + 1`. | | `padded_shapes` | Nested structure of [`tf.TensorShape`](../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. | | `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](dataset#padded_batch). Defaults to padding with 0. | | `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. | | `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../sparse/sparsetensor) or of same shape). | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. | ### `cache` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576) ``` cache( filename='', name=None ) ``` Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. > > **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. > ``` dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] ``` When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed. ``` dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] ``` > > **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`. > | Args | | `filename` | A [`tf.string`](../../tf#string) scalar [`tf.Tensor`](../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. 
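A minimal sketch of the shuffle-after-cache ordering recommended in the note above (cache the deterministic work, then shuffle so each iteration sees a new order):

```
import tensorflow as tf

ds = (tf.data.Dataset.range(10)
      .map(lambda x: x * 2)  # deterministic work: computed once, then cached
      .cache()               # in-memory cache, since no filename is given
      .shuffle(10))          # reshuffles on every iteration by default

for _ in range(2):           # both epochs read the map results from the cache
    print(sorted(ds.as_numpy_iterator()))
```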
| ### `cardinality` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754) ``` cardinality() ``` Returns the cardinality of the dataset, if known. `cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). ``` dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True ``` | Returns | | A scalar [`tf.int64`](../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../data#UNKNOWN_CARDINALITY) respectively. | ### `choose_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471) ``` @staticmethod choose_from_datasets( datasets, choice_dataset, stop_on_empty_dataset=True ) ``` Creates a dataset that deterministically chooses elements from `datasets`. For example, given the following datasets: ``` datasets = [tf.data.Dataset.from_tensors("foo").repeat(), tf.data.Dataset.from_tensors("bar").repeat(), tf.data.Dataset.from_tensors("baz").repeat()] # Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. choice_dataset = tf.data.Dataset.range(3).repeat(3) result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset) ``` The elements of `result` will be: ``` "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](dataset) objects with compatible structure. | | `choice_dataset` | A [`tf.data.Dataset`](dataset) of scalar [`tf.int64`](../../tf#int64) tensors between `0` and `len(datasets) - 1`. | | `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. | | Returns | | A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. | | Raises | | `TypeError` | If `datasets` or `choice_dataset` has the wrong type. | | `ValueError` | If `datasets` is empty. | ### `concatenate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289) ``` concatenate( dataset, name=None ) ``` Creates a `Dataset` by concatenating the given dataset with this dataset. ``` a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have # compatible element specs. 
c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> ``` | Args | | `dataset` | `Dataset` to be concatenated. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `enumerate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451) ``` enumerate( start=0, name=None ) ``` Enumerates the elements of this dataset. It is similar to python's `enumerate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) ``` ``` # The (nested) structure of the input dataset determines the # structure of elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) ``` | Args | | `start` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the start value for enumeration. | | `name` | Optional. A name for the tf.data operations used by `enumerate`. | | Returns | | `Dataset` | A `Dataset`. | ### `filter` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246) ``` filter( predicate, name=None ) ``` Filters this dataset according to `predicate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] ``` | Args | | `predicate` | A function mapping a dataset element to a boolean. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. | ### `flat_map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092) ``` flat_map( map_func, name=None ) ``` Maps `map_func` across this dataset and flattens the result. #### The type signature is: ``` def flat_map( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: ``` dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map( lambda x: tf.data.Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` [`tf.data.Dataset.interleave()`](dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](dataset#interleave) | Args | | `map_func` | A function mapping a dataset element to a dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. 
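The equivalence between `flat_map` and `interleave` with `cycle_length=1` noted above can be checked directly; a small sketch:

```
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6]])

a = ds.flat_map(tf.data.Dataset.from_tensor_slices)
b = ds.interleave(tf.data.Dataset.from_tensor_slices, cycle_length=1)

# Both flatten the rows into the same order-preserving sequence.
print(list(a.as_numpy_iterator()))  # [1, 2, 3, 4, 5, 6]
print(list(b.as_numpy_iterator()))  # [1, 2, 3, 4, 5, 6]
```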
| ### `from_generator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173) ``` @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None, name=None ) ``` Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments) > > **Note:** The current implementation of [`Dataset.from_generator()`](dataset#from_generator) uses [`tf.numpy_function`](../numpy_function) and inherits the same constraints. In particular, it requires the dataset- and iterator-related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](dataset#from_generator). Moreover, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment. > The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function). The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified. The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../typespec) objects from the `output_signature` argument: ``` def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] ``` There is also a deprecated way to call `from_generator`, using either the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../tensor) objects with the types defined by `output_types` and with shapes that are either unknown or defined by `output_shapes`. > > **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](dataset#from_generator). > > > **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth. > | Args | | `generator` | A callable object that returns an object that supports the `iter()` protocol. 
If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. | | `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. | | `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../tensorshape) objects corresponding to each component of an element yielded by `generator`. | | `args` | (Optional.) A tuple of [`tf.Tensor`](../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. | | `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../typespec) objects corresponding to each component of an element yielded by `generator`. | | `name` | (Optional.) A name for the tf.data operations used by `from_generator`. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensor_slices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809) ``` @staticmethod from_tensor_slices( tensors, name=None ) ``` Creates a `Dataset` whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. ``` # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] ``` ``` # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] ``` ``` # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] ``` ``` # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True ``` ``` # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. 
batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensors` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729) ``` @staticmethod from_tensors( tensors, name=None ) ``` Creates a `Dataset` with a single element, comprising the given tensors. `from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead. ``` dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] ``` ``` # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `get_single_element` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671) ``` get_single_element( name=None ) ``` Returns the single element of the `dataset`. The function enables you to use a [`tf.data.Dataset`](dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates the ease of data transformation on tensors using the optimized [`tf.data.Dataset`](dataset) abstraction on top of them. 
For example, let's consider a `preprocessing_fn` that takes the raw features as input and returns the processed feature along with its label. ``` def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature raw_features = ... # input batch of BATCH_SIZE elements. dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() ``` In the above example, the `raw_features` tensor of length=BATCH\_SIZE was converted to a [`tf.data.Dataset`](dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features. > > **Note:** The `dataset` should contain only one element. > Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `get_single_element()` method is used to skip the iterator creation process and directly output the batch of features. This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](dataset) operations, and you want to use those transformations while serving your model. #### Keras ``` model = ... # A pre-built or custom model class PreprocessingModel(tf.keras.Model): def __init__(self, model): super().__init__() self.model = model @tf.function(input_signature=[...]) def serving_fn(self, data): ds = tf.data.Dataset.from_tensor_slices(data) ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) ds = ds.batch(batch_size=BATCH_SIZE) return tf.argmax(self.model(ds.get_single_element()), axis=-1) preprocessing_model = PreprocessingModel(model) your_exported_model_dir = ... # save the model to this path. tf.saved_model.save(preprocessing_model, your_exported_model_dir, signatures={'serving_default': preprocessing_model.serving_fn} ) ``` #### Estimator In the case of estimators, you generally need to define a `serving_input_fn`, which requires the features to be processed by the model during inference. ``` def serving_input_fn(): raw_feature_spec = ... # Spec for the raw_features input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( raw_feature_spec, default_batch_size=None) serving_input_receiver = input_fn() raw_features = serving_input_receiver.features def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() # Please note that the value of `BATCH_SIZE` should be equal to # the size of the leading dimension of `raw_features`. This ensures # that `dataset` has only one element, which is a prerequisite for # using `dataset.get_single_element()`. return tf.estimator.export.ServingInputReceiver( processed_features, serving_input_receiver.receiver_tensors) estimator = ... # A pre-built or custom estimator estimator.export_saved_model(your_exported_model_dir, serving_input_fn) ``` | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A nested structure of [`tf.Tensor`](../tensor) objects, corresponding to the single element of `dataset`. | | Raises | | `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
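The preprocessing pattern above, shrunk to a concrete, runnable sketch (the tensor values are illustrative):

```
import tensorflow as tf

raw = tf.constant([1.0, 2.0, 3.0, 4.0])

ds = (tf.data.Dataset.from_tensor_slices(raw)
      .map(lambda x: x * 10.0)
      .batch(4))  # batch size equals the input length, so exactly one element

print(ds.get_single_element().numpy())  # [10. 20. 30. 40.]
```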
| ### `group_by_window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824) ``` group_by_window( key_func, reduce_func, window_size=None, window_size_func=None, name=None ) ``` Groups windows of elements by key and reduces them. This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller. You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`. ``` dataset = tf.data.Dataset.range(10) window_size = 5 key_func = lambda x: x%2 reduce_func = lambda key, dataset: dataset.batch(window_size) dataset = dataset.group_by_window( key_func=key_func, reduce_func=reduce_func, window_size=window_size) for elem in dataset.as_numpy_iterator(): print(elem) [0 2 4 6 8] [1 3 5 7 9] ``` | Args | | `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../tf#int64) tensor. | | `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. | | `window_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. | | `window_size_func` | A function mapping a key to a [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. | ### `interleave` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222) ``` interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across this dataset, and interleaves the results. #### The type signature is: ``` def interleave( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` For example, you can use [`Dataset.interleave()`](dataset#interleave) to process many input files concurrently: ``` # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) ``` The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. 
If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. #### For example: ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] ``` > > **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined. > Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`. ``` filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` | Args | | `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](dataset). | | `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. | | `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. | | `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `list_files` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393) ``` @staticmethod list_files( file_pattern, shuffle=None, seed=None, name=None ) ``` A dataset of all files matching one or more glob patterns. The `file_pattern` argument should be a small number of glob patterns. 
If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems. > > **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order. > #### Example: If we had the following files on our filesystem: * /path/to/dir/a.txt * /path/to/dir/b.py * /path/to/dir/c.py If we pass "/path/to/dir/\*.py" as the `file_pattern`, the dataset would produce: * /path/to/dir/b.py * /path/to/dir/c.py | Args | | `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. | | `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `name` | Optional. A name for the tf.data operations used by `list_files`. | | Returns | | `Dataset` | A `Dataset` of strings corresponding to file names. | ### `map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056) ``` map( map_func, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across the elements of this dataset. This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). For example, `map` can be used for adding 1 to each element, or projecting a subset of element components. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] ``` The input signature of `map_func` is determined by the structure of each element in this dataset. ``` dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) ``` ``` # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] ``` ``` # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) ``` The value or values returned by `map_func` determine the structure of each element in the returned dataset. ``` dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects.
def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) ``` `map_func` can accept as arguments and return any type of dataset element. Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use [`tf.py_function`](../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` 3) Use [`tf.numpy_function`](../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../py_function) accepts [`tf.Tensor`](../tensor) whereas [`tf.numpy_function`](../numpy_function) accepts numpy arrays and returns only numpy arrays. For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` Note that the use of [`tf.numpy_function`](../numpy_function) and [`tf.py_function`](../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value. | Args | | `map_func` | A function mapping a dataset element to another dataset element. | | `num_parallel_calls` | (Optional.) 
A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L446-L464) ``` options() ``` Returns the options for this dataset and its inputs. | Returns | | A [`tf.data.Options`](options) object representing the dataset options. | ### `padded_batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889) ``` padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False, name=None ) ``` Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like [`tf.data.Dataset.batch`](dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. Unlike [`tf.data.Dataset.batch`](dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element: * If the dimension is a constant, the component will be padded out to that length in that dimension. * If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. ``` A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) ``` See also [`tf.data.experimental.dense_to_sparse_batch`](experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../sparse/sparsetensor). | Args | | `batch_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../tensorshape) or [`tf.int64`](../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. | | `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. | | `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> | ### `prefetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321) ``` prefetch( buffer_size, name=None ) ``` Creates a `Dataset` that prefetches elements from this dataset. Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. 
This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. > > **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each). > ``` dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. | | `name` | Optional. A name for the tf.data transformation. | | Returns | | `Dataset` | A `Dataset`. | ### `random` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992) ``` @staticmethod random( seed=None, name=None ) ``` Creates a `Dataset` of pseudorandom values. The dataset generates a sequence of uniformly distributed integer values. ``` ds1 = tf.data.Dataset.random(seed=4).take(10) ds2 = tf.data.Dataset.random(seed=4).take(10) print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator())) True ``` | Args | | `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `range` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211) ``` @staticmethod range( *args, **kwargs ) ``` Creates a `Dataset` of a step-separated range of values. ``` list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] ``` | Args | | `*args` | follows the same semantics as Python's `range`. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. | | `**kwargs` | * output\_type: The expected dtype of the elements. (Optional, default: [`tf.int64`](../../tf#int64)). * name: (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `RangeDataset`. | | Raises | | `ValueError` | if len(args) == 0. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544) ``` reduce( initial_state, reduce_func, name=None ) ``` Reduces the input dataset to a single element. The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
``` tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 ``` | Args | | `initial_state` | An element representing the initial state of the transformation. | | `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new state. The structure of `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A dataset element corresponding to the final state of the transformation. | ### `rejection_resample` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272) ``` rejection_resample( class_func, target_dist, initial_dist=None, seed=None, name=None ) ``` A transformation that resamples a dataset to a target distribution. Let's consider the following example where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with `target_dist` distribution. ``` initial_dist = [0.6, 0.4] num_classes = len(initial_dist) num_samples = 1000 data_np = np.random.choice(num_classes, num_samples, p=initial_dist) dataset = tf.data.Dataset.from_tensor_slices(data_np) ``` The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution. ``` target_dist = [0.5, 0.5] resampled_dataset = dataset.rejection_resample( class_func=lambda x: x, target_dist=target_dist, initial_dist=initial_dist) resampled_dataset = resampled_dataset.map( lambda class_func_result, data: data) ``` The distribution of classes in the resampled dataset will now be close to the target distribution. | Args | | `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../tf#int32) tensor. Values should be in `[0, num_classes)`. | | `target_dist` | A floating point type tensor, shaped `[num_classes]`. | | `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. | | `seed` | (Optional.) Python integer seed for the resampler. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset` | ### `repeat` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416) ``` repeat( count=None, name=None ) ``` Repeats this dataset so each original value is seen `count` times. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] ``` > > **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements. > | Args | | `count` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. |
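As a minimal sketch of the note in the `repeat` section above (assuming eager execution; the shuffled orders shown are just one possible outcome), an upstream `shuffle` is re-drawn on each repetition: ``` dataset = tf.data.Dataset.range(3).shuffle(3).repeat(2) # The two repetitions may be shuffled differently: list(dataset.as_numpy_iterator()) # e.g. [2, 0, 1, 1, 2, 0] ```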
### `sample_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412) ``` @staticmethod sample_from_datasets( datasets, weights=None, seed=None, stop_on_empty_dataset=False ) ``` Samples elements at random from the datasets in `datasets`. Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets: ``` dataset1 = tf.data.Dataset.range(0, 3) dataset2 = tf.data.Dataset.range(100, 103) ``` Suppose that we sample from these 2 datasets with the following weights: ``` sample_dataset = tf.data.Dataset.sample_from_datasets( [dataset1, dataset2], weights=[0.5, 0.5]) ``` One possible outcome of elements in sample\_dataset is: ``` print(list(sample_dataset.as_numpy_iterator())) # [100, 0, 1, 101, 2, 102] ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](dataset) objects with compatible structure. | | `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. | | Returns | | A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. | | Raises | | `TypeError` | If the `datasets` or `weights` arguments have the wrong type. | | `ValueError` | * If `datasets` is empty, or * If `weights` is specified and does not match the length of `datasets`. | ### `scan` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130) ``` scan( initial_state, scan_func, name=None ) ``` A transformation that scans a function across an input dataset. This transformation is a stateful relative of [`tf.data.Dataset.map`](dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`. ``` dataset = tf.data.Dataset.range(10) initial_state = tf.constant(0, dtype=tf.int64) scan_func = lambda state, i: (state + i, state + i) dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func) list(dataset.as_numpy_iterator()) [0, 1, 3, 6, 10, 15, 21, 28, 36, 45] ``` | Args | | `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. | | `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation.
| | Returns | | A `Dataset`. | ### `shard` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685) ``` shard( num_shards, index, name=None ) ``` Creates a `Dataset` that includes only 1/`num_shards` of this dataset. `shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i. ``` A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] ``` This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: ``` d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` #### Important caveats: * Be sure to shard before you use any randomizing operator (such as shuffle). * Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: ``` d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` | Args | | `num_shards` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of shards operating in parallel. | | `index` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the worker index. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `InvalidArgumentError` | if `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) | ### `shuffle` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523) ``` shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ``` Randomly shuffles the elements of this dataset. This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. `reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch.
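Before turning to `reshuffle_each_iteration`, here is a minimal sketch of the buffer mechanics described above (assuming eager execution; the shuffled order shown is illustrative): ``` dataset = tf.data.Dataset.range(5) # A buffer of size 1 only ever holds the next element, so the order is preserved. list(dataset.shuffle(1).as_numpy_iterator()) # [0, 1, 2, 3, 4] # A buffer covering the whole dataset yields a uniformly random permutation. list(dataset.shuffle(5).as_numpy_iterator()) # e.g. [3, 0, 4, 1, 2] ```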
In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # [1, 0, 2, 1, 0, 2] ``` In TF 2.0, [`tf.data.Dataset`](dataset) objects are Python iterables, which makes it possible to also create epochs through Python iteration: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 2, 0] ``` ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 0, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements from this dataset from which the new dataset will sample. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `skip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616) ``` skip( count, name=None ) ``` Creates a `Dataset` that skips `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] ``` | Args | | `count` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `snapshot` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099) ``` snapshot( path, compression='AUTO', reader_func=None, shard_func=None, name=None ) ``` API to persist the output of the input dataset. The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. <https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature. Users can specify various options to control the behavior of snapshot, including how snapshots are read and written, by passing in user-defined functions to the `reader_func` and `shard_func` parameters. `shard_func` is a user-specified function that maps input elements to snapshot shards.
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written. ``` dataset = ... dataset = dataset.enumerate() dataset = dataset.snapshot("/path/to/snapshot/dir", shard_func=lambda x, y: x % NUM_SHARDS, ...) dataset = dataset.map(lambda x, y: y) ``` `reader_func` is a user-specified function that accepts a single argument: a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism. Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets: ``` def user_reader_func(datasets): # shuffle the datasets splits datasets = datasets.shuffle(NUM_CORES) # read datasets in parallel and interleave their elements return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = dataset.snapshot("/path/to/snapshot/dir", reader_func=user_reader_func) ``` By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data. | Args | | `path` | Required. A directory to use for storing / loading the snapshot to / from. | | `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. | | `reader_func` | Optional. A function to control how to read data from snapshot shards. | | `shard_func` | Optional. A function to control how to shard data when writing a snapshot. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `take` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596) ``` take( count, name=None ) ``` Creates a `Dataset` with at most `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `count` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `take_while` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150) ``` take_while( predicate, name=None ) ``` A transformation that stops dataset iteration based on a `predicate`. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take_while(lambda x: x < 5) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../tf#bool) tensor. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. |
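Unlike [`Dataset.filter`](dataset#filter), which drops every non-matching element but keeps iterating, `take_while` ends iteration at the first element for which `predicate` returns `False`; a short sketch (assuming eager execution): ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 5, 3, 1]) list(dataset.take_while(lambda x: x < 5).as_numpy_iterator()) # [1, 2] list(dataset.filter(lambda x: x < 5).as_numpy_iterator()) # [1, 2, 3, 1] ```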
### `unbatch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698) ``` unbatch( name=None ) ``` Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ``` elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] ``` > > **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unique` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173) ``` unique( name=None ) ``` A transformation that discards duplicate elements of a `Dataset`. Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example: ``` dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1]) dataset = dataset.unique() sorted(list(dataset.as_numpy_iterator())) [1, 2, 37] ``` > > **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../tf#int32), [`tf.int64`](../../tf#int64) or [`tf.string`](../../tf#string) type. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426) ``` window( size, shift=None, stride=1, drop_remainder=False, name=None ) ``` Returns a dataset of "windows". Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`). #### For example: ``` dataset = tf.data.Dataset.range(7).window(3) for window in dataset: print(window) <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> ``` Since windows are datasets, they can be iterated over: ``` for window in dataset: print([item.numpy() for item in window]) [0, 1, 2] [3, 4, 5] [6] ``` #### Shift The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. ``` dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5] [4, 5, 6] ``` #### Stride The `stride` argument determines the stride between input elements within a window.
``` dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] ``` #### Nested elements When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them. #### The type signature is: ``` def window( self: Dataset[Nest[T]], ... ) -> Dataset[Nest[Dataset[T]]] ``` Applying `window` to a `Dataset` of tuples gives a tuple of windows: ``` dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])) dataset = dataset.window(2) windows = next(iter(dataset)) windows (<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>, <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>) ``` ``` def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(to_numpy(windows[0]), to_numpy(windows[1])) [1, 2] [6, 7] [3, 4] [8, 9] [5] [10] ``` Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`: ``` dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]}) dataset = dataset.window(2) def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(tf.nest.map_structure(to_numpy, windows)) {'a': [1, 2], 'b': [4, 5], 'c': [7, 8]} {'a': [3], 'b': [6], 'c': [9]} ``` #### Flatten a dataset of windows The [`Dataset.flat_map`](dataset#flat_map) and [`Dataset.interleave`](dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset. The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially. For example, to turn each window into a dense tensor: ``` size = 3 dataset = tf.data.Dataset.range(7).window(size, shift=1, drop_remainder=True) batched = dataset.flat_map(lambda x: x.batch(3)) for batch in batched: print(batch.numpy()) [0 1 2] [1 2 3] [2 3 4] [3 4 5] [4 5 6] ``` | Args | | `size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. | | `shift` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. | | `stride` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. | ### `with_options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726) ``` with_options( options, name=None ) ``` Returns a new [`tf.data.Dataset`](dataset) with the given options set. The options are "global" in the sense they apply to the entire dataset.
If options are set multiple times, they are merged as long as different options do not use different non-default values. ``` ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.deterministic = False ds = ds.with_options(options) ``` | Args | | `options` | A [`tf.data.Options`](options) that identifies the options to use. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` with the given options. | | Raises | | `ValueError` | when an option is set more than once to a non-default value. | ### `zip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259) ``` @staticmethod zip( datasets, name=None ) ``` Creates a `Dataset` by zipping together the given datasets. This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). ``` # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] ``` | Args | | `datasets` | A (nested) structure of datasets. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `__bool__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __bool__() ``` ### `__iter__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L481-L497) ``` __iter__() ``` Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. | Returns | | A [`tf.data.Iterator`](iterator) for the elements of this dataset. | | Raises | | `RuntimeError` | If not inside of tf.function and not executing eagerly. | ### `__len__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527) ``` __len__() ``` Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](dataset#cardinality) instead. | Returns | | An integer representing the length of the dataset.
| | Raises | | `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. | ### `__nonzero__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __nonzero__() ```
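As a short sketch of `__len__` in eager mode (the built-in `len` dispatches to it): ``` dataset = tf.data.Dataset.range(10) len(dataset) 10 ```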
tensorflow tf.data.TFRecordDataset tf.data.TFRecordDataset ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/readers.py#L383-L479) | A `Dataset` comprising records from one or more TFRecord files. Inherits From: [`Dataset`](dataset) ``` tf.data.TFRecordDataset( filenames, compression_type=None, buffer_size=None, num_parallel_reads=None, name=None ) ``` This dataset loads TFRecords from the files as bytes, exactly as they were written. `TFRecordDataset` does not do any parsing or decoding on its own. Parsing and decoding can be done by applying [`Dataset.map`](dataset#map) transformations after the `TFRecordDataset`. A minimal example is given below: ``` import os import tempfile example_path = os.path.join(tempfile.gettempdir(), "example.tfrecords") np.random.seed(0) ``` ``` # Write the records to a file. with tf.io.TFRecordWriter(example_path) as file_writer: for _ in range(4): x, y = np.random.random(), np.random.random() record_bytes = tf.train.Example(features=tf.train.Features(feature={ "x": tf.train.Feature(float_list=tf.train.FloatList(value=[x])), "y": tf.train.Feature(float_list=tf.train.FloatList(value=[y])), })).SerializeToString() file_writer.write(record_bytes) ``` ``` # Read the data back out. def decode_fn(record_bytes): return tf.io.parse_single_example( # Data record_bytes, # Schema {"x": tf.io.FixedLenFeature([], dtype=tf.float32), "y": tf.io.FixedLenFeature([], dtype=tf.float32)} ) ``` ``` for batch in tf.data.TFRecordDataset([example_path]).map(decode_fn): print("x = {x:.4f}, y = {y:.4f}".format(**batch)) x = 0.5488, y = 0.7152 x = 0.6028, y = 0.5449 x = 0.4237, y = 0.6459 x = 0.4376, y = 0.8918 ``` | Args | | `filenames` | A [`tf.string`](../../tf#string) tensor or [`tf.data.Dataset`](dataset) containing one or more filenames. | | `compression_type` | (Optional.) A [`tf.string`](../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. | | `buffer_size` | (Optional.) A [`tf.int64`](../../tf#int64) scalar representing the number of bytes in the read buffer. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value of 1-100 MBs. If `None`, a sensible default for both local and remote file systems is used. | | `num_parallel_reads` | (Optional.) A [`tf.int64`](../../tf#int64) scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If `None`, files will be read sequentially. | | `name` | (Optional.) A name for the tf.data operation. | | Raises | | `TypeError` | If any argument does not have the expected type. | | `ValueError` | If any argument does not have the expected shape. | | Attributes | | `element_spec` | The type specification of an element of this dataset. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) ``` For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). | Methods ------- ### `apply` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276) ``` apply( transformation_func ) ``` Applies a transformation function to this dataset.
`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`. ``` dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. | | Returns | | `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. | ### `as_numpy_iterator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620) ``` as_numpy_iterator() ``` Returns an iterator which converts all elements of the dataset to numpy. Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) ``` This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 ``` ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] ``` `as_numpy_iterator()` will preserve the nested structure of dataset elements. ``` dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True ``` | Returns | | An iterable over the elements of the dataset, with their tensors converted to numpy arrays. | | Raises | | `TypeError` | if an element contains a non-`Tensor` value. | | `RuntimeError` | if eager execution is not enabled. | ### `batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754) ``` batch( batch_size, drop_remainder=False, num_parallel_calls=None, deterministic=None, name=None ) ``` Combines consecutive elements of this dataset into batches. ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] ``` ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] ``` The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. > > **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch. 
> | Args | | `batch_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `bucket_by_sequence_length` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971) ``` bucket_by_sequence_length( element_length_func, bucket_boundaries, bucket_batch_sizes, padded_shapes=None, padding_values=None, pad_to_bucket_boundary=False, no_padding=False, drop_remainder=False, name=None ) ``` A transformation that buckets elements in a `Dataset` by length. Elements of the `Dataset` are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch, which increases training step efficiency. Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2. ``` elements = [ [0], [1, 2, 3, 4], [5, 6, 7], [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]] dataset = tf.data.Dataset.from_generator( lambda: elements, tf.int64, output_shapes=[None]) dataset = dataset.bucket_by_sequence_length( element_length_func=lambda elem: tf.shape(elem)[0], bucket_boundaries=[3, 5], bucket_batch_sizes=[2, 2, 2]) for elem in dataset.as_numpy_iterator(): print(elem) [[1 2 3 4] [5 6 7 0]] [[ 7 8 9 10 11 0] [13 14 15 16 19 20]] [[ 0 0] [21 22]] ``` | Args | | `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../tf#int32), determines the length of the element, which will determine the bucket it goes into. | | `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. | | `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. | | `padded_shapes` | Nested structure of [`tf.TensorShape`](../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. | | `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](dataset#padded_batch). Defaults to padding with 0.
| | `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. | | `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../sparse/sparsetensor) or of same shape). | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. | ### `cache` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576) ``` cache( filename='', name=None ) ``` Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. > > **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. > ``` dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] ``` When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed. ``` dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] ``` > > **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`. > | Args | | `filename` | A [`tf.string`](../../tf#string) scalar [`tf.Tensor`](../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `cardinality` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754) ``` cardinality() ``` Returns the cardinality of the dataset, if known. `cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). 
``` dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True ``` | Returns | | A scalar [`tf.int64`](../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../data#UNKNOWN_CARDINALITY) respectively. | ### `choose_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471) ``` @staticmethod choose_from_datasets( datasets, choice_dataset, stop_on_empty_dataset=True ) ``` Creates a dataset that deterministically chooses elements from `datasets`. For example, given the following datasets: ``` datasets = [tf.data.Dataset.from_tensors("foo").repeat(), tf.data.Dataset.from_tensors("bar").repeat(), tf.data.Dataset.from_tensors("baz").repeat()] # Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. choice_dataset = tf.data.Dataset.range(3).repeat(3) result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset) ``` The elements of `result` will be: ``` "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](dataset) objects with compatible structure. | | `choice_dataset` | A [`tf.data.Dataset`](dataset) of scalar [`tf.int64`](../../tf#int64) tensors between `0` and `len(datasets) - 1`. | | `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. | | Returns | | A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. | | Raises | | `TypeError` | If `datasets` or `choice_dataset` has the wrong type. | | `ValueError` | If `datasets` is empty. | ### `concatenate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289) ``` concatenate( dataset, name=None ) ``` Creates a `Dataset` by concatenating the given dataset with this dataset. ``` a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have # compatible element specs. c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> ``` | Args | | `dataset` | `Dataset` to be concatenated. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. 
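To combine more than two datasets, `concatenate` calls can be chained; a minimal sketch (the three-dataset list here is illustrative only):

```
import functools

# Build [0, 1], [10, 11], [20, 21] and join them end to end.
datasets = [tf.data.Dataset.range(i, i + 2) for i in [0, 10, 20]]
combined = functools.reduce(lambda a, b: a.concatenate(b), datasets)
list(combined.as_numpy_iterator())
[0, 1, 10, 11, 20, 21]
```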
| ### `enumerate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451) ``` enumerate( start=0, name=None ) ``` Enumerates the elements of this dataset. It is similar to python's `enumerate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) ``` ``` # The (nested) structure of the input dataset determines the # structure of elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) ``` | Args | | `start` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the start value for enumeration. | | `name` | Optional. A name for the tf.data operations used by `enumerate`. | | Returns | | `Dataset` | A `Dataset`. | ### `filter` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246) ``` filter( predicate, name=None ) ``` Filters this dataset according to `predicate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] ``` | Args | | `predicate` | A function mapping a dataset element to a boolean. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. | ### `flat_map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092) ``` flat_map( map_func, name=None ) ``` Maps `map_func` across this dataset and flattens the result. #### The type signature is: ``` def flat_map( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: ``` dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map( lambda x: tf.data.Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` [`tf.data.Dataset.interleave()`](dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](dataset#interleave) | Args | | `map_func` | A function mapping a dataset element to a dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_generator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173) ``` @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None, name=None ) ``` Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments) > > **Note:** The current implementation of [`Dataset.from_generator()`](dataset#from_generator) uses [`tf.numpy_function`](../numpy_function) and inherits the same constraints. 
In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](dataset#from_generator). Additionally, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment. > The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function). The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified. The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../typespec) objects from the `output_signature` argument: ``` def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] ``` There is also a deprecated way to call `from_generator`, using either the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../tensor) objects with the types defined by `output_types` and with shapes that are either unknown or defined by `output_shapes`. > > **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](dataset#from_generator). > > > **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth. > | Args | | `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. | | `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. | | `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../tensorshape) objects corresponding to each component of an element yielded by `generator`. | | `args` | (Optional.) A tuple of [`tf.Tensor`](../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. | | `output_signature` | (Optional.)
A (nested) structure of [`tf.TypeSpec`](../typespec) objects corresponding to each component of an element yielded by `generator`. | | `name` | (Optional.) A name for the tf.data operations used by `from_generator`. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensor_slices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809) ``` @staticmethod from_tensor_slices( tensors, name=None ) ``` Creates a `Dataset` whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. ``` # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] ``` ``` # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] ``` ``` # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] ``` ``` # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True ``` ``` # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 1-D tensor of 3 elements dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays) (a sketch is shown below). | Args | | `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`.
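A minimal sketch of one such alternative, streaming a NumPy array through `from_generator` instead of embedding it as a constant (`big_array` and its shape are illustrative only; the guide describes further options):

```
import numpy as np

big_array = np.random.rand(1000, 4).astype(np.float32)  # stands in for a large array
# Each row of the array becomes one dataset element; nothing is embedded in
# the graph as a constant.
dataset = tf.data.Dataset.from_generator(
    lambda: iter(big_array),
    output_signature=tf.TensorSpec(shape=(4,), dtype=tf.float32))
```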
| ### `from_tensors` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729) ``` @staticmethod from_tensors( tensors, name=None ) ``` Creates a `Dataset` with a single element, comprising the given tensors. `from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead. ``` dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] ``` ``` # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `get_single_element` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671) ``` get_single_element( name=None ) ``` Returns the single element of the `dataset`. The function enables you to use a [`tf.data.Dataset`](dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This simplifies data transformations on tensors using the optimized [`tf.data.Dataset`](dataset) abstraction on top of them. For example, let's consider a `preprocessing_fn` that takes the raw features as input and returns the processed feature along with its label. ``` def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature raw_features = ... # input batch of BATCH_SIZE elements. dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() ``` In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features. > > **Note:** The `dataset` should contain only one element. > Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `dataset.get_single_element()` method is used to skip the iterator creation process and directly output the batch of features. This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](dataset) operations, and you want to use those transformations while serving your model. #### Keras ``` model = ...
# A pre-built or custom model class PreprocessingModel(tf.keras.Model): def __init__(self, model): super().__init__() self.model = model @tf.function(input_signature=[...]) def serving_fn(self, data): ds = tf.data.Dataset.from_tensor_slices(data) ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) ds = ds.batch(batch_size=BATCH_SIZE) return tf.argmax(self.model(ds.get_single_element()), axis=-1) preprocessing_model = PreprocessingModel(model) your_exported_model_dir = ... # save the model to this path. tf.saved_model.save(preprocessing_model, your_exported_model_dir, signatures={'serving_default': preprocessing_model.serving_fn} ) ``` #### Estimator In the case of estimators, you generally need to define a `serving_input_fn`, which prepares the features that the model will process during inference. ``` def serving_input_fn(): raw_feature_spec = ... # Spec for the raw_features input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( raw_feature_spec, default_batch_size=None) serving_input_receiver = input_fn() raw_features = serving_input_receiver.features def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() # Please note that the value of `BATCH_SIZE` should be equal to # the size of the leading dimension of `raw_features`. This ensures # that `dataset` has only one element, which is a pre-requisite for # using `dataset.get_single_element()`. return tf.estimator.export.ServingInputReceiver( processed_features, serving_input_receiver.receiver_tensors) estimator = ... # A pre-built or custom estimator estimator.export_saved_model(your_exported_model_dir, serving_input_fn) ``` | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A nested structure of [`tf.Tensor`](../tensor) objects, corresponding to the single element of `dataset`. | | Raises | | `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. | ### `group_by_window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824) ``` group_by_window( key_func, reduce_func, window_size=None, window_size_func=None, name=None ) ``` Groups windows of elements by key and reduces them. This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller. You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`. ``` dataset = tf.data.Dataset.range(10) window_size = 5 key_func = lambda x: x%2 reduce_func = lambda key, dataset: dataset.batch(window_size) dataset = dataset.group_by_window( key_func=key_func, reduce_func=reduce_func, window_size=window_size) for elem in dataset.as_numpy_iterator(): print(elem) [0 2 4 6 8] [1 3 5 7 9] ``` | Args | | `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../tf#int64) tensor.
| | `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. | | `window_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. | | `window_size_func` | A function mapping a key to a [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. | ### `interleave` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222) ``` interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across this dataset, and interleaves the results. #### The type signature is: ``` def interleave( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` For example, you can use [`Dataset.interleave()`](dataset#interleave) to process many input files concurrently: ``` # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) ``` The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them, producing `block_length` consecutive elements from each iterator and consuming the next input element each time it reaches the end of an iterator. #### For example: ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] ``` > > **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined. > Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`.
``` filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` | Args | | `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](dataset). | | `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. | | `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. | | `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `list_files` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393) ``` @staticmethod list_files( file_pattern, shuffle=None, seed=None, name=None ) ``` A dataset of all files matching one or more glob patterns. The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems. > > **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order. > #### Example: If we had the following files on our filesystem: * /path/to/dir/a.txt * /path/to/dir/b.py * /path/to/dir/c.py If we pass "/path/to/dir/\*.py" as the `file_pattern`, the dataset would produce: * /path/to/dir/b.py * /path/to/dir/c.py | Args | | `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. | | `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `name` | Optional. A name for the tf.data operations used by `list_files`.
| | Returns | | `Dataset` | A `Dataset` of strings corresponding to file names. | ### `map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056) ``` map( map_func, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across the elements of this dataset. This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). For example, `map` can be used for adding 1 to each element, or projecting a subset of element components. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] ``` The input signature of `map_func` is determined by the structure of each element in this dataset. ``` dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) ``` ``` # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] ``` ``` # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) ``` The value or values returned by `map_func` determine the structure of each element in the returned dataset. ``` dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) ``` `map_func` can accept as arguments and return any type of dataset element. Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. 
The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use [`tf.py_function`](../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` 3) Use [`tf.numpy_function`](../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../py_function) accepts [`tf.Tensor`](../tensor) whereas [`tf.numpy_function`](../numpy_function) accepts numpy arrays and returns only numpy arrays. For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` Note that the use of [`tf.numpy_function`](../numpy_function) and [`tf.py_function`](../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value. | Args | | `map_func` | A function mapping a dataset element to another dataset element. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L446-L464) ``` options() ``` Returns the options for this dataset and its inputs. | Returns | | A [`tf.data.Options`](options) object representing the dataset options.
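Options are typically set through `with_options` and can then be inspected here; a minimal sketch using the public `tf.data.Options` API:

```
dataset = tf.data.Dataset.range(3)
options = tf.data.Options()
options.deterministic = False
dataset = dataset.with_options(options)
# The option set on the input dataset is visible through `options()`.
print(dataset.options().deterministic)
False
```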
| ### `padded_batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889) ``` padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False, name=None ) ``` Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like [`tf.data.Dataset.batch`](dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. Unlike [`tf.data.Dataset.batch`](dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element: * If the dimension is a constant, the component will be padded out to that length in that dimension. * If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. ``` A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) ``` See also [`tf.data.experimental.dense_to_sparse_batch`](experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../sparse/sparsetensor). | Args | | `batch_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `padded_shapes` | (Optional.) 
A (nested) structure of [`tf.TensorShape`](../tensorshape) or [`tf.int64`](../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. | | `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. | | `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> | ### `prefetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321) ``` prefetch( buffer_size, name=None ) ``` Creates a `Dataset` that prefetches elements from this dataset. Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. > > **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each). > ``` dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. | | `name` | Optional. A name for the tf.data transformation. | | Returns | | `Dataset` | A `Dataset`. | ### `random` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992) ``` @staticmethod random( seed=None, name=None ) ``` Creates a `Dataset` of pseudorandom values. The dataset generates a sequence of uniformly distributed integer values. 
``` ds1 = tf.data.Dataset.random(seed=4).take(10) ds2 = tf.data.Dataset.random(seed=4).take(10) print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator())) True ``` | Args | | `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `range` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211) ``` @staticmethod range( *args, **kwargs ) ``` Creates a `Dataset` of a step-separated range of values. ``` list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] ``` | Args | | `*args` | follows the same semantics as Python's `range`. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. | | `**kwargs` | * output\_type: Its expected dtype. (Optional, default: [`tf.int64`](../../tf#int64)). * name: (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `RangeDataset`. | | Raises | | `ValueError` | if len(args) == 0. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544) ``` reduce( initial_state, reduce_func, name=None ) ``` Reduces the input dataset to a single element. The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result. ``` tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 ``` | Args | | `initial_state` | An element representing the initial state of the transformation. | | `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new element. The structure of `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A dataset element corresponding to the final state of the transformation. | ### `rejection_resample` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272) ``` rejection_resample( class_func, target_dist, initial_dist=None, seed=None, name=None ) ``` A transformation that resamples a dataset to a target distribution. Let's consider the following example, where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution. ``` initial_dist = [0.6, 0.4] num_classes = len(initial_dist) num_samples = 1000 data_np = np.random.choice(num_classes, num_samples, p=initial_dist) dataset = tf.data.Dataset.from_tensor_slices(data_np) ``` The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution.
``` target_dist = [0.5, 0.5] resampled_dataset = dataset.rejection_resample( class_func=lambda x: x, target_dist=target_dist, initial_dist=initial_dist) resampled_dataset = resampled_dataset.map( lambda class_func_result, data: data) ``` The class distribution of the `resampled_dataset` will now be close to the target distribution. | Args | | `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../tf#int32) tensor. Values should be in `[0, num_classes)`. | | `target_dist` | A floating point type tensor, shaped `[num_classes]`. | | `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. | | `seed` | (Optional.) Python integer seed for the resampler. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset` | ### `repeat` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416) ``` repeat( count=None, name=None ) ``` Repeats this dataset so each original value is seen `count` times. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] ``` > > **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements. > | Args | | `count` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `sample_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412) ``` @staticmethod sample_from_datasets( datasets, weights=None, seed=None, stop_on_empty_dataset=False ) ``` Samples elements at random from the datasets in `datasets`. Creates a dataset by interleaving elements of `datasets` with probability `weights[i]` of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets: ``` dataset1 = tf.data.Dataset.range(0, 3) dataset2 = tf.data.Dataset.range(100, 103) ``` Suppose that we sample from these 2 datasets with the following weights: ``` sample_dataset = tf.data.Dataset.sample_from_datasets( [dataset1, dataset2], weights=[0.5, 0.5]) ``` One possible outcome of elements in `sample_dataset` is: ``` print(list(sample_dataset.as_numpy_iterator())) # [100, 0, 1, 101, 2, 102] ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](dataset) objects with compatible structure. | | `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset.
If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. | | Returns | | A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. | | Raises | | `TypeError` | If the `datasets` or `weights` arguments have the wrong type. | | `ValueError` | * If `datasets` is empty, or * If `weights` is specified and does not match the length of `datasets`. | ### `scan` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130) ``` scan( initial_state, scan_func, name=None ) ``` A transformation that scans a function across an input dataset. This transformation is a stateful relative of [`tf.data.Dataset.map`](dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`. ``` dataset = tf.data.Dataset.range(10) initial_state = tf.constant(0, dtype=tf.int64) scan_func = lambda state, i: (state + i, state + i) dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func) list(dataset.as_numpy_iterator()) [0, 1, 3, 6, 10, 15, 21, 28, 36, 45] ``` | Args | | `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. | | `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `shard` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685) ``` shard( num_shards, index, name=None ) ``` Creates a `Dataset` that includes only 1/`num_shards` of this dataset. `shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i. ``` A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] ``` This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: ``` d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` #### Important caveats: * Be sure to shard before you use any randomizing operator (such as shuffle). * Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker.
The following is an example of an efficient sharding strategy within a complete pipeline: ``` d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` | Args | | `num_shards` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of shards operating in parallel. | | `index` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the worker index. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `InvalidArgumentError` | if `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing in a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) | ### `shuffle` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523) ``` shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ``` Randomly shuffles the elements of this dataset. This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. `reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # [1, 0, 2, 1, 0, 2] ``` In TF 2.0, [`tf.data.Dataset`](dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 2, 0] ``` ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 0, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements from this dataset from which the new dataset will sample. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.)
| | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `skip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616) ``` skip( count, name=None ) ``` Creates a `Dataset` that skips `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] ``` | Args | | `count` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `snapshot` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099) ``` snapshot( path, compression='AUTO', reader_func=None, shard_func=None, name=None ) ``` API to persist the output of the input dataset. The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. <https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature. Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to disk, by passing in user-defined functions to the `reader_func` and `shard_func` parameters. `shard_func` is a user-specified function that maps input elements to snapshot shards. Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written. ``` dataset = ... dataset = dataset.enumerate() dataset = dataset.snapshot("/path/to/snapshot/dir", shard_func=lambda x, y: x % NUM_SHARDS, ...) dataset = dataset.map(lambda x, y: y) ``` `reader_func` is a user-specified function that accepts a single argument: a `Dataset` of `Dataset`s, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of the shards specified in the `shard_func` (see above). The function should return a `Dataset` of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism. Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets: ``` def user_reader_func(datasets): # shuffle the datasets splits datasets = datasets.shuffle(NUM_CORES) # read datasets in parallel and interleave their elements return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = dataset.snapshot("/path/to/snapshot/dir", reader_func=user_reader_func) ``` By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data. | Args | | `path` | Required. A directory to use for storing / loading the snapshot to / from.
| | `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. | | `reader_func` | Optional. A function to control how to read data from snapshot shards. | | `shard_func` | Optional. A function to control how to shard data when writing a snapshot. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `take` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596) ``` take( count, name=None ) ``` Creates a `Dataset` with at most `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `count` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `take_while` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150) ``` take_while( predicate, name=None ) ``` A transformation that stops dataset iteration based on a `predicate`. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take_while(lambda x: x < 5) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../tf#bool) tensor. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unbatch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698) ``` unbatch( name=None ) ``` Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ``` elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] ``` > > **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unique` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173) ``` unique( name=None ) ``` A transformation that discards duplicate elements of a `Dataset`. Use this transformation to produce a dataset that contains one instance of each unique element in the input. 
For example:

```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```

> > **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../tf#int32), [`tf.int64`](../../tf#int64) or [`tf.string`](../../tf#string) type. >

| Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. |

### `window`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)

```
window(
    size, shift=None, stride=1, drop_remainder=False, name=None
)
```

Returns a dataset of "windows". Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).

#### For example:

```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
  print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```

Since windows are datasets, they can be iterated over:

```
for window in dataset:
  print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```

#### Shift

The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.

```
dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True)
for window in dataset:
  print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```

#### Stride

The `stride` argument determines the stride between input elements within a window.

```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2, drop_remainder=True)
for window in dataset:
  print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```

#### Nested elements

When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them.

#### The type signature is:

```
def window(
  self: Dataset[Nest[T]],
  ...
) -> Dataset[Nest[Dataset[T]]]
```

Applying `window` to a `Dataset` of tuples gives a tuple of windows:

```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
                                              [6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
 <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```

```
def to_numpy(ds):
  return list(ds.as_numpy_iterator())

for windows in dataset:
  print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```

Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:

```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
                                              'b': [4, 5, 6],
                                              'c': [7, 8, 9]})
dataset = dataset.window(2)

def to_numpy(ds):
  return list(ds.as_numpy_iterator())

for windows in dataset:
  print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```

#### Flatten a dataset of windows

The [`Dataset.flat_map`](dataset#flat_map) and [`Dataset.interleave`](dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.

The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.

For example, to turn each window into a dense tensor:

```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1, drop_remainder=True)
batched = dataset.flat_map(lambda x: x.batch(3))
for batch in batched:
  print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```

| Args | | `size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. | | `shift` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. | | `stride` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |

### `with_options`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)

```
with_options(
    options, name=None
)
```

Returns a new [`tf.data.Dataset`](dataset) with the given options set.

The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.

```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
                   cycle_length=3,
                   num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```

| Args | | `options` | A [`tf.data.Options`](options) that identifies the options to use. | | `name` | (Optional.) A name for the tf.data operation.
| | Returns | | `Dataset` | A `Dataset` with the given options. | | Raises | | `ValueError` | when an option is set more than once to a non-default value |

### `zip`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)

```
@staticmethod
zip(
    datasets, name=None
)
```

Creates a `Dataset` by zipping together the given datasets.

This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).

```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7)  # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]

# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2)  # ==> [ [7, 8],
                                           #      [9, 10],
                                           #      [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
  print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))

# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15)  # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```

| Args | | `datasets` | A (nested) structure of datasets. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. |

### `__bool__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)

```
__bool__()
```

### `__iter__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L481-L497)

```
__iter__()
```

Creates an iterator for elements of this dataset.

The returned iterator implements the Python Iterator protocol.

| Returns | | A [`tf.data.Iterator`](iterator) for the elements of this dataset. | | Raises | | `RuntimeError` | If not inside of tf.function and not executing eagerly. |

### `__len__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)

```
__len__()
```

Returns the length of the dataset if it is known and finite.

This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](dataset#cardinality) instead.

| Returns | | An integer representing the length of the dataset. | | Raises | | `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |

### `__nonzero__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)

```
__nonzero__()
```
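To illustrate the `__len__` / `cardinality` distinction described above, a minimal sketch (eager execution assumed; the `range(42)` dataset is an illustrative choice):

```
import tensorflow as tf

dataset = tf.data.Dataset.range(42)
print(len(dataset))  # 42 -- the length is known and finite

dataset = dataset.repeat()
# len(dataset) would now raise RuntimeError (infinite length),
# so query the cardinality instead:
print(dataset.cardinality() == tf.data.INFINITE_CARDINALITY)  # True
```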
tensorflow tf.data.TextLineDataset tf.data.TextLineDataset ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/readers.py#L168-L249) | Creates a `Dataset` comprising lines from one or more text files. Inherits From: [`Dataset`](dataset) ``` tf.data.TextLineDataset( filenames, compression_type=None, buffer_size=None, num_parallel_reads=None, name=None ) ``` The [`tf.data.TextLineDataset`](textlinedataset) loads text from text files and creates a dataset where each line of the files becomes an element of the dataset. For example, suppose we have 2 files "text\_lines0.txt" and "text\_lines1.txt" with the following lines: ``` with open('/tmp/text_lines0.txt', 'w') as f: f.write('the cow\n') f.write('jumped over\n') f.write('the moon\n') with open('/tmp/text_lines1.txt', 'w') as f: f.write('jack and jill\n') f.write('went up\n') f.write('the hill\n') ``` We can construct a TextLineDataset from them as follows: ``` dataset = tf.data.TextLineDataset(['/tmp/text_lines0.txt', '/tmp/text_lines1.txt']) ``` The elements of the dataset are expected to be: ``` for element in dataset.as_numpy_iterator(): print(element) b'the cow' b'jumped over' b'the moon' b'jack and jill' b'went up' b'the hill' ``` | Args | | `filenames` | A [`tf.data.Dataset`](dataset) whose elements are [`tf.string`](../../tf#string) scalars, a [`tf.string`](../../tf#string) tensor, or a value that can be converted to a [`tf.string`](../../tf#string) tensor (such as a list of Python strings). | | `compression_type` | (Optional.) A [`tf.string`](../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. | | `buffer_size` | (Optional.) A [`tf.int64`](../../tf#int64) scalar denoting the number of bytes to buffer. A value of 0 results in the default buffering values chosen based on the compression type. | | `num_parallel_reads` | (Optional.) A [`tf.int64`](../../tf#int64) scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If `None`, files will be read sequentially. | | `name` | (Optional.) A name for the tf.data operation. | | Attributes | | `element_spec` | The type specification of an element of this dataset. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) ``` For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). | Methods ------- ### `apply` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276) ``` apply( transformation_func ) ``` Applies a transformation function to this dataset. `apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`. ``` dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. | | Returns | | `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. 
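Since `apply` accepts any function from `Dataset` to `Dataset`, a common pattern is a factory that closes over parameters and returns the transformation. A small sketch; the `filter_below` helper is hypothetical, not part of the API:

```
import tensorflow as tf

def filter_below(threshold):
  # Hypothetical factory: returns a Dataset -> Dataset function
  # suitable for passing to `apply`.
  def _apply_fn(ds):
    return ds.filter(lambda x: x < threshold)
  return _apply_fn

dataset = tf.data.Dataset.range(100).apply(filter_below(5))
print(list(dataset.as_numpy_iterator()))  # [0, 1, 2, 3, 4]
```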
| ### `as_numpy_iterator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620) ``` as_numpy_iterator() ``` Returns an iterator which converts all elements of the dataset to numpy. Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) ``` This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 ``` ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] ``` `as_numpy_iterator()` will preserve the nested structure of dataset elements. ``` dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True ``` | Returns | | An iterable over the elements of the dataset, with their tensors converted to numpy arrays. | | Raises | | `TypeError` | if an element contains a non-`Tensor` value. | | `RuntimeError` | if eager execution is not enabled. | ### `batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754) ``` batch( batch_size, drop_remainder=False, num_parallel_calls=None, deterministic=None, name=None ) ``` Combines consecutive elements of this dataset into batches. ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] ``` ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] ``` The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. > > **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch. > | Args | | `batch_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. 
If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. |

### `bucket_by_sequence_length`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971)

```
bucket_by_sequence_length(
    element_length_func,
    bucket_boundaries,
    bucket_batch_sizes,
    padded_shapes=None,
    padding_values=None,
    pad_to_bucket_boundary=False,
    no_padding=False,
    drop_remainder=False,
    name=None
)
```

A transformation that buckets elements in a `Dataset` by length.

Elements of the `Dataset` are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch, which increases training step efficiency.

Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2.

```
elements = [
  [0], [1, 2, 3, 4], [5, 6, 7],
  [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
dataset = tf.data.Dataset.from_generator(
    lambda: elements, tf.int64, output_shapes=[None])
dataset = dataset.bucket_by_sequence_length(
        element_length_func=lambda elem: tf.shape(elem)[0],
        bucket_boundaries=[3, 5],
        bucket_batch_sizes=[2, 2, 2])
for elem in dataset.as_numpy_iterator():
  print(elem)
[[1 2 3 4]
 [5 6 7 0]]
[[ 7  8  9 10 11  0]
 [13 14 15 16 19 20]]
[[ 0  0]
 [21 22]]
```

| Args | | `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../tf#int32), determines the length of the element, which will determine the bucket it goes into. | | `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. | | `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. | | `padded_shapes` | Nested structure of [`tf.TensorShape`](../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. | | `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](dataset#padded_batch). Defaults to padding with 0. | | `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. | | `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../sparse/sparsetensor) or of same shape). | | `drop_remainder` | (Optional.)
A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. | ### `cache` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576) ``` cache( filename='', name=None ) ``` Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. > > **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. > ``` dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] ``` When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed. ``` dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] ``` > > **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`. > | Args | | `filename` | A [`tf.string`](../../tf#string) scalar [`tf.Tensor`](../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `cardinality` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754) ``` cardinality() ``` Returns the cardinality of the dataset, if known. `cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). ``` dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True ``` | Returns | | A scalar [`tf.int64`](../../tf#int64) `Tensor` representing the cardinality of the dataset. 
If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../data#UNKNOWN_CARDINALITY) respectively. |

### `choose_from_datasets`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471)

```
@staticmethod
choose_from_datasets(
    datasets, choice_dataset, stop_on_empty_dataset=True
)
```

Creates a dataset that deterministically chooses elements from `datasets`.

For example, given the following datasets:

```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
            tf.data.Dataset.from_tensors("bar").repeat(),
            tf.data.Dataset.from_tensors("baz").repeat()]

# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)

result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset)
```

The elements of `result` will be:

```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```

| Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](dataset) objects with compatible structure. | | `choice_dataset` | A [`tf.data.Dataset`](dataset) of scalar [`tf.int64`](../../tf#int64) tensors between `0` and `len(datasets) - 1`. | | `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. | | Returns | | A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. | | Raises | | `TypeError` | If `datasets` or `choice_dataset` has the wrong type. | | `ValueError` | If `datasets` is empty. |

### `concatenate`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289)

```
concatenate(
    dataset, name=None
)
```

Creates a `Dataset` by concatenating the given dataset with this dataset.

```
a = tf.data.Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8)  # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]

# The input dataset and dataset to be concatenated should have
# compatible element specs.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)

d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
```

| Args | | `dataset` | `Dataset` to be concatenated. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. |

### `enumerate`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451)

```
enumerate(
    start=0, name=None
)
```

Enumerates the elements of this dataset.

It is similar to Python's `enumerate`.

```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
  print(element)
(5, 1)
(6, 2)
(7, 3)
```

```
# The (nested) structure of the input dataset determines the
# structure of elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) ``` | Args | | `start` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the start value for enumeration. | | `name` | Optional. A name for the tf.data operations used by `enumerate`. | | Returns | | `Dataset` | A `Dataset`. | ### `filter` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246) ``` filter( predicate, name=None ) ``` Filters this dataset according to `predicate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] ``` | Args | | `predicate` | A function mapping a dataset element to a boolean. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. | ### `flat_map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092) ``` flat_map( map_func, name=None ) ``` Maps `map_func` across this dataset and flattens the result. #### The type signature is: ``` def flat_map( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: ``` dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map( lambda x: tf.data.Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` [`tf.data.Dataset.interleave()`](dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](dataset#interleave) | Args | | `map_func` | A function mapping a dataset element to a dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_generator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173) ``` @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None, name=None ) ``` Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments) > > **Note:** The current implementation of [`Dataset.from_generator()`](dataset#from_generator) uses [`tf.numpy_function`](../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](dataset#from_generator). In particular, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment. 
> The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function).

The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified.

The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../typespec) objects from the `output_signature` argument:

```
def gen():
  ragged_tensor = tf.ragged.constant([[1, 2], [3]])
  yield 42, ragged_tensor

dataset = tf.data.Dataset.from_generator(
     gen,
     output_signature=(
         tf.TensorSpec(shape=(), dtype=tf.int32),
         tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))

list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
```

There is also a deprecated way to call `from_generator`, either with the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../tensor) objects with the types defined by `output_types` and with the shapes which are either unknown or defined by `output_shapes`.

> > **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](dataset#from_generator). > > > **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth. >

| Args | | `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. | | `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. | | `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../tensorshape) objects corresponding to each component of an element yielded by `generator`. | | `args` | (Optional.) A tuple of [`tf.Tensor`](../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. | | `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../typespec) objects corresponding to each component of an element yielded by `generator`. | | `name` | (Optional.) A name for the tf.data operations used by `from_generator`. | | Returns | | `Dataset` | A `Dataset`.
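For comparison with the `output_signature` example above, here is a sketch of the deprecated calling convention (the generator and shapes are illustrative); prefer `output_signature` in new code:

```
import tensorflow as tf

def gen():
  yield 1, [1.0, 2.0]
  yield 2, [3.0]

# Deprecated style: dtypes come from `output_types`; the variable-length
# second component gets a partially unknown shape via `output_shapes`.
dataset = tf.data.Dataset.from_generator(
    gen,
    output_types=(tf.int32, tf.float32),
    output_shapes=((), (None,)))
for x, y in dataset:
  print(x.numpy(), y.numpy())
```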
| ### `from_tensor_slices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809) ``` @staticmethod from_tensor_slices( tensors, name=None ) ``` Creates a `Dataset` whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. ``` # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] ``` ``` # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] ``` ``` # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] ``` ``` # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True ``` ``` # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensors` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729) ``` @staticmethod from_tensors( tensors, name=None ) ``` Creates a `Dataset` with a single element, comprising the given tensors. 
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead.

```
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
```

```
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1, 2, 3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
```

Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).

| Args | | `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. |

### `get_single_element`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671)

```
get_single_element(
    name=None
)
```

Returns the single element of the `dataset`.

The function enables you to use a [`tf.data.Dataset`](dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates data transformation on tensors using the optimized [`tf.data.Dataset`](dataset) abstraction on top of them.

For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label.

```
def preprocessing_fn(raw_feature):
  # ... the raw_feature is preprocessed as per the use-case
  return feature

raw_features = ...  # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
           .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
           .batch(BATCH_SIZE))

processed_features = dataset.get_single_element()
```

In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element which is a batch of all the processed features.

> > **Note:** The `dataset` should contain only one element. >

Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `get_single_element()` method is used to skip the iterator creation process and directly output the batch of features.

This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](dataset) operations, and you want to use those transformations while serving your model.
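Before the fuller Keras and Estimator examples below, a minimal sketch of the mechanics (the `range(4)` dataset is an illustrative choice): batch the entire dataset into one element, then extract it without creating an iterator.

```
import tensorflow as tf

# Batching the whole dataset yields exactly one element, the
# prerequisite for get_single_element().
dataset = tf.data.Dataset.range(4).batch(4)
element = dataset.get_single_element()
print(element.numpy())  # [0 1 2 3]
```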
#### Keras

```
model = ...  # A pre-built or custom model

class PreprocessingModel(tf.keras.Model):
  def __init__(self, model):
    super().__init__()
    self.model = model

  @tf.function(input_signature=[...])
  def serving_fn(self, data):
    ds = tf.data.Dataset.from_tensor_slices(data)
    ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
    ds = ds.batch(batch_size=BATCH_SIZE)
    return tf.argmax(self.model(ds.get_single_element()), axis=-1)

preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ...  # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
    signatures={'serving_default': preprocessing_model.serving_fn})
```

#### Estimator

In the case of estimators, you generally need to define a `serving_input_fn` which would require the features to be processed by the model while inferencing.

```
def serving_input_fn():

  raw_feature_spec = ...  # Spec for the raw_features
  input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec, default_batch_size=None)
  serving_input_receiver = input_fn()
  raw_features = serving_input_receiver.features

  def preprocessing_fn(raw_feature):
    # ... the raw_feature is preprocessed as per the use-case
    return feature

  dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
             .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
             .batch(BATCH_SIZE))

  processed_features = dataset.get_single_element()

  # Please note that the value of `BATCH_SIZE` should be equal to
  # the size of the leading dimension of `raw_features`. This ensures
  # that `dataset` has only one element, which is a prerequisite for
  # using `dataset.get_single_element()`.

  return tf.estimator.export.ServingInputReceiver(
      processed_features, serving_input_receiver.receiver_tensors)

estimator = ...  # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```

| Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A nested structure of [`tf.Tensor`](../tensor) objects, corresponding to the single element of `dataset`. | | Raises | | `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |

### `group_by_window`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824)

```
group_by_window(
    key_func, reduce_func, window_size=None, window_size_func=None, name=None
)
```

Groups windows of elements by key and reduces them.

This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller.

You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`.

```
dataset = tf.data.Dataset.range(10)
window_size = 5
key_func = lambda x: x % 2
reduce_func = lambda key, dataset: dataset.batch(window_size)
dataset = dataset.group_by_window(
          key_func=key_func,
          reduce_func=reduce_func,
          window_size=window_size)
for elem in dataset.as_numpy_iterator():
  print(elem)
[0 2 4 6 8]
[1 3 5 7 9]
```

| Args | | `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../tf#int64) tensor.
| | `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. | | `window_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. | | `window_size_func` | A function mapping a key to a [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. | ### `interleave` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222) ``` interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across this dataset, and interleaves the results. #### The type signature is: ``` def interleave( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` For example, you can use [`Dataset.interleave()`](dataset#interleave) to process many input files concurrently: ``` # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) ``` The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. #### For example: ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] ``` > > **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined. > Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`. 
```
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
             "/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
    cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
    deterministic=False)
```

| Args | | `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](dataset). | | `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. | | `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. | | `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. |

### `list_files`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393)

```
@staticmethod
list_files(
    file_pattern, shuffle=None, seed=None, name=None
)
```

A dataset of all files matching one or more glob patterns.

The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems.

> > **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order. >

#### Example:

If we had the following files on our filesystem:

* /path/to/dir/a.txt
* /path/to/dir/b.py
* /path/to/dir/c.py

If we pass "/path/to/dir/*.py" as the `file_pattern`, the dataset would produce:

* /path/to/dir/b.py
* /path/to/dir/c.py

| Args | | `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. | | `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `name` | Optional. A name for the tf.data operations used by `list_files`.
| | Returns | | `Dataset` | A `Dataset` of strings corresponding to file names. | ### `map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056) ``` map( map_func, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across the elements of this dataset. This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). For example, `map` can be used for adding 1 to each element, or projecting a subset of element components. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] ``` The input signature of `map_func` is determined by the structure of each element in this dataset. ``` dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) ``` ``` # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] ``` ``` # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) ``` The value or values returned by `map_func` determine the structure of each element in the returned dataset. ``` dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) ``` `map_func` can accept as arguments and return any type of dataset element. Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. 
The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use [`tf.py_function`](../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` 3) Use [`tf.numpy_function`](../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../py_function) accepts [`tf.Tensor`](../tensor) whereas [`tf.numpy_function`](../numpy_function) accepts numpy arrays and returns only numpy arrays. For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` Note that the use of [`tf.numpy_function`](../numpy_function) and [`tf.py_function`](../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value. | Args | | `map_func` | A function mapping a dataset element to another dataset element. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L446-L464) ``` options() ``` Returns the options for this dataset and its inputs. | Returns | | A [`tf.data.Options`](options) object representing the dataset options. |
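For illustration, here is a minimal sketch (assuming the `deterministic` option behaves as documented above): options applied with `with_options` propagate to derived datasets and can be read back with `options()`.

```
ds = tf.data.Dataset.range(10)
opts = tf.data.Options()
opts.deterministic = False  # allow elements to be produced out of order
ds = ds.with_options(opts)

# Derived datasets inherit the options of their inputs.
ds = ds.map(lambda x: x * 2)
print(ds.options().deterministic)  # False
```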
| ### `padded_batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889) ``` padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False, name=None ) ``` Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like [`tf.data.Dataset.batch`](dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. Unlike [`tf.data.Dataset.batch`](dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element: * If the dimension is a constant, the component will be padded out to that length in that dimension. * If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. ``` A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) ``` See also [`tf.data.experimental.dense_to_sparse_batch`](experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../sparse/sparsetensor). | Args | | `batch_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `padded_shapes` | (Optional.) 
A (nested) structure of [`tf.TensorShape`](../tensorshape) or [`tf.int64`](../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. | | `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. | | `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> | ### `prefetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321) ``` prefetch( buffer_size, name=None ) ``` Creates a `Dataset` that prefetches elements from this dataset. Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. > > **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each). > ``` dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. | | `name` | Optional. A name for the tf.data transformation. | | Returns | | `Dataset` | A `Dataset`. | ### `random` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992) ``` @staticmethod random( seed=None, name=None ) ``` Creates a `Dataset` of pseudorandom values. The dataset generates a sequence of uniformly distributed integer values. 
``` ds1 = tf.data.Dataset.random(seed=4).take(10) ds2 = tf.data.Dataset.random(seed=4).take(10) print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator())) True ``` | Args | | `seed` | (Optional.) If specified, the dataset produces a deterministic sequence of values. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `range` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211) ``` @staticmethod range( *args, **kwargs ) ``` Creates a `Dataset` of a step-separated range of values. ``` list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] ``` | Args | | `*args` | follows the same semantics as Python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. | | `**kwargs` | * output\_type: Its expected dtype. (Optional, default: [`tf.int64`](../../tf#int64)). * name: (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `RangeDataset`. | | Raises | | `ValueError` | if len(args) == 0. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544) ``` reduce( initial_state, reduce_func, name=None ) ``` Reduces the input dataset to a single element. The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result. ``` tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 ``` | Args | | `initial_state` | An element representing the initial state of the transformation. | | `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new element. The structure of `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A dataset element corresponding to the final state of the transformation. | ### `rejection_resample` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272) ``` rejection_resample( class_func, target_dist, initial_dist=None, seed=None, name=None ) ``` A transformation that resamples a dataset to a target distribution. Let's consider the following example where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with `target_dist` distribution. ``` initial_dist = [0.6, 0.4] num_classes = len(initial_dist) num_samples = 1000 data_np = np.random.choice(num_classes, num_samples, p=initial_dist) dataset = tf.data.Dataset.from_tensor_slices(data_np) ``` The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution. 
``` target_dist = [0.5, 0.5] resampled_dataset = dataset.rejection_resample( class_func=lambda x: x, target_dist=target_dist, initial_dist=initial_dist) resampled_dataset = resampled_dataset.map( lambda class_func_result, data: data) ``` The class distribution of `resampled_dataset` will now be close to the target distribution. | Args | | `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../tf#int32) tensor. Values should be in `[0, num_classes)`. | | `target_dist` | A floating point type tensor, shaped `[num_classes]`. | | `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. | | `seed` | (Optional.) Python integer seed for the resampler. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset` | ### `repeat` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416) ``` repeat( count=None, name=None ) ``` Repeats this dataset so each original value is seen `count` times. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] ``` > > **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements. > | Args | | `count` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `sample_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412) ``` @staticmethod sample_from_datasets( datasets, weights=None, seed=None, stop_on_empty_dataset=False ) ``` Samples elements at random from the datasets in `datasets`. Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets: ``` dataset1 = tf.data.Dataset.range(0, 3) dataset2 = tf.data.Dataset.range(100, 103) ``` Suppose that we sample from these 2 datasets with the following weights: ``` sample_dataset = tf.data.Dataset.sample_from_datasets( [dataset1, dataset2], weights=[0.5, 0.5]) ``` One possible outcome of elements in sample\_dataset is: ``` print(list(sample_dataset.as_numpy_iterator())) # [100, 0, 1, 101, 2, 102] ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](dataset) objects with compatible structure. | | `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. 
If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. | | Returns | | A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. | | Raises | | `TypeError` | If the `datasets` or `weights` arguments have the wrong type. | | `ValueError` | * If `datasets` is empty, or * If `weights` is specified and does not match the length of `datasets`. | ### `scan` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130) ``` scan( initial_state, scan_func, name=None ) ``` A transformation that scans a function across an input dataset. This transformation is a stateful relative of [`tf.data.Dataset.map`](dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`. ``` dataset = tf.data.Dataset.range(10) initial_state = tf.constant(0, dtype=tf.int64) scan_func = lambda state, i: (state + i, state + i) dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func) list(dataset.as_numpy_iterator()) [0, 1, 3, 6, 10, 15, 21, 28, 36, 45] ``` | Args | | `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. | | `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `shard` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685) ``` shard( num_shards, index, name=None ) ``` Creates a `Dataset` that includes only 1/`num_shards` of this dataset. `shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i. ``` A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] ``` This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: ``` d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` #### Important caveats: * Be sure to shard before you use any randomizing operator (such as shuffle). * Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. 
The following is an example of an efficient sharding strategy within a complete pipeline: ``` d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` | Args | | `num_shards` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of shards operating in parallel. | | `index` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the worker index. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `InvalidArgumentError` | if `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) | ### `shuffle` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523) ``` shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ``` Randomly shuffles the elements of this dataset. This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. `reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # [1, 0, 2, 1, 0, 2] ``` In TF 2.0, [`tf.data.Dataset`](dataset) objects are Python iterables, which makes it possible to also create epochs through Python iteration: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 2, 0] ``` ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 0, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements from this dataset from which the new dataset will sample. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) 
| | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `skip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616) ``` skip( count, name=None ) ``` Creates a `Dataset` that skips `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] ``` | Args | | `count` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `snapshot` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099) ``` snapshot( path, compression='AUTO', reader_func=None, shard_func=None, name=None ) ``` API to persist the output of the input dataset. The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. <https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature. Users can specify various options to control the behavior of snapshot, including how snapshots are read and written, by passing user-defined functions to the `reader_func` and `shard_func` parameters. `shard_func` is a user-specified function that maps input elements to snapshot shards. Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written. ``` dataset = ... dataset = dataset.enumerate() dataset = dataset.snapshot("/path/to/snapshot/dir", shard_func=lambda x, y: x % NUM_SHARDS, ...) dataset = dataset.map(lambda x, y: y) ``` `reader_func` is a user-specified function that accepts a single argument: a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of the shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism. Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets: ``` def user_reader_func(datasets): # shuffle the datasets splits datasets = datasets.shuffle(NUM_CORES) # read datasets in parallel and interleave their elements return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = dataset.snapshot("/path/to/snapshot/dir", reader_func=user_reader_func) ``` By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data. | Args | | `path` | Required. A directory to use for storing / loading the snapshot to / from. 
| | `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. | | `reader_func` | Optional. A function to control how to read data from snapshot shards. | | `shard_func` | Optional. A function to control how to shard data when writing a snapshot. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `take` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596) ``` take( count, name=None ) ``` Creates a `Dataset` with at most `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `count` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `take_while` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150) ``` take_while( predicate, name=None ) ``` A transformation that stops dataset iteration based on a `predicate`. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take_while(lambda x: x < 5) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../tf#bool) tensor. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unbatch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698) ``` unbatch( name=None ) ``` Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ``` elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] ``` > > **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unique` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173) ``` unique( name=None ) ``` A transformation that discards duplicate elements of a `Dataset`. Use this transformation to produce a dataset that contains one instance of each unique element in the input. 
For example: ``` dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1]) dataset = dataset.unique() sorted(list(dataset.as_numpy_iterator())) [1, 2, 37] ``` > > **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../tf#int32), [`tf.int64`](../../tf#int64) or [`tf.string`](../../tf#string) type. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426) ``` window( size, shift=None, stride=1, drop_remainder=False, name=None ) ``` Returns a dataset of "windows". Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`). #### For example: ``` dataset = tf.data.Dataset.range(7).window(3) for window in dataset: print(window) <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> ``` Since windows are datasets, they can be iterated over: ``` for window in dataset: print([item.numpy() for item in window]) [0, 1, 2] [3, 4, 5] [6] ``` #### Shift The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. ``` dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5] [4, 5, 6] ``` #### Stride The `stride` argument determines the stride between input elements within a window. ``` dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] ``` #### Nested elements When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them. #### The type signature is: ``` def window( self: Dataset[Nest[T]], ... 
) -> Dataset[Nest[Dataset[T]]] ``` Applying `window` to a `Dataset` of tuples gives a tuple of windows: ``` dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])) dataset = dataset.window(2) windows = next(iter(dataset)) windows (<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>, <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>) ``` ``` def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(to_numpy(windows[0]), to_numpy(windows[1])) [1, 2] [6, 7] [3, 4] [8, 9] [5] [10] ``` Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`: ``` dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]}) dataset = dataset.window(2) def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(tf.nest.map_structure(to_numpy, windows)) {'a': [1, 2], 'b': [4, 5], 'c': [7, 8]} {'a': [3], 'b': [6], 'c': [9]} ``` #### Flatten a dataset of windows The [`Dataset.flat_map`](dataset#flat_map) and [`Dataset.interleave`](dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset. The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially. For example, to turn each window into a dense tensor: ``` size = 3 dataset = tf.data.Dataset.range(7).window(size, shift=1, drop_remainder=True) batched = dataset.flat_map(lambda x: x.batch(3)) for batch in batched: print(batch.numpy()) [0 1 2] [1 2 3] [2 3 4] [3 4 5] [4 5 6] ``` | Args | | `size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. | | `shift` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. | | `stride` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. | ### `with_options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726) ``` with_options( options, name=None ) ``` Returns a new [`tf.data.Dataset`](dataset) with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values. ``` ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.deterministic = False ds = ds.with_options(options) ``` | Args | | `options` | A [`tf.data.Options`](options) that identifies the options to use. | | `name` | (Optional.) A name for the tf.data operation. 
| | Returns | | `Dataset` | A `Dataset` with the given options. | | Raises | | `ValueError` | when an option is set more than once to a non-default value | ### `zip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259) ``` @staticmethod zip( datasets, name=None ) ``` Creates a `Dataset` by zipping together the given datasets. This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). ``` # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] ``` | Args | | `datasets` | A (nested) structure of datasets. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `__bool__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __bool__() ``` ### `__iter__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L481-L497) ``` __iter__() ``` Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. | Returns | | A [`tf.data.Iterator`](iterator) for the elements of this dataset. | | Raises | | `RuntimeError` | If not inside of tf.function and not executing eagerly. | ### `__len__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527) ``` __len__() ``` Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](dataset#cardinality) instead. | Returns | | An integer representing the length of the dataset. | | Raises | | `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. | ### `__nonzero__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __nonzero__() ```
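As a short sketch of the `__len__` behavior described above (assuming eager execution): `len()` works only for known, finite cardinalities, while `cardinality()` also handles the infinite and unknown cases.

```
dataset = tf.data.Dataset.range(5)
print(len(dataset))  # 5

# `len()` would raise RuntimeError for this infinite dataset; use
# `cardinality()` instead.
repeated = dataset.repeat()
print((repeated.cardinality() == tf.data.INFINITE_CARDINALITY).numpy())  # True
```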
tensorflow tf.data.DatasetSpec tf.data.DatasetSpec =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4302-L4454) | Type specification for [`tf.data.Dataset`](dataset). Inherits From: [`TypeSpec`](../typespec), [`TraceType`](../types/experimental/tracetype) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.DatasetSpec`](https://www.tensorflow.org/api_docs/python/tf/data/DatasetSpec), [`tf.compat.v1.data.experimental.DatasetStructure`](https://www.tensorflow.org/api_docs/python/tf/data/DatasetSpec) ``` tf.data.DatasetSpec( element_spec, dataset_shape=() ) ``` See [`tf.TypeSpec`](../typespec) for more information about TensorFlow type specifications. ``` dataset = tf.data.Dataset.range(3) tf.data.DatasetSpec.from_value(dataset) DatasetSpec(TensorSpec(shape=(), dtype=tf.int64, name=None), TensorShape([])) ``` | Attributes | | `element_spec` | The inner element spec. | | `value_type` | The Python type for values that are compatible with this TypeSpec. In particular, all values that are compatible with this TypeSpec must be an instance of this type. | Methods ------- ### `from_value` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4418-L4421) ``` @staticmethod from_value( value ) ``` Creates a `DatasetSpec` for the given [`tf.data.Dataset`](dataset) value. ### `is_compatible_with` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L193-L214) ``` is_compatible_with( spec_or_value ) ``` Returns true if `spec_or_value` is compatible with this TypeSpec. Prefer using "is\_subtype\_of" and "most\_specific\_common\_supertype" wherever possible. | Args | | `spec_or_value` | A TypeSpec or TypeSpec associated value to compare against. | ### `is_subtype_of` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4327-L4351) ``` is_subtype_of( other ) ``` See base class. ### `most_specific_common_supertype` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4353-L4390) ``` most_specific_common_supertype( others ) ``` See base class. ### `most_specific_compatible_type` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L216-L234) ``` most_specific_compatible_type( other: 'TypeSpec' ) -> 'TypeSpec' ``` Returns the most specific TypeSpec compatible with `self` and `other`. (deprecated) Deprecated. Please use `most_specific_common_supertype` instead. Do not override this function. | Args | | `other` | A `TypeSpec`. | | Raises | | `ValueError` | If there is no TypeSpec that is compatible with both `self` and `other`. | ### `__eq__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4451-L4454) ``` __eq__( other ) ``` Return self==value. ### `__ne__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L443-L444) ``` __ne__( other ) -> bool ``` Return self!=value. 
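One practical use of `DatasetSpec` is as a `tf.function` input signature, so that datasets with matching element specs reuse a single trace. The following is a minimal sketch under that assumption (the `total` function is illustrative, not part of this API):

```
spec = tf.data.DatasetSpec(tf.TensorSpec(shape=(), dtype=tf.int64))

@tf.function(input_signature=[spec])
def total(ds):
  # Sum the elements of any dataset of scalar int64 values.
  return ds.reduce(tf.constant(0, dtype=tf.int64), lambda x, y: x + y)

print(total(tf.data.Dataset.range(4)).numpy())    # 6
print(total(tf.data.Dataset.range(100)).numpy())  # 4950
```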
tensorflow tf.data.FixedLengthRecordDataset tf.data.FixedLengthRecordDataset ================================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/readers.py#L566-L658) | A `Dataset` of fixed-length records from one or more binary files. Inherits From: [`Dataset`](dataset) ``` tf.data.FixedLengthRecordDataset( filenames, record_bytes, header_bytes=None, footer_bytes=None, buffer_size=None, compression_type=None, num_parallel_reads=None, name=None ) ``` The [`tf.data.FixedLengthRecordDataset`](fixedlengthrecorddataset) reads fixed length records from binary files and creates a dataset where each record becomes an element of the dataset. The binary files can have a fixed length header and a fixed length footer, which will both be skipped. For example, suppose we have 2 files "fixed\_length0.bin" and "fixed\_length1.bin" with the following content: ``` with open('/tmp/fixed_length0.bin', 'wb') as f: f.write(b'HEADER012345FOOTER') with open('/tmp/fixed_length1.bin', 'wb') as f: f.write(b'HEADER6789abFOOTER') ``` We can construct a `FixedLengthRecordDataset` from them as follows: ``` dataset1 = tf.data.FixedLengthRecordDataset( filenames=['/tmp/fixed_length0.bin', '/tmp/fixed_length1.bin'], record_bytes=2, header_bytes=6, footer_bytes=6) ``` The elements of the dataset are: ``` for element in dataset1.as_numpy_iterator(): print(element) b'01' b'23' b'45' b'67' b'89' b'ab' ``` | Args | | `filenames` | A [`tf.string`](../../tf#string) tensor or [`tf.data.Dataset`](dataset) containing one or more filenames. | | `record_bytes` | A [`tf.int64`](../../tf#int64) scalar representing the number of bytes in each record. | | `header_bytes` | (Optional.) A [`tf.int64`](../../tf#int64) scalar representing the number of bytes to skip at the start of a file. | | `footer_bytes` | (Optional.) A [`tf.int64`](../../tf#int64) scalar representing the number of bytes to ignore at the end of a file. | | `buffer_size` | (Optional.) A [`tf.int64`](../../tf#int64) scalar representing the number of bytes to buffer when reading. | | `compression_type` | (Optional.) A [`tf.string`](../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. | | `num_parallel_reads` | (Optional.) A [`tf.int64`](../../tf#int64) scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If `None`, files will be read sequentially. | | `name` | (Optional.) A name for the tf.data operation. | | Attributes | | `element_spec` | The type specification of an element of this dataset. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) ``` For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). | Methods ------- ### `apply` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276) ``` apply( transformation_func ) ``` Applies a transformation function to this dataset. `apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`. 
``` dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. | | Returns | | `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. | ### `as_numpy_iterator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620) ``` as_numpy_iterator() ``` Returns an iterator which converts all elements of the dataset to numpy. Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) ``` This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 ``` ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] ``` `as_numpy_iterator()` will preserve the nested structure of dataset elements. ``` dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True ``` | Returns | | An iterable over the elements of the dataset, with their tensors converted to numpy arrays. | | Raises | | `TypeError` | if an element contains a non-`Tensor` value. | | `RuntimeError` | if eager execution is not enabled. | ### `batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754) ``` batch( batch_size, drop_remainder=False, num_parallel_calls=None, deterministic=None, name=None ) ``` Combines consecutive elements of this dataset into batches. ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] ``` ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] ``` The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. > > **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch. > | Args | | `batch_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `drop_remainder` | (Optional.) 
A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `bucket_by_sequence_length` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971) ``` bucket_by_sequence_length( element_length_func, bucket_boundaries, bucket_batch_sizes, padded_shapes=None, padding_values=None, pad_to_bucket_boundary=False, no_padding=False, drop_remainder=False, name=None ) ``` A transformation that buckets elements in a `Dataset` by length. Elements of the `Dataset` are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch, which increases training step efficiency. Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2. ``` elements = [ [0], [1, 2, 3, 4], [5, 6, 7], [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]] dataset = tf.data.Dataset.from_generator( lambda: elements, tf.int64, output_shapes=[None]) dataset = dataset.bucket_by_sequence_length( element_length_func=lambda elem: tf.shape(elem)[0], bucket_boundaries=[3, 5], bucket_batch_sizes=[2, 2, 2]) for elem in dataset.as_numpy_iterator(): print(elem) [[1 2 3 4] [5 6 7 0]] [[ 7 8 9 10 11 0] [13 14 15 16 19 20]] [[ 0 0] [21 22]] ``` | Args | | `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../tf#int32), determines the length of the element, which will determine the bucket it goes into. | | `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. | | `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. | | `padded_shapes` | Nested structure of [`tf.TensorShape`](../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. | | `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](dataset#padded_batch). Defaults to padding with 0. | | `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. 
If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. | | `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../sparse/sparsetensor) or of same shape). | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. | ### `cache` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576) ``` cache( filename='', name=None ) ``` Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. > > **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. > ``` dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] ``` When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed. ``` dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] ``` > > **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`. > | Args | | `filename` | A [`tf.string`](../../tf#string) scalar [`tf.Tensor`](../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `cardinality` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754) ``` cardinality() ``` Returns the cardinality of the dataset, if known. `cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). 
``` dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True ``` | Returns | | A scalar [`tf.int64`](../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../data#UNKNOWN_CARDINALITY) respectively. | ### `choose_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471) ``` @staticmethod choose_from_datasets( datasets, choice_dataset, stop_on_empty_dataset=True ) ``` Creates a dataset that deterministically chooses elements from `datasets`. For example, given the following datasets: ``` datasets = [tf.data.Dataset.from_tensors("foo").repeat(), tf.data.Dataset.from_tensors("bar").repeat(), tf.data.Dataset.from_tensors("baz").repeat()] # Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. choice_dataset = tf.data.Dataset.range(3).repeat(3) result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset) ``` The elements of `result` will be: ``` "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](dataset) objects with compatible structure. | | `choice_dataset` | A [`tf.data.Dataset`](dataset) of scalar [`tf.int64`](../../tf#int64) tensors between `0` and `len(datasets) - 1`. | | `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. | | Returns | | A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. | | Raises | | `TypeError` | If `datasets` or `choice_dataset` has the wrong type. | | `ValueError` | If `datasets` is empty. | ### `concatenate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289) ``` concatenate( dataset, name=None ) ``` Creates a `Dataset` by concatenating the given dataset with this dataset. ``` a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have # compatible element specs. c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> ``` | Args | | `dataset` | `Dataset` to be concatenated. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. 
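As an additional usage sketch (not from the original docs; assuming `tensorflow` is imported as `tf`, as in the surrounding examples), `concatenate` calls can be chained to join more than two datasets in order:

```
a = tf.data.Dataset.range(2)     # ==> [ 0, 1 ]
b = tf.data.Dataset.range(2, 4)  # ==> [ 2, 3 ]
c = tf.data.Dataset.range(4, 6)  # ==> [ 4, 5 ]
# Each `concatenate` call appends the argument's elements after this
# dataset's elements, so the overall order is preserved.
ds = a.concatenate(b).concatenate(c)
list(ds.as_numpy_iterator())
[0, 1, 2, 3, 4, 5]
```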
| ### `enumerate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451) ``` enumerate( start=0, name=None ) ``` Enumerates the elements of this dataset. It is similar to Python's `enumerate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) ``` ``` # The (nested) structure of the input dataset determines the # structure of elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) ``` | Args | | `start` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the start value for enumeration. | | `name` | Optional. A name for the tf.data operations used by `enumerate`. | | Returns | | `Dataset` | A `Dataset`. | ### `filter` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246) ``` filter( predicate, name=None ) ``` Filters this dataset according to `predicate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] ``` | Args | | `predicate` | A function mapping a dataset element to a boolean. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. | ### `flat_map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092) ``` flat_map( map_func, name=None ) ``` Maps `map_func` across this dataset and flattens the result. #### The type signature is: ``` def flat_map( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: ``` dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map( lambda x: tf.data.Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` [`tf.data.Dataset.interleave()`](dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](dataset#interleave). | Args | | `map_func` | A function mapping a dataset element to a dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_generator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173) ``` @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None, name=None ) ``` Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments) > > **Note:** The current implementation of [`Dataset.from_generator()`](dataset#from_generator) uses [`tf.numpy_function`](../numpy_function) and inherits the same constraints.
In particular, it requires the dataset- and iterator-related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](dataset#from_generator). Additionally, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment. > The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function). The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified. The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../typespec) objects from the `output_signature` argument: ``` def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] ``` There is also a deprecated way to call `from_generator`, using either the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../tensor) objects with the types defined by `output_types` and with shapes that are either unknown or defined by `output_shapes`. > > **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](dataset#from_generator). > > > **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth. > | Args | | `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. | | `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. | | `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../tensorshape) objects corresponding to each component of an element yielded by `generator`. | | `args` | (Optional.) A tuple of [`tf.Tensor`](../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. | | `output_signature` | (Optional.)
A (nested) structure of [`tf.TypeSpec`](../typespec) objects corresponding to each component of an element yielded by `generator`. | | `name` | (Optional.) A name for the tf.data operations used by `from_generator`. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensor_slices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809) ``` @staticmethod from_tensor_slices( tensors, name=None ) ``` Creates a `Dataset` whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. ``` # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] ``` ``` # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] ``` ``` # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] ``` ``` # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True ``` ``` # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> vector of length 3 dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`.
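As a further sketch, `from_tensor_slices` also accepts composite tensors such as `tf.RaggedTensor`, slicing them along their first dimension. The exact element type produced is an assumption here, not a claim from the original docs (assuming `tensorflow` is imported as `tf`):

```
# Assumed behavior: each element is one (variable-length) row.
rt = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
dataset = tf.data.Dataset.from_tensor_slices(rt)
for elem in dataset:
  print(elem.numpy())
[1 2]
[3]
[4 5 6]
```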
| ### `from_tensors` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729) ``` @staticmethod from_tensors( tensors, name=None ) ``` Creates a `Dataset` with a single element, comprising the given tensors. `from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead. ``` dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] ``` ``` # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `get_single_element` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671) ``` get_single_element( name=None ) ``` Returns the single element of the `dataset`. The function enables you to use a [`tf.data.Dataset`](dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates data transformation on tensors using the optimized [`tf.data.Dataset`](dataset) abstraction on top of them. For example, let's consider a `preprocessing_fn` that takes the raw features as input and returns the processed feature along with its label. ``` def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature raw_features = ... # input batch of BATCH_SIZE elements. dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() ``` In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn`, and the processed features were grouped into a single batch. The final `dataset` contains only one element, which is a batch of all the processed features. > > **Note:** The `dataset` should contain only one element. > Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `dataset.get_single_element()` method is used to skip the iterator creation process and directly output the batch of features. This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](dataset) operations, and you want to use those transformations while serving your model. #### Keras ``` model = ...
# A pre-built or custom model class PreprocessingModel(tf.keras.Model): def __init__(self, model): super().__init__() self.model = model @tf.function(input_signature=[...]) def serving_fn(self, data): ds = tf.data.Dataset.from_tensor_slices(data) ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) ds = ds.batch(batch_size=BATCH_SIZE) return tf.argmax(self.model(ds.get_single_element()), axis=-1) preprocessing_model = PreprocessingModel(model) your_exported_model_dir = ... # save the model to this path. tf.saved_model.save(preprocessing_model, your_exported_model_dir, signatures={'serving_default': preprocessing_model.serving_fn} ) ``` #### Estimator In the case of estimators, you generally need to define a `serving_input_fn` which would require the features to be processed by the model at inference time. ``` def serving_input_fn(): raw_feature_spec = ... # Spec for the raw_features input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( raw_feature_spec, default_batch_size=None) serving_input_receiver = input_fn() raw_features = serving_input_receiver.features def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() # Please note that the value of `BATCH_SIZE` should be equal to # the size of the leading dimension of `raw_features`. This ensures # that `dataset` has only one element, which is a pre-requisite for # using `dataset.get_single_element()`. return tf.estimator.export.ServingInputReceiver( processed_features, serving_input_receiver.receiver_tensors) estimator = ... # A pre-built or custom estimator estimator.export_saved_model(your_exported_model_dir, serving_input_fn) ``` | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A nested structure of [`tf.Tensor`](../tensor) objects, corresponding to the single element of `dataset`. | | Raises | | `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. | ### `group_by_window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824) ``` group_by_window( key_func, reduce_func, window_size=None, window_size_func=None, name=None ) ``` Groups windows of elements by key and reduces them. This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller. You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`. ``` dataset = tf.data.Dataset.range(10) window_size = 5 key_func = lambda x: x%2 reduce_func = lambda key, dataset: dataset.batch(window_size) dataset = dataset.group_by_window( key_func=key_func, reduce_func=reduce_func, window_size=window_size) for elem in dataset.as_numpy_iterator(): print(elem) [0 2 4 6 8] [1 3 5 7 9] ``` | Args | | `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../tf#int64) tensor.
| | `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. | | `window_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. | | `window_size_func` | A function mapping a key to a [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. | ### `interleave` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222) ``` interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across this dataset, and interleaves the results. #### The type signature is: ``` def interleave( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` For example, you can use [`Dataset.interleave()`](dataset#interleave) to process many input files concurrently: ``` # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) ``` The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. #### For example: ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] ``` > > **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined. > Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`. 
``` filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` | Args | | `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](dataset). | | `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. | | `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. | | `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `list_files` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393) ``` @staticmethod list_files( file_pattern, shuffle=None, seed=None, name=None ) ``` A dataset of all files matching one or more glob patterns. The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems. > > **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order. > #### Example: If we had the following files on our filesystem: * /path/to/dir/a.txt * /path/to/dir/b.py * /path/to/dir/c.py If we pass "/path/to/dir/\*.py" as the `file_pattern`, the dataset would produce: * /path/to/dir/b.py * /path/to/dir/c.py | Args | | `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. | | `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `name` | Optional. A name for the tf.data operations used by `list_files`.
| | Returns | | `Dataset` | A `Dataset` of strings corresponding to file names. | ### `map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056) ``` map( map_func, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across the elements of this dataset. This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). For example, `map` can be used for adding 1 to each element, or projecting a subset of element components. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] ``` The input signature of `map_func` is determined by the structure of each element in this dataset. ``` dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) ``` ``` # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] ``` ``` # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) ``` The value or values returned by `map_func` determine the structure of each element in the returned dataset. ``` dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) ``` `map_func` can accept as arguments and return any type of dataset element. Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. 
The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use [`tf.py_function`](../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` 3) Use [`tf.numpy_function`](../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../py_function) accepts [`tf.Tensor`](../tensor) whereas [`tf.numpy_function`](../numpy_function) accepts numpy arrays and returns only numpy arrays. For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` Note that the use of [`tf.numpy_function`](../numpy_function) and [`tf.py_function`](../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value. | Args | | `map_func` | A function mapping a dataset element to another dataset element. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L446-L464) ``` options() ``` Returns the options for this dataset and its inputs. | Returns | | A [`tf.data.Options`](options) object representing the dataset options.
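A minimal usage sketch (not from the original docs): options are typically attached to a dataset with [`tf.data.Dataset.with_options`](dataset#with_options), and `options()` reads back the merged result for the dataset and its inputs:

```
options = tf.data.Options()
options.deterministic = False  # trade element order for performance
dataset = tf.data.Dataset.range(5).with_options(options)
# `options()` reflects options set on this dataset and its inputs.
print(dataset.options().deterministic)
False
```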
| ### `padded_batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889) ``` padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False, name=None ) ``` Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like [`tf.data.Dataset.batch`](dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. Unlike [`tf.data.Dataset.batch`](dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element: * If the dimension is a constant, the component will be padded out to that length in that dimension. * If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. ``` A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) ``` See also [`tf.data.experimental.dense_to_sparse_batch`](experimental/dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../sparse/sparsetensor). | Args | | `batch_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `padded_shapes` | (Optional.) 
A (nested) structure of [`tf.TensorShape`](../tensorshape) or [`tf.int64`](../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. | | `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. | | `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> | ### `prefetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321) ``` prefetch( buffer_size, name=None ) ``` Creates a `Dataset` that prefetches elements from this dataset. Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. > > **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each). > ``` dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. | | `name` | Optional. A name for the tf.data transformation. | | Returns | | `Dataset` | A `Dataset`. | ### `random` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992) ``` @staticmethod random( seed=None, name=None ) ``` Creates a `Dataset` of pseudorandom values. The dataset generates a sequence of uniformly distributed integer values. 
``` ds1 = tf.data.Dataset.random(seed=4).take(10) ds2 = tf.data.Dataset.random(seed=4).take(10) print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator())) True ``` | Args | | `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `range` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211) ``` @staticmethod range( *args, **kwargs ) ``` Creates a `Dataset` of a step-separated range of values. ``` list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] ``` | Args | | `*args` | follows the same semantics as Python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. | | `**kwargs` | * output\_type: Its expected dtype. (Optional, default: [`tf.int64`](../../tf#int64)). * name: (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `RangeDataset`. | | Raises | | `ValueError` | if len(args) == 0. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544) ``` reduce( initial_state, reduce_func, name=None ) ``` Reduces the input dataset to a single element. The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result. ``` tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 ``` | Args | | `initial_state` | An element representing the initial state of the transformation. | | `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new state. The structure of `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A dataset element corresponding to the final state of the transformation. | ### `rejection_resample` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272) ``` rejection_resample( class_func, target_dist, initial_dist=None, seed=None, name=None ) ``` A transformation that resamples a dataset to a target distribution. Let's consider the following example, where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution. ``` initial_dist = [0.6, 0.4] num_classes = len(initial_dist) num_samples = 1000 data_np = np.random.choice(num_classes, num_samples, p=initial_dist) dataset = tf.data.Dataset.from_tensor_slices(data_np) ``` The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution.
``` target_dist = [0.5, 0.5] resampled_dataset = dataset.rejection_resample( class_func=lambda x: x, target_dist=target_dist, initial_dist=initial_dist) resampled_dataset = resampled_dataset.map( lambda class_func_result, data: data) ``` The distribution of classes in the resampled dataset will now be close to the target distribution. | Args | | `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../tf#int32) tensor. Values should be in `[0, num_classes)`. | | `target_dist` | A floating point type tensor, shaped `[num_classes]`. | | `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. | | `seed` | (Optional.) Python integer seed for the resampler. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `repeat` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416) ``` repeat( count=None, name=None ) ``` Repeats this dataset so each original value is seen `count` times. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] ``` > > **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements. > | Args | | `count` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `sample_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412) ``` @staticmethod sample_from_datasets( datasets, weights=None, seed=None, stop_on_empty_dataset=False ) ``` Samples elements at random from the datasets in `datasets`. Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets: ``` dataset1 = tf.data.Dataset.range(0, 3) dataset2 = tf.data.Dataset.range(100, 103) ``` Suppose that we sample from these 2 datasets with the following weights: ``` sample_dataset = tf.data.Dataset.sample_from_datasets( [dataset1, dataset2], weights=[0.5, 0.5]) ``` One possible outcome of elements in sample\_dataset is: ``` print(list(sample_dataset.as_numpy_iterator())) # [100, 0, 1, 101, 2, 102] ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](dataset) objects with compatible structure. | | `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset.
If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. | | Returns | | A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. | | Raises | | `TypeError` | If the `datasets` or `weights` arguments have the wrong type. | | `ValueError` | * If `datasets` is empty, or * If `weights` is specified and does not match the length of `datasets`. | ### `scan` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130) ``` scan( initial_state, scan_func, name=None ) ``` A transformation that scans a function across an input dataset. This transformation is a stateful relative of [`tf.data.Dataset.map`](dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`. ``` dataset = tf.data.Dataset.range(10) initial_state = tf.constant(0, dtype=tf.int64) scan_func = lambda state, i: (state + i, state + i) dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func) list(dataset.as_numpy_iterator()) [0, 1, 3, 6, 10, 15, 21, 28, 36, 45] ``` | Args | | `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. | | `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `shard` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685) ``` shard( num_shards, index, name=None ) ``` Creates a `Dataset` that includes only 1/`num_shards` of this dataset. `shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of `A` whose index mod n = i. ``` A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] ``` This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: ``` d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` #### Important caveats: * Be sure to shard before you use any randomizing operator (such as shuffle). * Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker.
The following is an example of an efficient sharding strategy within a complete pipeline: ``` d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` | Args | | `num_shards` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of shards operating in parallel. | | `index` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the worker index. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `InvalidArgumentError` | if `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing in a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) | ### `shuffle` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523) ``` shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ``` Randomly shuffles the elements of this dataset. This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000-element buffer. `reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # [1, 0, 2, 1, 0, 2] ``` In TF 2.0, [`tf.data.Dataset`](dataset) objects are Python iterables, which makes it possible to also create epochs through Python iteration: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 2, 0] ``` ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 0, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements from this dataset from which the new dataset will sample. | | `seed` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../random/set_seed) for behavior. | | `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.)
| | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `skip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616) ``` skip( count, name=None ) ``` Creates a `Dataset` that skips `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] ``` | Args | | `count` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `snapshot` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099) ``` snapshot( path, compression='AUTO', reader_func=None, shard_func=None, name=None ) ``` API to persist the output of the input dataset. The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. <https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature. Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to disk, by passing in user-defined functions to the `reader_func` and `shard_func` parameters. `shard_func` is a user-specified function that maps input elements to snapshot shards. Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written. ``` dataset = ... dataset = dataset.enumerate() dataset = dataset.snapshot("/path/to/snapshot/dir", shard_func=lambda x, y: x % NUM_SHARDS, ...) dataset = dataset.map(lambda x, y: y) ``` `reader_func` is a user-specified function that accepts a single argument: a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism. Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets: ``` def user_reader_func(datasets): # shuffle the datasets splits datasets = datasets.shuffle(NUM_CORES) # read datasets in parallel and interleave their elements return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = dataset.snapshot("/path/to/snapshot/dir", reader_func=user_reader_func) ``` By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data. | Args | | `path` | Required. A directory to use for storing / loading the snapshot to / from.
| | `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. | | `reader_func` | Optional. A function to control how to read data from snapshot shards. | | `shard_func` | Optional. A function to control how to shard data when writing a snapshot. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `take` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596) ``` take( count, name=None ) ``` Creates a `Dataset` with at most `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `count` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `take_while` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150) ``` take_while( predicate, name=None ) ``` A transformation that stops dataset iteration based on a `predicate`. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take_while(lambda x: x < 5) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../tf#bool) tensor. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unbatch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698) ``` unbatch( name=None ) ``` Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ``` elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] ``` > > **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unique` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173) ``` unique( name=None ) ``` A transformation that discards duplicate elements of a `Dataset`. Use this transformation to produce a dataset that contains one instance of each unique element in the input. 
For example:

```
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.unique()
sorted(list(dataset.as_numpy_iterator()))
[1, 2, 37]
```

> **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../tf#int32), [`tf.int64`](../../tf#int64) or [`tf.string`](../../tf#string) type.

| Args |
| `name` | (Optional.) A name for the tf.data operation. |

| Returns |
| A `Dataset`. |

### `window`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426)

```
window(
    size, shift=None, stride=1, drop_remainder=False, name=None
)
```

Returns a dataset of "windows".

Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`).

#### For example:

```
dataset = tf.data.Dataset.range(7).window(3)
for window in dataset:
  print(window)
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)>
```

Since windows are datasets, they can be iterated over:

```
for window in dataset:
  print([item.numpy() for item in window])
[0, 1, 2]
[3, 4, 5]
[6]
```

#### Shift

The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset.

```
dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True)
for window in dataset:
  print(list(window.as_numpy_iterator()))
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
```

#### Stride

The `stride` argument determines the stride between input elements within a window.

```
dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2, drop_remainder=True)
for window in dataset:
  print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
```

#### Nested elements

When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them.

#### The type signature is:

```
def window(
  self: Dataset[Nest[T]],
  ...
) -> Dataset[Nest[Dataset[T]]]
```

Applying `window` to a `Dataset` of tuples gives a tuple of windows:

```
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5],
                                              [6, 7, 8, 9, 10]))
dataset = dataset.window(2)
windows = next(iter(dataset))
windows
(<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>,
 <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>)
```

```
def to_numpy(ds):
  return list(ds.as_numpy_iterator())

for windows in dataset:
  print(to_numpy(windows[0]), to_numpy(windows[1]))
[1, 2] [6, 7]
[3, 4] [8, 9]
[5] [10]
```

Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`:

```
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3],
                                              'b': [4, 5, 6],
                                              'c': [7, 8, 9]})
dataset = dataset.window(2)
def to_numpy(ds):
  return list(ds.as_numpy_iterator())

for windows in dataset:
  print(tf.nest.map_structure(to_numpy, windows))
{'a': [1, 2], 'b': [4, 5], 'c': [7, 8]}
{'a': [3], 'b': [6], 'c': [9]}
```

#### Flatten a dataset of windows

The [`Dataset.flat_map`](dataset#flat_map) and [`Dataset.interleave`](dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset.

The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially.

For example, to turn each window into a dense tensor:

```
size = 3
dataset = tf.data.Dataset.range(7).window(size, shift=1, drop_remainder=True)
batched = dataset.flat_map(lambda x: x.batch(3))
for batch in batched:
  print(batch.numpy())
[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]
```

| Args |
| `size` | A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. |
| `shift` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
| `stride` | (Optional.) A [`tf.int64`](../../tf#int64) scalar [`tf.Tensor`](../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../tf#bool) scalar [`tf.Tensor`](../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. |
| `name` | (Optional.) A name for the tf.data operation. |

| Returns |
| `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. |

### `with_options`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726)

```
with_options(
    options, name=None
)
```

Returns a new [`tf.data.Dataset`](dataset) with the given options set.

The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.

```
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
                   cycle_length=3, num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.deterministic = False
ds = ds.with_options(options)
```

| Args |
| `options` | A [`tf.data.Options`](options) that identifies the options to use. |
| `name` | (Optional.) A name for the tf.data operation.
| Returns |
| `Dataset` | A `Dataset` with the given options. |

| Raises |
| `ValueError` | when an option is set more than once to a non-default value |

### `zip`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259)

```
@staticmethod
zip(
    datasets, name=None
)
```

Creates a `Dataset` by zipping together the given datasets.

This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure).

```
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7)  # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]

# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2)  # ==> [ [7, 8],
                                           #       [9, 10],
                                           #       [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
  print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))

# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15)  # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
```

| Args |
| `datasets` | A (nested) structure of datasets. |
| `name` | (Optional.) A name for the tf.data operation. |

| Returns |
| `Dataset` | A `Dataset`. |

### `__bool__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)

```
__bool__()
```

### `__iter__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L481-L497)

```
__iter__()
```

Creates an iterator for elements of this dataset.

The returned iterator implements the Python Iterator protocol.

| Returns |
| A [`tf.data.Iterator`](iterator) for the elements of this dataset. |

| Raises |
| `RuntimeError` | If not inside of tf.function and not executing eagerly. |

### `__len__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527)

```
__len__()
```

Returns the length of the dataset if it is known and finite.

This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](dataset#cardinality) instead.

| Returns |
| An integer representing the length of the dataset. |

| Raises |
| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |

### `__nonzero__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500)

```
__nonzero__()
```
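To see how several of the transformations documented above compose, here is a small illustrative sketch (the dataset contents, parameter values, and the `seed` are arbitrary choices for the example, not API requirements) that chains `zip`, `skip`, `take`, and `shuffle`:

```
import tensorflow as tf

# Two aligned datasets, zipped into (feature, label) pairs as in the `zip` docs.
features = tf.data.Dataset.range(10)                     # 0 .. 9
labels = tf.data.Dataset.range(10).map(lambda x: x * 2)  # 0, 2, .. 18

ds = tf.data.Dataset.zip((features, labels))
ds = ds.skip(2)                          # drop the first two pairs
ds = ds.take(6)                          # keep at most six pairs
ds = ds.shuffle(buffer_size=6, seed=42)  # buffer >= dataset size => full shuffle

for feature, label in ds.as_numpy_iterator():
    print(feature, label)
```

Because the shuffle buffer covers all six remaining elements, every permutation of the pairs is possible; a smaller `buffer_size` would only shuffle within a sliding window, as described under `shuffle` above.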
tensorflow tf.data.Iterator tf.data.Iterator ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L553-L646) | Represents an iterator of a [`tf.data.Dataset`](dataset). [`tf.data.Iterator`](iterator) is the primary mechanism for enumerating elements of a [`tf.data.Dataset`](dataset). It supports the Python Iterator protocol, which means it can be iterated over using a for-loop: ``` dataset = tf.data.Dataset.range(2) for element in dataset: print(element) tf.Tensor(0, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) ``` or by fetching individual elements explicitly via `get_next()`: ``` dataset = tf.data.Dataset.range(2) iterator = iter(dataset) print(iterator.get_next()) tf.Tensor(0, shape=(), dtype=int64) print(iterator.get_next()) tf.Tensor(1, shape=(), dtype=int64) ``` In addition, non-raising iteration is supported via `get_next_as_optional()`, which returns the next element (if available) wrapped in a [`tf.experimental.Optional`](../experimental/optional). ``` dataset = tf.data.Dataset.from_tensors(42) iterator = iter(dataset) optional = iterator.get_next_as_optional() print(optional.has_value()) tf.Tensor(True, shape=(), dtype=bool) optional = iterator.get_next_as_optional() print(optional.has_value()) tf.Tensor(False, shape=(), dtype=bool) ``` | Attributes | | `element_spec` | The type specification of an element of this iterator. ``` dataset = tf.data.Dataset.from_tensors(42) iterator = iter(dataset) iterator.element_spec tf.TensorSpec(shape=(), dtype=tf.int32, name=None) ``` For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). | Methods ------- ### `get_next` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L608-L623) ``` @abc.abstractmethod get_next() ``` Returns the next element. ``` dataset = tf.data.Dataset.from_tensors(42) iterator = iter(dataset) print(iterator.get_next()) tf.Tensor(42, shape=(), dtype=int32) ``` | Returns | | A (nested) structure of values matching [`tf.data.Iterator.element_spec`](iterator#element_spec). | | Raises | | [`tf.errors.OutOfRangeError`](../errors/outofrangeerror): If the end of the iterator has been reached. | ### `get_next_as_optional` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L625-L646) ``` @abc.abstractmethod get_next_as_optional() ``` Returns the next element wrapped in [`tf.experimental.Optional`](../experimental/optional). If the iterator has reached the end of the sequence, the returned [`tf.experimental.Optional`](../experimental/optional) will have no value. ``` dataset = tf.data.Dataset.from_tensors(42) iterator = iter(dataset) optional = iterator.get_next_as_optional() print(optional.has_value()) tf.Tensor(True, shape=(), dtype=bool) print(optional.get_value()) tf.Tensor(42, shape=(), dtype=int32) optional = iterator.get_next_as_optional() print(optional.has_value()) tf.Tensor(False, shape=(), dtype=bool) ``` | Returns | | A [`tf.experimental.Optional`](../experimental/optional) object representing the next element. | ### `__iter__` ``` __iter__() ``` tensorflow Module: tf.data.experimental Module: tf.data.experimental ============================ Experimental API for building input pipelines. This module contains experimental `Dataset` sources and transformations that can be used in conjunction with the [`tf.data.Dataset`](dataset) API. 
Note that the [`tf.data.experimental`](experimental) API is not subject to the same backwards compatibility guarantees as [`tf.data`](../data), but we will provide deprecation advice in advance of removing existing functionality. See [Importing Data](https://tensorflow.org/guide/datasets) for an overview. Modules ------- [`service`](experimental/service) module: API for using the tf.data service. Classes ------- [`class AutoShardPolicy`](experimental/autoshardpolicy): Represents the type of auto-sharding to use. [`class AutotuneAlgorithm`](experimental/autotunealgorithm): Represents the type of autotuning algorithm to use. [`class AutotuneOptions`](experimental/autotuneoptions): Represents options for autotuning dataset performance. [`class CheckpointInputPipelineHook`](experimental/checkpointinputpipelinehook): Checkpoints input pipeline state every N steps or seconds. [`class CsvDataset`](experimental/csvdataset): A Dataset comprising lines from one or more CSV files. [`class DatasetInitializer`](experimental/datasetinitializer): Creates a table initializer from a [`tf.data.Dataset`](dataset). [`class DistributeOptions`](experimental/distributeoptions): Represents options for distributed data processing. [`class ExternalStatePolicy`](experimental/externalstatepolicy): Represents how to handle external state during serialization. [`class OptimizationOptions`](experimental/optimizationoptions): Represents options for dataset optimizations. [`class Optional`](../experimental/optional): Represents a value that may or may not be present. [`class RandomDataset`](experimental/randomdataset): A `Dataset` of pseudorandom values. (deprecated) [`class Reducer`](experimental/reducer): A reducer is used for reducing a set of elements. [`class SqlDataset`](experimental/sqldataset): A `Dataset` consisting of the results from a SQL query. [`class TFRecordWriter`](experimental/tfrecordwriter): Writes a dataset to a TFRecord file. (deprecated) [`class ThreadingOptions`](threadingoptions): Represents options for dataset threading. Functions --------- [`Counter(...)`](experimental/counter): Creates a `Dataset` that counts from `start` in steps of size `step`. [`assert_cardinality(...)`](experimental/assert_cardinality): Asserts the cardinality of the input dataset. [`bucket_by_sequence_length(...)`](experimental/bucket_by_sequence_length): A transformation that buckets elements in a `Dataset` by length. (deprecated) [`cardinality(...)`](experimental/cardinality): Returns the cardinality of `dataset`, if known. [`choose_from_datasets(...)`](experimental/choose_from_datasets): Creates a dataset that deterministically chooses elements from `datasets`. (deprecated) [`copy_to_device(...)`](experimental/copy_to_device): A transformation that copies dataset elements to the given `target_device`. [`dense_to_ragged_batch(...)`](experimental/dense_to_ragged_batch): A transformation that batches ragged elements into [`tf.RaggedTensor`](../raggedtensor)s. [`dense_to_sparse_batch(...)`](experimental/dense_to_sparse_batch): A transformation that batches ragged elements into [`tf.sparse.SparseTensor`](../sparse/sparsetensor)s. [`enable_debug_mode(...)`](experimental/enable_debug_mode): Enables debug mode for tf.data. [`enumerate_dataset(...)`](experimental/enumerate_dataset): A transformation that enumerates the elements of a dataset. (deprecated) [`from_variant(...)`](experimental/from_variant): Constructs a dataset from the given variant and (nested) structure. 
[`get_next_as_optional(...)`](experimental/get_next_as_optional): Returns a [`tf.experimental.Optional`](../experimental/optional) with the next element of the iterator. (deprecated) [`get_single_element(...)`](experimental/get_single_element): Returns the single element of the `dataset` as a nested structure of tensors. (deprecated) [`get_structure(...)`](experimental/get_structure): Returns the type signature for elements of the input dataset / iterator. [`group_by_reducer(...)`](experimental/group_by_reducer): A transformation that groups elements and performs a reduction. [`group_by_window(...)`](experimental/group_by_window): A transformation that groups windows of elements by key and reduces them. (deprecated) [`ignore_errors(...)`](experimental/ignore_errors): Creates a `Dataset` from another `Dataset` and silently ignores any errors. [`index_table_from_dataset(...)`](experimental/index_table_from_dataset): Returns an index lookup table based on the given dataset. [`load(...)`](experimental/load): Loads a previously saved dataset. [`make_batched_features_dataset(...)`](experimental/make_batched_features_dataset): Returns a `Dataset` of feature dictionaries from `Example` protos. [`make_csv_dataset(...)`](experimental/make_csv_dataset): Reads CSV files into a dataset. [`make_saveable_from_iterator(...)`](experimental/make_saveable_from_iterator): Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated) [`map_and_batch(...)`](experimental/map_and_batch): Fused implementation of `map` and `batch`. (deprecated) [`parallel_interleave(...)`](experimental/parallel_interleave): A parallel version of the [`Dataset.interleave()`](dataset#interleave) transformation. (deprecated) [`parse_example_dataset(...)`](experimental/parse_example_dataset): A transformation that parses `Example` protos into a `dict` of tensors. [`prefetch_to_device(...)`](experimental/prefetch_to_device): A transformation that prefetches dataset values to the given `device`. [`rejection_resample(...)`](experimental/rejection_resample): A transformation that resamples a dataset to achieve a target distribution. (deprecated) [`sample_from_datasets(...)`](experimental/sample_from_datasets): Samples elements at random from the datasets in `datasets`. (deprecated) [`save(...)`](experimental/save): Saves the content of the given dataset. [`scan(...)`](experimental/scan): A transformation that scans a function across an input dataset. (deprecated) [`shuffle_and_repeat(...)`](experimental/shuffle_and_repeat): Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated) [`snapshot(...)`](experimental/snapshot): API to persist the output of the input dataset. (deprecated) [`table_from_dataset(...)`](experimental/table_from_dataset): Returns a lookup table based on the given dataset. [`take_while(...)`](experimental/take_while): A transformation that stops dataset iteration based on a `predicate`. (deprecated) [`to_variant(...)`](experimental/to_variant): Returns a variant representing the given dataset. [`unbatch(...)`](experimental/unbatch): Splits elements of a dataset into multiple elements on the batch dimension. (deprecated) [`unique(...)`](experimental/unique): Creates a `Dataset` from another `Dataset`, discarding duplicates. 
(deprecated)

| Other Members |
| AUTOTUNE | `-1` |
| INFINITE\_CARDINALITY | `-1` |
| SHARD\_HINT | `-1` |
| UNKNOWN\_CARDINALITY | `-2` |

tensorflow tf.data.IteratorSpec

tf.data.IteratorSpec
====================

Type specification for [`tf.data.Iterator`](iterator).

Inherits From: [`TypeSpec`](../typespec), [`TraceType`](../types/experimental/tracetype)

```
tf.data.IteratorSpec(
    element_spec
)
```

For instance, [`tf.data.IteratorSpec`](iteratorspec) can be used to define a tf.function that takes [`tf.data.Iterator`](iterator) as an input argument:

```
@tf.function(input_signature=[tf.data.IteratorSpec(
    tf.TensorSpec(shape=(), dtype=tf.int32, name=None))])
def square(iterator):
  x = iterator.get_next()
  return x * x

dataset = tf.data.Dataset.from_tensors(5)
iterator = iter(dataset)
print(square(iterator))
tf.Tensor(25, shape=(), dtype=int32)
```

| Attributes |
| `element_spec` | A (nested) structure of [`tf.TypeSpec`](../typespec) objects that represents the type specification of the iterator elements. |
| `value_type` | The Python type for values that are compatible with this TypeSpec. In particular, all values that are compatible with this TypeSpec must be an instance of this type. |

Methods
-------

### `from_value`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L901-L903)

```
@staticmethod
from_value(
    value
)
```

### `is_compatible_with`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L193-L214)

```
is_compatible_with(
    spec_or_value
)
```

Returns true if `spec_or_value` is compatible with this TypeSpec.

Prefer using "is\_subtype\_of" and "most\_specific\_common\_supertype" wherever possible.

| Args |
| `spec_or_value` | A TypeSpec or TypeSpec associated value to compare against. |

### `is_subtype_of`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L100-L137)

```
is_subtype_of(
    other: tf.types.experimental.TraceType
) -> bool
```

Returns True if `self` is a subtype of `other`.

Implements the tf.types.experimental.func.TraceType interface.

If not overridden by a subclass, the default behavior is to assume the TypeSpec is covariant upon attributes that implement TraceType and invariant upon the rest of the attributes as well as the structure and type of the TypeSpec.

| Args |
| `other` | A TraceType object. |

### `most_specific_common_supertype`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L139-L185)

```
most_specific_common_supertype(
    others: Sequence[tf.types.experimental.TraceType]
) -> Optional['TypeSpec']
```

Returns the most specific supertype TypeSpec of `self` and `others`.

Implements the tf.types.experimental.func.TraceType interface.

If not overridden by a subclass, the default behavior is to assume the TypeSpec is covariant upon attributes that implement TraceType and invariant upon the rest of the attributes as well as the structure and type of the TypeSpec.

| Args |
| `others` | A sequence of TraceTypes. |

### `most_specific_compatible_type`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L216-L234)

```
most_specific_compatible_type(
    other: 'TypeSpec'
) -> 'TypeSpec'
```

Returns the most specific TypeSpec compatible with `self` and `other`. (deprecated)

Deprecated. Please use `most_specific_common_supertype` instead. Do not override this function.

| Args |
| `other` | A `TypeSpec`.
| | Raises | | `ValueError` | If there is no TypeSpec that is compatible with both `self` and `other`. | ### `__eq__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L438-L441) ``` __eq__( other ) -> bool ``` Return self==value. ### `__ne__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/type_spec.py#L443-L444) ``` __ne__( other ) -> bool ``` Return self!=value. tensorflow tf.data.Options tf.data.Options =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/options.py#L471-L644) | Represents options for [`tf.data.Dataset`](dataset). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.Options`](https://www.tensorflow.org/api_docs/python/tf/data/Options) ``` tf.data.Options() ``` A [`tf.data.Options`](options) object can be, for instance, used to control which static optimizations to apply to the input pipeline graph or whether to use performance modeling to dynamically tune the parallelism of operations such as [`tf.data.Dataset.map`](dataset#map) or [`tf.data.Dataset.interleave`](dataset#interleave). The options are set for the entire dataset and are carried over to datasets created through tf.data transformations. The options can be set by constructing an `Options` object and using the [`tf.data.Dataset.with_options(options)`](dataset#with_options) transformation, which returns a dataset with the options set. ``` dataset = tf.data.Dataset.range(42) options = tf.data.Options() options.deterministic = False dataset = dataset.with_options(options) print(dataset.options().deterministic) False ``` > > **Note:** A known limitation of the [`tf.data.Options`](options) implementation is that the options are not preserved across tf.function boundaries. In particular, to set options for a dataset that is iterated within a tf.function, the options need to be set within the same tf.function. > | Attributes | | `autotune` | The autotuning options associated with the dataset. See [`tf.data.experimental.AutotuneOptions`](experimental/autotuneoptions) for more details. | | `deterministic` | Whether the outputs need to be produced in deterministic order. If None, defaults to True. | | `experimental_deterministic` | DEPRECATED. Use `deterministic` instead. | | `experimental_distribute` | The distribution strategy options associated with the dataset. See [`tf.data.experimental.DistributeOptions`](experimental/distributeoptions) for more details. | | `experimental_external_state_policy` | This option can be used to override the default policy for how to handle external state when serializing a dataset or checkpointing its iterator. There are three settings available - IGNORE: External state is ignored without a warning; WARN: External state is ignored and a warning is logged; FAIL: External state results in an error. | | `experimental_optimization` | The optimization options associated with the dataset. See [`tf.data.experimental.OptimizationOptions`](experimental/optimizationoptions) for more details. | | `experimental_slack` | Whether to introduce 'slack' in the last `prefetch` of the input pipeline, if it exists. This may reduce CPU contention with accelerator host-side activity at the start of a step. The slack frequency is determined by the number of devices attached to this input pipeline. If None, defaults to False. 
|
| `experimental_threading` | DEPRECATED. Use `threading` instead. |
| `threading` | The threading options associated with the dataset. See [`tf.data.ThreadingOptions`](threadingoptions) for more details. |

Methods
-------

### `merge`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/options.py#L630-L644)

```
merge(
    options
)
```

Merges itself with the given [`tf.data.Options`](options).

If this object and the `options` to merge set an option differently, a warning is generated and this object's value is updated with the `options` object's value.

| Args |
| `options` | The [`tf.data.Options`](options) to merge with. |

| Returns |
| New [`tf.data.Options`](options) object which is the result of merging self with the input [`tf.data.Options`](options). |

### `__eq__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L38-L44)

```
__eq__(
    other
)
```

Return self==value.

### `__ne__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L46-L50)

```
__ne__(
    other
)
```

Return self!=value.

tensorflow tf.data.ThreadingOptions

tf.data.ThreadingOptions
========================

Represents options for dataset threading.

#### View aliases

**Main aliases**

[`tf.data.experimental.ThreadingOptions`](https://www.tensorflow.org/api_docs/python/tf/data/ThreadingOptions)

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.ThreadingOptions`](https://www.tensorflow.org/api_docs/python/tf/data/ThreadingOptions), [`tf.compat.v1.data.experimental.ThreadingOptions`](https://www.tensorflow.org/api_docs/python/tf/data/ThreadingOptions)

```
tf.data.ThreadingOptions()
```

You can set the threading options of a dataset through the `threading` property of [`tf.data.Options`](options); the property is an instance of [`tf.data.ThreadingOptions`](threadingoptions). (The older `experimental_threading` property is deprecated in favor of `threading`.)

```
options = tf.data.Options()
options.threading.private_threadpool_size = 10
dataset = dataset.with_options(options)
```

| Attributes |
| `max_intra_op_parallelism` | If set, it overrides the maximum degree of intra-op parallelism. |
| `private_threadpool_size` | If set, the dataset will use a private threadpool of the given size. The value 0 can be used to indicate that the threadpool size should be determined at runtime based on the number of available CPU cores. |

Methods
-------

### `__eq__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L38-L44)

```
__eq__(
    other
)
```

Return self==value.

### `__ne__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L46-L50)

```
__ne__(
    other
)
```

Return self!=value.
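As an illustrative sketch of how these option objects compose (the specific values here are arbitrary, chosen only for the example), the deterministic and threading knobs described above can be set on a single `tf.data.Options` object and read back via `Dataset.options()`:

```
import tensorflow as tf

dataset = tf.data.Dataset.range(100)

options = tf.data.Options()
options.deterministic = False                   # allow out-of-order outputs
options.threading.private_threadpool_size = 4   # dedicated 4-thread pool
options.threading.max_intra_op_parallelism = 1  # cap per-op parallelism

# Options apply to the whole pipeline and are carried over to datasets
# derived from this one.
dataset = dataset.with_options(options)
print(dataset.options().deterministic)                       # False
print(dataset.options().threading.private_threadpool_size)  # 4
```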
tensorflow tf.data.experimental.shuffle_and_repeat

tf.data.experimental.shuffle\_and\_repeat
=========================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/shuffle_ops.py#L50-L105) |

Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.shuffle_and_repeat`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/shuffle_and_repeat)

```
tf.data.experimental.shuffle_and_repeat(
    buffer_size, count=None, seed=None
)
```

```
d = tf.data.Dataset.from_tensor_slices([1, 2, 3])
d = d.apply(tf.data.experimental.shuffle_and_repeat(2, count=2))
[elem.numpy() for elem in d]
[2, 3, 1, 1, 3, 2]
```

```
dataset.apply(
  tf.data.experimental.shuffle_and_repeat(buffer_size, count, seed))
```

produces the same output as

```
dataset.shuffle(
  buffer_size, seed=seed, reshuffle_each_iteration=True).repeat(count)
```

In each repetition, this dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, set the buffer size equal to the full size of the dataset.

For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000-element buffer.

| Args |
| `buffer_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the maximum number of elements that will be buffered when prefetching. |
| `count` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. |
| `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. |

| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

tensorflow tf.data.experimental.choose_from_datasets

tf.data.experimental.choose\_from\_datasets
===========================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/interleave_ops.py#L169-L223) |

Creates a dataset that deterministically chooses elements from `datasets`. (deprecated)

```
tf.data.experimental.choose_from_datasets(
    datasets, choice_dataset, stop_on_empty_dataset=False
)
```

For example, given the following datasets:

```
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
            tf.data.Dataset.from_tensors("bar").repeat(),
            tf.data.Dataset.from_tensors("baz").repeat()]

# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset)
```

The elements of `result` will be:

```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```

| Args |
| `datasets` | A non-empty list of [`tf.data.Dataset`](../dataset) objects with compatible structure. |
| `choice_dataset` | A [`tf.data.Dataset`](../dataset) of scalar [`tf.int64`](../../../tf#int64) tensors between `0` and `len(datasets) - 1`. |
| `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. |

| Returns |
| A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. |

| Raises |
| `TypeError` | If `datasets` or `choice_dataset` has the wrong type. |
| `ValueError` | If `datasets` is empty. |

tensorflow tf.data.experimental.assert_cardinality

tf.data.experimental.assert\_cardinality
========================================

Asserts the cardinality of the input dataset.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.assert_cardinality`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/assert_cardinality)

```
tf.data.experimental.assert_cardinality(
    expected_cardinality
)
```

> **Note:** The following assumes that "examples.tfrecord" contains 42 records.

```
dataset = tf.data.TFRecordDataset("examples.tfrecord")
cardinality = tf.data.experimental.cardinality(dataset)
print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())
True
dataset = dataset.apply(tf.data.experimental.assert_cardinality(42))
print(tf.data.experimental.cardinality(dataset).numpy())
42
```

| Args |
| `expected_cardinality` | The expected cardinality of the input dataset. |

| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

| Raises |
| `FailedPreconditionError` | The assertion is checked at runtime (when iterating the dataset) and an error is raised if the actual and expected cardinality differ. |

tensorflow tf.data.experimental.DatasetInitializer

tf.data.experimental.DatasetInitializer
=======================================

Creates a table initializer from a [`tf.data.Dataset`](../dataset).

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.DatasetInitializer`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/DatasetInitializer)

```
tf.data.experimental.DatasetInitializer(
    dataset
)
```

#### Sample usage:

```
keys = tf.data.Dataset.range(100)
values = tf.data.Dataset.range(100).map(
    lambda x: tf.strings.as_string(x * 2))
ds = tf.data.Dataset.zip((keys, values))
init = tf.data.experimental.DatasetInitializer(ds)
table = tf.lookup.StaticHashTable(init, "")
table.lookup(tf.constant([0, 1, 2], dtype=tf.int64)).numpy()
array([b'0', b'2', b'4'], dtype=object)
```

Raises: `ValueError` if `dataset` doesn't conform to specifications.

| Args |
| `dataset` | A [`tf.data.Dataset`](../dataset) object that produces tuples of scalars.
The first scalar is treated as a key and the second as value. | | Attributes | | `dataset` | A [`tf.data.Dataset`](../dataset) object that produces tuples of scalars. The first scalar is treated as a key and the second as value. | | `key_dtype` | The expected table key dtype. | | `value_dtype` | The expected table value dtype. | Methods ------- ### `initialize` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/lookup_ops.py#L94-L99) ``` initialize( table ) ``` Returns the table initialization op. tensorflow tf.data.experimental.unbatch tf.data.experimental.unbatch ============================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/batching.py#L271-L298) | Splits elements of a dataset into multiple elements on the batch dimension. (deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.unbatch`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/unbatch) ``` tf.data.experimental.unbatch() ``` For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ``` # NOTE: The following example uses `{ ... }` to represent the contents # of a dataset. a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] } a.unbatch() == { 'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd'} ``` | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | tensorflow tf.data.experimental.cardinality tf.data.experimental.cardinality ================================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/cardinality.py#L33-L64) | Returns the cardinality of `dataset`, if known. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.cardinality`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/cardinality) ``` tf.data.experimental.cardinality( dataset ) ``` The operation returns the cardinality of `dataset`. The operation may return [`tf.data.experimental.INFINITE_CARDINALITY`](../experimental#INFINITE_CARDINALITY) if `dataset` contains an infinite number of elements or [`tf.data.experimental.UNKNOWN_CARDINALITY`](../experimental#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in `dataset` (e.g. when the dataset source is a file). ``` dataset = tf.data.Dataset.range(42) print(tf.data.experimental.cardinality(dataset).numpy()) 42 dataset = dataset.repeat() cardinality = tf.data.experimental.cardinality(dataset) print((cardinality == tf.data.experimental.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = tf.data.experimental.cardinality(dataset) print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy()) True ``` | Args | | `dataset` | A [`tf.data.Dataset`](../dataset) for which to determine cardinality. | | Returns | | A scalar [`tf.int64`](../../../tf#int64) `Tensor` representing the cardinality of `dataset`. 
If the cardinality is infinite or unknown, the operation returns the named constants `INFINITE_CARDINALITY` or `UNKNOWN_CARDINALITY`, respectively. |

tensorflow tf.data.experimental.make_batched_features_dataset

tf.data.experimental.make\_batched\_features\_dataset
=====================================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/readers.py#L907-L1088) |

Returns a `Dataset` of feature dictionaries from `Example` protos.

```
tf.data.experimental.make_batched_features_dataset(
    file_pattern,
    batch_size,
    features,
    reader=None,
    label_key=None,
    reader_args=None,
    num_epochs=None,
    shuffle=True,
    shuffle_buffer_size=10000,
    shuffle_seed=None,
    prefetch_buffer_size=None,
    reader_num_threads=None,
    parser_num_threads=None,
    sloppy_ordering=False,
    drop_final_batch=False
)
```

If the `label_key` argument is provided, returns a `Dataset` of tuples comprising feature dictionaries and labels.

#### Example:

```
serialized_examples = [
  features {
    feature { key: "age" value { int64_list { value: [ 0 ] } } }
    feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
    feature { key: "kws" value { bytes_list { value: [ "code", "art" ] } } }
  },
  features {
    feature { key: "age" value { int64_list { value: [] } } }
    feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
    feature { key: "kws" value { bytes_list { value: [ "sports" ] } } }
  }
]
```

#### We can use arguments:

```
features: {
  "age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
  "gender": FixedLenFeature([], dtype=tf.string),
  "kws": VarLenFeature(dtype=tf.string),
}
```

And the expected output is:

```
{
  "age": [[0], [-1]],
  "gender": [["f"], ["f"]],
  "kws": SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],
    values=["code", "art", "sports"],
    dense_shape=[2, 2]),
}
```

| Args |
| `file_pattern` | List of files or patterns of file paths containing `Example` records. See [`tf.io.gfile.glob`](../../io/gfile/glob) for pattern rules. |
| `batch_size` | An int representing the number of records to combine in a single batch. |
| `features` | A `dict` mapping feature keys to `FixedLenFeature` or `VarLenFeature` values. See [`tf.io.parse_example`](../../io/parse_example). |
| `reader` | A function or class that can be called with a `filenames` tensor and (optional) `reader_args` and returns a `Dataset` of `Example` tensors. Defaults to [`tf.data.TFRecordDataset`](../tfrecorddataset). |
| `label_key` | (Optional) A string corresponding to the key labels are stored in `tf.Examples`. If provided, it must be one of the `features` keys, otherwise results in `ValueError`. |
| `reader_args` | Additional arguments to pass to the reader class. |
| `num_epochs` | Integer specifying the number of times to read through the dataset. If None, cycles through the dataset forever. Defaults to `None`. |
| `shuffle` | A boolean, indicates whether the input should be shuffled. Defaults to `True`. |
| `shuffle_buffer_size` | Buffer size of the ShuffleDataset. A large capacity ensures better shuffling but would increase memory usage and startup time. |
| `shuffle_seed` | Randomization seed to use for shuffling. |
| `prefetch_buffer_size` | Number of feature batches to prefetch in order to improve performance. Recommended value is the number of batches consumed per training step. Defaults to auto-tune. |
| `reader_num_threads` | Number of threads used to read `Example` records. If >1, the results will be interleaved. Defaults to `1`. |
| `parser_num_threads` | Number of threads to use for parsing `Example` tensors into a dictionary of `Feature` tensors. Defaults to `2`. |
| `sloppy_ordering` | If `True`, reading performance will be improved at the cost of non-deterministic ordering. If `False`, the order of elements produced is deterministic prior to shuffling (elements are still randomized if `shuffle=True`; note that if the seed is set, the order of elements after shuffling is deterministic). Defaults to `False`. |
| `drop_final_batch` | If `True`, and the batch size does not evenly divide the input dataset size, the final smaller batch will be dropped. Defaults to `False`. |

| Returns |
| A dataset of `dict` elements, (or a tuple of `dict` elements and label). Each `dict` maps feature keys to `Tensor` or `SparseTensor` objects. |

| Raises |
| `TypeError` | If `reader` is of the wrong type. |
| `ValueError` | If `label_key` is not one of the `features` keys. |

tensorflow tf.data.experimental.enumerate_dataset

tf.data.experimental.enumerate\_dataset
=======================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/enumerate_ops.py#L20-L54) |

A transformation that enumerates the elements of a dataset. (deprecated)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.enumerate_dataset`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/enumerate_dataset)

```
tf.data.experimental.enumerate_dataset(
    start=0
)
```

It is similar to Python's `enumerate`. For example:

```
# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { 1, 2, 3 }
b = { (7, 8), (9, 10) }

# The structure of the input dataset determines the structure of
# elements in the resulting dataset.
a.apply(tf.data.experimental.enumerate_dataset(start=5))
=> { (5, 1), (6, 2), (7, 3) }
b.apply(tf.data.experimental.enumerate_dataset())
=> { (0, (7, 8)), (1, (9, 10)) }
```

| Args |
| `start` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the start value for enumeration. |

| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

tensorflow tf.data.experimental.parse_example_dataset

tf.data.experimental.parse\_example\_dataset
============================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/parsing_ops.py#L106-L160) |

A transformation that parses `Example` protos into a `dict` of tensors.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.parse_example_dataset`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/parse_example_dataset)

```
tf.data.experimental.parse_example_dataset(
    features, num_parallel_calls=1, deterministic=None
)
```

Parses a number of serialized `Example` protos given in `serialized`. We refer to `serialized` as a batch with `batch_size` many entries of individual `Example` protos.

This op parses serialized examples into a dictionary mapping keys to `Tensor`, `SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to `VarLenFeature`, `RaggedFeature`, `SparseFeature`, and `FixedLenFeature` objects.
Each `VarLenFeature` and `SparseFeature` is mapped to a `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenFeature` is mapped to a `Tensor`. See [`tf.io.parse_example`](../../io/parse_example) for more details about feature dictionaries.

| Args |
| `features` | A `dict` mapping feature keys to `FixedLenFeature`, `VarLenFeature`, `RaggedFeature`, and `SparseFeature` values. |
| `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../tf#int32) scalar [`tf.Tensor`](../../tensor), representing the number of parsing processes to call in parallel. |
| `deterministic` | (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order if some parsing calls complete faster than others. If `deterministic` is `None`, the [`tf.data.Options.deterministic`](../options#deterministic) dataset option (`True` by default) is used to decide whether to produce elements deterministically. |

| Returns |
| A dataset transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

| Raises |
| `ValueError` | if the `features` argument is None. |

tensorflow tf.data.experimental.get_single_element

tf.data.experimental.get\_single\_element
=========================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/get_single_element.py#L21-L147) |

Returns the single element of the `dataset` as a nested structure of tensors. (deprecated)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.get_single_element`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/get_single_element)

```
tf.data.experimental.get_single_element(
    dataset
)
```

The function enables you to use a [`tf.data.Dataset`](../dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This eases data transformation on tensors using the optimized [`tf.data.Dataset`](../dataset) abstraction on top of them.

For example, let's consider a `preprocessing_fn` which takes as input the raw features and returns the processed feature along with its label.

```
def preprocessing_fn(raw_feature):
  # ... the raw_feature is preprocessed as per the use-case
  return feature

raw_features = ...  # input batch of BATCH_SIZE elements.
dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
           .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
           .batch(BATCH_SIZE))

processed_features = tf.data.experimental.get_single_element(dataset)
```

In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element which is a batch of all the processed features.

> **Note:** The `dataset` should contain only one element.

Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the [`tf.data.experimental.get_single_element()`](get_single_element) function is used to skip the iterator creation process and directly output the batch of features.
This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../dataset) operations, and you want to use those transformations while serving your model.

Keras
=====

```
model = ...  # A pre-built or custom model

class PreprocessingModel(tf.keras.Model):
  def __init__(self, model):
    super().__init__()
    self.model = model

  @tf.function(input_signature=[...])
  def serving_fn(self, data):
    ds = tf.data.Dataset.from_tensor_slices(data)
    ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
    ds = ds.batch(batch_size=BATCH_SIZE)
    return tf.argmax(
      self.model(tf.data.experimental.get_single_element(ds)),
      axis=-1
    )

preprocessing_model = PreprocessingModel(model)
your_exported_model_dir = ...  # save the model to this path.
tf.saved_model.save(preprocessing_model, your_exported_model_dir,
    signatures={'serving_default': preprocessing_model.serving_fn})
```

Estimator
=========

In the case of estimators, you generally need to define a `serving_input_fn` which processes the features for the model at inference time.

```
def serving_input_fn():

  raw_feature_spec = ...  # Spec for the raw_features
  input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec, default_batch_size=None)
  serving_input_receiver = input_fn()
  raw_features = serving_input_receiver.features

  def preprocessing_fn(raw_feature):
    # ... the raw_feature is preprocessed as per the use-case
    return feature

  dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
             .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
             .batch(BATCH_SIZE))

  processed_features = tf.data.experimental.get_single_element(dataset)

  # Please note that the value of `BATCH_SIZE` should be equal to
  # the size of the leading dimension of `raw_features`. This ensures
  # that `dataset` has only one element, which is a pre-requisite for
  # using `tf.data.experimental.get_single_element(dataset)`.

  return tf.estimator.export.ServingInputReceiver(
      processed_features, serving_input_receiver.receiver_tensors)

estimator = ...  # A pre-built or custom estimator
estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
```

| Args |
| `dataset` | A [`tf.data.Dataset`](../dataset) object containing a single element. |

| Returns |
| A nested structure of [`tf.Tensor`](../../tensor) objects, corresponding to the single element of `dataset`. |

| Raises |
| `TypeError` | if `dataset` is not a [`tf.data.Dataset`](../dataset) object. |
| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
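As a minimal, self-contained sketch of the pattern above (the tensor values, the doubling map, and the value of `BATCH_SIZE` are illustrative placeholders, not part of the API):

```
import tensorflow as tf

BATCH_SIZE = 4
raw_features = tf.constant([1.0, 2.0, 3.0, 4.0])

# Map a stand-in preprocessing function over the inputs, collapse the whole
# dataset into one batched element, then extract that element without
# creating an iterator.
ds = (tf.data.Dataset.from_tensor_slices(raw_features)
      .map(lambda x: x * 10.0)  # stand-in for a real preprocessing_fn
      .batch(BATCH_SIZE))       # one batch == one element

processed = tf.data.experimental.get_single_element(ds)
print(processed.numpy())  # [10. 20. 30. 40.]
```

Because `BATCH_SIZE` equals the number of input elements, the dataset holds exactly one element, which is the precondition this function checks at runtime.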
tensorflow tf.data.experimental.AutoShardPolicy

tf.data.experimental.AutoShardPolicy
====================================

Represents the type of auto-sharding to use.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.AutoShardPolicy`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/AutoShardPolicy)

OFF: No sharding will be performed.

AUTO: Attempts FILE-based sharding, falling back to DATA-based sharding.

FILE: Shards by input files (i.e. each worker will get a set of files to process). When this option is selected, make sure that there are at least as many files as workers. If there are fewer input files than workers, a runtime error will be raised.

DATA: Shards by elements produced by the dataset. Each worker will process the whole dataset and discard the portion that is not for itself. Note that for this mode to correctly partition the dataset elements, the dataset needs to produce elements in a deterministic order.

HINT: Looks for the presence of `shard(SHARD_HINT, ...)` which is treated as a placeholder to replace with `shard(num_workers, worker_index)`.

| Class Variables |
| AUTO | `<AutoShardPolicy.AUTO: 0>` |
| DATA | `<AutoShardPolicy.DATA: 2>` |
| FILE | `<AutoShardPolicy.FILE: 1>` |
| HINT | `<AutoShardPolicy.HINT: 3>` |
| OFF | `<AutoShardPolicy.OFF: -1>` |

tensorflow tf.data.experimental.OptimizationOptions

tf.data.experimental.OptimizationOptions
========================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/options.py#L294-L422) |

Represents options for dataset optimizations.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.OptimizationOptions`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/OptimizationOptions)

```
tf.data.experimental.OptimizationOptions()
```

You can set the optimization options of a dataset through the `experimental_optimization` property of [`tf.data.Options`](../options); the property is an instance of [`tf.data.experimental.OptimizationOptions`](optimizationoptions).

```
options = tf.data.Options()
options.experimental_optimization.noop_elimination = True
options.experimental_optimization.apply_default_optimizations = False
dataset = dataset.with_options(options)
```

| Attributes |
| `apply_default_optimizations` | Whether to apply default graph optimizations. If False, only graph optimizations that have been explicitly enabled will be applied. |
| `filter_fusion` | Whether to fuse filter transformations. If None, defaults to False. |
| `filter_parallelization` | Whether to parallelize stateless filter transformations. If None, defaults to False. |
| `map_and_batch_fusion` | Whether to fuse map and batch transformations. If None, defaults to True. |
| `map_and_filter_fusion` | Whether to fuse map and filter transformations. If None, defaults to False. |
| `map_fusion` | Whether to fuse map transformations. If None, defaults to False. |
| `map_parallelization` | Whether to parallelize stateless map transformations. If None, defaults to True. |
| `noop_elimination` | Whether to eliminate no-op transformations. If None, defaults to True. |
| `parallel_batch` | Whether to parallelize copying of batch elements. If None, defaults to True.
| | `shuffle_and_repeat_fusion` | Whether to fuse shuffle and repeat transformations. If None, defaults to True. |

Methods
-------

### `__eq__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L38-L44)

``` __eq__( other ) ```

Return self==value.

### `__ne__`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L46-L50)

``` __ne__( other ) ```

Return self!=value.

tensorflow tf.data.experimental.scan tf.data.experimental.scan
=========================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/scan_ops.py#L20-L45) |

A transformation that scans a function across an input dataset. (deprecated)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.scan`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/scan)

``` tf.data.experimental.scan( initial_state, scan_func ) ```

This transformation is a stateful relative of [`tf.data.Dataset.map`](../dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`.

| Args |
| `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. |
| `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. |

| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

tensorflow tf.data.experimental.load tf.data.experimental.load
=========================

Loads a previously saved dataset.

``` tf.data.experimental.load( path, element_spec=None, compression=None, reader_func=None ) ```

#### Example usage:

```
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "saved_data")
# Save a dataset
dataset = tf.data.Dataset.range(2)
tf.data.experimental.save(dataset, path)
new_dataset = tf.data.experimental.load(path)
for elem in new_dataset:
  print(elem)
tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
```

Note that to load a previously saved dataset in graph mode, you need to specify `element_spec` -- a type signature of the elements of the saved dataset, which can be obtained via [`tf.data.Dataset.element_spec`](../dataset#element_spec). This requirement exists so that shape inference of the loaded dataset does not need to perform I/O.

If the default option of sharding the saved dataset was used, the element order of the saved dataset will be preserved when loading it. The `reader_func` argument can be used to specify a custom order in which elements should be loaded from the individual shards. The `reader_func` is expected to take a single argument -- a dataset of datasets, each containing elements of one of the shards -- and return a dataset of elements. For example, the order of shards can be shuffled when loading them as follows:

```
def custom_reader_func(datasets):
  datasets = datasets.shuffle(NUM_SHARDS)
  return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)

dataset = tf.data.experimental.load(
    path="/path/to/data", ..., reader_func=custom_reader_func)
```

| Args |
| `path` | Required. A path pointing to a previously saved dataset.
| | `element_spec` | Optional. A nested structure of [`tf.TypeSpec`](../../typespec) objects matching the structure of an element of the saved dataset and specifying the type of individual element components. If not provided, the nested structure of [`tf.TypeSpec`](../../typespec) saved with the saved dataset is used. This argument needs to be provided if the method is executed in graph mode. |
| `compression` | Optional. The algorithm to use to decompress the data when reading it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`. |
| `reader_func` | Optional. A function to control how to read data from shards. If present, the function will be traced and executed as graph computation. |

| Returns |
| A [`tf.data.Dataset`](../dataset) instance. |

| Raises |
| `FileNotFoundError` | If `element_spec` is not specified and the saved nested structure of [`tf.TypeSpec`](../../typespec) cannot be located with the saved dataset. |
| `ValueError` | If `element_spec` is not specified and the method is executed in graph mode. |

tensorflow tf.data.experimental.index_table_from_dataset tf.data.experimental.index\_table\_from\_dataset
================================================

Returns an index lookup table based on the given dataset.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.index_table_from_dataset`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/index_table_from_dataset)

``` tf.data.experimental.index_table_from_dataset( dataset=None, num_oov_buckets=0, vocab_size=None, default_value=-1, hasher_spec=lookup_ops.FastHashSpec, key_dtype=tf.dtypes.string, name=None ) ```

This operation constructs a lookup table based on the given dataset of keys.

Any lookup of an out-of-vocabulary token will return a bucket ID based on its hash if `num_oov_buckets` is greater than zero. Otherwise it is assigned the `default_value`. The bucket ID range is `[vocabulary size, vocabulary size + num_oov_buckets - 1]`.

#### Sample Usages:

```
ds = tf.data.Dataset.range(100).map(lambda x: tf.strings.as_string(x * 2))
table = tf.data.experimental.index_table_from_dataset(
    ds, key_dtype=tf.string)
table.lookup(tf.constant(['0', '2', '4'], dtype=tf.string)).numpy()
array([0, 1, 2])
```

| Args |
| `dataset` | A dataset of keys. |
| `num_oov_buckets` | The number of out-of-vocabulary buckets. |
| `vocab_size` | Number of elements in the vocabulary, if known. |
| `default_value` | The value to use for out-of-vocabulary feature values. Defaults to -1. |
| `hasher_spec` | A `HasherSpec` to specify the hash function to use for assignment of out-of-vocabulary buckets. |
| `key_dtype` | The `key` data type. |
| `name` | A name for this op (optional). |

| Returns |
| The lookup table based on the given dataset. |

| Raises |
| `ValueError` | If * `num_oov_buckets` is negative * `vocab_size` is not greater than zero * The `key_dtype` is not integer or string |

tensorflow tf.data.experimental.to_variant tf.data.experimental.to\_variant
================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4286-L4296) |

Returns a variant representing the given dataset.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.data.experimental.to_variant`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/to_variant)

``` tf.data.experimental.to_variant( dataset ) ```

| Args |
| `dataset` | A [`tf.data.Dataset`](../dataset). |

| Returns |
| A scalar [`tf.variant`](../../../tf#variant) tensor representing the given dataset. |

tensorflow tf.data.experimental.ExternalStatePolicy tf.data.experimental.ExternalStatePolicy
========================================

Represents how to handle external state during serialization.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.ExternalStatePolicy`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/ExternalStatePolicy)

See the [`tf.data.Options.experimental_external_state_policy`](../options#experimental_external_state_policy) documentation for more information.

| Class Variables |
| FAIL | `<ExternalStatePolicy.FAIL: 2>` |
| IGNORE | `<ExternalStatePolicy.IGNORE: 1>` |
| WARN | `<ExternalStatePolicy.WARN: 0>` |

tensorflow tf.data.experimental.dense_to_ragged_batch tf.data.experimental.dense\_to\_ragged\_batch
=============================================

A transformation that batches ragged elements into [`tf.RaggedTensor`](../../raggedtensor)s.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.dense_to_ragged_batch`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/dense_to_ragged_batch)

``` tf.data.experimental.dense_to_ragged_batch( batch_size, drop_remainder=False, row_splits_dtype=tf.dtypes.int64 ) ```

This transformation combines multiple consecutive elements of the input dataset into a single element.

Like [`tf.data.Dataset.batch`](../dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced.

Unlike [`tf.data.Dataset.batch`](../dataset#batch), the input elements to be batched may have different shapes:

* If an input element is a [`tf.Tensor`](../../tensor) whose static [`tf.TensorShape`](../../tensorshape) is fully defined, then it is batched as normal.
* If an input element is a [`tf.Tensor`](../../tensor) whose static [`tf.TensorShape`](../../tensorshape) contains one or more axes with unknown size (i.e., `shape[i]=None`), then the output will contain a [`tf.RaggedTensor`](../../raggedtensor) that is ragged up to any such dimension.
* If an input element is a [`tf.RaggedTensor`](../../raggedtensor) or any other type, then it is batched as normal.
#### Example:

```
dataset = tf.data.Dataset.from_tensor_slices(np.arange(6))
dataset = dataset.map(lambda x: tf.range(x))
dataset.element_spec.shape
TensorShape([None])
dataset = dataset.apply(
    tf.data.experimental.dense_to_ragged_batch(batch_size=2))
for batch in dataset:
  print(batch)
<tf.RaggedTensor [[], [0]]>
<tf.RaggedTensor [[0, 1], [0, 1, 2]]>
<tf.RaggedTensor [[0, 1, 2, 3], [0, 1, 2, 3, 4]]>
```

| Args |
| `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
| `row_splits_dtype` | The dtype that should be used for the `row_splits` of any new ragged tensors. Existing [`tf.RaggedTensor`](../../raggedtensor) elements do not have their row\_splits dtype changed. |

| Returns |
| `Dataset` | A `Dataset`. |

tensorflow tf.data.experimental.prefetch_to_device tf.data.experimental.prefetch\_to\_device
=========================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/prefetching_ops.py#L33-L62) |

A transformation that prefetches dataset values to the given `device`.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.prefetch_to_device`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/prefetch_to_device)

``` tf.data.experimental.prefetch_to_device( device, buffer_size=None ) ```

> > **Note:** Although the transformation creates a [`tf.data.Dataset`](../dataset), the transformation must be the final `Dataset` in the input pipeline.
> 

For example,

```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.apply(tf.data.experimental.prefetch_to_device("/cpu:0"))
for element in dataset:
  print(f'Tensor {element} is on device {element.device}')
Tensor 1 is on device /job:localhost/replica:0/task:0/device:CPU:0
Tensor 2 is on device /job:localhost/replica:0/task:0/device:CPU:0
Tensor 3 is on device /job:localhost/replica:0/task:0/device:CPU:0
```

| Args |
| `device` | A string. The name of a device to which elements will be prefetched. |
| `buffer_size` | (Optional.) The number of elements to buffer on `device`. Defaults to an automatically chosen value. |

| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

tensorflow tf.data.experimental.get_structure tf.data.experimental.get\_structure
===================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4123-L4144) |

Returns the type signature for elements of the input dataset / iterator.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.get_structure`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/get_structure)

``` tf.data.experimental.get_structure( dataset_or_iterator ) ```

| Args |
| `dataset_or_iterator` | A [`tf.data.Dataset`](../dataset) or a [`tf.data.Iterator`](../iterator).
| | Returns |
| A (nested) structure of [`tf.TypeSpec`](../../typespec) objects matching the structure of an element of `dataset_or_iterator` and specifying the type of individual components. |

| Raises |
| `TypeError` | If input is not a [`tf.data.Dataset`](../dataset) or a [`tf.data.Iterator`](../iterator) object. |

tensorflow tf.data.experimental.group_by_reducer tf.data.experimental.group\_by\_reducer
=======================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/grouping.py#L28-L55) |

A transformation that groups elements and performs a reduction.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.group_by_reducer`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/group_by_reducer)

``` tf.data.experimental.group_by_reducer( key_func, reducer ) ```

This transformation maps each element of a dataset to a key using `key_func` and groups the elements by key. The `reducer` is used to process each group; its `init_func` is used to initialize state for each group when it is created, the `reduce_func` is used to update the state every time an element is mapped to the matching group, and the `finalize_func` is used to map the final state to an output value.

| Args |
| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../tf#int64) tensor. |
| `reducer` | An instance of `Reducer`, which captures the reduction logic using the `init_func`, `reduce_func`, and `finalize_func` functions. |

| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

tensorflow tf.data.experimental.bucket_by_sequence_length tf.data.experimental.bucket\_by\_sequence\_length
=================================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/grouping.py#L110-L257) |

A transformation that buckets elements in a `Dataset` by length. (deprecated)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.bucket_by_sequence_length`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/bucket_by_sequence_length)

``` tf.data.experimental.bucket_by_sequence_length( element_length_func, bucket_boundaries, bucket_batch_sizes, padded_shapes=None, padding_values=None, pad_to_bucket_boundary=False, no_padding=False, drop_remainder=False ) ```

Elements of the `Dataset` are grouped together by length and then are padded and batched.

This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency.

Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2.
```
elements = [
  [0], [1, 2, 3, 4], [5, 6, 7],
  [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
```

```
dataset = tf.data.Dataset.from_generator(
    lambda: elements, tf.int64, output_shapes=[None])
```

```
dataset = dataset.apply(
    tf.data.experimental.bucket_by_sequence_length(
        element_length_func=lambda elem: tf.shape(elem)[0],
        bucket_boundaries=[3, 5],
        bucket_batch_sizes=[2, 2, 2]))
```

```
for elem in dataset.as_numpy_iterator():
  print(elem)
[[1 2 3 4]
 [5 6 7 0]]
[[ 7 8 9 10 11 0]
 [13 14 15 16 19 20]]
[[ 0 0]
 [21 22]]
```

It is also possible to pad the dataset up to the bucket boundary, and to choose the value used for padding. The example below uses `-1` as the padding value and shows the input data being bucketized into the two buckets "[0, 3], [4, 6]".

```
elements = [
  [0], [1, 2, 3, 4], [5, 6, 7],
  [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
```

```
dataset = tf.data.Dataset.from_generator(
    lambda: elements, tf.int32, output_shapes=[None])
```

```
dataset = dataset.apply(
    tf.data.experimental.bucket_by_sequence_length(
        element_length_func=lambda elem: tf.shape(elem)[0],
        bucket_boundaries=[4, 7],
        bucket_batch_sizes=[2, 2, 2],
        pad_to_bucket_boundary=True,
        padding_values=-1))
```

```
for elem in dataset.as_numpy_iterator():
  print(elem)
[[ 0 -1 -1]
 [ 5 6 7]]
[[ 1 2 3 4 -1 -1]
 [ 7 8 9 10 11 -1]]
[[21 22 -1]]
[[13 14 15 16 19 20]]
```

When using the `pad_to_bucket_boundary` option, it is not always possible to maintain the bucket batch size. You can drop the batches that do not maintain the bucket batch size by using the `drop_remainder` option. Using the same input data as in the above example, you get the following result.

```
elements = [
  [0], [1, 2, 3, 4], [5, 6, 7],
  [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]
```

```
dataset = tf.data.Dataset.from_generator(
    lambda: elements, tf.int32, output_shapes=[None])
```

```
dataset = dataset.apply(
    tf.data.experimental.bucket_by_sequence_length(
        element_length_func=lambda elem: tf.shape(elem)[0],
        bucket_boundaries=[4, 7],
        bucket_batch_sizes=[2, 2, 2],
        pad_to_bucket_boundary=True,
        padding_values=-1,
        drop_remainder=True))
```

```
for elem in dataset.as_numpy_iterator():
  print(elem)
[[ 0 -1 -1]
 [ 5 6 7]]
[[ 1 2 3 4 -1 -1]
 [ 7 8 9 10 11 -1]]
```

| Args |
| `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. |
| `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. |
| `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. |
| `padded_shapes` | Nested structure of [`tf.TensorShape`](../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. |
| `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../dataset#padded_batch). Defaults to padding with 0. |
| `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`.
| | `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../sparse/sparsetensor) or of the same shape). |
| `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |

| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

| Raises |
| `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
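As a complementary sketch (not from the API docs), the same transformation can bucket tokenized text; the sentences, boundary, and batch sizes below are illustrative assumptions:

```
import tensorflow as tf

# Three toy sentences with token lengths 1, 4, and 3.
sentences = tf.data.Dataset.from_tensor_slices(
    ["short", "a bit longer sentence", "mid size one"])
tokens = sentences.map(tf.strings.split)  # each element: a 1-D string tensor

dataset = tokens.apply(
    tf.data.experimental.bucket_by_sequence_length(
        element_length_func=lambda t: tf.shape(t)[0],
        bucket_boundaries=[3],       # two buckets: lengths [0, 3) and [3, inf)
        bucket_batch_sizes=[2, 2]))  # len(bucket_boundaries) + 1 sizes

# Strings are padded with the empty string by default; expect a (2, 4)
# batch from the long bucket, then a (1, 1) remainder batch.
for batch in dataset.as_numpy_iterator():
  print(batch.shape)
```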
tensorflow tf.data.experimental.from_variant tf.data.experimental.from\_variant
==================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L4271-L4283) |

Constructs a dataset from the given variant and (nested) structure.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.from_variant`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/from_variant)

``` tf.data.experimental.from_variant( variant, structure ) ```

| Args |
| `variant` | A scalar [`tf.variant`](../../../tf#variant) tensor representing a dataset. |
| `structure` | A (nested) structure of [`tf.TypeSpec`](../../typespec) objects representing the structure of each element in the dataset. |

| Returns |
| A [`tf.data.Dataset`](../dataset) instance. |

tensorflow tf.data.experimental.Reducer tf.data.experimental.Reducer
============================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/grouping.py#L389-L428) |

A reducer is used for reducing a set of elements.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.Reducer`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/Reducer)

``` tf.data.experimental.Reducer( init_func, reduce_func, finalize_func ) ```

A reducer is represented as a tuple of three functions:

* init\_func - to define initial value: key => initial state
* reduce\_func - operation to perform on values with the same key: (old state, input) => new state
* finalize\_func - value to return in the end: state => result

For example,

```
def init_func(_):
  return (0.0, 0.0)

def reduce_func(state, value):
  return (state[0] + value['features'], state[1] + 1)

def finalize_func(s, n):
  return s / n

reducer = tf.data.experimental.Reducer(init_func, reduce_func, finalize_func)
```

| Attributes |
| `finalize_func` | |
| `init_func` | |
| `reduce_func` | |

tensorflow tf.data.experimental.map_and_batch tf.data.experimental.map\_and\_batch
====================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/batching.py#L208-L268) |

Fused implementation of `map` and `batch`. (deprecated)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.map_and_batch`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/map_and_batch)

``` tf.data.experimental.map_and_batch( map_func, batch_size, num_parallel_batches=None, drop_remainder=False, num_parallel_calls=None ) ```

Maps `map_func` across `batch_size` consecutive elements of this dataset and then combines them into a batch. Functionally, it is equivalent to `map` followed by `batch`. This API is temporary and deprecated since input pipeline optimization now fuses consecutive `map` and `batch` operations automatically.

| Args |
| `map_func` | A function mapping a nested structure of tensors to another nested structure of tensors. |
| `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. |
| `num_parallel_batches` | (Optional.)
A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of batches to create in parallel. On one hand, higher values can help mitigate the effect of stragglers. On the other hand, higher values can increase contention if CPU is scarce. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in case its size is smaller than desired; the default behavior is not to drop the smaller batch. | | `num_parallel_calls` | (Optional.) A [`tf.int32`](../../../tf#int32) scalar [`tf.Tensor`](../../tensor), representing the number of elements to process in parallel. If not specified, `batch_size * num_parallel_batches` elements will be processed in parallel. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | | Raises | | `ValueError` | If both `num_parallel_batches` and `num_parallel_calls` are specified. | tensorflow tf.data.experimental.take_while tf.data.experimental.take\_while ================================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/take_while_ops.py#L20-L38) | A transformation that stops dataset iteration based on a `predicate`. (deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.take_while`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/take_while) ``` tf.data.experimental.take_while( predicate ) ``` | Args | | `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../tf#bool) tensor. | | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | tensorflow tf.data.experimental.copy_to_device tf.data.experimental.copy\_to\_device ===================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/prefetching_ops.py#L65-L82) | A transformation that copies dataset elements to the given `target_device`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.copy_to_device`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/copy_to_device) ``` tf.data.experimental.copy_to_device( target_device, source_device='/cpu:0' ) ``` | Args | | `target_device` | The name of a device to which elements will be copied. | | `source_device` | The original device on which `input_dataset` will be placed. | | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | tensorflow tf.data.experimental.rejection_resample tf.data.experimental.rejection\_resample ======================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/resampling.py#L20-L50) | A transformation that resamples a dataset to achieve a target distribution. 
(deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.rejection_resample`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/rejection_resample) ``` tf.data.experimental.rejection_resample( class_func, target_dist, initial_dist=None, seed=None ) ``` > > **Note:** Resampling is performed via rejection sampling; some fraction of the input values will be dropped. > | Args | | `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../tf#int32) tensor. Values should be in `[0, num_classes)`. | | `target_dist` | A floating point type tensor, shaped `[num_classes]`. | | `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. | | `seed` | (Optional.) Python integer seed for the resampler. | | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | tensorflow tf.data.experimental.Counter tf.data.experimental.Counter ============================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/counter.py#L23-L61) | Creates a `Dataset` that counts from `start` in steps of size `step`. ``` tf.data.experimental.Counter( start=0, step=1, dtype=tf.dtypes.int64 ) ``` Unlike [`tf.data.Dataset.range`](../dataset#range) which will stop at some ending number, `Counter` will produce elements indefinitely. ``` dataset = tf.data.experimental.Counter().take(5) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] dataset.element_spec TensorSpec(shape=(), dtype=tf.int64, name=None) dataset = tf.data.experimental.Counter(dtype=tf.int32) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) dataset = tf.data.experimental.Counter(start=2).take(5) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] dataset = tf.data.experimental.Counter(start=2, step=5).take(5) list(dataset.as_numpy_iterator()) [2, 7, 12, 17, 22] dataset = tf.data.experimental.Counter(start=10, step=-1).take(5) list(dataset.as_numpy_iterator()) [10, 9, 8, 7, 6] ``` | Args | | `start` | (Optional.) The starting value for the counter. Defaults to 0. | | `step` | (Optional.) The step size for the counter. Defaults to 1. | | `dtype` | (Optional.) The data type for counter elements. Defaults to [`tf.int64`](../../../tf#int64). | | Returns | | A `Dataset` of scalar `dtype` elements. | tensorflow tf.data.experimental.snapshot tf.data.experimental.snapshot ============================= API to persist the output of the input dataset. (deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.snapshot`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/snapshot) ``` tf.data.experimental.snapshot( path, compression='AUTO', reader_func=None, shard_func=None ) ``` The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. 
<https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature.

Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters.

`shard_func` is a user specified function that maps input elements to snapshot shards. Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential shard\_func could be written.

```
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.apply(tf.data.experimental.snapshot("/path/to/snapshot/dir",
    shard_func=lambda x, y: x % NUM_SHARDS, ...))
dataset = dataset.map(lambda x, y: y)
```

`reader_func` is a user specified function that accepts a single argument: a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset.

Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.

Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:

```
def user_reader_func(datasets):
  # shuffle the datasets splits
  datasets = datasets.shuffle(NUM_CORES)
  # read datasets in parallel and interleave their elements
  return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)

dataset = dataset.apply(tf.data.experimental.snapshot("/path/to/snapshot/dir",
    reader_func=user_reader_func))
```

By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.

| Args |
| `path` | Required. A directory to use for storing / loading the snapshot to / from. |
| `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to AUTO, which attempts to pick an appropriate compression algorithm for the dataset. |
| `reader_func` | Optional. A function to control how to read data from snapshot shards. |
| `shard_func` | Optional. A function to control how to shard data when writing a snapshot. |

| Returns |
| A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). |

tensorflow tf.data.experimental.make_saveable_from_iterator tf.data.experimental.make\_saveable\_from\_iterator
===================================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/iterator_ops.py#L43-L102) |

Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.make_saveable_from_iterator`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_saveable_from_iterator)

``` tf.data.experimental.make_saveable_from_iterator( iterator, external_state_policy=None ) ```

| Args |
| `iterator` | Iterator. |
| `external_state_policy` | A string that identifies how to handle input pipelines that depend on external state.
Possible values are 'ignore': The external state is silently ignored. 'warn': The external state is ignored, logging a warning. 'fail': The operation fails upon encountering external state. By default we set it to 'fail'. |

| Returns |
| A SaveableObject for saving/restoring iterator state using Saver. |

| Raises |
| `ValueError` | If iterator does not support checkpointing. |
| `ValueError` | If `external_state_policy` is not one of 'warn', 'ignore' or 'fail'. |

#### For example:

```
with tf.Graph().as_default():
  ds = tf.data.Dataset.range(10)
  iterator = tf.compat.v1.data.make_initializable_iterator(ds)
  # Build the iterator SaveableObject.
  saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator)
  # Add the SaveableObject to the SAVEABLE_OBJECTS collection so
  # it can be automatically saved using Saver.
  tf.compat.v1.add_to_collection(tf.compat.v1.GraphKeys.SAVEABLE_OBJECTS,
                                 saveable_obj)
  saver = tf.compat.v1.train.Saver()

  while continue_training:
    ... Perform training ...

    if should_save_checkpoint:
      saver.save()
```

> > **Note:** When restoring the iterator, the existing iterator state is completely discarded. This means that any changes you may have made to the Dataset graph will be discarded as well! This includes the new Dataset graph that you may have built during validation. So, while running validation, make sure to run the initializer for the validation input pipeline after restoring the checkpoint.
> 
> 
> > **Note:** Not all iterators support checkpointing yet. Attempting to save the state of an unsupported iterator will throw an error.
> 

tensorflow tf.data.experimental.get_next_as_optional tf.data.experimental.get\_next\_as\_optional
============================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/iterator_ops.py#L936-L952) |

Returns a [`tf.experimental.Optional`](../../experimental/optional) with the next element of the iterator. (deprecated)

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.get_next_as_optional`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/get_next_as_optional)

``` tf.data.experimental.get_next_as_optional( iterator ) ```

If the iterator has reached the end of the sequence, the returned [`tf.experimental.Optional`](../../experimental/optional) will have no value.

| Args |
| `iterator` | A [`tf.data.Iterator`](../iterator). |

| Returns |
| A [`tf.experimental.Optional`](../../experimental/optional) object which either contains the next element of the iterator (if it exists) or no value. |

tensorflow tf.data.experimental.table_from_dataset tf.data.experimental.table\_from\_dataset
=========================================

Returns a lookup table based on the given dataset.

#### View aliases

**Compat aliases for migration**

See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

[`tf.compat.v1.data.experimental.table_from_dataset`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/table_from_dataset)

``` tf.data.experimental.table_from_dataset( dataset=None, num_oov_buckets=0, vocab_size=None, default_value=None, hasher_spec=lookup_ops.FastHashSpec, key_dtype=tf.dtypes.string, name=None ) ```

This operation constructs a lookup table based on the given dataset of pairs of (key, value).

Any lookup of an out-of-vocabulary token will return a bucket ID based on its hash if `num_oov_buckets` is greater than zero.
Otherwise it is assigned the `default_value`. The bucket ID range is `[vocabulary size, vocabulary size + num_oov_buckets - 1]`.

#### Sample Usages:

```
keys = tf.data.Dataset.range(100)
values = tf.data.Dataset.range(100).map(
    lambda x: tf.strings.as_string(x * 2))
ds = tf.data.Dataset.zip((keys, values))
table = tf.data.experimental.table_from_dataset(
    ds, default_value='n/a', key_dtype=tf.int64)
table.lookup(tf.constant([0, 1, 2], dtype=tf.int64)).numpy()
array([b'0', b'2', b'4'], dtype=object)
```

| Args |
| `dataset` | A dataset containing (key, value) pairs. |
| `num_oov_buckets` | The number of out-of-vocabulary buckets. |
| `vocab_size` | Number of elements in the vocabulary, if known. |
| `default_value` | The value to use for out-of-vocabulary feature values. Defaults to `None`. |
| `hasher_spec` | A `HasherSpec` to specify the hash function to use for assignment of out-of-vocabulary buckets. |
| `key_dtype` | The `key` data type. |
| `name` | A name for this op (optional). |

| Returns |
| The lookup table based on the given dataset. |

| Raises |
| `ValueError` | If * `dataset` does not contain pairs * The 2nd item in the `dataset` pairs has a dtype which is incompatible with `default_value` * `num_oov_buckets` is negative * `vocab_size` is not greater than zero * The `key_dtype` is not integer or string |

tensorflow tf.data.experimental.RandomDataset tf.data.experimental.RandomDataset
==================================

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/random_ops.py#L26-L27) |

A `Dataset` of pseudorandom values. (deprecated)

Inherits From: [`Dataset`](../dataset)

``` tf.data.experimental.RandomDataset( seed=None, name=None ) ```

| Attributes |
| `element_spec` | The type specification of an element of this dataset.

```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```

For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). |

Methods
-------

### `apply`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276)

``` apply( transformation_func ) ```

Applies a transformation function to this dataset.

`apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`.

```
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
  return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
```

| Args |
| `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. |

| Returns |
| `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. |

### `as_numpy_iterator`

[View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620)

``` as_numpy_iterator() ```

Returns an iterator which converts all elements of the dataset to numpy.

Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`.
``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) ``` This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 ``` ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] ``` `as_numpy_iterator()` will preserve the nested structure of dataset elements. ``` dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True ``` | Returns | | An iterable over the elements of the dataset, with their tensors converted to numpy arrays. | | Raises | | `TypeError` | if an element contains a non-`Tensor` value. | | `RuntimeError` | if eager execution is not enabled. | ### `batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754) ``` batch( batch_size, drop_remainder=False, num_parallel_calls=None, deterministic=None, name=None ) ``` Combines consecutive elements of this dataset into batches. ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] ``` ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] ``` The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. > > **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch. > | Args | | `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. 
If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `bucket_by_sequence_length` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971) ``` bucket_by_sequence_length( element_length_func, bucket_boundaries, bucket_batch_sizes, padded_shapes=None, padding_values=None, pad_to_bucket_boundary=False, no_padding=False, drop_remainder=False, name=None ) ``` A transformation that buckets elements in a `Dataset` by length. Elements of the `Dataset` are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency. Below is an example to bucketize the input data to the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2. ``` elements = [ [0], [1, 2, 3, 4], [5, 6, 7], [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]] dataset = tf.data.Dataset.from_generator( lambda: elements, tf.int64, output_shapes=[None]) dataset = dataset.bucket_by_sequence_length( element_length_func=lambda elem: tf.shape(elem)[0], bucket_boundaries=[3, 5], bucket_batch_sizes=[2, 2, 2]) for elem in dataset.as_numpy_iterator(): print(elem) [[1 2 3 4] [5 6 7 0]] [[ 7 8 9 10 11 0] [13 14 15 16 19 20]] [[ 0 0] [21 22]] ``` | Args | | `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. | | `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. | | `bucket_batch_sizes` | `list<int>`, batch size per bucket. Length should be `len(bucket_boundaries) + 1`. | | `padded_shapes` | Nested structure of [`tf.TensorShape`](../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. | | `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../dataset#padded_batch). Defaults to padding with 0. | | `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. | | `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../sparse/sparsetensor) or of same shape). | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. 
| ### `cache` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576) ``` cache( filename='', name=None ) ``` Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. > > **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. > ``` dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] ``` When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed. ``` dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] ``` > > **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`. > | Args | | `filename` | A [`tf.string`](../../../tf#string) scalar [`tf.Tensor`](../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `cardinality` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754) ``` cardinality() ``` Returns the cardinality of the dataset, if known. `cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). ``` dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True ``` | Returns | | A scalar [`tf.int64`](../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../data#UNKNOWN_CARDINALITY) respectively. 
| ### `choose_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471) ``` @staticmethod choose_from_datasets( datasets, choice_dataset, stop_on_empty_dataset=True ) ``` Creates a dataset that deterministically chooses elements from `datasets`. For example, given the following datasets: ``` datasets = [tf.data.Dataset.from_tensors("foo").repeat(), tf.data.Dataset.from_tensors("bar").repeat(), tf.data.Dataset.from_tensors("baz").repeat()] # Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. choice_dataset = tf.data.Dataset.range(3).repeat(3) result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset) ``` The elements of `result` will be: ``` "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](../dataset) objects with compatible structure. | | `choice_dataset` | A [`tf.data.Dataset`](../dataset) of scalar [`tf.int64`](../../../tf#int64) tensors between `0` and `len(datasets) - 1`. | | `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. | | Returns | | A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. | | Raises | | `TypeError` | If `datasets` or `choice_dataset` has the wrong type. | | `ValueError` | If `datasets` is empty. | ### `concatenate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289) ``` concatenate( dataset, name=None ) ``` Creates a `Dataset` by concatenating the given dataset with this dataset. ``` a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have # compatible element specs. c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> ``` | Args | | `dataset` | `Dataset` to be concatenated. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `enumerate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451) ``` enumerate( start=0, name=None ) ``` Enumerates the elements of this dataset. It is similar to python's `enumerate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) ``` ``` # The (nested) structure of the input dataset determines the # structure of elements in the resulting dataset. 
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) ``` | Args | | `start` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the start value for enumeration. | | `name` | Optional. A name for the tf.data operations used by `enumerate`. | | Returns | | `Dataset` | A `Dataset`. | ### `filter` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246) ``` filter( predicate, name=None ) ``` Filters this dataset according to `predicate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] ``` | Args | | `predicate` | A function mapping a dataset element to a boolean. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. | ### `flat_map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092) ``` flat_map( map_func, name=None ) ``` Maps `map_func` across this dataset and flattens the result. #### The type signature is: ``` def flat_map( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: ``` dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map( lambda x: tf.data.Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` [`tf.data.Dataset.interleave()`](../dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../dataset#interleave) | Args | | `map_func` | A function mapping a dataset element to a dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_generator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173) ``` @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None, name=None ) ``` Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments) > > **Note:** The current implementation of [`Dataset.from_generator()`](../dataset#from_generator) uses [`tf.numpy_function`](../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](../dataset#from_generator). In particular, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment. 
> The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function). The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified. The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../typespec) objects from the `output_signature` argument: ``` def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] ``` There is also a deprecated way to call `from_generator`, using either the `output_types` argument alone or together with the `output_shapes` argument (see the sketch at the end of this section). In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../tensor) objects with the types defined by `output_types` and with shapes that are either unknown or defined by `output_shapes`. > > **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](../dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](../dataset#from_generator). > > > **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](../dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth. > | Args | | `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. | | `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. | | `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../tensorshape) objects corresponding to each component of an element yielded by `generator`. | | `args` | (Optional.) A tuple of [`tf.Tensor`](../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. | | `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../typespec) objects corresponding to each component of an element yielded by `generator`. | | `name` | (Optional.) A name for the tf.data operations used by `from_generator`. | | Returns | | `Dataset` | A `Dataset`. 
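|

For reference, a minimal sketch of the deprecated calling convention (the `gen` helper below is illustrative, not part of the API; prefer `output_signature` in new code):

```
def gen():
  for i in range(3):
    # The second component has variable length, so its shape is (None,).
    yield i, [1.0] * i

dataset = tf.data.Dataset.from_generator(
    gen,
    output_types=(tf.int64, tf.float32),
    output_shapes=(tf.TensorShape([]), tf.TensorShape([None])))
```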
### `from_tensor_slices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809) ``` @staticmethod from_tensor_slices( tensors, name=None ) ``` Creates a `Dataset` whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. ``` # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] ``` ``` # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] ``` ``` # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] ``` ``` # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True ``` ``` # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> vector of 3 elements dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensors` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729) ``` @staticmethod from_tensors( tensors, name=None ) ``` Creates a `Dataset` with a single element, comprising the given tensors. 
`from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead. ``` dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] ``` ``` # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `get_single_element` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671) ``` get_single_element( name=None ) ``` Returns the single element of the `dataset`. The function enables you to use a [`tf.data.Dataset`](../dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This makes it easier to apply data transformations to tensors using the optimized [`tf.data.Dataset`](../dataset) abstraction on top of them. For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label. ``` def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature raw_features = ... # input batch of BATCH_SIZE elements. dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() ``` In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element which is a batch of all the processed features. > > **Note:** The `dataset` should contain only one element. > Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `get_single_element()` method is used to skip the iterator creation process and directly output the batch of features. This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../dataset) operations, and you want to use those transformations while serving your model. #### Keras ``` model = ... 
# A pre-built or custom model class PreprocessingModel(tf.keras.Model): def __init__(self, model): super().__init__(self) self.model = model @tf.function(input_signature=[...]) def serving_fn(self, data): ds = tf.data.Dataset.from_tensor_slices(data) ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) ds = ds.batch(batch_size=BATCH_SIZE) return tf.argmax(self.model(ds.get_single_element()), axis=-1) preprocessing_model = PreprocessingModel(model) your_exported_model_dir = ... # save the model to this path. tf.saved_model.save(preprocessing_model, your_exported_model_dir, signatures={'serving_default': preprocessing_model.serving_fn} ) ``` #### Estimator In the case of estimators, you generally need to define a `serving_input_fn` which would require the features to be processed by the model during inference. ``` def serving_input_fn(): raw_feature_spec = ... # Spec for the raw_features input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( raw_feature_spec, default_batch_size=None) serving_input_receiver = input_fn() raw_features = serving_input_receiver.features def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() # Please note that the value of `BATCH_SIZE` should be equal to # the size of the leading dimension of `raw_features`. This ensures # that `dataset` has only one element, which is a prerequisite for # using `dataset.get_single_element()`. return tf.estimator.export.ServingInputReceiver( processed_features, serving_input_receiver.receiver_tensors) estimator = ... # A pre-built or custom estimator estimator.export_saved_model(your_exported_model_dir, serving_input_fn) ``` | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A nested structure of [`tf.Tensor`](../../tensor) objects, corresponding to the single element of `dataset`. | | Raises | | `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. | ### `group_by_window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824) ``` group_by_window( key_func, reduce_func, window_size=None, window_size_func=None, name=None ) ``` Groups windows of elements by key and reduces them. This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller. You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`. ``` dataset = tf.data.Dataset.range(10) window_size = 5 key_func = lambda x: x%2 reduce_func = lambda key, dataset: dataset.batch(window_size) dataset = dataset.group_by_window( key_func=key_func, reduce_func=reduce_func, window_size=window_size) for elem in dataset.as_numpy_iterator(): print(elem) [0 2 4 6 8] [1 3 5 7 9] ``` | Args | | `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../tf#int64) tensor. 
| | `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. | | `window_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. | | `window_size_func` | A function mapping a key to a [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. | ### `interleave` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222) ``` interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across this dataset, and interleaves the results. #### The type signature is: ``` def interleave( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` For example, you can use [`Dataset.interleave()`](../dataset#interleave) to process many input files concurrently: ``` # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) ``` The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. #### For example: ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] ``` > > **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined. > Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`. 
``` filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` | Args | | `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../dataset). | | `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. | | `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. | | `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `list_files` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393) ``` @staticmethod list_files( file_pattern, shuffle=None, seed=None, name=None ) ``` A dataset of all files matching one or more glob patterns. The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](../dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems. > > **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order. > #### Example: If we had the following files on our filesystem: * /path/to/dir/a.txt * /path/to/dir/b.py * /path/to/dir/c.py If we pass "/path/to/dir/\*.py" as the `file_pattern`, the dataset would produce: * /path/to/dir/b.py * /path/to/dir/c.py | Args | | `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. | | `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `name` | Optional. A name for the tf.data operations used by `list_files`. 
| | Returns | | `Dataset` | A `Dataset` of strings corresponding to file names. | ### `map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056) ``` map( map_func, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across the elements of this dataset. This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). For example, `map` can be used for adding 1 to each element, or projecting a subset of element components. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] ``` The input signature of `map_func` is determined by the structure of each element in this dataset. ``` dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) ``` ``` # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] ``` ``` # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) ``` The value or values returned by `map_func` determine the structure of each element in the returned dataset. ``` dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) ``` `map_func` can accept as arguments and return any type of dataset element. Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. 
The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use [`tf.py_function`](../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` 3) Use [`tf.numpy_function`](../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../py_function) accepts [`tf.Tensor`](../../tensor) whereas [`tf.numpy_function`](../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` Note that the use of [`tf.numpy_function`](../../numpy_function) and [`tf.py_function`](../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value. | Args | | `map_func` | A function mapping a dataset element to another dataset element. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L446-L464) ``` options() ``` Returns the options for this dataset and its inputs. | Returns | | A [`tf.data.Options`](../options) object representing the dataset options. 
| ### `padded_batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889) ``` padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False, name=None ) ``` Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like [`tf.data.Dataset.batch`](../dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. Unlike [`tf.data.Dataset.batch`](../dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element: * If the dimension is a constant, the component will be padded out to that length in that dimension. * If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. ``` A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) ``` See also [`tf.data.experimental.dense_to_sparse_batch`](dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../sparse/sparsetensor). | Args | | `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `padded_shapes` | (Optional.) 
A (nested) structure of [`tf.TensorShape`](../../tensorshape) or [`tf.int64`](../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. | | `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. | | `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> | ### `prefetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321) ``` prefetch( buffer_size, name=None ) ``` Creates a `Dataset` that prefetches elements from this dataset. Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. > > **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each). > ``` dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. | | `name` | Optional. A name for the tf.data transformation. | | Returns | | `Dataset` | A `Dataset`. | ### `random` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992) ``` @staticmethod random( seed=None, name=None ) ``` Creates a `Dataset` of pseudorandom values. The dataset generates a sequence of uniformly distributed integer values. 
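The generated values are `int64` and are not confined to any particular range by this method itself; if you need values in a fixed range, one option is to post-process them with `map` (a minimal sketch, not a dedicated API):

```
# Fold the generated int64 values into [0, 10). On tensors, `%` is
# floormod, so the result is non-negative even for negative inputs.
values = tf.data.Dataset.random(seed=7).take(5).map(lambda x: x % 10)
print(list(values.as_numpy_iterator()))
```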
``` ds1 = tf.data.Dataset.random(seed=4).take(10) ds2 = tf.data.Dataset.random(seed=4).take(10) print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator())) True ``` | Args | | `seed` | (Optional.) If specified, the dataset produces a deterministic sequence of values. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `range` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211) ``` @staticmethod range( *args, **kwargs ) ``` Creates a `Dataset` of a step-separated range of values. ``` list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] ``` | Args | | `*args` | follows the same semantics as python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. | | `**kwargs` | * output\_type: The expected dtype of the elements. (Optional, default: [`tf.int64`](../../../tf#int64)). * name: (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `RangeDataset`. | | Raises | | `ValueError` | if len(args) == 0. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544) ``` reduce( initial_state, reduce_func, name=None ) ``` Reduces the input dataset to a single element. The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result. ``` tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 ``` | Args | | `initial_state` | An element representing the initial state of the transformation. | | `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new state. The structure of `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A dataset element corresponding to the final state of the transformation. | ### `rejection_resample` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272) ``` rejection_resample( class_func, target_dist, initial_dist=None, seed=None, name=None ) ``` A transformation that resamples a dataset to a target distribution. Let's consider the following example where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with `target_dist` distribution. ``` initial_dist = [0.6, 0.4] num_classes = len(initial_dist) num_samples = 1000 data_np = np.random.choice(num_classes, num_samples, p=initial_dist) dataset = tf.data.Dataset.from_tensor_slices(data_np) ``` The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution. 
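One way to check this empirically is to count the classes yourself; a quick sketch (`collections.Counter` here is just for illustration, not part of the API):

```
import collections

# `dataset` is the one built above from `data_np`.
counts = collections.Counter(dataset.as_numpy_iterator())
print(counts)  # roughly Counter({0: 600, 1: 400})
```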
``` target_dist = [0.5, 0.5] resampled_dataset = dataset.rejection_resample( class_func=lambda x: x, target_dist=target_dist, initial_dist=initial_dist) resampled_dataset = resampled_dataset.map( lambda class_func_result, data: data) ``` The distribution of classes in `resampled_dataset` will now be close to the target distribution. | Args | | `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../tf#int32) tensor. Values should be in `[0, num_classes)`. | | `target_dist` | A floating point type tensor, shaped `[num_classes]`. | | `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. | | `seed` | (Optional.) Python integer seed for the resampler. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset` | ### `repeat` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416) ``` repeat( count=None, name=None ) ``` Repeats this dataset so each original value is seen `count` times. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] ``` > > **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements. > | Args | | `count` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `sample_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412) ``` @staticmethod sample_from_datasets( datasets, weights=None, seed=None, stop_on_empty_dataset=False ) ``` Samples elements at random from the datasets in `datasets`. Creates a dataset by interleaving elements of `datasets` with probability `weights[i]` of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets: ``` dataset1 = tf.data.Dataset.range(0, 3) dataset2 = tf.data.Dataset.range(100, 103) ``` Suppose that we sample from these 2 datasets with the following weights: ``` sample_dataset = tf.data.Dataset.sample_from_datasets( [dataset1, dataset2], weights=[0.5, 0.5]) ``` One possible outcome of the elements in `sample_dataset` is: ``` print(list(sample_dataset.as_numpy_iterator())) # [100, 0, 1, 101, 2, 102] ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](../dataset) objects with compatible structure. | | `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. 
| | `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. | | Returns | | A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. | | Raises | | `TypeError` | If the `datasets` or `weights` arguments have the wrong type. | | `ValueError` | * If `datasets` is empty, or * If `weights` is specified and does not match the length of `datasets`. | ### `scan` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130) ``` scan( initial_state, scan_func, name=None ) ``` A transformation that scans a function across an input dataset. This transformation is a stateful relative of [`tf.data.Dataset.map`](../dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`. ``` dataset = tf.data.Dataset.range(10) initial_state = tf.constant(0, dtype=tf.int64) scan_func = lambda state, i: (state + i, state + i) dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func) list(dataset.as_numpy_iterator()) [0, 1, 3, 6, 10, 15, 21, 28, 36, 45] ``` | Args | | `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. | | `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `shard` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685) ``` shard( num_shards, index, name=None ) ``` Creates a `Dataset` that includes only 1/`num_shards` of this dataset. `shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i. ``` A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] ``` This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: ``` d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` #### Important caveats: * Be sure to shard before you use any randomizing operator (such as shuffle); the sketch after this list shows what can go wrong. * Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. 
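To see why the first caveat matters, here is a sketch of the failure mode (assuming an unseeded shuffle; the exact output varies from run to run):

```
# Anti-pattern: shuffling before sharding. Each shard below iterates the
# unseeded shuffle independently, so the two shards are drawn from two
# different orderings and need not partition the data.
ds = tf.data.Dataset.range(6).shuffle(6)
shard_0 = ds.shard(num_shards=2, index=0)
shard_1 = ds.shard(num_shards=2, index=1)
print(sorted(list(shard_0.as_numpy_iterator()) +
             list(shard_1.as_numpy_iterator())))
# May print duplicates and missing values, e.g. [0, 2, 2, 3, 4, 4].
# Sharding first, then shuffling each shard, avoids this.
```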
The following is an example of an efficient sharding strategy within a complete pipeline: ``` d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` | Args | | `num_shards` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of shards operating in parallel. | | `index` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the worker index. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `InvalidArgumentError` | if `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing in a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) | ### `shuffle` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523) ``` shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ``` Randomly shuffles the elements of this dataset. This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000-element buffer. `reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # [1, 0, 2, 1, 0, 2] ``` In TF 2.0, [`tf.data.Dataset`](../dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 2, 0] ``` ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 0, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements from this dataset from which the new dataset will sample. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. 
(Defaults to `True`.) | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `skip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616) ``` skip( count, name=None ) ``` Creates a `Dataset` that skips `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] ``` | Args | | `count` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `snapshot` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099) ``` snapshot( path, compression='AUTO', reader_func=None, shard_func=None, name=None ) ``` API to persist the output of the input dataset. The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. <https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature. Users can specify various options to control the behavior of snapshot, including how snapshots are read and written, by passing user-defined functions to the `reader_func` and `shard_func` parameters. `shard_func` is a user-specified function that maps input elements to snapshot shards. Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written. ``` dataset = ... dataset = dataset.enumerate() dataset = dataset.snapshot("/path/to/snapshot/dir", shard_func=lambda x, y: x % NUM_SHARDS, ...) dataset = dataset.map(lambda x, y: y) ``` `reader_func` is a user-specified function that accepts a single argument: a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of the shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism. Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets: ``` def user_reader_func(datasets): # shuffle the datasets splits datasets = datasets.shuffle(NUM_CORES) # read datasets in parallel and interleave their elements return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = dataset.snapshot("/path/to/snapshot/dir", reader_func=user_reader_func) ``` By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data. | Args | | `path` | Required. 
A directory to use for storing / loading the snapshot to / from. | | `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. | | `reader_func` | Optional. A function to control how to read data from snapshot shards. | | `shard_func` | Optional. A function to control how to shard data when writing a snapshot. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `take` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596) ``` take( count, name=None ) ``` Creates a `Dataset` with at most `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `count` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `take_while` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150) ``` take_while( predicate, name=None ) ``` A transformation that stops dataset iteration based on a `predicate`. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take_while(lambda x: x < 5) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../tf#bool) tensor. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unbatch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698) ``` unbatch( name=None ) ``` Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ``` elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] ``` > > **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unique` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173) ``` unique( name=None ) ``` A transformation that discards duplicate elements of a `Dataset`. Use this transformation to produce a dataset that contains one instance of each unique element in the input. 
For example: ``` dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1]) dataset = dataset.unique() sorted(list(dataset.as_numpy_iterator())) [1, 2, 37] ``` > > **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../tf#int32), [`tf.int64`](../../../tf#int64) or [`tf.string`](../../../tf#string) type. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426) ``` window( size, shift=None, stride=1, drop_remainder=False, name=None ) ``` Returns a dataset of "windows". Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`). #### For example: ``` dataset = tf.data.Dataset.range(7).window(3) for window in dataset: print(window) <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> ``` Since windows are datasets, they can be iterated over: ``` for window in dataset: print([item.numpy() for item in window]) [0, 1, 2] [3, 4, 5] [6] ``` #### Shift The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. ``` dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5] [4, 5, 6] ``` #### Stride The `stride` argument determines the stride between input elements within a window. ``` dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] ``` #### Nested elements When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them. #### The type signature is: ``` def window( self: Dataset[Nest[T]], ... 
) -> Dataset[Nest[Dataset[T]]] ``` Applying `window` to a `Dataset` of tuples gives a tuple of windows: ``` dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])) dataset = dataset.window(2) windows = next(iter(dataset)) windows (<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>, <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>) ``` ``` def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(to_numpy(windows[0]), to_numpy(windows[1])) [1, 2] [6, 7] [3, 4] [8, 9] [5] [10] ``` Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`: ``` dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]}) dataset = dataset.window(2) def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(tf.nest.map_structure(to_numpy, windows)) {'a': [1, 2], 'b': [4, 5], 'c': [7, 8]} {'a': [3], 'b': [6], 'c': [9]} ``` #### Flatten a dataset of windows The [`Dataset.flat_map`](../dataset#flat_map) and [`Dataset.interleave`](../dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset. The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially. For example, to turn each window into a dense tensor: ``` size = 3 dataset = tf.data.Dataset.range(7).window(size, shift=1, drop_remainder=True) batched = dataset.flat_map(lambda x: x.batch(size)) for batch in batched: print(batch.numpy()) [0 1 2] [1 2 3] [2 3 4] [3 4 5] [4 5 6] ``` | Args | | `size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. | | `shift` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. | | `stride` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. | ### `with_options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726) ``` with_options( options, name=None ) ``` Returns a new [`tf.data.Dataset`](../dataset) with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values. ``` ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.deterministic = False ds = ds.with_options(options) ``` | Args | | `options` | A [`tf.data.Options`](../options) that identifies the options to use. | | `name` | (Optional.)
A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` with the given options. | | Raises | | `ValueError` | when an option is set more than once to a non-default value | ### `zip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259) ``` @staticmethod zip( datasets, name=None ) ``` Creates a `Dataset` by zipping together the given datasets. This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). ``` # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] ``` | Args | | `datasets` | A (nested) structure of datasets. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `__bool__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __bool__() ``` ### `__iter__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L481-L497) ``` __iter__() ``` Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. | Returns | | A [`tf.data.Iterator`](../iterator) for the elements of this dataset. | | Raises | | `RuntimeError` | If not inside of tf.function and not executing eagerly. | ### `__len__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527) ``` __len__() ``` Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../dataset#cardinality) instead. | Returns | | An integer representing the length of the dataset. | | Raises | | `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. | ### `__nonzero__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __nonzero__() ```
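To make the `__len__` versus `cardinality` behavior described above concrete, here is a minimal sketch (assuming eager execution; the variable names are illustrative):

```
import tensorflow as tf

dataset = tf.data.Dataset.range(5)
print(len(dataset))  # 5: the length is known and finite in eager mode

# `cardinality` also covers the cases where `len()` would raise a RuntimeError.
infinite = dataset.repeat()
print(infinite.cardinality() == tf.data.INFINITE_CARDINALITY)
# tf.Tensor(True, shape=(), dtype=bool)

filtered = dataset.filter(lambda x: x % 2 == 0)  # length unknown before iterating
print(filtered.cardinality() == tf.data.UNKNOWN_CARDINALITY)
# tf.Tensor(True, shape=(), dtype=bool)
```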
tensorflow tf.data.experimental.save tf.data.experimental.save ========================= Saves the content of the given dataset. ``` tf.data.experimental.save( dataset, path, compression=None, shard_func=None, checkpoint_args=None ) ``` #### Example usage: ``` import os import tempfile path = os.path.join(tempfile.gettempdir(), "saved_data") # Save a dataset dataset = tf.data.Dataset.range(2) tf.data.experimental.save(dataset, path) new_dataset = tf.data.experimental.load(path) for elem in new_dataset: print(elem) tf.Tensor(0, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) ``` The saved dataset is saved in multiple file "shards". By default, the dataset output is divided into shards in a round-robin fashion, but custom sharding can be specified via the `shard_func` function. For example, you can save the dataset using a single shard as follows: ``` dataset = make_dataset() def custom_shard_func(element): return 0 tf.data.experimental.save( path="/path/to/data", ..., shard_func=custom_shard_func) ``` To enable checkpointing, pass in `checkpoint_args` to the `save` method as follows: ``` dataset = tf.data.Dataset.range(100) save_dir = "..." checkpoint_prefix = "..." step_counter = tf.Variable(0, trainable=False) checkpoint_args = { "checkpoint_interval": 50, "step_counter": step_counter, "directory": checkpoint_prefix, "max_to_keep": 20, } tf.data.experimental.save(dataset, save_dir, checkpoint_args=checkpoint_args) ``` > > **Note:** The directory layout and file format used for saving the dataset are considered an implementation detail and may change. For this reason, datasets saved through [`tf.data.experimental.save`](save) should only be consumed through [`tf.data.experimental.load`](load), which is guaranteed to be backwards compatible. > | Args | | `dataset` | The dataset to save. | | `path` | Required. A directory to use for saving the dataset. | | `compression` | Optional. The algorithm to use to compress data when writing it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`. | | `shard_func` | Optional. A function to control the mapping of dataset elements to file shards. The function is expected to map elements of the input dataset to int64 shard IDs. If present, the function will be traced and executed as graph computation. | | `checkpoint_args` | Optional args for checkpointing which will be passed into the [`tf.train.CheckpointManager`](../../train/checkpointmanager). If `checkpoint_args` are not specified, then checkpointing will not be performed. The `save()` implementation creates a [`tf.train.Checkpoint`](../../train/checkpoint) object internally, so users should not set the `checkpoint` argument in `checkpoint_args`. | | Raises | | ValueError if `checkpoint` is passed into `checkpoint_args`. | tensorflow Module: tf.data.experimental.service Module: tf.data.experimental.service ==================================== API for using the tf.data service. #### This module contains: 1. tf.data server implementations for running the tf.data service. 2. APIs for registering datasets with the tf.data service and reading from the registered datasets. The tf.data service provides the following benefits: * Horizontal scaling of tf.data input pipeline processing to solve input bottlenecks. * Data coordination for distributed training. Coordinated reads enable all replicas to train on similar-length examples across each global training step, improving step times in synchronous training. * Dynamic balancing of data across training replicas.
``` dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] worker = tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode=tf.data.experimental.service.ShardingPolicy.OFF, service=dispatcher.target)) print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` Setup ----- This section goes over how to set up the tf.data service. ### Run tf.data servers The tf.data service consists of one dispatch server and `n` worker servers. tf.data servers should be brought up alongside your training jobs, then brought down when the jobs are finished. Use [`tf.data.experimental.service.DispatchServer`](service/dispatchserver) to start a dispatch server, and [`tf.data.experimental.service.WorkerServer`](service/workerserver) to start worker servers. Servers can be run in the same process for testing purposes, or scaled up on separate machines. See <https://github.com/tensorflow/ecosystem/tree/master/data_service> for an example of using Google Kubernetes Engine (GKE) to manage the tf.data service. Note that the server implementation in [tf\_std\_data\_server.py](https://github.com/tensorflow/ecosystem/blob/master/data_service/tf_std_data_server.py) is not GKE-specific, and can be used to run the tf.data service in other contexts. ### Custom ops If your dataset uses custom ops, these ops need to be made available to tf.data servers by calling [load\_op\_library](https://www.tensorflow.org/api_docs/python/tf/load_op_library) from the dispatcher and worker processes at startup. Usage ----- Users interact with tf.data service by programmatically registering their datasets with tf.data service, then creating datasets that read from the registered datasets. The [register\_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/register_dataset) function registers a dataset, then the [from\_dataset\_id](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/from_dataset_id) function creates a new dataset which reads from the registered dataset. The [distribute](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/distribute) function wraps `register_dataset` and `from_dataset_id` into a single convenient transformation which registers its input dataset and then reads from it. `distribute` enables tf.data service to be used with a one-line code change. However, it assumes that the dataset is created and consumed by the same entity and this assumption might not always be valid or desirable. In particular, in certain scenarios, such as distributed training, it might be desirable to decouple the creation and consumption of the dataset (via `register_dataset` and `from_dataset_id` respectively) to avoid having to create the dataset on each of the training workers. ### Example #### `distribute` To use the `distribute` transformation, apply the transformation after the prefix of your input pipeline that you would like to be executed using tf.data service (typically at the end). ``` dataset = ... # Define your dataset here. 
# Move dataset processing from the local machine to the tf.data service dataset = dataset.apply( tf.data.experimental.service.distribute( processing_mode=tf.data.experimental.service.ShardingPolicy.OFF, service=FLAGS.tf_data_service_address, job_name="shared_job")) # Any transformations added after `distribute` will be run on the local machine. dataset = dataset.prefetch(1) ``` The above code will create a tf.data service "job", which iterates through the dataset to generate data. To share the data from a job across multiple clients (e.g. when using TPUStrategy or MultiWorkerMirroredStrategy), set a common `job_name` across all clients. #### `register_dataset` and `from_dataset_id` `register_dataset` registers a dataset with the tf.data service, returning a dataset id for the registered dataset. `from_dataset_id` creates a dataset that reads from the registered dataset. These APIs can be used to reduce dataset building time for distributed training. Instead of building the dataset on all training workers, we can build the dataset just once and then register the dataset using `register_dataset`. Then all workers can call `from_dataset_id` without needing to build the dataset themselves. ``` dataset = ... # Define your dataset here. dataset_id = tf.data.experimental.service.register_dataset( service=FLAGS.tf_data_service_address, dataset=dataset) # Use `from_dataset_id` to create per-worker datasets. per_worker_datasets = {} for worker in workers: per_worker_datasets[worker] = tf.data.experimental.service.from_dataset_id( processing_mode=tf.data.experimental.service.ShardingPolicy.OFF, service=FLAGS.tf_data_service_address, dataset_id=dataset_id, job_name="shared_job") ``` ### Processing Modes `processing_mode` specifies how to shard a dataset among tf.data service workers. tf.data service supports `OFF`, `DYNAMIC`, `FILE`, `DATA`, `FILE_OR_DATA`, `HINT` sharding policies. OFF: No sharding will be performed. The entire input dataset will be processed independently by each of the tf.data service workers. For this reason, it is important to shuffle data (e.g. filenames) non-deterministically, so that each worker will process the elements of the dataset in a different order. This mode can be used to distribute datasets that aren't splittable. If a worker is added or restarted during ShardingPolicy.OFF processing, the worker will instantiate a new copy of the dataset and begin producing data from the beginning. #### Dynamic Sharding DYNAMIC: In this mode, tf.data service divides the dataset into two components: a source component that generates "splits" such as filenames, and a processing component that takes splits and outputs dataset elements. The source component is executed in a centralized fashion by the tf.data service dispatcher, which generates different splits of input data. The processing component is executed in a parallel fashion by the tf.data service workers, each operating on a different set of input data splits. For example, consider the following dataset: ``` dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(TFRecordDataset) dataset = dataset.map(preprocess_fn) dataset = dataset.batch(batch_size) dataset = dataset.apply( tf.data.experimental.service.distribute( processing_mode=tf.data.experimental.service.ShardingPolicy.DYNAMIC, ...)) ``` The `from_tensor_slices` will be run on the dispatcher, while the `interleave`, `map`, and `batch` will be run on tf.data service workers. The workers will pull filenames from the dispatcher for processing. 
To process a dataset with dynamic sharding, the dataset must have a splittable source, and all of its transformations must be compatible with splitting. While most sources and transformations support splitting, there are exceptions, such as custom datasets which may not implement the splitting API. Please file a Github issue if you would like to use distributed epoch processing for a currently unsupported dataset source or transformation. If no workers are restarted during training, dynamic sharding mode will visit every example exactly once. If workers are restarted during training, the splits they were processing will not be fully visited. The dispatcher maintains a cursor through the dataset's splits. Assuming fault tolerance is enabled (see "Fault Tolerance" below), the dispatcher will store cursor state in write-ahead logs so that the cursor can be restored in case the dispatcher is restarted mid-training. This provides an at-most-once visitation guarantee in the presence of server restarts. #### Static Sharding The following are static sharding policies. The semantics are similar to [`tf.data.experimental.AutoShardPolicy`](autoshardpolicy). These policies require: * The tf.data service cluster is configured with a fixed list of workers in DispatcherConfig. * Each client only reads from the local tf.data service worker. If a worker is restarted while performing static sharding, the worker will begin processing its shard again from the beginning. FILE: Shards by input files (i.e. each worker will get a fixed set of files to process). When this option is selected, make sure that there are at least as many files as workers. If there are fewer input files than workers, a runtime error will be raised. DATA: Shards by elements produced by the dataset. Each worker will process the whole dataset and discard the portion that is not for itself. Note that for this mode to correctly partition the dataset elements, the dataset needs to produce elements in a deterministic order. FILE\_OR\_DATA: Attempts FILE-based sharding, falling back to DATA-based sharding on failure. HINT: Looks for the presence of `shard(SHARD_HINT, ...)` which is treated as a placeholder to replace with `shard(num_workers, worker_index)`. For backwards compatibility, `processing_mode` may also be set to the strings `"parallel_epochs"` or `"distributed_epoch"`, which are respectively equivalent to [`ShardingPolicy.OFF`](service/shardingpolicy#OFF) and [`ShardingPolicy.DYNAMIC`](service/shardingpolicy#DYNAMIC). ### Coordinated Data Read By default, when multiple consumers read from the same job, they receive data on a first-come first-served basis. In some use cases, it is advantageous to coordinate the consumers. At each step, consumers read data from the same worker. For example, the tf.data service can be used to coordinate example sizes across a cluster during synchronous training, so that during each step all replicas train on similar-sized elements. To achieve this, define a dataset which generates rounds of `num_consumers` consecutive similar-sized batches, then enable coordinated reads by setting `consumer_index` and `num_consumers`. > > **Note:** To keep consumers in sync, coordinated reads require that the dataset have infinite cardinality. You can get this by adding `.repeat()` at the end of the dataset definition. > ### Jobs A tf.data service "job" refers to the process of reading from a dataset managed by the tf.data service, using one or more data consumers.
Jobs are created when iterating over datasets that read from tf.data service. The data produced by a job is determined by (1) the dataset associated with the job and (2) the job's processing mode. For example, if a job is created for the dataset [`Dataset.range(5)`](../dataset#range), and the processing mode is [`ShardingPolicy.OFF`](service/shardingpolicy#OFF), each tf.data worker will produce the elements `{0, 1, 2, 3, 4}` for the job, resulting in the job producing `5 * num_workers` elements. If the processing mode is [`ShardingPolicy.DYNAMIC`](service/shardingpolicy#DYNAMIC), the job will only produce `5` elements. One or more consumers can consume data from a job. By default, jobs are "anonymous", meaning that only the consumer which created the job can read from it. To share the output of a job across multiple consumers, you can set a common `job_name`. ### Fault Tolerance By default, the tf.data dispatch server stores its state in-memory, making it a single point of failure during training. To avoid this, pass `fault_tolerant_mode=True` when creating your `DispatchServer`. Dispatcher fault tolerance requires `work_dir` to be configured and accessible from the dispatcher both before and after restart (e.g. a GCS path). With fault tolerant mode enabled, the dispatcher will journal its state to the work directory so that no state is lost when the dispatcher is restarted. WorkerServers may be freely restarted, added, or removed during training. At startup, workers will register with the dispatcher and begin processing all outstanding jobs from the beginning. ### Usage with tf.distribute tf.distribute is the TensorFlow API for distributed training. There are several ways to use tf.data with tf.distribute: `strategy.experimental_distribute_dataset`, `strategy.distribute_datasets_from_function`, and (for PSStrategy) `coordinator.create_per_worker_dataset`. The following sections give code examples for each. In general, we recommend using `tf.data.experimental.service.{register_dataset,from_dataset_id}` over [`tf.data.experimental.service.distribute`](service/distribute) for two reasons: * The dataset only needs to be constructed and optimized once, instead of once per worker. This can significantly reduce startup time, because the current `experimental_distribute_dataset` and `distribute_datasets_from_function` implementations create and optimize worker datasets sequentially. * If a dataset depends on lookup tables or variables that are only present on one host, the dataset needs to be registered from that host. Typically this only happens when resources are placed on the chief or worker 0. Registering the dataset from the chief will avoid issues with depending on remote resources. #### strategy.experimental\_distribute\_dataset Nothing special is required when using `strategy.experimental_distribute_dataset`, just apply `register_dataset` and `from_dataset_id` as above, making sure to specify a `job_name` so that all workers consume from the same tf.data service job. ``` dataset = ... # Define your dataset here.
dataset_id = tf.data.experimental.service.register_dataset( service=FLAGS.tf_data_service_address, dataset=dataset) dataset = tf.data.experimental.service.from_dataset_id( processing_mode=tf.data.experimental.service.ShardingPolicy.OFF, service=FLAGS.tf_data_service_address, dataset_id=dataset_id, job_name="shared_job") dataset = strategy.experimental_distribute_dataset(dataset) ``` #### strategy.distribute\_datasets\_from\_function First, make sure the dataset produced by the `dataset_fn` does not depend on the `input_context` for the training worker on which it is run. Instead of each worker building its own (sharded) dataset, one worker should register an unsharded dataset, and the remaining workers should consume data from that dataset. ``` dataset = dataset_fn() dataset_id = tf.data.experimental.service.register_dataset( service=FLAGS.tf_data_service_address, dataset=dataset) def new_dataset_fn(input_context): del input_context return tf.data.experimental.service.from_dataset_id( processing_mode=tf.data.experimental.service.ShardingPolicy.OFF, service=FLAGS.tf_data_service_address, dataset_id=dataset_id, job_name="shared_job") dataset = strategy.distribute_datasets_from_function(new_dataset_fn) ``` #### coordinator.create\_per\_worker\_dataset `create_per_worker_dataset` works the same as `distribute_datasets_from_function`. ``` dataset = dataset_fn() dataset_id = tf.data.experimental.service.register_dataset( service=FLAGS.tf_data_service_address, dataset=dataset) def new_dataset_fn(input_context): del input_context return tf.data.experimental.service.from_dataset_id( processing_mode=tf.data.experimental.service.ShardingPolicy.OFF, service=FLAGS.tf_data_service_address, dataset_id=dataset_id, job_name="shared_job") dataset = coordinator.create_per_worker_dataset(new_dataset_fn) ``` Limitations ----------- * Python-based data processing: Datasets which use Python-based data processing (e.g. [`tf.py_function`](../../py_function), [`tf.numpy_function`](../../numpy_function), or [`tf.data.Dataset.from_generator`](../dataset#from_generator)) are currently not supported. * Non-Serializable Resources: Datasets may only depend on TF resources that support serialization. Serialization is currently supported for lookup tables and variables. If your dataset depends on a TF resource that cannot be serialized, please file a Github issue. * Remote Resources: If a dataset depends on a resource, the dataset must be registered from the same process that created the resource (e.g. the "chief" job of ParameterServerStrategy). Classes ------- [`class DispatchServer`](service/dispatchserver): An in-process tf.data service dispatch server. [`class DispatcherConfig`](service/dispatcherconfig): Configuration class for tf.data service dispatchers. [`class ShardingPolicy`](service/shardingpolicy): Specifies how to shard data among tf.data service workers. [`class WorkerConfig`](service/workerconfig): Configuration class for tf.data service workers. [`class WorkerServer`](service/workerserver): An in-process tf.data service worker server. Functions --------- [`distribute(...)`](service/distribute): A transformation that moves dataset processing to the tf.data service. [`from_dataset_id(...)`](service/from_dataset_id): Creates a dataset which reads data from the tf.data service. [`register_dataset(...)`](service/register_dataset): Registers a dataset with the tf.data service.
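As an end-to-end sketch of the APIs above, the following runs the dispatcher, the worker, dataset registration, and reading all in a single process. This setup is for illustration and testing only; a production deployment would run the servers on separate machines:

```
import tensorflow as tf

# Bring up an in-process dispatcher and worker (testing only).
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher_address))

# Register the dataset once...
dataset = tf.data.Dataset.range(5)
dataset_id = tf.data.experimental.service.register_dataset(
    service=dispatcher.target, dataset=dataset)

# ...then any number of consumers can read from the registered dataset.
reader = tf.data.experimental.service.from_dataset_id(
    processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
    service=dispatcher.target,
    dataset_id=dataset_id,
    element_spec=dataset.element_spec,
    job_name="shared_job")
print(sorted(reader.as_numpy_iterator()))  # [0, 1, 2, 3, 4]
```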
tensorflow tf.data.experimental.parallel_interleave tf.data.experimental.parallel\_interleave ========================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/interleave_ops.py#L23-L86) | A parallel version of the [`Dataset.interleave()`](../dataset#interleave) transformation. (deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.parallel_interleave`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/parallel_interleave) ``` tf.data.experimental.parallel_interleave( map_func, cycle_length, block_length=1, sloppy=False, buffer_output_elements=None, prefetch_input_elements=None ) ``` `parallel_interleave()` maps `map_func` across its input to produce nested datasets, and outputs their elements interleaved. Unlike [`tf.data.Dataset.interleave`](../dataset#interleave), it gets elements from `cycle_length` nested datasets in parallel, which increases the throughput, especially in the presence of stragglers. Furthermore, the `sloppy` argument can be used to improve performance, by relaxing the requirement that the outputs are produced in a deterministic order, and allowing the implementation to skip over nested datasets whose elements are not readily available when requested. #### Example usage: ``` # Preprocess 4 files concurrently. filenames = tf.data.Dataset.list_files("/path/to/data/train*.tfrecords") dataset = filenames.apply( tf.data.experimental.parallel_interleave( lambda filename: tf.data.TFRecordDataset(filename), cycle_length=4)) ``` | Args | | `map_func` | A function mapping a nested structure of tensors to a `Dataset`. | | `cycle_length` | The number of input `Dataset`s to interleave from in parallel. | | `block_length` | The number of consecutive elements to pull from an input `Dataset` before advancing to the next input `Dataset`. | | `sloppy` | A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If `sloppy` is `None`, the [`tf.data.Options.deterministic`](../options#deterministic) dataset option (`True` by default) is used to decide whether to enforce a deterministic order. | | `buffer_output_elements` | The number of elements each iterator being interleaved should buffer (similar to the `.prefetch()` transformation for each interleaved iterator). | | `prefetch_input_elements` | The number of input elements to transform to iterators before they are needed for interleaving. | | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | tensorflow tf.data.experimental.AutotuneAlgorithm tf.data.experimental.AutotuneAlgorithm ====================================== Represents the type of autotuning algorithm to use. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.AutotuneAlgorithm`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/AutotuneAlgorithm) DEFAULT: The default behavior is implementation-specific and may change over time. HILL\_CLIMB: In each optimization step, this algorithm chooses the optimal parameter and increases its value by 1. GRADIENT\_DESCENT: In each optimization step, this algorithm updates the parameter values in the optimal direction.
MAX\_PARALLELISM: Similar to HILL\_CLIMB but uses a relaxed stopping condition, allowing the optimization to oversubscribe the CPU. | Class Variables | | DEFAULT | `<AutotuneAlgorithm.DEFAULT: 0>` | | GRADIENT\_DESCENT | `<AutotuneAlgorithm.GRADIENT_DESCENT: 2>` | | HILL\_CLIMB | `<AutotuneAlgorithm.HILL_CLIMB: 1>` | | MAX\_PARALLELISM | `<AutotuneAlgorithm.MAX_PARALLELISM: 3>` | tensorflow tf.data.experimental.CheckpointInputPipelineHook tf.data.experimental.CheckpointInputPipelineHook ================================================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/iterator_ops.py#L106-L296) | Checkpoints input pipeline state every N steps or seconds. Inherits From: [`SessionRunHook`](../../estimator/sessionrunhook) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.CheckpointInputPipelineHook`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/CheckpointInputPipelineHook) ``` tf.data.experimental.CheckpointInputPipelineHook( estimator, external_state_policy=None ) ``` This hook saves the state of the iterators in the `Graph` so that when training is resumed the input pipeline continues from where it left off. This could potentially avoid overfitting in certain pipelines where the number of training steps per eval is small compared to the dataset size, or if the training pipeline is pre-empted. Differences from `CheckpointSaverHook`: 1. Saves only the input pipelines in the "iterators" collection and not the global variables or other saveable objects. 2. Does not write the `GraphDef` and `MetaGraphDef` to the summary. Example of checkpointing the training pipeline: ``` est = tf.estimator.Estimator(model_fn) while True: est.train( train_input_fn, hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)], steps=train_steps_per_eval) # Note: We do not pass the hook here. metrics = est.evaluate(eval_input_fn) if should_stop_the_training(metrics): break ``` This hook should be used if the input pipeline state needs to be saved separately from the model checkpoint. Doing so may be useful for a few reasons: 1. The input pipeline checkpoint may be large, if there are large shuffle or prefetch buffers for instance, and may bloat the checkpoint size. 2. If the input pipeline is shared between training and validation, restoring the checkpoint during validation may override the validation input pipeline. For saving the input pipeline checkpoint alongside the model weights use [`tf.data.experimental.make_saveable_from_iterator`](make_saveable_from_iterator) directly to create a `SaveableObject` and add to the `SAVEABLE_OBJECTS` collection. Note, however, that you will need to be careful not to restore the training iterator during eval. You can do that by not adding the iterator to the SAVEABLE\_OBJECTS collection when building the eval graph. | Args | | `estimator` | Estimator. | | `external_state_policy` | A string that identifies how to handle input pipelines that depend on external state. Possible values are 'ignore': The external state is silently ignored. 'warn': The external state is ignored, logging a warning. 'fail': The operation fails upon encountering external state. By default we set it to 'fail'. | | Raises | | `ValueError` | One of `save_steps` or `save_secs` should be set. | | `ValueError` | At most one of saver or scaffold should be set.
| | `ValueError` | If `external_state_policy` is not one of 'warn', 'ignore' or 'fail'. | Methods ------- ### `after_create_session` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/iterator_ops.py#L246-L249) ``` after_create_session( session, coord ) ``` Called when a new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which `begin` is called: * When this is called, the graph is finalized and ops can no longer be added to the graph. * This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session. | Args | | `session` | A TensorFlow Session that has been created. | | `coord` | A Coordinator object which keeps track of all threads. | ### `after_run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/iterator_ops.py#L292-L293) ``` after_run( run_context, run_values ) ``` Called after each call to run(). The `run_values` argument contains results of requested ops/tensors by `before_run()`. The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration. If `session.run()` raises any exceptions then `after_run()` is not called. | Args | | `run_context` | A `SessionRunContext` object. | | `run_values` | A SessionRunValues object. | ### `before_run` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/iterator_ops.py#L286-L290) ``` before_run( run_context ) ``` Called before each call to run(). You can return from this call a `SessionRunArgs` object indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call. The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested op/tensors, the TensorFlow Session. At this point the graph is finalized and you cannot add ops. | Args | | `run_context` | A `SessionRunContext` object. | | Returns | | None or a `SessionRunArgs` object. | ### `begin` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/iterator_ops.py#L229-L244) ``` begin() ``` Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the `begin()` call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of `begin()` on the same graph should not change the graph. ### `end` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/iterator_ops.py#L295-L296) ``` end( session ) ``` Called at the end of the session. The `session` argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If `session.run()` raises an exception other than OutOfRangeError or StopIteration then `end()` is not called. Note the difference between `end()` and `after_run()` behavior when `session.run()` raises OutOfRangeError or StopIteration. In that case `end()` is called but `after_run()` is not called.
| Args | | `session` | A TensorFlow Session that will be soon closed. | tensorflow tf.data.experimental.DistributeOptions tf.data.experimental.DistributeOptions ====================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/options.py#L252-L290) | Represents options for distributed data processing. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.DistributeOptions`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/DistributeOptions) ``` tf.data.experimental.DistributeOptions() ``` You can set the distribution options of a dataset through the `experimental_distribute` property of [`tf.data.Options`](../options); the property is an instance of [`tf.data.experimental.DistributeOptions`](distributeoptions). ``` options = tf.data.Options() options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF dataset = dataset.with_options(options) ``` | Attributes | | `auto_shard_policy` | The type of sharding to use. See [`tf.data.experimental.AutoShardPolicy`](autoshardpolicy) for additional information. | | `num_devices` | The number of devices attached to this input pipeline. This will be automatically set by `MultiDeviceIterator`. | Methods ------- ### `__eq__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L38-L44) ``` __eq__( other ) ``` Return self==value. ### `__ne__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L46-L50) ``` __ne__( other ) ``` Return self!=value. tensorflow tf.data.experimental.dense_to_sparse_batch tf.data.experimental.dense\_to\_sparse\_batch ============================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/batching.py#L98-L145) | A transformation that batches ragged elements into [`tf.sparse.SparseTensor`](../../sparse/sparsetensor)s. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.dense_to_sparse_batch`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/dense_to_sparse_batch) ``` tf.data.experimental.dense_to_sparse_batch( batch_size, row_shape ) ``` Like [`Dataset.padded_batch()`](../dataset#padded_batch), this transformation combines multiple consecutive elements of the dataset, which might have different shapes, into a single element. The resulting element has three components (`indices`, `values`, and `dense_shape`), which comprise a [`tf.sparse.SparseTensor`](../../sparse/sparsetensor) that represents the same data. The `row_shape` represents the dense shape of each row in the resulting [`tf.sparse.SparseTensor`](../../sparse/sparsetensor), to which the effective batch size is prepended. For example: ``` # NOTE: The following examples use `{ ... }` to represent the # contents of a dataset. 
a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] } a.apply(tf.data.experimental.dense_to_sparse_batch( batch_size=2, row_shape=[6])) == { ([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices ['a', 'b', 'c', 'a', 'b'], # values [2, 6]), # dense_shape ([[0, 0], [0, 1], [0, 2], [0, 3]], ['a', 'b', 'c', 'd'], [1, 6]) } ``` | Args | | `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `row_shape` | A [`tf.TensorShape`](../../tensorshape) or [`tf.int64`](../../../tf#int64) vector tensor-like object representing the equivalent dense shape of a row in the resulting [`tf.sparse.SparseTensor`](../../sparse/sparsetensor). Each element of this dataset must have the same rank as `row_shape`, and must have size less than or equal to `row_shape` in each dimension. | | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | tensorflow tf.data.experimental.sample_from_datasets tf.data.experimental.sample\_from\_datasets =========================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/interleave_ops.py#L89-L152) | Samples elements at random from the datasets in `datasets`. (deprecated) ``` tf.data.experimental.sample_from_datasets( datasets, weights=None, seed=None, stop_on_empty_dataset=False ) ``` Creates a dataset by interleaving elements of `datasets` with `weight[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets: ``` dataset1 = tf.data.Dataset.range(0, 3) dataset2 = tf.data.Dataset.range(100, 103) ``` Suppose also that we sample from these 2 datasets with the following weights: ``` sample_dataset = tf.data.Dataset.sample_from_datasets( [dataset1, dataset2], weights=[0.5, 0.5]) ``` One possible outcome of elements in sample\_dataset is: ``` print(list(sample_dataset.as_numpy_iterator())) # [100, 0, 1, 101, 2, 102] ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](../dataset) objects with compatible structure. | | `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. | | Returns | | A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. | | Raises | | `TypeError` | If the `datasets` or `weights` arguments have the wrong type. | | `ValueError` | * If `datasets` is empty, or * If `weights` is specified and does not match the length of `datasets`. |
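To see concretely why the docs recommend `stop_on_empty_dataset=True`, here is a minimal sketch (the dataset contents are illustrative):

```
import tensorflow as tf

ones = tf.data.Dataset.from_tensor_slices([1, 1])      # short dataset
zeros = tf.data.Dataset.from_tensor_slices([0] * 100)  # long dataset

# With stop_on_empty_dataset=True, sampling stops once `ones` is exhausted, so
# the realized mix stays close to the requested 50/50 weights. With the default
# of False, every element after `ones` runs dry would come from `zeros`.
balanced = tf.data.Dataset.sample_from_datasets(
    [ones, zeros], weights=[0.5, 0.5], seed=42, stop_on_empty_dataset=True)
print(list(balanced.as_numpy_iterator()))  # e.g. [0, 1, 0, 1] (order is random)
```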
tensorflow tf.data.experimental.make_csv_dataset tf.data.experimental.make\_csv\_dataset ======================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/readers.py#L325-L621) | Reads CSV files into a dataset. ``` tf.data.experimental.make_csv_dataset( file_pattern, batch_size, column_names=None, column_defaults=None, label_name=None, select_columns=None, field_delim=',', use_quote_delim=True, na_value='', header=True, num_epochs=None, shuffle=True, shuffle_buffer_size=10000, shuffle_seed=None, prefetch_buffer_size=None, num_parallel_reads=None, sloppy=False, num_rows_for_inference=100, compression_type=None, ignore_errors=False ) ``` Reads CSV files into a dataset, where each element of the dataset is a (features, labels) tuple that corresponds to a batch of CSV rows. The features dictionary maps feature column names to `Tensor`s containing the corresponding feature data, and labels is a `Tensor` containing the batch's label data. By default, the first rows of the CSV files are expected to be headers listing the column names. If the first rows are not headers, set `header=False` and provide the column names with the `column_names` argument. By default, the dataset is repeated indefinitely, reshuffling the order each time. This behavior can be modified by setting the `num_epochs` and `shuffle` arguments. For example, suppose you have a CSV file containing | Feature\_A | Feature\_B | | --- | --- | | 1 | "a" | | 2 | "b" | | 3 | "c" | | 4 | "d" | ``` # No label column specified dataset = tf.data.experimental.make_csv_dataset(filename, batch_size=2) iterator = dataset.as_numpy_iterator() print(dict(next(iterator))) # prints a dictionary of batched features: # OrderedDict([('Feature_A', array([1, 4], dtype=int32)), # ('Feature_B', array([b'a', b'd'], dtype=object))]) ``` ``` # Set Feature_B as label column dataset = tf.data.experimental.make_csv_dataset( filename, batch_size=2, label_name="Feature_B") iterator = dataset.as_numpy_iterator() print(next(iterator)) # prints (features, labels) tuple: # (OrderedDict([('Feature_A', array([1, 2], dtype=int32))]), # array([b'a', b'b'], dtype=object)) ``` See the [Load CSV data guide](https://www.tensorflow.org/tutorials/load_data/csv) for more examples of using `make_csv_dataset` to read CSV data. | Args | | `file_pattern` | List of files or patterns of file paths containing CSV records. See [`tf.io.gfile.glob`](../../io/gfile/glob) for pattern rules. | | `batch_size` | An int representing the number of records to combine in a single batch. | | `column_names` | An optional list of strings that corresponds to the CSV columns, in order. One per column of the input record. If this is not provided, infers the column names from the first row of the records. These names will be the keys of the features dict of each dataset element. | | `column_defaults` | An optional list of default values for the CSV fields. One item per selected column of the input record. Each item in the list is either a valid CSV dtype (float32, float64, int32, int64, or string), or a `Tensor` with one of the aforementioned types. The tensor can either be a scalar default value (if the column is optional), or an empty tensor (if the column is required). If a dtype is provided instead of a tensor, the column is also treated as required.
If this list is not provided, tries to infer types based on reading the first num\_rows\_for\_inference rows of files specified, and assumes all columns are optional, defaulting to `0` for numeric values and `""` for string values. If both this and `select_columns` are specified, these must have the same lengths, and `column_defaults` is assumed to be sorted in order of increasing column index. | | `label_name` | An optional string corresponding to the label column. If provided, the data for this column is returned as a separate `Tensor` from the features dictionary, so that the dataset complies with the format expected by a `tf.Estimator.train` or `tf.Estimator.evaluate` input function. | | `select_columns` | An optional list of integer indices or string column names that specifies a subset of columns of CSV data to select. If column names are provided, these must correspond to names provided in `column_names` or inferred from the file header lines. When this argument is specified, only a subset of CSV columns will be parsed and returned, corresponding to the columns specified. Using this results in faster parsing and lower memory usage. If both this and `column_defaults` are specified, these must have the same lengths, and `column_defaults` is assumed to be sorted in order of increasing column index. | | `field_delim` | An optional `string`. Defaults to `","`. Char delimiter to separate fields in a record. | | `use_quote_delim` | An optional bool. Defaults to `True`. If false, treats double quotation marks as regular characters inside of the string fields. | | `na_value` | Additional string to recognize as NA/NaN. | | `header` | A bool that indicates whether the first rows of provided CSV files correspond to header lines with column names, and should not be included in the data. | | `num_epochs` | An int specifying the number of times this dataset is repeated. If None, cycles through the dataset forever. | | `shuffle` | A bool that indicates whether the input should be shuffled. | | `shuffle_buffer_size` | Buffer size to use for shuffling. A large buffer size ensures better shuffling, but increases memory usage and startup time. | | `shuffle_seed` | Randomization seed to use for shuffling. | | `prefetch_buffer_size` | An int specifying the number of feature batches to prefetch for performance improvement. Recommended value is the number of batches consumed per training step. Defaults to auto-tune. | | `num_parallel_reads` | Number of threads used to read CSV records from files. If >1, the results will be interleaved. Defaults to `1`. | | `sloppy` | If `True`, reading performance will be improved at the cost of non-deterministic ordering. If `False`, the order of elements produced is deterministic prior to shuffling (elements are still randomized if `shuffle=True`. Note that if the seed is set, then order of elements after shuffling is deterministic). Defaults to `False`. | | `num_rows_for_inference` | Number of rows of a file to use for type inference if record\_defaults is not provided. If None, reads all the rows of all the files. Defaults to 100. | | `compression_type` | (Optional.) A [`tf.string`](../../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no compression. | | `ignore_errors` | (Optional.) If `True`, ignores errors with CSV file parsing, such as malformed data or empty lines, and moves on to the next valid CSV record. Otherwise, the dataset raises an error and stops processing when encountering any invalid records.
Defaults to `False`. | | Returns | | A dataset, where each element is a (features, labels) tuple that corresponds to a batch of `batch_size` CSV rows. The features dictionary maps feature column names to `Tensor`s containing the corresponding column data, and labels is a `Tensor` containing the column data for the label column specified by `label_name`. | | Raises | | `ValueError` | If any of the arguments is malformed. |
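Putting several of these arguments together, here is a minimal sketch; the file pattern, column names, and defaults are hypothetical and assume CSV files with the header `Feature_A,Feature_B,Label`:

```
import tensorflow as tf

dataset = tf.data.experimental.make_csv_dataset(
    "/tmp/train*.csv",                      # hypothetical file pattern
    batch_size=32,
    select_columns=["Feature_A", "Label"],  # parse only these two columns
    column_defaults=[0.0, 0],               # float32 feature, int32 label
    label_name="Label",                     # returned separately as labels
    num_epochs=1,                           # iterate once instead of forever
    shuffle=True)

for features, labels in dataset.take(1):
    print(features["Feature_A"].shape, labels.shape)  # (32,) (32,)
```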
tensorflow tf.data.experimental.enable_debug_mode tf.data.experimental.enable\_debug\_mode ======================================== Enables debug mode for tf.data. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.enable_debug_mode`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/enable_debug_mode) ``` tf.data.experimental.enable_debug_mode() ``` Example usage with pdb module: ``` import tensorflow as tf import pdb tf.data.experimental.enable_debug_mode() def func(x): # Python 3.7 and older requires `pdb.Pdb(nosigint=True).set_trace()` pdb.set_trace() x = x + 1 return x dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.map(func) for item in dataset: print(item) ``` The effect of debug mode is two-fold: 1) Any transformations that would introduce asynchrony, parallelism, or non-determinism to the input pipeline execution will be forced to execute synchronously, sequentially, and deterministically. 2) Any user-defined functions passed into tf.data transformations such as `map` will be wrapped in [`tf.py_function`](../../py_function) so that their body is executed "eagerly" as a Python function as opposed to a traced TensorFlow graph, which is the default behavior. Note that even when debug mode is enabled, the user-defined function is still traced to infer the shape and type of its outputs; as a consequence, any `print` statements or breakpoints will be triggered once during the tracing before the actual execution of the input pipeline. > > **Note:** As the debug mode setting affects the construction of the tf.data input pipeline, it should be enabled before any tf.data definitions. > | Raises | | `ValueError` | When invoked from graph mode. | tensorflow tf.data.experimental.CsvDataset tf.data.experimental.CsvDataset =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/readers.py#L660-L813) | A Dataset comprising lines from one or more CSV files. Inherits From: [`Dataset`](../dataset) ``` tf.data.experimental.CsvDataset( filenames, record_defaults, compression_type=None, buffer_size=None, header=False, field_delim=',', use_quote_delim=True, na_value='', select_cols=None, exclude_cols=None ) ``` The [`tf.data.experimental.CsvDataset`](csvdataset) class provides a minimal CSV Dataset interface. There is also a richer [`tf.data.experimental.make_csv_dataset`](make_csv_dataset) function which provides additional convenience features such as column header parsing, column type-inference, automatic shuffling, and file interleaving. The elements of this dataset correspond to records from the file(s). RFC 4180 format is expected for CSV files (<https://tools.ietf.org/html/rfc4180>) Note that we allow leading and trailing spaces for int or float fields. 
For example, suppose we have a file 'my\_file0.csv' with four CSV columns of different data types: ``` with open('/tmp/my_file0.csv', 'w') as f: f.write('abcdefg,4.28E10,5.55E6,12\n') f.write('hijklmn,-5.3E14,,2\n') ``` We can construct a CsvDataset from it as follows: ``` dataset = tf.data.experimental.CsvDataset( "/tmp/my_file0.csv", [tf.float32, # Required field, use dtype or empty tensor tf.constant([0.0], dtype=tf.float32), # Optional field, default to 0.0 tf.int32, # Required field, use dtype or empty tensor ], select_cols=[1,2,3] # Only parse last three columns ) ``` The expected output of its iterations is: ``` for element in dataset.as_numpy_iterator(): print(element) (4.28e10, 5.55e6, 12) (-5.3e14, 0.0, 2) ``` See <https://www.tensorflow.org/tutorials/load_data/csv#tfdataexperimentalcsvdataset> for more in-depth example usage. | Args | | `filenames` | A [`tf.string`](../../../tf#string) tensor containing one or more filenames. | | `record_defaults` | A list of default values for the CSV fields. Each item in the list is either a valid CSV `DType` (float32, float64, int32, int64, string), or a `Tensor` object with one of the above types. One per column of CSV data, with either a scalar `Tensor` default value for the column if it is optional, or `DType` or empty `Tensor` if required. If both this and `select_cols` are specified, these must have the same lengths, and `record_defaults` is assumed to be sorted in order of increasing column index. If both this and 'exclude\_cols' are specified, the sum of lengths of record\_defaults and exclude\_cols should equal the total number of columns in the CSV file. | | `compression_type` | (Optional.) A [`tf.string`](../../../tf#string) scalar evaluating to one of `""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no compression. | | `buffer_size` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar denoting the number of bytes to buffer while reading files. Defaults to 4MB. | | `header` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar indicating whether the CSV file(s) have header line(s) that should be skipped when parsing. Defaults to `False`. | | `field_delim` | (Optional.) A [`tf.string`](../../../tf#string) scalar containing the delimiter character that separates fields in a record. Defaults to `","`. | | `use_quote_delim` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar. If `False`, treats double quotation marks as regular characters inside of string fields (ignoring RFC 4180, Section 2, Bullet 5). Defaults to `True`. | | `na_value` | (Optional.) A [`tf.string`](../../../tf#string) scalar indicating a value that will be treated as NA/NaN. | | `select_cols` | (Optional.) A sorted list of column indices to select from the input data. If specified, only this subset of columns will be parsed. Defaults to parsing all columns. At most one of `select_cols` and `exclude_cols` can be specified. | | `exclude_cols` | (Optional.) A sorted list of column indices to exclude from the input data. If specified, only the complement of this set of columns will be parsed. Defaults to parsing all columns. At most one of `select_cols` and `exclude_cols` can be specified. | | Raises | | `InvalidArgumentError` | If exclude\_cols is not None and len(exclude\_cols) + len(record\_defaults) does not match the total number of columns in the file(s). | | Attributes | | `element_spec` | The type specification of an element of this dataset.
``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) ``` For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). | Methods ------- ### `apply` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276) ``` apply( transformation_func ) ``` Applies a transformation function to this dataset. `apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`. ``` dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. | | Returns | | `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. | ### `as_numpy_iterator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620) ``` as_numpy_iterator() ``` Returns an iterator which converts all elements of the dataset to numpy. Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) ``` This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 ``` ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] ``` `as_numpy_iterator()` will preserve the nested structure of dataset elements. ``` dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True ``` | Returns | | An iterable over the elements of the dataset, with their tensors converted to numpy arrays. | | Raises | | `TypeError` | if an element contains a non-`Tensor` value. | | `RuntimeError` | if eager execution is not enabled. | ### `batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754) ``` batch( batch_size, drop_remainder=False, num_parallel_calls=None, deterministic=None, name=None ) ``` Combines consecutive elements of this dataset into batches. ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] ``` ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] ``` The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). 
If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. > > **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch. > | Args | | `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `bucket_by_sequence_length` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971) ``` bucket_by_sequence_length( element_length_func, bucket_boundaries, bucket_batch_sizes, padded_shapes=None, padding_values=None, pad_to_bucket_boundary=False, no_padding=False, drop_remainder=False, name=None ) ``` A transformation that buckets elements in a `Dataset` by length. Elements of the `Dataset` are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency. Below is an example to bucketize the input data to the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2. ``` elements = [ [0], [1, 2, 3, 4], [5, 6, 7], [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]] dataset = tf.data.Dataset.from_generator( lambda: elements, tf.int64, output_shapes=[None]) dataset = dataset.bucket_by_sequence_length( element_length_func=lambda elem: tf.shape(elem)[0], bucket_boundaries=[3, 5], bucket_batch_sizes=[2, 2, 2]) for elem in dataset.as_numpy_iterator(): print(elem) [[1 2 3 4] [5 6 7 0]] [[ 7 8 9 10 11 0] [13 14 15 16 19 20]] [[ 0 0] [21 22]] ``` | Args | | `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. | | `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. | | `bucket_batch_sizes` | `list<int>`, batch size per bucket. 
Length should be `len(bucket_boundaries) + 1`. | | `padded_shapes` | Nested structure of [`tf.TensorShape`](../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. | | `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../dataset#padded_batch). Defaults to padding with 0. | | `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. | | `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../sparse/sparsetensor) or of same shape). | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. | ### `cache` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576) ``` cache( filename='', name=None ) ``` Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. > > **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. > ``` dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] ``` When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed. ``` dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] ``` > > **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`. > | Args | | `filename` | A [`tf.string`](../../../tf#string) scalar [`tf.Tensor`](../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. 
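As a minimal sketch of the note above (an illustrative example, not from the original docs, assuming an in-memory cache): placing `shuffle` *after* `cache` keeps the cached data fixed while still randomizing the order on each iteration.

```
import tensorflow as tf

dataset = tf.data.Dataset.range(5)
# Cache the (potentially expensive) upstream pipeline once, then
# shuffle after the cache so each iteration sees a different order.
dataset = dataset.cache().shuffle(buffer_size=5)
for _ in range(2):
    print(list(dataset.as_numpy_iterator()))  # two (likely different) permutations of 0..4
```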
| ### `cardinality` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754) ``` cardinality() ``` Returns the cardinality of the dataset, if known. `cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). ``` dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True ``` | Returns | | A scalar [`tf.int64`](../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../data#UNKNOWN_CARDINALITY) respectively. | ### `choose_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471) ``` @staticmethod choose_from_datasets( datasets, choice_dataset, stop_on_empty_dataset=True ) ``` Creates a dataset that deterministically chooses elements from `datasets`. For example, given the following datasets: ``` datasets = [tf.data.Dataset.from_tensors("foo").repeat(), tf.data.Dataset.from_tensors("bar").repeat(), tf.data.Dataset.from_tensors("baz").repeat()] # Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. choice_dataset = tf.data.Dataset.range(3).repeat(3) result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset) ``` The elements of `result` will be: ``` "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](../dataset) objects with compatible structure. | | `choice_dataset` | A [`tf.data.Dataset`](../dataset) of scalar [`tf.int64`](../../../tf#int64) tensors between `0` and `len(datasets) - 1`. | | `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. | | Returns | | A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. | | Raises | | `TypeError` | If `datasets` or `choice_dataset` has the wrong type. | | `ValueError` | If `datasets` is empty. | ### `concatenate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289) ``` concatenate( dataset, name=None ) ``` Creates a `Dataset` by concatenating the given dataset with this dataset. ``` a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have # compatible element specs. 
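# Concatenating datasets with mismatched element specs raises a TypeError: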
c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> ``` | Args | | `dataset` | `Dataset` to be concatenated. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `enumerate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451) ``` enumerate( start=0, name=None ) ``` Enumerates the elements of this dataset. It is similar to Python's `enumerate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) ``` ``` # The (nested) structure of the input dataset determines the # structure of elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) ``` | Args | | `start` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the start value for enumeration. | | `name` | Optional. A name for the tf.data operations used by `enumerate`. | | Returns | | `Dataset` | A `Dataset`. | ### `filter` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246) ``` filter( predicate, name=None ) ``` Filters this dataset according to `predicate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] ``` | Args | | `predicate` | A function mapping a dataset element to a boolean. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. | ### `flat_map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092) ``` flat_map( map_func, name=None ) ``` Maps `map_func` across this dataset and flattens the result. #### The type signature is: ``` def flat_map( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: ``` dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map( lambda x: tf.data.Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` [`tf.data.Dataset.interleave()`](../dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../dataset#interleave). | Args | | `map_func` | A function mapping a dataset element to a dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`.
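A small illustrative sketch of that equivalence (not from the original docs): both pipelines below yield the same elements in the same order.

```
dataset = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6]])
# `flat_map` and `interleave(cycle_length=1)` are interchangeable here.
flat = dataset.flat_map(tf.data.Dataset.from_tensor_slices)
inter = dataset.interleave(tf.data.Dataset.from_tensor_slices, cycle_length=1)
print(list(flat.as_numpy_iterator()))   # [1, 2, 3, 4, 5, 6]
print(list(inter.as_numpy_iterator()))  # [1, 2, 3, 4, 5, 6]
```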
| ### `from_generator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173) ``` @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None, name=None ) ``` Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments) > > **Note:** The current implementation of [`Dataset.from_generator()`](../dataset#from_generator) uses [`tf.numpy_function`](../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](../dataset#from_generator). Moreover, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment. > The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function). The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified. The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../typespec) objects from the `output_signature` argument: ``` def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] ``` There is also a deprecated way to call `from_generator`, either with the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../tensor) objects with the types defined by `output_types` and with shapes that are either unknown or defined by `output_shapes`. > > **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](../dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](../dataset#from_generator). > > > **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](../dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth. > | Args | | `generator` | A callable object that returns an object that supports the `iter()` protocol.
If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. | | `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. | | `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../tensorshape) objects corresponding to each component of an element yielded by `generator`. | | `args` | (Optional.) A tuple of [`tf.Tensor`](../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. | | `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../typespec) objects corresponding to each component of an element yielded by `generator`. | | `name` | (Optional.) A name for the tf.data operations used by `from_generator`. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensor_slices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809) ``` @staticmethod from_tensor_slices( tensors, name=None ) ``` Creates a `Dataset` whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. ``` # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] ``` ``` # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] ``` ``` # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] ``` ``` # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True ``` ``` # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. 
batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensors` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729) ``` @staticmethod from_tensors( tensors, name=None ) ``` Creates a `Dataset` with a single element, comprising the given tensors. `from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead. ``` dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] ``` ``` # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `get_single_element` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671) ``` get_single_element( name=None ) ``` Returns the single element of the `dataset`. The function enables you to use a [`tf.data.Dataset`](../dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates data transformation on tensors using the optimized [`tf.data.Dataset`](../dataset) abstraction on top of them.
For example, let's consider a `preprocessing_fn` which takes the raw features as input and returns the processed feature along with its label. ``` def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature raw_features = ... # input batch of BATCH_SIZE elements. dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() ``` In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element which is a batch of all the processed features. > > **Note:** The `dataset` should contain only one element. > Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `tf.data.get_single_element()` function is used to skip the iterator creation process and directly output the batch of features. This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../dataset) operations, and you want to use those transformations while serving your model. #### Keras ``` model = ... # A pre-built or custom model class PreprocessingModel(tf.keras.Model): def __init__(self, model): super().__init__(self) self.model = model @tf.function(input_signature=[...]) def serving_fn(self, data): ds = tf.data.Dataset.from_tensor_slices(data) ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) ds = ds.batch(batch_size=BATCH_SIZE) return tf.argmax(self.model(ds.get_single_element()), axis=-1) preprocessing_model = PreprocessingModel(model) your_exported_model_dir = ... # save the model to this path. tf.saved_model.save(preprocessing_model, your_exported_model_dir, signatures={'serving_default': preprocessing_model.serving_fn} ) ``` #### Estimator In the case of estimators, you generally need to define a `serving_input_fn` which would require the features to be processed by the model while inferencing. ``` def serving_input_fn(): raw_feature_spec = ... # Spec for the raw_features input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( raw_feature_spec, default_batch_size=None) serving_input_receiver = input_fn() raw_features = serving_input_receiver.features def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() # Please note that the value of `BATCH_SIZE` should be equal to # the size of the leading dimension of `raw_features`. This ensures # that `dataset` has only one element, which is a pre-requisite for # using `dataset.get_single_element()`. return tf.estimator.export.ServingInputReceiver( processed_features, serving_input_receiver.receiver_tensors) estimator = ... # A pre-built or custom estimator estimator.export_saved_model(your_exported_model_dir, serving_input_fn) ``` | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A nested structure of [`tf.Tensor`](../../tensor) objects, corresponding to the single element of `dataset`. | | Raises | | `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element.
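As a minimal standalone sketch (not one of the original examples): a dataset holding exactly one element can be collapsed back into tensors directly.

```
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]).batch(3)
# The batched dataset contains a single element of shape (3,).
print(dataset.get_single_element())  # tf.Tensor([1 2 3], shape=(3,), dtype=int32)
```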
| ### `group_by_window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824) ``` group_by_window( key_func, reduce_func, window_size=None, window_size_func=None, name=None ) ``` Groups windows of elements by key and reduces them. This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller. You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`. ``` dataset = tf.data.Dataset.range(10) window_size = 5 key_func = lambda x: x%2 reduce_func = lambda key, dataset: dataset.batch(window_size) dataset = dataset.group_by_window( key_func=key_func, reduce_func=reduce_func, window_size=window_size) for elem in dataset.as_numpy_iterator(): print(elem) [0 2 4 6 8] [1 3 5 7 9] ``` | Args | | `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../tf#int64) tensor. | | `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. | | `window_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. | | `window_size_func` | A function mapping a key to a [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. | ### `interleave` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222) ``` interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across this dataset, and interleaves the results. #### The type signature is: ``` def interleave( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` For example, you can use [`Dataset.interleave()`](../dataset#interleave) to process many input files concurrently: ``` # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) ``` The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. 
If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. #### For example: ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] ``` > > **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined. > Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`. ``` filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` | Args | | `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../dataset). | | `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. | | `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. | | `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `list_files` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393) ``` @staticmethod list_files( file_pattern, shuffle=None, seed=None, name=None ) ``` A dataset of all files matching one or more glob patterns. 
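For instance, a short sketch (the directory and files are hypothetical, matching the filesystem example further below):

```
# Suppose /path/to/dir contains a.txt, b.py, and c.py.
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
    print(f.numpy())  # b'/path/to/dir/b.py' and b'/path/to/dir/c.py'
```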
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](../dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems. > > **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order. > #### Example: If we had the following files on our filesystem: * /path/to/dir/a.txt * /path/to/dir/b.py * /path/to/dir/c.py If we pass "/path/to/dir/\*.py" as the `file_pattern`, the dataset would produce: * /path/to/dir/b.py * /path/to/dir/c.py | Args | | `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. | | `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `name` | Optional. A name for the tf.data operations used by `list_files`. | | Returns | | `Dataset` | A `Dataset` of strings corresponding to file names. | ### `map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056) ``` map( map_func, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across the elements of this dataset. This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). For example, `map` can be used for adding 1 to each element, or projecting a subset of element components. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] ``` The input signature of `map_func` is determined by the structure of each element in this dataset. ``` dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) ``` ``` # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] ``` ``` # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) ``` The value or values returned by `map_func` determine the structure of each element in the returned dataset.
``` dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) ``` `map_func` can accept as arguments and return any type of dataset element. Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use [`tf.py_function`](../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` 3) Use [`tf.numpy_function`](../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../py_function) accepts [`tf.Tensor`](../../tensor) whereas [`tf.numpy_function`](../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` Note that the use of [`tf.numpy_function`](../../numpy_function) and [`tf.py_function`](../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value. | Args | | `map_func` | A function mapping a dataset element to another dataset element. 
| | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L446-L464) ``` options() ``` Returns the options for this dataset and its inputs. | Returns | | A [`tf.data.Options`](../options) object representing the dataset options. | ### `padded_batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889) ``` padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False, name=None ) ``` Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like [`tf.data.Dataset.batch`](../dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. Unlike [`tf.data.Dataset.batch`](../dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element: * If the dimension is a constant, the component will be padded out to that length in that dimension. * If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. ``` A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently.
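# Below, each component of the tuple element gets its own padded shape
# and its own padding value: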
elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) ``` See also [`tf.data.experimental.dense_to_sparse_batch`](dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../sparse/sparsetensor). | Args | | `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../tensorshape) or [`tf.int64`](../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. | | `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. | | `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> | ### `prefetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321) ``` prefetch( buffer_size, name=None ) ``` Creates a `Dataset` that prefetches elements from this dataset. Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. 
This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. > > **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each). > ``` dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. | | `name` | Optional. A name for the tf.data transformation. | | Returns | | `Dataset` | A `Dataset`. | ### `random` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992) ``` @staticmethod random( seed=None, name=None ) ``` Creates a `Dataset` of pseudorandom values. The dataset generates a sequence of uniformly distributed integer values. ``` ds1 = tf.data.Dataset.random(seed=4).take(10) ds2 = tf.data.Dataset.random(seed=4).take(10) print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator())) True ``` | Args | | `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `range` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211) ``` @staticmethod range( *args, **kwargs ) ``` Creates a `Dataset` of a step-separated range of values. ``` list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] ``` | Args | | `*args` | follows the same semantics as Python's `range`. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. | | `**kwargs` | * output\_type: Its expected dtype. (Optional, default: [`tf.int64`](../../../tf#int64)). * name: (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `RangeDataset`. | | Raises | | `ValueError` | if len(args) == 0. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544) ``` reduce( initial_state, reduce_func, name=None ) ``` Reduces the input dataset to a single element. The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result.
``` tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 ``` | Args | | `initial_state` | An element representing the initial state of the transformation. | | `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new state. The structure of `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A dataset element corresponding to the final state of the transformation. | ### `rejection_resample` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272) ``` rejection_resample( class_func, target_dist, initial_dist=None, seed=None, name=None ) ``` A transformation that resamples a dataset to a target distribution. Let's consider the following example where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with the `target_dist` distribution. ``` initial_dist = [0.6, 0.4] num_classes = len(initial_dist) num_samples = 1000 data_np = np.random.choice(num_classes, num_samples, p=initial_dist) dataset = tf.data.Dataset.from_tensor_slices(data_np) ``` The class counts in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution. ``` target_dist = [0.5, 0.5] resampled_dataset = dataset.rejection_resample( class_func=lambda x: x, target_dist=target_dist, initial_dist=initial_dist) resampled_dataset = resampled_dataset.map( lambda class_func_result, data: data) ``` The distribution of classes in `resampled_dataset` will now be close to the target distribution. | Args | | `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../tf#int32) tensor. Values should be in `[0, num_classes)`. | | `target_dist` | A floating point type tensor, shaped `[num_classes]`. | | `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. | | `seed` | (Optional.) Python integer seed for the resampler. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset` | ### `repeat` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416) ``` repeat( count=None, name=None ) ``` Repeats this dataset so each original value is seen `count` times. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] ``` > > **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements. > | Args | | `count` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`.
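A short supplementary sketch (using `Dataset.take`, documented elsewhere): with the default `count=None` the repetition is unbounded, so bound the stream explicitly when iterating.

```
dataset = tf.data.Dataset.range(3).repeat()  # repeats indefinitely
print(list(dataset.take(7).as_numpy_iterator()))
# [0, 1, 2, 0, 1, 2, 0]
```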
| ### `sample_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412) ``` @staticmethod sample_from_datasets( datasets, weights=None, seed=None, stop_on_empty_dataset=False ) ``` Samples elements at random from the datasets in `datasets`. Creates a dataset by interleaving elements of `datasets` with `weights[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets: ``` dataset1 = tf.data.Dataset.range(0, 3) dataset2 = tf.data.Dataset.range(100, 103) ``` Suppose that we sample from these 2 datasets with the following weights: ``` sample_dataset = tf.data.Dataset.sample_from_datasets( [dataset1, dataset2], weights=[0.5, 0.5]) ``` One possible outcome of elements in sample\_dataset is: ``` print(list(sample_dataset.as_numpy_iterator())) # [100, 0, 1, 101, 2, 102] ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](../dataset) objects with compatible structure. | | `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. | | Returns | | A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. | | Raises | | `TypeError` | If the `datasets` or `weights` arguments have the wrong type. | | `ValueError` | * If `datasets` is empty, or * If `weights` is specified and does not match the length of `datasets`. | ### `scan` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130) ``` scan( initial_state, scan_func, name=None ) ``` A transformation that scans a function across an input dataset. This transformation is a stateful relative of [`tf.data.Dataset.map`](../dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`. ``` dataset = tf.data.Dataset.range(10) initial_state = tf.constant(0, dtype=tf.int64) scan_func = lambda state, i: (state + i, state + i) dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func) list(dataset.as_numpy_iterator()) [0, 1, 3, 6, 10, 15, 21, 28, 36, 45] ``` | Args | | `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. | | `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. 
| | Returns | | A `Dataset`. | ### `shard` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685) ``` shard( num_shards, index, name=None ) ``` Creates a `Dataset` that includes only 1/`num_shards` of this dataset. `shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i. ``` A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] ``` This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: ``` d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` #### Important caveats: * Be sure to shard before you use any randomizing operator (such as shuffle). * Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: ``` d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` | Args | | `num_shards` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of shards operating in parallel. | | `index` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the worker index. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `InvalidArgumentError` | if `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) | ### `shuffle` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523) ``` shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ``` Randomly shuffles the elements of this dataset. This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. `reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. 
In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # [1, 0, 2, 1, 0, 2] ``` In TF 2.0, [`tf.data.Dataset`](../dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 2, 0] ``` ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 0, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements from this dataset from which the new dataset will sample. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `skip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616) ``` skip( count, name=None ) ``` Creates a `Dataset` that skips `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] ``` | Args | | `count` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `snapshot` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099) ``` snapshot( path, compression='AUTO', reader_func=None, shard_func=None, name=None ) ``` API to persist the output of the input dataset. The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. <https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature. Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters. `shard_func` is a user specified function that maps input elements to snapshot shards. 
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written. ``` dataset = ... dataset = dataset.enumerate() dataset = dataset.snapshot("/path/to/snapshot/dir", shard_func=lambda x, y: x % NUM_SHARDS, ...) dataset = dataset.map(lambda x, y: y) ``` `reader_func` is a user specified function that accepts a single argument: a `Dataset` of `Dataset`s, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism. Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets: ``` def user_reader_func(datasets): # shuffle the datasets splits datasets = datasets.shuffle(NUM_CORES) # read datasets in parallel and interleave their elements return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = dataset.snapshot("/path/to/snapshot/dir", reader_func=user_reader_func) ``` By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data. | Args | | `path` | Required. A directory to use for storing / loading the snapshot to / from. | | `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. | | `reader_func` | Optional. A function to control how to read data from snapshot shards. | | `shard_func` | Optional. A function to control how to shard data when writing a snapshot. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `take` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596) ``` take( count, name=None ) ``` Creates a `Dataset` with at most `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `count` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `take_while` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150) ``` take_while( predicate, name=None ) ``` A transformation that stops dataset iteration based on a `predicate`. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take_while(lambda x: x < 5) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../tf#bool) tensor. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. 
| ### `unbatch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698) ``` unbatch( name=None ) ``` Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ``` elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] ``` > > **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unique` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173) ``` unique( name=None ) ``` A transformation that discards duplicate elements of a `Dataset`. Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example: ``` dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1]) dataset = dataset.unique() sorted(list(dataset.as_numpy_iterator())) [1, 2, 37] ``` > > **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../tf#int32), [`tf.int64`](../../../tf#int64) or [`tf.string`](../../../tf#string) type. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426) ``` window( size, shift=None, stride=1, drop_remainder=False, name=None ) ``` Returns a dataset of "windows". Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`). #### For example: ``` dataset = tf.data.Dataset.range(7).window(3) for window in dataset: print(window) <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> ``` Since windows are datasets, they can be iterated over: ``` for window in dataset: print([item.numpy() for item in window]) [0, 1, 2] [3, 4, 5] [6] ``` #### Shift The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. ``` dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5] [4, 5, 6] ``` #### Stride The `stride` argument determines the stride between input elements within a window. 
``` dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] ``` #### Nested elements When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them. #### The type signature is: ``` def window( self: Dataset[Nest[T]], ... ) -> Dataset[Nest[Dataset[T]]] ``` Applying `window` to a `Dataset` of tuples gives a tuple of windows: ``` dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])) dataset = dataset.window(2) windows = next(iter(dataset)) windows (<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>, <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>) ``` ``` def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(to_numpy(windows[0]), to_numpy(windows[1])) [1, 2] [6, 7] [3, 4] [8, 9] [5] [10] ``` Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`: ``` dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]}) dataset = dataset.window(2) def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(tf.nest.map_structure(to_numpy, windows)) {'a': [1, 2], 'b': [4, 5], 'c': [7, 8]} {'a': [3], 'b': [6], 'c': [9]} ``` #### Flatten a dataset of windows The [`Dataset.flat_map`](../dataset#flat_map) and [`Dataset.interleave`](../dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset. The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially. For example, to turn each window into a dense tensor: ``` size = 3 dataset = tf.data.Dataset.range(7).window(size, shift=1, drop_remainder=True) batched = dataset.flat_map(lambda x: x.batch(3)) for batch in batched: print(batch.numpy()) [0 1 2] [1 2 3] [2 3 4] [3 4 5] [4 5 6] ``` | Args | | `size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. | | `shift` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. | | `stride` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. | ### `with_options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726) ``` with_options( options, name=None ) ``` Returns a new [`tf.data.Dataset`](../dataset) with the given options set. 
The options are "global" in the sense that they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values. ``` ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.deterministic = False ds = ds.with_options(options) ``` | Args | | `options` | A [`tf.data.Options`](../options) that identifies the options to use. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` with the given options. | | Raises | | `ValueError` | when an option is set more than once to a non-default value | ### `zip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259) ``` @staticmethod zip( datasets, name=None ) ``` Creates a `Dataset` by zipping together the given datasets. This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). ``` # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] ``` | Args | | `datasets` | A (nested) structure of datasets. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `__bool__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __bool__() ``` ### `__iter__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L481-L497) ``` __iter__() ``` Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. | Returns | | A [`tf.data.Iterator`](../iterator) for the elements of this dataset. | | Raises | | `RuntimeError` | If not inside of tf.function and not executing eagerly. | ### `__len__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527) ``` __len__() ``` Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../dataset#cardinality) instead. 
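For example, a minimal sketch (assuming eager execution and a dataset of known, finite cardinality):

```
import tensorflow as tf

dataset = tf.data.Dataset.range(10)
# `len` works here because the cardinality of `range(10)` is known and finite.
print(len(dataset))  # 10
```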
| Returns | | An integer representing the length of the dataset. | | Raises | | `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. | ### `__nonzero__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __nonzero__() ```
tensorflow tf.data.experimental.AutotuneOptions tf.data.experimental.AutotuneOptions ==================================== Represents options for autotuning dataset performance. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.AutotuneOptions`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/AutotuneOptions) ``` tf.data.experimental.AutotuneOptions() ``` ``` options = tf.data.Options() options.autotune.enabled = False dataset = dataset.with_options(options) ``` | Attributes | | `autotune_algorithm` | When autotuning is enabled (through `autotune`), determines the algorithm to use. | | `cpu_budget` | When autotuning is enabled (through `autotune`), determines the CPU budget to use. Values greater than the number of schedulable CPU cores are allowed but may result in CPU contention. If None, defaults to the number of schedulable CPU cores. | | `enabled` | Whether to automatically tune performance knobs. If None, defaults to True. | | `ram_budget` | When autotuning is enabled (through `autotune`), determines the RAM budget to use. Values greater than the available RAM in bytes may result in OOM. If None, defaults to half of the available RAM in bytes. | Methods ------- ### `__eq__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L38-L44) ``` __eq__( other ) ``` Return self==value. ### `__ne__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/util/options.py#L46-L50) ``` __ne__( other ) ``` Return self!=value. tensorflow tf.data.experimental.ignore_errors tf.data.experimental.ignore\_errors =================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/error_ops.py#L21-L52) | Creates a `Dataset` from another `Dataset` and silently ignores any errors. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.ignore_errors`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/ignore_errors) ``` tf.data.experimental.ignore_errors( log_warning=False ) ``` Use this transformation to produce a dataset that contains the same elements as the input, but silently drops any elements that caused an error. For example: ``` dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.]) # Computing `tf.debugging.check_numerics(1. / 0.)` will raise an InvalidArgumentError. dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, "error")) # Using `ignore_errors()` will drop the element that causes an error. dataset = dataset.apply(tf.data.experimental.ignore_errors()) # ==> {1., 0.5, 0.25} ``` | Args | | `log_warning` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar indicating whether ignored errors should be logged to stderr. Defaults to `False`. | | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | tensorflow tf.data.experimental.group_by_window tf.data.experimental.group\_by\_window ====================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/grouping.py#L58-L107) | A transformation that groups windows of elements by key and reduces them. 
(deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.group_by_window`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/group_by_window) ``` tf.data.experimental.group_by_window( key_func, reduce_func, window_size=None, window_size_func=None ) ``` This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller. You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`. | Args | | `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../tf#int64) tensor. | | `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. | | `window_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. | | `window_size_func` | A function mapping a key to a [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. | | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | | Raises | | `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. | tensorflow tf.data.experimental.SqlDataset tf.data.experimental.SqlDataset =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/readers.py#L1148-L1191) | A `Dataset` consisting of the results from a SQL query. Inherits From: [`Dataset`](../dataset) ``` tf.data.experimental.SqlDataset( driver_name, data_source_name, query, output_types ) ``` `SqlDataset` allows a user to read data from the result set of a SQL query. For example: ``` dataset = tf.data.experimental.SqlDataset("sqlite", "/foo/bar.sqlite3", "SELECT name, age FROM people", (tf.string, tf.int32)) # Prints the rows of the result set of the above query. for element in dataset: print(element) ``` | Args | | `driver_name` | A 0-D [`tf.string`](../../../tf#string) tensor containing the database type. Currently, the only supported value is 'sqlite'. | | `data_source_name` | A 0-D [`tf.string`](../../../tf#string) tensor containing a connection string to connect to the database. | | `query` | A 0-D [`tf.string`](../../../tf#string) tensor containing the SQL query to execute. | | `output_types` | A tuple of [`tf.DType`](../../dtypes/dtype) objects representing the types of the columns returned by `query`. | | Attributes | | `element_spec` | The type specification of an element of this dataset. 
``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) ``` For more information, read [this guide](https://www.tensorflow.org/guide/data#dataset_structure). | Methods ------- ### `apply` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2248-L2276) ``` apply( transformation_func ) ``` Applies a transformation function to this dataset. `apply` enables chaining of custom `Dataset` transformations, which are represented as functions that take one `Dataset` argument and return a transformed `Dataset`. ``` dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `transformation_func` | A function that takes one `Dataset` argument and returns a `Dataset`. | | Returns | | `Dataset` | The `Dataset` returned by applying `transformation_func` to this dataset. | ### `as_numpy_iterator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L564-L620) ``` as_numpy_iterator() ``` Returns an iterator which converts all elements of the dataset to numpy. Use `as_numpy_iterator` to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using `as_numpy_iterator`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) ``` This method requires that you are running in eager mode and the dataset's element\_spec contains only `TensorSpec` components. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 ``` ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] ``` `as_numpy_iterator()` will preserve the nested structure of dataset elements. ``` dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True ``` | Returns | | An iterable over the elements of the dataset, with their tensors converted to numpy arrays. | | Raises | | `TypeError` | if an element contains a non-`Tensor` value. | | `RuntimeError` | if eager execution is not enabled. | ### `batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1687-L1754) ``` batch( batch_size, drop_remainder=False, num_parallel_calls=None, deterministic=None, name=None ) ``` Combines consecutive elements of this dataset into batches. ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] ``` ``` dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] ``` The components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). 
If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. > > **Note:** If your program requires data to have a statically known shape (e.g., when using XLA), you should use `drop_remainder=True`. Without `drop_remainder=True` the shape of the output dataset will have an unknown leading dimension due to the possibility of a smaller final batch. > | Args | | `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of batches to compute asynchronously in parallel. If not specified, batches will be computed sequentially. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available resources. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `bucket_by_sequence_length` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2826-L2971) ``` bucket_by_sequence_length( element_length_func, bucket_boundaries, bucket_batch_sizes, padded_shapes=None, padding_values=None, pad_to_bucket_boundary=False, no_padding=False, drop_remainder=False, name=None ) ``` A transformation that buckets elements in a `Dataset` by length. Elements of the `Dataset` are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch, which increases training step efficiency. Below is an example that bucketizes the input data into the 3 buckets "[0, 3), [3, 5), [5, inf)" based on sequence length, with batch size 2. ``` elements = [ [0], [1, 2, 3, 4], [5, 6, 7], [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]] dataset = tf.data.Dataset.from_generator( lambda: elements, tf.int64, output_shapes=[None]) dataset = dataset.bucket_by_sequence_length( element_length_func=lambda elem: tf.shape(elem)[0], bucket_boundaries=[3, 5], bucket_batch_sizes=[2, 2, 2]) for elem in dataset.as_numpy_iterator(): print(elem) [[1 2 3 4] [5 6 7 0]] [[ 7 8 9 10 11 0] [13 14 15 16 19 20]] [[ 0 0] [21 22]] ``` | Args | | `element_length_func` | function from element in `Dataset` to [`tf.int32`](../../../tf#int32), determines the length of the element, which will determine the bucket it goes into. | | `bucket_boundaries` | `list<int>`, upper length boundaries of the buckets. | | `bucket_batch_sizes` | `list<int>`, batch size per bucket. 
Length should be `len(bucket_boundaries) + 1`. | | `padded_shapes` | Nested structure of [`tf.TensorShape`](../../tensorshape) to pass to [`tf.data.Dataset.padded_batch`](../dataset#padded_batch). If not provided, will use `dataset.output_shapes`, which will result in variable length dimensions being padded out to the maximum length in each batch. | | `padding_values` | Values to pad with, passed to [`tf.data.Dataset.padded_batch`](../dataset#padded_batch). Defaults to padding with 0. | | `pad_to_bucket_boundary` | bool, if `False`, will pad dimensions with unknown size to maximum length in batch. If `True`, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source `Dataset` does not contain any elements with length longer than `max(bucket_boundaries)`. | | `no_padding` | `bool`, indicates whether to pad the batch features (features need to be either of type [`tf.sparse.SparseTensor`](../../sparse/sparsetensor) or of same shape). | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. | ### `cache` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1525-L1576) ``` cache( filename='', name=None ) ``` Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. > > **Note:** For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. > ``` dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] ``` When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to `.cache()` will have no effect until the cache file is removed or the filename is changed. ``` dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! list(dataset.as_numpy_iterator()) # [0, 1, 2, 3, 4] ``` > > **Note:** `cache` will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call `shuffle` *after* calling `cache`. > | Args | | `filename` | A [`tf.string`](../../../tf#string) scalar [`tf.Tensor`](../../tensor), representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. 
| ### `cardinality` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2728-L2754) ``` cardinality() ``` Returns the cardinality of the dataset, if known. `cardinality` may return [`tf.data.INFINITE_CARDINALITY`](../../data#INFINITE_CARDINALITY) if the dataset contains an infinite number of elements or [`tf.data.UNKNOWN_CARDINALITY`](../../data#UNKNOWN_CARDINALITY) if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). ``` dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True ``` | Returns | | A scalar [`tf.int64`](../../../tf#int64) `Tensor` representing the cardinality of the dataset. If the cardinality is infinite or unknown, `cardinality` returns the named constants [`tf.data.INFINITE_CARDINALITY`](../../data#INFINITE_CARDINALITY) and [`tf.data.UNKNOWN_CARDINALITY`](../../data#UNKNOWN_CARDINALITY) respectively. | ### `choose_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3414-L3471) ``` @staticmethod choose_from_datasets( datasets, choice_dataset, stop_on_empty_dataset=True ) ``` Creates a dataset that deterministically chooses elements from `datasets`. For example, given the following datasets: ``` datasets = [tf.data.Dataset.from_tensors("foo").repeat(), tf.data.Dataset.from_tensors("bar").repeat(), tf.data.Dataset.from_tensors("baz").repeat()] # Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. choice_dataset = tf.data.Dataset.range(3).repeat(3) result = tf.data.Dataset.choose_from_datasets(datasets, choice_dataset) ``` The elements of `result` will be: ``` "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](../dataset) objects with compatible structure. | | `choice_dataset` | A [`tf.data.Dataset`](../dataset) of scalar [`tf.int64`](../../../tf#int64) tensors between `0` and `len(datasets) - 1`. | | `stop_on_empty_dataset` | If `True`, selection stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the selected elements start off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `True`. | | Returns | | A dataset that interleaves elements from `datasets` according to the values of `choice_dataset`. | | Raises | | `TypeError` | If `datasets` or `choice_dataset` has the wrong type. | | `ValueError` | If `datasets` is empty. | ### `concatenate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1261-L1289) ``` concatenate( dataset, name=None ) ``` Creates a `Dataset` by concatenating the given dataset with this dataset. ``` a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have # compatible element specs. 
c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> ``` | Args | | `dataset` | `Dataset` to be concatenated. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `enumerate` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1418-L1451) ``` enumerate( start=0, name=None ) ``` Enumerates the elements of this dataset. It is similar to Python's `enumerate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) ``` ``` # The (nested) structure of the input dataset determines the # structure of elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) ``` | Args | | `start` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the start value for enumeration. | | `name` | Optional. A name for the tf.data operations used by `enumerate`. | | Returns | | `Dataset` | A `Dataset`. | ### `filter` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2224-L2246) ``` filter( predicate, name=None ) ``` Filters this dataset according to `predicate`. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] ``` | Args | | `predicate` | A function mapping a dataset element to a boolean. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | The `Dataset` containing the elements of this dataset for which `predicate` is `True`. | ### `flat_map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2058-L2092) ``` flat_map( map_func, name=None ) ``` Maps `map_func` across this dataset and flattens the result. #### The type signature is: ``` def flat_map( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` Use `flat_map` if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: ``` dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map( lambda x: tf.data.Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` [`tf.data.Dataset.interleave()`](../dataset#interleave) is a generalization of `flat_map`, since `flat_map` produces the same output as [`tf.data.Dataset.interleave(cycle_length=1)`](../dataset#interleave). | Args | | `map_func` | A function mapping a dataset element to a dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. 
| ### `from_generator` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L855-L1173) ``` @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None, name=None ) ``` Creates a `Dataset` whose elements are generated by `generator`. (deprecated arguments) > > **Note:** The current implementation of [`Dataset.from_generator()`](../dataset#from_generator) uses [`tf.numpy_function`](../../numpy_function) and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called [`Dataset.from_generator()`](../dataset#from_generator). Furthermore, using `from_generator` will preclude the use of tf.data service for scaling out dataset processing. The body of `generator` will not be serialized in a `GraphDef`, and you should not use this method if you need to serialize your model and restore it in a different environment. > The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function). The elements generated by `generator` must be compatible with either the given `output_signature` argument or with the given `output_types` and (optionally) `output_shapes` arguments, whichever was specified. The recommended way to call `from_generator` is to use the `output_signature` argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by [`tf.TypeSpec`](../../typespec) objects from the `output_signature` argument: ``` def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] ``` There is also a deprecated way to call `from_generator`, either with the `output_types` argument alone or together with the `output_shapes` argument. In this case the output of the function will be assumed to consist of [`tf.Tensor`](../../tensor) objects with the types defined by `output_types` and with shapes that are either unknown or defined by `output_shapes`. > > **Note:** If `generator` depends on mutable global variables or other external state, be aware that the runtime may invoke `generator` multiple times (in order to support repeating the `Dataset`) and at any time between the call to [`Dataset.from_generator()`](../dataset#from_generator) and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in `generator` before calling [`Dataset.from_generator()`](../dataset#from_generator). > > > **Note:** While the `output_signature` parameter makes it possible to yield `Dataset` elements, the scope of [`Dataset.from_generator()`](../dataset#from_generator) should be limited to logic that cannot be expressed through tf.data operations. Using tf.data operations within the generator function is an anti-pattern and may result in incremental memory growth. > | Args | | `generator` | A callable object that returns an object that supports the `iter()` protocol. 
If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. | | `output_types` | (Optional.) A (nested) structure of [`tf.DType`](../../dtypes/dtype) objects corresponding to each component of an element yielded by `generator`. | | `output_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../tensorshape) objects corresponding to each component of an element yielded by `generator`. | | `args` | (Optional.) A tuple of [`tf.Tensor`](../../tensor) objects that will be evaluated and passed to `generator` as NumPy-array arguments. | | `output_signature` | (Optional.) A (nested) structure of [`tf.TypeSpec`](../../typespec) objects corresponding to each component of an element yielded by `generator`. | | `name` | (Optional.) A name for the tf.data operations used by `from_generator`. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensor_slices` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L731-L809) ``` @staticmethod from_tensor_slices( tensors, name=None ) ``` Creates a `Dataset` whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. ``` # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] ``` ``` # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] ``` ``` # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] ``` ``` # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True ``` ``` # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. 
batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `from_tensors` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L692-L729) ``` @staticmethod from_tensors( tensors, name=None ) ``` Creates a `Dataset` with a single element, comprising the given tensors. `from_tensors` produces a dataset containing only a single element. To slice the input tensor into multiple elements, use `from_tensor_slices` instead. ``` dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] ``` ``` # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] ``` Note that if `tensors` contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more [`tf.constant`](../../constant) operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If `tensors` contains one or more large NumPy arrays, consider the alternative described in [this guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). | Args | | `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `get_single_element` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2546-L2671) ``` get_single_element( name=None ) ``` Returns the single element of the `dataset`. The function enables you to use a [`tf.data.Dataset`](../dataset) in a stateless "tensor-in tensor-out" expression, without creating an iterator. This facilitates data transformation on tensors using the optimized [`tf.data.Dataset`](../dataset) abstraction on top of them. 
As a fuller example, consider a `preprocessing_fn` which takes the raw features as input and returns the processed features. ``` def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature raw_features = ... # input batch of BATCH_SIZE elements. dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() ``` In the above example, the `raw_features` tensor of length `BATCH_SIZE` was converted to a [`tf.data.Dataset`](../dataset). Next, each `raw_feature` was mapped using the `preprocessing_fn` and the processed features were grouped into a single batch. The final `dataset` contains only one element which is a batch of all the processed features. > > **Note:** The `dataset` should contain only one element. > Now, instead of creating an iterator for the `dataset` and retrieving the batch of features, the `dataset.get_single_element()` method is used to skip the iterator creation process and directly output the batch of features. This can be particularly useful when your tensor transformations are expressed as [`tf.data.Dataset`](../dataset) operations, and you want to use those transformations while serving your model. #### Keras ``` model = ... # A pre-built or custom model class PreprocessingModel(tf.keras.Model): def __init__(self, model): super().__init__() self.model = model @tf.function(input_signature=[...]) def serving_fn(self, data): ds = tf.data.Dataset.from_tensor_slices(data) ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) ds = ds.batch(batch_size=BATCH_SIZE) return tf.argmax(self.model(ds.get_single_element()), axis=-1) preprocessing_model = PreprocessingModel(model) your_exported_model_dir = ... # save the model to this path. tf.saved_model.save(preprocessing_model, your_exported_model_dir, signatures={'serving_default': preprocessing_model.serving_fn} ) ``` #### Estimator In the case of estimators, you generally need to define a `serving_input_fn` which prepares the features for the model to consume at inference time. ``` def serving_input_fn(): raw_feature_spec = ... # Spec for the raw_features input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( raw_feature_spec, default_batch_size=None) serving_input_receiver = input_fn() raw_features = serving_input_receiver.features def preprocessing_fn(raw_feature): # ... the raw_feature is preprocessed as per the use-case return feature dataset = (tf.data.Dataset.from_tensor_slices(raw_features) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) processed_features = dataset.get_single_element() # Please note that the value of `BATCH_SIZE` should be equal to # the size of the leading dimension of `raw_features`. This ensures # that `dataset` has only one element, which is a prerequisite for # using `dataset.get_single_element()`. return tf.estimator.export.ServingInputReceiver( processed_features, serving_input_receiver.receiver_tensors) estimator = ... # A pre-built or custom estimator estimator.export_saved_model(your_exported_model_dir, serving_input_fn) ``` | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A nested structure of [`tf.Tensor`](../../tensor) objects, corresponding to the single element of `dataset`. | | Raises | | `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. 
| ### `group_by_window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2756-L2824) ``` group_by_window( key_func, reduce_func, window_size=None, window_size_func=None, name=None ) ``` Groups windows of elements by key and reduces them. This transformation maps each consecutive element in a dataset to a key using `key_func` and groups the elements by key. It then applies `reduce_func` to at most `window_size_func(key)` elements matching the same key. All except the final window for each key will contain `window_size_func(key)` elements; the final window may be smaller. You may provide either a constant `window_size` or a window size determined by the key through `window_size_func`. ``` dataset = tf.data.Dataset.range(10) window_size = 5 key_func = lambda x: x%2 reduce_func = lambda key, dataset: dataset.batch(window_size) dataset = dataset.group_by_window( key_func=key_func, reduce_func=reduce_func, window_size=window_size) for elem in dataset.as_numpy_iterator(): print(elem) [0 2 4 6 8] [1 3 5 7 9] ``` | Args | | `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.int64`](../../../tf#int64) tensor. | | `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. | | `window_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. | | `window_size_func` | A function mapping a key to a [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | | Raises | | `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. | ### `interleave` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2094-L2222) ``` interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across this dataset, and interleaves the results. #### The type signature is: ``` def interleave( self: Dataset[T], map_func: Callable[[T], Dataset[S]] ) -> Dataset[S] ``` For example, you can use [`Dataset.interleave()`](../dataset#interleave) to process many input files concurrently: ``` # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) ``` The `cycle_length` and `block_length` arguments control the order in which elements are produced. `cycle_length` controls the number of input elements that are processed concurrently. 
If you set `cycle_length` to 1, this transformation will handle one input element at a time, and will produce identical results to [`tf.data.Dataset.flat_map`](../dataset#flat_map). In general, this transformation will apply `map_func` to `cycle_length` input elements, open iterators on the returned `Dataset` objects, and cycle through them producing `block_length` consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. #### For example: ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] ``` > > **Note:** The order of elements yielded by this transformation is deterministic, as long as `map_func` is a pure function and `deterministic=True`. If `map_func` contains any stateful operations, the order in which that state is accessed is undefined. > Performance can often be improved by setting `num_parallel_calls` so that `interleave` will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set `deterministic=False`. ``` filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` | Args | | `map_func` | A function that takes a dataset element and returns a [`tf.data.Dataset`](../dataset). | | `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to [`tf.data.AUTOTUNE`](../../data#AUTOTUNE), the `cycle_length` argument identifies the maximum degree of parallelism. | | `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. | | `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `list_files` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1323-L1393) ``` @staticmethod list_files( file_pattern, shuffle=None, seed=None, name=None ) ``` A dataset of all files matching one or more glob patterns. 
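For instance, a minimal sketch (the directory and file names are illustrative):

```
# Match all Python files under a hypothetical directory; `shuffle=False`
# returns the matching file names in a deterministic order.
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f)
```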
The `file_pattern` argument should be a small number of glob patterns. If your filenames have already been globbed, use [`Dataset.from_tensor_slices(filenames)`](../dataset#from_tensor_slices) instead, as re-globbing every filename with `list_files` may result in poor performance with remote storage systems. > > **Note:** The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` to get results in a deterministic order. > #### Example: If we had the following files on our filesystem: * /path/to/dir/a.txt * /path/to/dir/b.py * /path/to/dir/c.py If we pass `"/path/to/dir/*.py"` as the `file_pattern`, the dataset would produce: * /path/to/dir/b.py * /path/to/dir/c.py | Args | | `file_pattern` | A string, a list of strings, or a [`tf.Tensor`](../../tensor) of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. | | `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `name` | Optional. A name for the tf.data operations used by `list_files`. | | Returns | | `Dataset` | A `Dataset` of strings corresponding to file names. | ### `map` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1891-L2056) ``` map( map_func, num_parallel_calls=None, deterministic=None, name=None ) ``` Maps `map_func` across the elements of this dataset. This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. `map_func` can be used to change both the values and the structure of a dataset's elements. Supported structure constructs are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). For example, `map` can be used for adding 1 to each element, or projecting a subset of element components. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] ``` The input signature of `map_func` is determined by the structure of each element in this dataset. ``` dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) ``` ``` # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] ``` ``` # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) ``` The value or values returned by `map_func` determine the structure of each element in the returned dataset. 
``` dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) ``` `map_func` can accept as arguments and return any type of dataset element. Note that irrespective of the context in which `map_func` is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use [`tf.py_function`](../../py_function), which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` 3) Use [`tf.numpy_function`](../../numpy_function), which also allows you to write arbitrary Python code. Note that [`tf.py_function`](../../py_function) accepts [`tf.Tensor`](../../tensor) whereas [`tf.numpy_function`](../../numpy_function) accepts numpy arrays and returns only numpy arrays. For example: ``` d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] ``` Note that the use of [`tf.numpy_function`](../../numpy_function) and [`tf.py_function`](../../py_function) in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting `num_parallel_calls` so that `map` will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set `deterministic=False`. ``` dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) ``` The order of elements yielded by this transformation is deterministic if `deterministic=True`. If `map_func` contains stateful operations and `num_parallel_calls > 1`, the order in which that state is accessed is undefined, so the values of output elements may not be deterministic regardless of the `deterministic` flag value. | Args | | `map_func` | A function mapping a dataset element to another dataset element. 
| | `num_parallel_calls` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the number of parallel calls is set dynamically based on available CPU. | | `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the [`tf.data.Options.deterministic`](../options#deterministic) option (`True` by default) controls the behavior. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L446-L464) ``` options() ``` Returns the options for this dataset and its inputs. | Returns | | A [`tf.data.Options`](../options) object representing the dataset options. | ### `padded_batch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1756-L1889) ``` padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False, name=None ) ``` Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like [`tf.data.Dataset.batch`](../dataset#batch), the components of the resulting element will have an additional outer dimension, which will be `batch_size` (or `N % batch_size` for the last element if `batch_size` does not divide the number of input elements `N` evenly and `drop_remainder` is `False`). If your program depends on the batches having the same outer dimension, you should set the `drop_remainder` argument to `True` to prevent the smaller batch from being produced. Unlike [`tf.data.Dataset.batch`](../dataset#batch), the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in `padded_shapes`. The `padded_shapes` argument determines the resulting shape for each dimension of each component in an output element: * If the dimension is a constant, the component will be padded out to that length in that dimension. * If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. ``` A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. 
elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) ``` See also [`tf.data.experimental.dense_to_sparse_batch`](dense_to_sparse_batch), which combines elements that may have different shapes into a [`tf.sparse.SparseTensor`](../../sparse/sparsetensor). | Args | | `batch_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of consecutive elements of this dataset to combine in a single batch. | | `padded_shapes` | (Optional.) A (nested) structure of [`tf.TensorShape`](../../tensorshape) or [`tf.int64`](../../../tf#int64) vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. | | `padding_values` | (Optional.) A (nested) structure of scalar-shaped [`tf.Tensor`](../../tensor), representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. | | `TypeError` | If a component is of an unsupported type. The list of supported types is documented in <https://www.tensorflow.org/guide/data#dataset_structure> | ### `prefetch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1291-L1321) ``` prefetch( buffer_size, name=None ) ``` Creates a `Dataset` that prefetches elements from this dataset. Most dataset input pipelines should end with a call to `prefetch`. This allows later elements to be prepared while the current element is being processed. 
This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. > > **Note:** Like other `Dataset` methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. `examples.prefetch(2)` will prefetch two elements (2 examples), while `examples.batch(20).prefetch(2)` will prefetch 2 elements (2 batches, of 20 examples each). > ``` dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the maximum number of elements that will be buffered when prefetching. If the value [`tf.data.AUTOTUNE`](../../data#AUTOTUNE) is used, then the buffer size is dynamically tuned. | | `name` | Optional. A name for the tf.data transformation. | | Returns | | `Dataset` | A `Dataset`. | ### `random` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2973-L2992) ``` @staticmethod random( seed=None, name=None ) ``` Creates a `Dataset` of pseudorandom values. The dataset generates a sequence of uniformly distributed integer values. ``` ds1 = tf.data.Dataset.random(seed=4).take(10) ds2 = tf.data.Dataset.random(seed=4).take(10) print(list(ds1.as_numpy_iterator())==list(ds2.as_numpy_iterator())) True ``` | Args | | `seed` | (Optional) If specified, the dataset produces a deterministic sequence of values. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `range` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1175-L1211) ``` @staticmethod range( *args, **kwargs ) ``` Creates a `Dataset` of a step-separated range of values. ``` list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] ``` | Args | | `*args` | follows the same semantics as Python's `range`. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. | | `**kwargs` | * output\_type: The expected dtype of the elements. (Optional, default: [`tf.int64`](../../../tf#int64).) * name: (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `RangeDataset`. | | Raises | | `ValueError` | if len(args) == 0. | ### `reduce` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2428-L2544) ``` reduce( initial_state, reduce_func, name=None ) ``` Reduces the input dataset to a single element. The transformation calls `reduce_func` successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The `initial_state` argument is used for the initial state and the final state is returned as the result. 
``` tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 ``` | Args | | `initial_state` | An element representing the initial state of the transformation. | | `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new state. The structure of `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A dataset element corresponding to the final state of the transformation. | ### `rejection_resample` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3175-L3272) ``` rejection_resample( class_func, target_dist, initial_dist=None, seed=None, name=None ) ``` A transformation that resamples a dataset to a target distribution. Let's consider the following example, where a dataset with an initial data distribution of `initial_dist` needs to be resampled into a dataset with a `target_dist` distribution. ``` initial_dist = [0.6, 0.4] num_classes = len(initial_dist) num_samples = 1000 data_np = np.random.choice(num_classes, num_samples, p=initial_dist) dataset = tf.data.Dataset.from_tensor_slices(data_np) ``` The class distribution of the values in `data_np` will be close to `{0: 600, 1: 400}`, as per the `initial_dist` distribution. ``` target_dist = [0.5, 0.5] resampled_dataset = dataset.rejection_resample( class_func=lambda x: x, target_dist=target_dist, initial_dist=initial_dist) resampled_dataset = resampled_dataset.map( lambda class_func_result, data: data) ``` The distribution of classes in `resampled_dataset` will now be close to the target distribution. | Args | | `class_func` | A function mapping an element of the input dataset to a scalar [`tf.int32`](../../../tf#int32) tensor. Values should be in `[0, num_classes)`. | | `target_dist` | A floating point type tensor, shaped `[num_classes]`. | | `initial_dist` | (Optional.) A floating point type tensor, shaped `[num_classes]`. If not provided, the true class distribution is estimated live in a streaming fashion. | | `seed` | (Optional.) Python integer seed for the resampler. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset` | ### `repeat` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1395-L1416) ``` repeat( count=None, name=None ) ``` Repeats this dataset so each original value is seen `count` times. ``` dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] ``` > > **Note:** If the input dataset depends on global state (e.g. a random number generator) or its output is non-deterministic (e.g. because of upstream `shuffle`), then different repetitions may produce different elements. > | Args | | `count` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. 
| ### `sample_from_datasets` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3274-L3412) ``` @staticmethod sample_from_datasets( datasets, weights=None, seed=None, stop_on_empty_dataset=False ) ``` Samples elements at random from the datasets in `datasets`. Creates a dataset by interleaving elements of `datasets` with `weight[i]` probability of picking an element from dataset `i`. Sampling is done without replacement. For example, suppose we have 2 datasets: ``` dataset1 = tf.data.Dataset.range(0, 3) dataset2 = tf.data.Dataset.range(100, 103) ``` Suppose that we sample from these 2 datasets with the following weights: ``` sample_dataset = tf.data.Dataset.sample_from_datasets( [dataset1, dataset2], weights=[0.5, 0.5]) ``` One possible outcome of elements in `sample_dataset` is: ``` print(list(sample_dataset.as_numpy_iterator())) # [100, 0, 1, 101, 2, 102] ``` | Args | | `datasets` | A non-empty list of [`tf.data.Dataset`](../dataset) objects with compatible structure. | | `weights` | (Optional.) A list or Tensor of `len(datasets)` floating-point values where `weights[i]` represents the probability to sample from `datasets[i]`, or a [`tf.data.Dataset`](../dataset) object where each element is such a list. Defaults to a uniform distribution across `datasets`. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `stop_on_empty_dataset` | If `True`, sampling stops if it encounters an empty dataset. If `False`, it skips empty datasets. It is recommended to set it to `True`. Otherwise, the distribution of samples starts off as the user intends, but may change as input datasets become empty. This can be difficult to detect since the dataset starts off looking correct. Defaults to `False` for backward compatibility. | | Returns | | A dataset that interleaves elements from `datasets` at random, according to `weights` if provided, otherwise with uniform probability. | | Raises | | `TypeError` | If the `datasets` or `weights` arguments have the wrong type. | | `ValueError` | * If `datasets` is empty, or * If `weights` is specified and does not match the length of `datasets`. | ### `scan` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3101-L3130) ``` scan( initial_state, scan_func, name=None ) ``` A transformation that scans a function across an input dataset. This transformation is a stateful relative of [`tf.data.Dataset.map`](../dataset#map). In addition to mapping `scan_func` across the elements of the input dataset, `scan()` accumulates one or more state tensors, whose initial values are `initial_state`. ``` dataset = tf.data.Dataset.range(10) initial_state = tf.constant(0, dtype=tf.int64) scan_func = lambda state, i: (state + i, state + i) dataset = dataset.scan(initial_state=initial_state, scan_func=scan_func) list(dataset.as_numpy_iterator()) [0, 1, 3, 6, 10, 15, 21, 28, 36, 45] ``` | Args | | `initial_state` | A nested structure of tensors, representing the initial state of the accumulator. | | `scan_func` | A function that maps `(old_state, input_element)` to `(new_state, output_element)`. It must take two arguments and return a pair of nested structures of tensors. The `new_state` must match the structure of `initial_state`. | | `name` | (Optional.) A name for the tf.data operation. 
| | Returns | | A `Dataset`. | ### `shard` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1618-L1685) ``` shard( num_shards, index, name=None ) ``` Creates a `Dataset` that includes only 1/`num_shards` of this dataset. `shard` is deterministic. The Dataset produced by `A.shard(n, i)` will contain all elements of A whose index mod n = i. ``` A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] ``` This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: ``` d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` #### Important caveats: * Be sure to shard before you use any randomizing operator (such as shuffle). * Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: ``` d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) ``` | Args | | `num_shards` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of shards operating in parallel. | | `index` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the worker index. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | | Raises | | `InvalidArgumentError` | if `num_shards` or `index` are illegal values. **Note:** error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) | ### `shuffle` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1453-L1523) ``` shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ``` Randomly shuffles the elements of this dataset. This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1,000, then `shuffle` will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. `reshuffle_each_iteration` controls whether the shuffle order should be different for each epoch. 
In TF 1.X, the idiomatic way to create epochs was through the `repeat` transformation: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # [1, 0, 2, 1, 0, 2] ``` In TF 2.0, [`tf.data.Dataset`](../dataset) objects are Python iterables which makes it possible to also create epochs through Python iteration: ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 2, 0] ``` ``` dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # [1, 0, 2] list(dataset.as_numpy_iterator()) # [1, 0, 2] ``` | Args | | `buffer_size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements from this dataset from which the new dataset will sample. | | `seed` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the random seed that will be used to create the distribution. See [`tf.random.set_seed`](../../random/set_seed) for behavior. | | `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `skip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1598-L1616) ``` skip( count, name=None ) ``` Creates a `Dataset` that skips `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] ``` | Args | | `count` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `snapshot` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2994-L3099) ``` snapshot( path, compression='AUTO', reader_func=None, shard_func=None, name=None ) ``` API to persist the output of the input dataset. The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. <https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md> has detailed design documentation of this feature. Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to by passing in user-defined functions to the `reader_func` and `shard_func` parameters. `shard_func` is a user specified function that maps input elements to snapshot shards. 
Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential `shard_func` could be written. ``` dataset = ... dataset = dataset.enumerate() dataset = dataset.snapshot("/path/to/snapshot/dir", shard_func=lambda x, y: x % NUM_SHARDS, ...) dataset = dataset.map(lambda x, y: y) ``` `reader_func` is a user-specified function that accepts a single argument: a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of shards specified in the `shard_func` (see above). The function should return a Dataset of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism. Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets: ``` def user_reader_func(datasets): # shuffle the dataset splits datasets = datasets.shuffle(NUM_CORES) # read datasets in parallel and interleave their elements return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = dataset.snapshot("/path/to/snapshot/dir", reader_func=user_reader_func) ``` By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data. | Args | | `path` | Required. A directory to use for storing / loading the snapshot to / from. | | `compression` | Optional. The type of compression to apply to the snapshot written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None. Defaults to `AUTO`, which attempts to pick an appropriate compression algorithm for the dataset. | | `reader_func` | Optional. A function to control how to read data from snapshot shards. | | `shard_func` | Optional. A function to control how to shard data when writing a snapshot. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `take` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1578-L1596) ``` take( count, name=None ) ``` Creates a `Dataset` with at most `count` elements from this dataset. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] ``` | Args | | `count` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `take_while` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3132-L3150) ``` take_while( predicate, name=None ) ``` A transformation that stops dataset iteration based on a `predicate`. ``` dataset = tf.data.Dataset.range(10) dataset = dataset.take_while(lambda x: x < 5) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] ``` | Args | | `predicate` | A function that maps a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar [`tf.bool`](../../../tf#bool) tensor. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. 
| ### `unbatch` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2673-L2698) ``` unbatch( name=None ) ``` Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, where `B` may vary for each input element, then for each element in the dataset, the unbatched dataset will contain `B` consecutive elements of shape `[a0, a1, ...]`. ``` elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] ``` > > **Note:** `unbatch` requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of `unbatch`. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `unique` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L3152-L3173) ``` unique( name=None ) ``` A transformation that discards duplicate elements of a `Dataset`. Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example: ``` dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1]) dataset = dataset.unique() sorted(list(dataset.as_numpy_iterator())) [1, 2, 37] ``` > > **Note:** This transformation only supports datasets which fit into memory and have elements of either [`tf.int32`](../../../tf#int32), [`tf.int64`](../../../tf#int64) or [`tf.string`](../../../tf#string) type. > | Args | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | A `Dataset`. | ### `window` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2278-L2426) ``` window( size, shift=None, stride=1, drop_remainder=False, name=None ) ``` Returns a dataset of "windows". Each "window" is a dataset that contains a subset of elements of the input dataset. These are finite datasets of size `size` (or possibly fewer if there are not enough input elements to fill the window and `drop_remainder` evaluates to `False`). #### For example: ``` dataset = tf.data.Dataset.range(7).window(3) for window in dataset: print(window) <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int64, name=None)> ``` Since windows are datasets, they can be iterated over: ``` for window in dataset: print([item.numpy() for item in window]) [0, 1, 2] [3, 4, 5] [6] ``` #### Shift The `shift` argument determines the number of input elements to shift between the start of each window. If windows and elements are both numbered starting at 0, the first element in window `k` will be element `k * shift` of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. ``` dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5] [4, 5, 6] ``` #### Stride The `stride` argument determines the stride between input elements within a window. 
``` dataset = tf.data.Dataset.range(7).window(3, shift=1, stride=2, drop_remainder=True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] ``` #### Nested elements When the `window` transformation is applied to a dataset whose elements are nested structures, it produces a dataset where the elements have the same nested structure but each leaf is replaced by a window. In other words, the nesting is applied outside of the windows as opposed to inside of them. #### The type signature is: ``` def window( self: Dataset[Nest[T]], ... ) -> Dataset[Nest[Dataset[T]]] ``` Applying `window` to a `Dataset` of tuples gives a tuple of windows: ``` dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])) dataset = dataset.window(2) windows = next(iter(dataset)) windows (<...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>, <...Dataset element_spec=TensorSpec(shape=(), dtype=tf.int32, name=None)>) ``` ``` def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(to_numpy(windows[0]), to_numpy(windows[1])) [1, 2] [6, 7] [3, 4] [8, 9] [5] [10] ``` Applying `window` to a `Dataset` of dictionaries gives a dictionary of `Datasets`: ``` dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]}) dataset = dataset.window(2) def to_numpy(ds): return list(ds.as_numpy_iterator()) for windows in dataset: print(tf.nest.map_structure(to_numpy, windows)) {'a': [1, 2], 'b': [4, 5], 'c': [7, 8]} {'a': [3], 'b': [6], 'c': [9]} ``` #### Flatten a dataset of windows The [`Dataset.flat_map`](../dataset#flat_map) and [`Dataset.interleave`](../dataset#interleave) methods can be used to flatten a dataset of windows into a single dataset. The argument to `flat_map` is a function that takes an element from the dataset and returns a `Dataset`. `flat_map` chains together the resulting datasets sequentially. For example, to turn each window into a dense tensor: ``` size = 3 dataset = tf.data.Dataset.range(7).window(size, shift=1, drop_remainder=True) batched = dataset.flat_map(lambda x: x.batch(3)) for batch in batched: print(batch.numpy()) [0 1 2] [1 2 3] [2 3 4] [3 4 5] [4 5 6] ``` | Args | | `size` | A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of elements of the input dataset to combine into a window. Must be positive. | | `shift` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. | | `stride` | (Optional.) A [`tf.int64`](../../../tf#int64) scalar [`tf.Tensor`](../../tensor), representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". | | `drop_remainder` | (Optional.) A [`tf.bool`](../../../tf#bool) scalar [`tf.Tensor`](../../tensor), representing whether the last windows should be dropped if their size is smaller than `size`. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` of (nests of) windows. Each window is a finite dataset of flat elements. | ### `with_options` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L2700-L2726) ``` with_options( options, name=None ) ``` Returns a new [`tf.data.Dataset`](../dataset) with the given options set. 
The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values. ``` ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.deterministic = False ds = ds.with_options(options) ``` | Args | | `options` | A [`tf.data.Options`](../options) that identifies the options the use. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset` with the given options. | | Raises | | `ValueError` | when an option is set more than once to a non-default value | ### `zip` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L1213-L1259) ``` @staticmethod zip( datasets, name=None ) ``` Creates a `Dataset` by zipping together the given datasets. This method has similar semantics to the built-in `zip()` function in Python, with the main difference being that the `datasets` argument can be a (nested) structure of `Dataset` objects. The supported nesting mechanisms are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). ``` # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] ``` | Args | | `datasets` | A (nested) structure of datasets. | | `name` | (Optional.) A name for the tf.data operation. | | Returns | | `Dataset` | A `Dataset`. | ### `__bool__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __bool__() ``` ### `__iter__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L481-L497) ``` __iter__() ``` Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. | Returns | | An [`tf.data.Iterator`](../iterator) for the elements of this dataset. | | Raises | | `RuntimeError` | If not inside of tf.function and not executing eagerly. | ### `__len__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L504-L527) ``` __len__() ``` Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use [`tf.data.Dataset.cardinality`](../dataset#cardinality) instead. 
| Returns | | An integer representing the length of the dataset. | | Raises | | `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. | ### `__nonzero__` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/ops/dataset_ops.py#L499-L500) ``` __nonzero__() ```
tensorflow tf.data.experimental.unique tf.data.experimental.unique =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/unique.py#L20-L43) | Creates a `Dataset` from another `Dataset`, discarding duplicates. (deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.unique`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/unique) ``` tf.data.experimental.unique() ``` Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example: ``` dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1]) # Using `unique()` will drop the duplicate elements. dataset = dataset.apply(tf.data.experimental.unique()) # ==> { 1, 37, 2 } ``` | Returns | | A `Dataset` transformation function, which can be passed to [`tf.data.Dataset.apply`](../dataset#apply). | tensorflow tf.data.experimental.TFRecordWriter tf.data.experimental.TFRecordWriter =================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/writers.py#L31-L125) | Writes a dataset to a TFRecord file. (deprecated) #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.TFRecordWriter`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/TFRecordWriter) ``` tf.data.experimental.TFRecordWriter( filename, compression_type=None ) ``` The elements of the dataset must be scalar strings. To serialize dataset elements as strings, you can use the [`tf.io.serialize_tensor`](../../io/serialize_tensor) function. ``` dataset = tf.data.Dataset.range(3) dataset = dataset.map(tf.io.serialize_tensor) writer = tf.data.experimental.TFRecordWriter("/path/to/file.tfrecord") writer.write(dataset) ``` To read back the elements, use `TFRecordDataset`. ``` dataset = tf.data.TFRecordDataset("/path/to/file.tfrecord") dataset = dataset.map(lambda x: tf.io.parse_tensor(x, tf.int64)) ``` To shard a `dataset` across multiple TFRecord files: ``` dataset = ... # dataset to be written def reduce_func(key, dataset): filename = tf.strings.join([PATH_PREFIX, tf.strings.as_string(key)]) writer = tf.data.experimental.TFRecordWriter(filename) writer.write(dataset.map(lambda _, x: x)) return tf.data.Dataset.from_tensors(filename) dataset = dataset.enumerate() dataset = dataset.apply(tf.data.experimental.group_by_window( lambda i, _: i % NUM_SHARDS, reduce_func, tf.int64.max )) # Iterate through the dataset to trigger data writing. for _ in dataset: pass ``` | Args | | `filename` | a string path indicating where to write the TFRecord data. | | `compression_type` | (Optional.) a string indicating what type of compression to use when writing the file. See `tf.io.TFRecordCompressionType` for what types of compression are available. Defaults to `None`. | Methods ------- ### `write` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/ops/writers.py#L90-L125) ``` write( dataset ) ``` Writes a dataset to a TFRecord file. An operation that writes the content of the specified dataset to the file specified in the constructor. If the file exists, it will be overwritten. 
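For instance, a minimal sketch (the output path is illustrative); note that the dataset elements must already be scalar strings:

```
# Write three scalar-string elements to a TFRecord file.
dataset = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
writer = tf.data.experimental.TFRecordWriter("/tmp/strings.tfrecord")
writer.write(dataset)
```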
| Args | | `dataset` | a [`tf.data.Dataset`](../dataset) whose elements are to be written to a file | | Returns | | In graph mode, this returns an operation which when executed performs the write. In eager mode, the write is performed by the method itself and there is no return value. | | Raises | | `TypeError` | If `dataset` is not a [`tf.data.Dataset`](../dataset). | | `TypeError` | If the elements produced by the dataset are not scalar strings. | tensorflow tf.data.experimental.service.DispatchServer tf.data.experimental.service.DispatchServer =========================================== An in-process tf.data service dispatch server. ``` tf.data.experimental.service.DispatchServer( config=None, start=True ) ``` A [`tf.data.experimental.service.DispatchServer`](dispatchserver) coordinates a cluster of [`tf.data.experimental.service.WorkerServer`](workerserver)s. When the workers start, they register themselves with the dispatcher. ``` dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] worker = tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="parallel_epochs", service=dispatcher.target)) print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` When starting a dedicated tf.data dispatch process, use join() to block indefinitely after starting up the server. ``` dispatcher = tf.data.experimental.service.DispatchServer( tf.data.experimental.service.DispatcherConfig(port=5050)) dispatcher.join() ``` To start a `DispatchServer` in fault-tolerant mode, set `work_dir` and `fault_tolerant_mode` like below: ``` dispatcher = tf.data.experimental.service.DispatchServer( tf.data.experimental.service.DispatcherConfig( port=5050, work_dir="gs://my-bucket/dispatcher/work_dir", fault_tolerant_mode=True)) ``` | Args | | `config` | (Optional.) A [`tf.data.experimental.service.DispatcherConfig`](dispatcherconfig) configuration. If `None`, the dispatcher will use default configuration values. | | `start` | (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True. | | Attributes | | `target` | Returns a target that can be used to connect to the server. ``` dispatcher = tf.data.experimental.service.DispatchServer() dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="parallel_epochs", service=dispatcher.target)) ``` The returned string will be in the form protocol://address, e.g. "grpc://localhost:5050". | Methods ------- ### `join` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/service/server_lib.py#L186-L201) ``` join() ``` Blocks until the server has shut down. This is useful when starting a dedicated dispatch process. ``` dispatcher = tf.data.experimental.service.DispatchServer( tf.data.experimental.service.DispatcherConfig(port=5050)) dispatcher.join() ``` | Raises | | [`tf.errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Or one of its subclasses if an error occurs while joining the server. | ### `start` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/service/server_lib.py#L174-L184) ``` start() ``` Starts this server.
``` dispatcher = tf.data.experimental.service.DispatchServer(start=False) dispatcher.start() ``` | Raises | | [`tf.errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Or one of its subclasses if an error occurs while starting the server. | tensorflow tf.data.experimental.service.ShardingPolicy tf.data.experimental.service.ShardingPolicy =========================================== Specifies how to shard data among tf.data service workers. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.service.ShardingPolicy`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/ShardingPolicy) OFF: No sharding will be performed. Each worker produces the entire dataset without any sharding. With this mode, the best practice is to shuffle the dataset nondeterministically so that workers process the dataset in different orders. If workers are restarted or join the cluster mid-job, they will begin processing the dataset from the beginning. DYNAMIC: The input dataset is dynamically split among workers at runtime. Each worker gets the next split when it reads data from the dispatcher. Data is produced non-deterministically in this mode. Dynamic sharding works well with varying-sized tf.data service clusters, e.g., when you need to auto-scale your workers. Dynamic sharding provides at-most-once visitation guarantees. No examples will be repeated, but some may be missed if a tf.data service worker gets restarted while processing a file. The following are static sharding policies. The semantics are similar to [`tf.data.experimental.AutoShardPolicy`](../autoshardpolicy). These policies require: * The tf.data service cluster is configured with a fixed list of workers in DispatcherConfig. * Each client only reads from the local tf.data service worker. If a worker is restarted while performing static sharding, the worker will begin processing its shard again from the beginning. FILE: Shards by input files (i.e. each worker will get a fixed set of files to process). When this option is selected, make sure that there are at least as many files as workers. If there are fewer input files than workers, a runtime error will be raised. DATA: Shards by elements produced by the dataset. Each worker will process the whole dataset and discard the portion that is not for itself. Note that for this mode to correctly partition the dataset elements, the dataset needs to produce elements in a deterministic order. FILE\_OR\_DATA: Attempts FILE-based sharding, falling back to DATA-based sharding on failure. HINT: Looks for the presence of `shard(SHARD_HINT, ...)` which is treated as a placeholder to replace with `shard(num_workers, worker_index)`. | Class Variables | | DATA | `<ShardingPolicy.DATA: 3>` | | DYNAMIC | `<ShardingPolicy.DYNAMIC: 1>` | | FILE | `<ShardingPolicy.FILE: 2>` | | FILE\_OR\_DATA | `<ShardingPolicy.FILE_OR_DATA: 4>` | | HINT | `<ShardingPolicy.HINT: 5>` | | OFF | `<ShardingPolicy.OFF: 0>` | tensorflow tf.data.experimental.service.distribute tf.data.experimental.service.distribute ======================================= A transformation that moves dataset processing to the tf.data service. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.
[`tf.compat.v1.data.experimental.service.distribute`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/distribute) ``` tf.data.experimental.service.distribute( processing_mode, service, job_name=None, consumer_index=None, num_consumers=None, max_outstanding_requests=None, data_transfer_protocol=None, compression='AUTO', target_workers='AUTO' ) ``` When you iterate over a dataset containing the `distribute` transformation, the tf.data service creates a "job" which produces data for the dataset iteration. The tf.data service uses a cluster of workers to prepare data for training your model. The `processing_mode` argument to [`tf.data.experimental.service.distribute`](distribute) describes how to leverage multiple workers to process the input dataset. Currently, there are two processing modes to choose from: "distributed\_epoch" and "parallel\_epochs". "distributed\_epoch" means that the dataset will be split across all tf.data service workers. The dispatcher produces "splits" for the dataset and sends them to workers for further processing. For example, if a dataset begins with a list of filenames, the dispatcher will iterate through the filenames and send the filenames to tf.data workers, which will perform the rest of the dataset transformations on those files. "distributed\_epoch" is useful when your model needs to see each element of the dataset exactly once, or if it needs to see the data in a generally-sequential order. "distributed\_epoch" only works for datasets with splittable sources, such as [`Dataset.from_tensor_slices`](../../dataset#from_tensor_slices), [`Dataset.list_files`](../../dataset#list_files), or [`Dataset.range`](../../dataset#range). "parallel\_epochs" means that the entire input dataset will be processed independently by each of the tf.data service workers. For this reason, it is important to shuffle data (e.g. filenames) non-deterministically, so that each worker will process the elements of the dataset in a different order. "parallel\_epochs" can be used to distribute datasets that aren't splittable. 
With two workers, "parallel\_epochs" will produce every element of the dataset twice: ``` dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] # Start two workers workers = [ tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) for _ in range(2) ] dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="parallel_epochs", service=dispatcher.target)) print(sorted(list(dataset.as_numpy_iterator()))) [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9] ``` "distributed\_epoch", on the other hand, will still produce each element once: ``` dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] workers = [ tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) for _ in range(2) ] dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="distributed_epoch", service=dispatcher.target)) print(sorted(list(dataset.as_numpy_iterator()))) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` When using `apply(tf.data.experimental.service.distribute(...))`, the dataset before the `apply` transformation executes within the tf.data service, while the operations after `apply` happen within the local process. ``` dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] workers = [ tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) for _ in range(2) ] dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x*x) dataset = dataset.apply( tf.data.experimental.service.distribute("parallel_epochs", dispatcher.target)) dataset = dataset.map(lambda x: x+1) print(sorted(list(dataset.as_numpy_iterator()))) [1, 1, 2, 2, 5, 5, 10, 10, 17, 17] ``` In the above example, the dataset operations (before applying the `distribute` function on the elements) will be executed on the tf.data workers, and the elements are provided over RPC. The remaining transformations (after the call to `distribute`) will be executed locally. The dispatcher and the workers will bind to unused free ports (which are chosen at random), in order to communicate with each other. However, to bind them to specific ports, pass the `port` parameter of [`tf.data.experimental.service.DispatcherConfig`](dispatcherconfig) and [`tf.data.experimental.service.WorkerConfig`](workerconfig). The `job_name` argument allows jobs to be shared across multiple datasets. Instead of each dataset creating its own job, all datasets with the same `job_name` will consume from the same job. A new job will be created for each iteration of the dataset (with each repetition of [`Dataset.repeat`](../../dataset#repeat) counting as a new iteration). Suppose the `DispatchServer` is serving on `localhost:5000` and two training workers (in either a single client or multi-client setup) iterate over the below dataset, and there is a single tf.data worker: ``` range5_dataset = tf.data.Dataset.range(5) dataset = range5_dataset.apply(tf.data.experimental.service.distribute( "parallel_epochs", "localhost:5000", job_name="my_job_name")) for iteration in range(3): print(list(dataset)) ``` The elements of each job will be split between the two processes, with elements being consumed by the processes on a first-come first-served basis.
One possible result is that process 1 prints ``` [0, 2, 4] [0, 1, 3] [1] ``` and process 2 prints ``` [1, 3] [2, 4] [0, 2, 3, 4] ``` Job names must not be re-used across different training jobs within the lifetime of the tf.data service. In general, the tf.data service is expected to live for the duration of a single training job. To use the tf.data service with multiple training jobs, make sure to use different job names to avoid conflicts. For example, suppose a training job calls `distribute` with `job_name="job"` and reads until end of input. If another independent job connects to the same tf.data service and tries to read from `job_name="job"`, it will immediately receive end of input, without getting any data. **Coordinated data read** By default, when multiple consumers read from the same job, they receive data on a first-come first-served basis. In some use cases, it is advantageous to coordinate the consumers so that, at each step, they all read data from the same worker. For example, the tf.data service can be used to coordinate example sizes across a cluster during synchronous training, so that during each step all replicas train on similar-sized elements. To achieve this, define a dataset which generates rounds of `num_consumers` consecutive similar-sized batches, then enable coordinated reads by setting `consumer_index` and `num_consumers` (a minimal sketch follows the argument table below). > > **Note:** To keep consumers in sync, round-robin data consumption requires that the dataset have infinite cardinality. You can get this by adding `.repeat()` at the end of the dataset definition. > **Keras and Distribution Strategies** The dataset produced by the `distribute` transformation can be passed to Keras' [`Model.fit`](../../../keras/model#fit) or Distribution Strategy's [`tf.distribute.Strategy.experimental_distribute_dataset`](../../../distribute/strategy#experimental_distribute_dataset) like any other [`tf.data.Dataset`](../../dataset). We recommend setting a `job_name` on the call to `distribute` so that if there are multiple workers, they read data from the same job. Note that the autosharding normally performed by `experimental_distribute_dataset` will be disabled when setting a `job_name`, since sharing the job already results in splitting data across the workers. When using a shared job, data will be dynamically balanced across workers, so that they reach end of input about the same time. This results in better worker utilization than with autosharding, where each worker processes an independent set of files, and some workers may run out of data earlier than others. | Args | | `processing_mode` | A [`tf.data.experimental.service.ShardingPolicy`](shardingpolicy) specifying how to shard the dataset among tf.data workers. See [`tf.data.experimental.service.ShardingPolicy`](shardingpolicy) for details. For backwards compatibility, `processing_mode` may also be set to the strings `"parallel_epochs"` or `"distributed_epoch"`, which are respectively equivalent to [`ShardingPolicy.OFF`](shardingpolicy#OFF) and [`ShardingPolicy.DYNAMIC`](shardingpolicy#DYNAMIC). | | `service` | A string or a tuple indicating how to connect to the tf.data service. If it's a string, it should be in the format `[<protocol>://]<address>`, where `<address>` identifies the dispatcher address and `<protocol>` can optionally be used to override the default protocol to use. If it's a tuple, it should be (protocol, address). | | `job_name` | (Optional.) The name of the job. If provided, it must be a non-empty string.
This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs. | | `consumer_index` | (Optional.) The index of the consumer in the range from `0` to `num_consumers`. Must be specified alongside `num_consumers`. When specified, consumers will read from the job in a strict round-robin order, instead of the default first-come-first-served order. | | `num_consumers` | (Optional.) The number of consumers which will consume from the job. Must be specified alongside `consumer_index`. When specified, consumers will read from the job in a strict round-robin order, instead of the default first-come-first-served order. When `num_consumers` is specified, the dataset must have infinite cardinality to prevent a producer from running out of data early and causing consumers to go out of sync. | | `max_outstanding_requests` | (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since `distribute` won't use more than `element_size` \* `max_outstanding_requests` of memory. | | `data_transfer_protocol` | (Optional.) The protocol to use for transferring data with the tf.data service. By default, data is transferred using gRPC. | | `compression` | How to compress the dataset's elements before transferring them over the network. "AUTO" leaves the decision of how to compress up to the tf.data service runtime. `None` indicates not to compress. | | `target_workers` | (Optional.) Which workers to read from. If `"AUTO"`, tf.data runtime decides which workers to read from. If `"ANY"`, reads from any tf.data service workers. If `"LOCAL"`, only reads from local in-process tf.data service workers. `"AUTO"` works well for most cases, while users can specify other targets. For example, `"LOCAL"` helps avoid RPCs and data copy if every TF worker colocates with a tf.data service worker. Consumers of a shared job must use the same `target_workers`. Defaults to `"AUTO"`. | | Returns | | `Dataset` | A `Dataset` of the elements produced by the data service. |
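As referenced in the coordinated data read section above, here is a minimal sketch of enabling round-robin consumption (the dispatcher object, the job name, and the two consumer constants are illustrative assumptions; each training process would pass its own `consumer_index`):

```
import tensorflow as tf

NUM_CONSUMERS = 2   # assumed: total number of training processes
consumer_index = 0  # assumed: this process's index in [0, NUM_CONSUMERS)

dataset = tf.data.Dataset.range(100)
# Coordinated reads require infinite cardinality to keep consumers in sync.
dataset = dataset.repeat()
dataset = dataset.apply(tf.data.experimental.service.distribute(
    processing_mode="parallel_epochs",
    service=dispatcher.target,  # assumes a running DispatchServer
    job_name="shared_job",
    consumer_index=consumer_index,
    num_consumers=NUM_CONSUMERS))
```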
tensorflow tf.data.experimental.service.DispatcherConfig tf.data.experimental.service.DispatcherConfig ============================================= Configuration class for tf.data service dispatchers. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.service.DispatcherConfig`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/DispatcherConfig) ``` tf.data.experimental.service.DispatcherConfig( port=0, protocol=None, work_dir=None, fault_tolerant_mode=False, worker_addresses=None, job_gc_check_interval_ms=None, job_gc_timeout_ms=None ) ``` #### Fields: * **`port`**: Specifies the port to bind to. A value of 0 indicates that the server may bind to any available port. * **`protocol`**: The protocol to use for communicating with the tf.data service, e.g. "grpc". * **`work_dir`**: A directory to store dispatcher state in. This argument is required for the dispatcher to be able to recover from restarts. * **`fault_tolerant_mode`**: Whether the dispatcher should write its state to a journal so that it can recover from restarts. Dispatcher state, including registered datasets and created jobs, is synchronously written to the journal before responding to RPCs. If `True`, `work_dir` must also be specified. * **`worker_addresses`**: If the job uses auto-sharding, it needs to specify a fixed list of worker addresses that will register with the dispatcher. The worker addresses should be in the format `"host"` or `"host:port"`, where `"port"` is an integer, named port, or `%port%` to match any port. * **`job_gc_check_interval_ms`**: How often the dispatcher should scan through to delete old and unused jobs, in milliseconds. If not set, the runtime will select a reasonable default. A higher value will reduce load on the dispatcher, while a lower value will reduce the time it takes for the dispatcher to garbage collect expired jobs. * **`job_gc_timeout_ms`**: How long a job needs to be unused before it becomes a candidate for garbage collection, in milliseconds. A value of -1 indicates that jobs should never be garbage collected. If not set, the runtime will select a reasonable default. A higher value will cause jobs to stay around longer with no consumers. This is useful if there is a large gap in time between when consumers read from the job. A lower value will reduce the time it takes to reclaim the resources from expired jobs. | Attributes | | `port` | A `namedtuple` alias for field number 0 | | `protocol` | A `namedtuple` alias for field number 1 | | `work_dir` | A `namedtuple` alias for field number 2 | | `fault_tolerant_mode` | A `namedtuple` alias for field number 3 | | `worker_addresses` | A `namedtuple` alias for field number 4 | | `job_gc_check_interval_ms` | A `namedtuple` alias for field number 5 | | `job_gc_timeout_ms` | A `namedtuple` alias for field number 6 | tensorflow tf.data.experimental.service.WorkerServer tf.data.experimental.service.WorkerServer ========================================= An in-process tf.data service worker server. ``` tf.data.experimental.service.WorkerServer( config, start=True ) ``` A [`tf.data.experimental.service.WorkerServer`](workerserver) performs [`tf.data.Dataset`](../../dataset) processing for user-defined datasets, and provides the resulting elements over RPC. A worker is associated with a single [`tf.data.experimental.service.DispatchServer`](dispatchserver). 
``` dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] worker = tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="parallel_epochs", service=dispatcher.target)) print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` When starting a dedicated tf.data worker process, use join() to block indefinitely after starting up the server. ``` worker = tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( port=5051, dispatcher_address="localhost:5050")) worker.join() ``` | Args | | `config` | A [`tf.data.experimental.service.WorkerConfig`](workerconfig) configuration. | | `start` | (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True. | Methods ------- ### `join` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/service/server_lib.py#L356-L373) ``` join() ``` Blocks until the server has shut down. This is useful when starting a dedicated worker process. ``` worker_server = tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( port=5051, dispatcher_address="localhost:5050")) worker_server.join() ``` This method currently blocks forever. | Raises | | [`tf.errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Or one of its subclasses if an error occurs while joining the server. | ### `start` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/data/experimental/service/server_lib.py#L347-L354) ``` start() ``` Starts this server. | Raises | | [`tf.errors.OpError`](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) | Or one of its subclasses if an error occurs while starting the server. | tensorflow tf.data.experimental.service.WorkerConfig tf.data.experimental.service.WorkerConfig ========================================= Configuration class for tf.data service workers. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.service.WorkerConfig`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/WorkerConfig) ``` tf.data.experimental.service.WorkerConfig( dispatcher_address, worker_address=None, port=0, protocol=None, heartbeat_interval_ms=None, dispatcher_timeout_ms=None ) ``` #### Fields: * **`dispatcher_address`**: Specifies the address of the dispatcher. * **`worker_address`**: Specifies the address of the worker server. This address is passed to the dispatcher so that the dispatcher can tell clients how to connect to this worker. * **`port`**: Specifies the port to bind to. A value of 0 indicates that the worker can bind to any available port. * **`protocol`**: (Optional.) Specifies the protocol to be used by the server, e.g. "grpc". * **`heartbeat_interval_ms`**: How often the worker should heartbeat to the dispatcher, in milliseconds. If not set, the runtime will select a reasonable default. A higher value will reduce the load on the dispatcher, while a lower value will reduce the time it takes to reclaim resources from finished jobs. * **`dispatcher_timeout_ms`**: How long, in milliseconds, to retry requests to the dispatcher before giving up and reporting an error. Defaults to 1 hour.
| Attributes | | `dispatcher_address` | A `namedtuple` alias for field number 0 | | `worker_address` | A `namedtuple` alias for field number 1 | | `port` | A `namedtuple` alias for field number 2 | | `protocol` | A `namedtuple` alias for field number 3 | | `heartbeat_interval_ms` | A `namedtuple` alias for field number 4 | | `dispatcher_timeout_ms` | A `namedtuple` alias for field number 5 | tensorflow tf.data.experimental.service.register_dataset tf.data.experimental.service.register\_dataset ============================================== Registers a dataset with the tf.data service. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.data.experimental.service.register_dataset`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/register_dataset) ``` tf.data.experimental.service.register_dataset( service, dataset, compression='AUTO' ) ``` `register_dataset` registers a dataset with the tf.data service so that datasets can be created later with [`tf.data.experimental.service.from_dataset_id`](from_dataset_id). This is useful when the dataset is registered by one process, then used in another process. When the same process is both registering and reading from the dataset, it is simpler to use [`tf.data.experimental.service.distribute`](distribute) instead. If the dataset is already registered with the tf.data service, `register_dataset` returns the already-registered dataset's id. ``` dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] worker = tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) dataset = tf.data.Dataset.range(10) dataset_id = tf.data.experimental.service.register_dataset( dispatcher.target, dataset) dataset = tf.data.experimental.service.from_dataset_id( processing_mode="parallel_epochs", service=dispatcher.target, dataset_id=dataset_id, element_spec=dataset.element_spec) print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` | Args | | `service` | A string or a tuple indicating how to connect to the tf.data service. If it's a string, it should be in the format `[<protocol>://]<address>`, where `<address>` identifies the dispatcher address and `<protocol>` can optionally be used to override the default protocol to use. If it's a tuple, it should be (protocol, address). | | `dataset` | A [`tf.data.Dataset`](../../dataset) to register with the tf.data service. | | `compression` | (Optional.) How to compress the dataset's elements before transferring them over the network. "AUTO" leaves the decision of how to compress up to the tf.data service runtime. `None` indicates not to compress. | | Returns | | A scalar int64 tensor of the registered dataset's id. | tensorflow tf.data.experimental.service.from_dataset_id tf.data.experimental.service.from\_dataset\_id ============================================== Creates a dataset which reads data from the tf.data service. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.data.experimental.service.from_dataset_id`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/from_dataset_id) ``` tf.data.experimental.service.from_dataset_id( processing_mode, service, dataset_id, element_spec=None, job_name=None, consumer_index=None, num_consumers=None, max_outstanding_requests=None, data_transfer_protocol=None, target_workers='AUTO' ) ``` This is useful when the dataset is registered by one process, then used in another process. When the same process is both registering and reading from the dataset, it is simpler to use [`tf.data.experimental.service.distribute`](distribute) instead. Before using `from_dataset_id`, the dataset must have been registered with the tf.data service using [`tf.data.experimental.service.register_dataset`](register_dataset). `register_dataset` returns a dataset id for the registered dataset. That is the `dataset_id` which should be passed to `from_dataset_id`. The `element_spec` argument indicates the [`tf.TypeSpec`](../../../typespec)s for the elements produced by the dataset. Currently `element_spec` must be explicitly specified, and match the dataset registered under `dataset_id`. `element_spec` defaults to `None` so that in the future we can support automatically discovering the `element_spec` by querying the tf.data service. [`tf.data.experimental.service.distribute`](distribute) is a convenience method which combines `register_dataset` and `from_dataset_id` into a dataset transformation. See the documentation for [`tf.data.experimental.service.distribute`](distribute) for more detail about how `from_dataset_id` works. ``` dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] worker = tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) dataset = tf.data.Dataset.range(10) dataset_id = tf.data.experimental.service.register_dataset( dispatcher.target, dataset) dataset = tf.data.experimental.service.from_dataset_id( processing_mode="parallel_epochs", service=dispatcher.target, dataset_id=dataset_id, element_spec=dataset.element_spec) print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` | Args | | `processing_mode` | A [`tf.data.experimental.service.ShardingPolicy`](shardingpolicy) specifying how to shard the dataset among tf.data workers. See [`tf.data.experimental.service.ShardingPolicy`](shardingpolicy) for details. For backwards compatibility, `processing_mode` may also be set to the strings `"parallel_epochs"` or `"distributed_epoch"`, which are respectively equivalent to [`ShardingPolicy.OFF`](shardingpolicy#OFF) and [`ShardingPolicy.DYNAMIC`](shardingpolicy#DYNAMIC). | | `service` | A string or a tuple indicating how to connect to the tf.data service. If it's a string, it should be in the format `[<protocol>://]<address>`, where `<address>` identifies the dispatcher address and `<protocol>` can optionally be used to override the default protocol to use. If it's a tuple, it should be (protocol, address). | | `dataset_id` | The id of the dataset to read from. This id is returned by `register_dataset` when the dataset is registered with the tf.data service. | | `element_spec` | A nested structure of [`tf.TypeSpec`](../../../typespec)s representing the type of elements produced by the dataset. This argument is only required inside a tf.function. 
Use [`tf.data.Dataset.element_spec`](../../dataset#element_spec) to get the element spec for a given dataset. | | `job_name` | (Optional.) The name of the job. If provided, it must be a non-empty string. This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs. | | `consumer_index` | (Optional.) The index of the consumer in the range from `0` to `num_consumers`. Must be specified alongside `num_consumers`. When specified, consumers will read from the job in a strict round-robin order, instead of the default first-come-first-served order. | | `num_consumers` | (Optional.) The number of consumers which will consume from the job. Must be specified alongside `consumer_index`. When specified, consumers will read from the job in a strict round-robin order, instead of the default first-come-first-served order. When `num_consumers` is specified, the dataset must have infinite cardinality to prevent a producer from running out of data early and causing consumers to go out of sync. | | `max_outstanding_requests` | (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since `distribute` won't use more than `element_size` \* `max_outstanding_requests` of memory. | | `data_transfer_protocol` | (Optional.) The protocol to use for transferring data with the tf.data service. By default, data is transferred using gRPC. | | `target_workers` | (Optional.) Which workers to read from. If `"AUTO"`, tf.data runtime decides which workers to read from. If `"ANY"`, reads from any tf.data service workers. If `"LOCAL"`, only reads from local in-process tf.data service workers. `"AUTO"` works well for most cases, while users can specify other targets. For example, `"LOCAL"` helps avoid RPCs and data copy if every TF worker colocates with a tf.data service worker. Consumers of a shared job must use the same `target_workers`. Defaults to `"AUTO"`. | | Returns | | A [`tf.data.Dataset`](../../dataset) which reads from the tf.data service. | tensorflow tf.signal.rfft2d tf.signal.rfft2d ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/fft_ops.py#L113-L139) | 2D real-valued fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.rfft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/rfft2d), [`tf.compat.v1.spectral.rfft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/rfft2d) ``` tf.signal.rfft2d( input_tensor, fft_length=None, name=None ) ``` Computes the 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of `input`. Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension of `output`: the zero-frequency term, followed by the `fft_length / 2` positive-frequency terms. Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros. | Args | | `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`. A float32 tensor. | | `fft_length` | A `Tensor` of type `int32`. An int32 tensor of shape [2]. The FFT length for each dimension.
| | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `Tcomplex`. | tensorflow tf.signal.linear_to_mel_weight_matrix tf.signal.linear\_to\_mel\_weight\_matrix ========================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/mel_ops.py#L89-L215) | Returns a matrix to warp linear scale spectrograms to the [mel scale](https://en.wikipedia.org/wiki/Mel_scale). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.linear_to_mel_weight_matrix`](https://www.tensorflow.org/api_docs/python/tf/signal/linear_to_mel_weight_matrix) ``` tf.signal.linear_to_mel_weight_matrix( num_mel_bins=20, num_spectrogram_bins=129, sample_rate=8000, lower_edge_hertz=125.0, upper_edge_hertz=3800.0, dtype=tf.dtypes.float32, name=None ) ``` Returns a weight matrix that can be used to re-weight a `Tensor` containing `num_spectrogram_bins` linearly sampled frequency information from `[0, sample_rate / 2]` into `num_mel_bins` frequency information from `[lower_edge_hertz, upper_edge_hertz]` on the [mel scale](https://en.wikipedia.org/wiki/Mel_scale). This function follows the [Hidden Markov Model Toolkit (HTK)](http://htk.eng.cam.ac.uk/) convention, defining the mel scale in terms of a frequency in hertz according to the following formula: ``` $$\textrm{mel}(f) = 2595 * \textrm{log}_{10}(1 + \frac{f}{700})$$ ``` In the returned matrix, all the triangles (filterbanks) have a peak value of 1.0. For example, the returned matrix `A` can be used to right-multiply a spectrogram `S` of shape `[frames, num_spectrogram_bins]` of linear scale spectrum values (e.g. STFT magnitudes) to generate a "mel spectrogram" `M` of shape `[frames, num_mel_bins]`. ``` # `S` has shape [frames, num_spectrogram_bins] # `M` has shape [frames, num_mel_bins] M = tf.matmul(S, A) ``` The matrix can be used with [`tf.tensordot`](../tensordot) to convert an arbitrary rank `Tensor` of linear-scale spectral bins into the mel scale. ``` # S has shape [..., num_spectrogram_bins]. # M has shape [..., num_mel_bins]. M = tf.tensordot(S, A, 1) ``` | Args | | `num_mel_bins` | Python int. How many bands in the resulting mel spectrum. | | `num_spectrogram_bins` | An integer `Tensor`. How many bins there are in the source spectrogram data, which is understood to be `fft_size // 2 + 1`, i.e. the spectrogram only contains the nonredundant FFT bins. | | `sample_rate` | An integer or float `Tensor`. Samples per second of the input signal used to create the spectrogram. Used to figure out the frequencies corresponding to each spectrogram bin, which dictates how they are mapped into the mel scale. | | `lower_edge_hertz` | Python float. Lower bound on the frequencies to be included in the mel spectrum. This corresponds to the lower edge of the lowest triangular band. | | `upper_edge_hertz` | Python float. The desired top edge of the highest frequency band. | | `dtype` | The `DType` of the result matrix. Must be a floating point type. | | `name` | An optional name for the operation. | | Returns | | A `Tensor` of shape `[num_spectrogram_bins, num_mel_bins]`. | | Raises | | `ValueError` | If `num_mel_bins`/`num_spectrogram_bins`/`sample_rate` are not positive, `lower_edge_hertz` is negative, frequency edges are incorrectly ordered, or `upper_edge_hertz` is larger than the Nyquist frequency. |
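Putting this together with [`tf.signal.stft`](stft) (documented below), a sketch of the common log-mel front end the matrix is designed for; the frame sizes, bin counts, and frequency edges here are illustrative assumptions:

```
import tensorflow as tf

sample_rate = 16000                    # assumed input rate
pcm = tf.random.normal([sample_rate])  # stand-in for 1 second of audio

stfts = tf.signal.stft(pcm, frame_length=400, frame_step=160, fft_length=512)
magnitudes = tf.abs(stfts)             # shape [frames, 257]

mel_matrix = tf.signal.linear_to_mel_weight_matrix(
    num_mel_bins=80,
    num_spectrogram_bins=magnitudes.shape[-1],  # fft_length // 2 + 1
    sample_rate=sample_rate,
    lower_edge_hertz=80.0,
    upper_edge_hertz=7600.0)
mel = tf.matmul(magnitudes, mel_matrix)  # shape [frames, 80]
log_mel = tf.math.log(mel + 1e-6)        # small offset avoids log(0)
```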
tensorflow tf.signal.stft tf.signal.stft ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/spectral_ops.py#L34-L92) | Computes the [Short-time Fourier Transform](https://en.wikipedia.org/wiki/Short-time_Fourier_transform) of `signals`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.stft`](https://www.tensorflow.org/api_docs/python/tf/signal/stft) ``` tf.signal.stft( signals, frame_length, frame_step, fft_length=None, window_fn=tf.signal.hann_window, pad_end=False, name=None ) ``` Implemented with TPU/GPU-compatible ops and supports gradients. | Args | | `signals` | A `[..., samples]` `float32`/`float64` `Tensor` of real-valued signals. | | `frame_length` | An integer scalar `Tensor`. The window length in samples. | | `frame_step` | An integer scalar `Tensor`. The number of samples to step. | | `fft_length` | An integer scalar `Tensor`. The size of the FFT to apply. If not provided, uses the smallest power of 2 enclosing `frame_length`. | | `window_fn` | A callable that takes a window length and a `dtype` keyword argument and returns a `[window_length]` `Tensor` of samples in the provided datatype. If set to `None`, no windowing is used. | | `pad_end` | Whether to pad the end of `signals` with zeros when the provided frame length and step produces a frame that lies partially past its end. | | `name` | An optional name for the operation. | | Returns | | A `[..., frames, fft_unique_bins]` `Tensor` of `complex64`/`complex128` STFT values where `fft_unique_bins` is `fft_length // 2 + 1` (the unique components of the FFT). | | Raises | | `ValueError` | If `signals` is not at least rank 1, `frame_length` is not scalar, or `frame_step` is not scalar. | tensorflow tf.signal.irfft3d tf.signal.irfft3d ================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/fft_ops.py#L147-L169) | Inverse 3D real-valued fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.irfft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/irfft3d), [`tf.compat.v1.spectral.irfft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/irfft3d) ``` tf.signal.irfft3d( input_tensor, fft_length=None, name=None ) ``` Computes the inverse 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of `input`. The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`: The inner-most dimension contains the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most 3 dimensions of `input`. If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly. Along each axis `IRFFT3D` is computed on, if `fft_length` (or `fft_length / 2 + 1` for the inner-most dimension) is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `fft_length` | A `Tensor` of type `int32`. An int32 tensor of shape [3]. The FFT length for each dimension. | | `name` | A name for the operation (optional). 
| | Returns | | A `Tensor` of type `Treal`. | tensorflow tf.signal.mdct tf.signal.mdct ============== Computes the [Modified Discrete Cosine Transform](https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform) of `signals`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.mdct`](https://www.tensorflow.org/api_docs/python/tf/signal/mdct) ``` tf.signal.mdct( signals, frame_length, window_fn=tf.signal.vorbis_window, pad_end=False, norm=None, name=None ) ``` Implemented with TPU/GPU-compatible ops and supports gradients. | Args | | `signals` | A `[..., samples]` `float32`/`float64` `Tensor` of real-valued signals. | | `frame_length` | An integer scalar `Tensor`. The window length in samples which must be divisible by 4. | | `window_fn` | A callable that takes a frame\_length and a `dtype` keyword argument and returns a `[frame_length]` `Tensor` of samples in the provided datatype. If set to `None`, a rectangular window with a scale of 1/sqrt(2) is used. For perfect reconstruction of a signal from `mdct` followed by `inverse_mdct`, please use [`tf.signal.vorbis_window`](vorbis_window), [`tf.signal.kaiser_bessel_derived_window`](kaiser_bessel_derived_window) or `None`. If using another window function, make sure that w[n]^2 + w[n + frame\_length // 2]^2 = 1 and w[n] = w[frame\_length - n - 1] for n = 0,...,frame\_length // 2 - 1 to achieve perfect reconstruction. | | `pad_end` | Whether to pad the end of `signals` with zeros when the provided frame length and step produces a frame that lies partially past its end. | | `norm` | If it is None, unnormalized dct4 is used, if it is "ortho" orthonormal dct4 is used. | | `name` | An optional name for the operation. | | Returns | | A `[..., frames, frame_length // 2]` `Tensor` of `float32`/`float64` MDCT values where `frames` is roughly `samples // (frame_length // 2)` when `pad_end=False`. | | Raises | | `ValueError` | If `signals` is not at least rank 1, `frame_length` is not scalar, or `frame_length` is not a multiple of `4`. | tensorflow tf.signal.fft tf.signal.fft ============= Fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.fft`](https://www.tensorflow.org/api_docs/python/tf/signal/fft), [`tf.compat.v1.signal.fft`](https://www.tensorflow.org/api_docs/python/tf/signal/fft), [`tf.compat.v1.spectral.fft`](https://www.tensorflow.org/api_docs/python/tf/signal/fft) ``` tf.signal.fft( input, name=None ) ``` Computes the 1-dimensional discrete Fourier transform over the inner-most dimension of `input`. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `input`. | tensorflow tf.signal.frame tf.signal.frame =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/shape_ops.py#L55-L231) | Expands `signal`'s `axis` dimension into frames of `frame_length`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.signal.frame`](https://www.tensorflow.org/api_docs/python/tf/signal/frame) ``` tf.signal.frame( signal, frame_length, frame_step, pad_end=False, pad_value=0, axis=-1, name=None ) ``` Slides a window of size `frame_length` over `signal`'s `axis` dimension with a stride of `frame_step`, replacing the `axis` dimension with `[frames, frame_length]` frames. If `pad_end` is True, window positions that are past the end of the `axis` dimension are padded with `pad_value` until the window moves fully past the end of the dimension. Otherwise, only window positions that fully overlap the `axis` dimension are produced. #### For example: ``` # A batch size 3 tensor of 9152 audio samples. audio = tf.random.normal([3, 9152]) # Compute overlapping frames of length 512 with a step of 180 (frames overlap # by 332 samples). By default, only 49 frames are generated since a frame # with start position j*180 for j > 48 would overhang the end. frames = tf.signal.frame(audio, 512, 180) frames.shape.assert_is_compatible_with([3, 49, 512]) # When pad_end is enabled, the final two frames are kept (padded with zeros). frames = tf.signal.frame(audio, 512, 180, pad_end=True) frames.shape.assert_is_compatible_with([3, 51, 512]) ``` If the dimension along `axis` is N, and `pad_end=False`, the number of frames can be computed by: ``` num_frames = 1 + (N - frame_length) // frame_step ``` If `pad_end=True`, the number of frames can be computed by: ``` num_frames = -(-N // frame_step) # ceiling division ``` | Args | | `signal` | A `[..., samples, ...]` `Tensor`. The rank and dimensions may be unknown. Rank must be at least 1. | | `frame_length` | The frame length in samples. An integer or scalar `Tensor`. | | `frame_step` | The frame hop size in samples. An integer or scalar `Tensor`. | | `pad_end` | Whether to pad the end of `signal` with `pad_value`. | | `pad_value` | An optional scalar `Tensor` to use where the input signal does not exist when `pad_end` is True. | | `axis` | A scalar integer `Tensor` indicating the axis to frame. Defaults to the last axis. Supports negative values for indexing from the end. | | `name` | An optional name for the operation. | | Returns | | A `Tensor` of frames with shape `[..., num_frames, frame_length, ...]`. | | Raises | | `ValueError` | If `frame_length`, `frame_step`, `pad_value`, or `axis` are not scalar. | tensorflow tf.signal.irfft2d tf.signal.irfft2d ================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/fft_ops.py#L147-L169) | Inverse 2D real-valued fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.irfft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/irfft2d), [`tf.compat.v1.spectral.irfft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/irfft2d) ``` tf.signal.irfft2d( input_tensor, fft_length=None, name=None ) ``` Computes the inverse 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of `input`. The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`: The inner-most dimension contains the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most 2 dimensions of `input`. If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly.
Along each axis `IRFFT2D` is computed on, if `fft_length` (or `fft_length / 2 + 1` for the inner-most dimension) is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `fft_length` | A `Tensor` of type `int32`. An int32 tensor of shape [2]. The FFT length for each dimension. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `Treal`. | tensorflow tf.signal.rfft3d tf.signal.rfft3d ================ [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/fft_ops.py#L113-L139) | 3D real-valued fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.rfft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/rfft3d), [`tf.compat.v1.spectral.rfft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/rfft3d) ``` tf.signal.rfft3d( input_tensor, fft_length=None, name=None ) ``` Computes the 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of `input`. Since the DFT of a real signal is Hermitian-symmetric, `RFFT3D` only returns the `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension of `output`: the zero-frequency term, followed by the `fft_length / 2` positive-frequency terms. Along each axis `RFFT3D` is computed on, if `fft_length` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros. | Args | | `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`. A float32 tensor. | | `fft_length` | A `Tensor` of type `int32`. An int32 tensor of shape [3]. The FFT length for each dimension. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `Tcomplex`. | tensorflow tf.signal.hann_window tf.signal.hann\_window ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/window_ops.py#L145-L168) | Generate a [Hann window](https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.hann_window`](https://www.tensorflow.org/api_docs/python/tf/signal/hann_window) ``` tf.signal.hann_window( window_length, periodic=True, dtype=tf.dtypes.float32, name=None ) ``` | Args | | `window_length` | A scalar `Tensor` indicating the window length to generate. | | `periodic` | A bool `Tensor` indicating whether to generate a periodic or symmetric window. Periodic windows are typically used for spectral analysis while symmetric windows are typically used for digital filter design. | | `dtype` | The data type to produce. Must be a floating point type. | | `name` | An optional name for the operation. | | Returns | | A `Tensor` of shape `[window_length]` of type `dtype`. | | Raises | | `ValueError` | If `dtype` is not a floating point type. | tensorflow tf.signal.irfft tf.signal.irfft =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/fft_ops.py#L147-L169) | Inverse real-valued fast Fourier transform. 
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.irfft`](https://www.tensorflow.org/api_docs/python/tf/signal/irfft), [`tf.compat.v1.spectral.irfft`](https://www.tensorflow.org/api_docs/python/tf/signal/irfft) ``` tf.signal.irfft( input_tensor, fft_length=None, name=None ) ``` Computes the inverse 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of `input`. The inner-most dimension of `input` is assumed to be the result of `RFFT`: the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly. Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `fft_length` | A `Tensor` of type `int32`. An int32 tensor of shape [1]. The FFT length. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `Treal`. | tensorflow tf.signal.dct tf.signal.dct ============= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/dct_ops.py#L49-L177) | Computes the 1D [Discrete Cosine Transform (DCT)](https://en.wikipedia.org/wiki/Discrete_cosine_transform) of `input`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.dct`](https://www.tensorflow.org/api_docs/python/tf/signal/dct), [`tf.compat.v1.spectral.dct`](https://www.tensorflow.org/api_docs/python/tf/signal/dct) ``` tf.signal.dct( input, type=2, n=None, axis=-1, norm=None, name=None ) ``` Types I, II, III and IV are supported. Type I is implemented using a length `2N` padded [`tf.signal.rfft`](rfft). Type II is implemented using a length `2N` padded [`tf.signal.rfft`](rfft), as described here: [Type 2 DCT using 2N FFT padded (Makhoul)](https://dsp.stackexchange.com/a/10606). Type III is a fairly straightforward inverse of Type II (i.e. using a length `2N` padded [`tf.signal.irfft`](irfft)). Type IV is calculated through a 2N-length DCT2 of the padded signal, taking the odd indices. | Args | | `input` | A `[..., samples]` `float32`/`float64` `Tensor` containing the signals to take the DCT of. | | `type` | The DCT type to perform. Must be 1, 2, 3 or 4. | | `n` | The length of the transform. If length is less than sequence length, only the first `n` elements of the sequence are considered for the DCT. If `n` is greater than the sequence length, zeros are padded and then the DCT is computed as usual. | | `axis` | For future expansion. The axis to compute the DCT along. Must be `-1`. | | `norm` | The normalization to apply. `None` for no normalization or `'ortho'` for orthonormal normalization. | | `name` | An optional name for the operation. | | Returns | | A `[..., samples]` `float32`/`float64` `Tensor` containing the DCT of `input`. | | Raises | | `ValueError` | If `type` is not `1`, `2`, `3` or `4`, `axis` is not `-1`, `n` is neither `None` nor greater than `0`, or `norm` is not `None` or `'ortho'`.
| | `ValueError` | If `type` is `1` and `norm` is `ortho`. | scipy compatibility ------------------- Equivalent to [scipy.fftpack.dct](https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.dct.html) for Type-I, Type-II, Type-III and Type-IV DCT.
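Since the type-II and type-III transforms are orthonormal inverses of each other under `norm='ortho'`, a quick round-trip check (a minimal sketch):

```
import tensorflow as tf

x = tf.random.normal([8])
X = tf.signal.dct(x, type=2, norm='ortho')
x_rec = tf.signal.dct(X, type=3, norm='ortho')

# Type III inverts type II under orthonormal normalization.
tf.debugging.assert_near(x, x_rec, atol=1e-5)
```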
tensorflow tf.signal.vorbis_window tf.signal.vorbis\_window ======================== Generate a [Vorbis power complementary window](https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform#Window_functions). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.vorbis_window`](https://www.tensorflow.org/api_docs/python/tf/signal/vorbis_window) ``` tf.signal.vorbis_window( window_length, dtype=tf.dtypes.float32, name=None ) ``` | Args | | `window_length` | A scalar `Tensor` indicating the window length to generate. | | `dtype` | The data type to produce. Must be a floating point type. | | `name` | An optional name for the operation. | | Returns | | A `Tensor` of shape `[window_length]` of type `dtype`. | tensorflow tf.signal.rfft tf.signal.rfft ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/fft_ops.py#L113-L139) | Real-valued fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.rfft`](https://www.tensorflow.org/api_docs/python/tf/signal/rfft), [`tf.compat.v1.spectral.rfft`](https://www.tensorflow.org/api_docs/python/tf/signal/rfft) ``` tf.signal.rfft( input_tensor, fft_length=None, name=None ) ``` Computes the 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of `input`. Since the DFT of a real signal is Hermitian-symmetric, `RFFT` only returns the `fft_length / 2 + 1` unique components of the FFT: the zero-frequency term, followed by the `fft_length / 2` positive-frequency terms. Along the axis `RFFT` is computed on, if `fft_length` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros. | Args | | `input` | A `Tensor`. Must be one of the following types: `float32`, `float64`. A float32 tensor. | | `fft_length` | A `Tensor` of type `int32`. An int32 tensor of shape [1]. The FFT length. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor` of type `Tcomplex`. | tensorflow tf.signal.ifft3d tf.signal.ifft3d ================ Inverse 3D fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.ifft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft3d), [`tf.compat.v1.signal.ifft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft3d), [`tf.compat.v1.spectral.ifft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft3d) ``` tf.signal.ifft3d( input, name=None ) ``` Computes the inverse 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of `input`. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `input`. | tensorflow tf.signal.hamming_window tf.signal.hamming\_window ========================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/window_ops.py#L171-L196) | Generate a [Hamming](https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows) window. 
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.hamming_window`](https://www.tensorflow.org/api_docs/python/tf/signal/hamming_window) ``` tf.signal.hamming_window( window_length, periodic=True, dtype=tf.dtypes.float32, name=None ) ``` | Args | | `window_length` | A scalar `Tensor` indicating the window length to generate. | | `periodic` | A bool `Tensor` indicating whether to generate a periodic or symmetric window. Periodic windows are typically used for spectral analysis while symmetric windows are typically used for digital filter design. | | `dtype` | The data type to produce. Must be a floating point type. | | `name` | An optional name for the operation. | | Returns | | A `Tensor` of shape `[window_length]` of type `dtype`. | | Raises | | `ValueError` | If `dtype` is not a floating point type. | tensorflow tf.signal.fft3d tf.signal.fft3d =============== 3D fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.fft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/fft3d), [`tf.compat.v1.signal.fft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/fft3d), [`tf.compat.v1.spectral.fft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/fft3d) ``` tf.signal.fft3d( input, name=None ) ``` Computes the 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of `input`. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `input`. | tensorflow tf.signal.ifftshift tf.signal.ifftshift =================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/fft_ops.py#L415-L457) | The inverse of fftshift. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.ifftshift`](https://www.tensorflow.org/api_docs/python/tf/signal/ifftshift) ``` tf.signal.ifftshift( x, axes=None, name=None ) ``` Although identical for even-length x, the functions differ by one sample for odd-length x. #### For example: ``` x = tf.signal.ifftshift([[ 0., 1., 2.],[ 3., 4., -4.],[-3., -2., -1.]]) x.numpy() # array([[ 4., -4., 3.],[-2., -1., -3.],[ 1., 2., 0.]]) ``` | Args | | `x` | `Tensor`, input tensor. | | `axes` | `int` or shape `tuple`. Axes over which to calculate. Defaults to None, which shifts all axes. | | `name` | An optional name for the operation. | | Returns | | A `Tensor`, the shifted tensor. | numpy compatibility ------------------- Equivalent to numpy.fft.ifftshift. <https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.ifftshift.html> tensorflow tf.signal.kaiser_window tf.signal.kaiser\_window ======================== Generate a [Kaiser window](https://docs.scipy.org/doc/numpy/reference/generated/numpy.kaiser.html). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.kaiser_window`](https://www.tensorflow.org/api_docs/python/tf/signal/kaiser_window) ``` tf.signal.kaiser_window( window_length, beta=12.0, dtype=tf.dtypes.float32, name=None ) ``` | Args | | `window_length` | A scalar `Tensor` indicating the window length to generate. 
| | `beta` | Beta parameter for Kaiser window, see reference below. | | `dtype` | The data type to produce. Must be a floating point type. | | `name` | An optional name for the operation. | | Returns | | A `Tensor` of shape `[window_length]` of type `dtype`. | tensorflow tf.signal.overlap_and_add tf.signal.overlap\_and\_add =========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/reconstruction_ops.py#L26-L163) | Reconstructs a signal from a framed representation. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.overlap_and_add`](https://www.tensorflow.org/api_docs/python/tf/signal/overlap_and_add) ``` tf.signal.overlap_and_add( signal, frame_step, name=None ) ``` Adds potentially overlapping frames of a signal with shape `[..., frames, frame_length]`, offsetting subsequent frames by `frame_step`. The resulting tensor has shape `[..., output_size]` where ``` output_size = (frames - 1) * frame_step + frame_length ``` | Args | | `signal` | A [..., frames, frame\_length] `Tensor`. All dimensions may be unknown, and rank must be at least 2. | | `frame_step` | An integer or scalar `Tensor` denoting overlap offsets. Must be less than or equal to `frame_length`. | | `name` | An optional name for the operation. | | Returns | | A `Tensor` with shape `[..., output_size]` containing the overlap-added frames of `signal`'s inner-most two dimensions. | | Raises | | `ValueError` | If `signal`'s rank is less than 2, or `frame_step` is not a scalar integer. | tensorflow tf.signal.idct tf.signal.idct ============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/dct_ops.py#L181-L224) | Computes the 1D [Inverse Discrete Cosine Transform (DCT)](https://en.wikipedia.org/wiki/Discrete_cosine_transform#Inverse_transforms) of `input`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.idct`](https://www.tensorflow.org/api_docs/python/tf/signal/idct), [`tf.compat.v1.spectral.idct`](https://www.tensorflow.org/api_docs/python/tf/signal/idct) ``` tf.signal.idct( input, type=2, n=None, axis=-1, norm=None, name=None ) ``` Currently Types I, II, III, IV are supported. Type III is the inverse of Type II, and vice versa. Note that you must re-normalize by 1/(2n) to obtain an inverse if `norm` is not `'ortho'`. That is: `signal == idct(dct(signal)) * 0.5 / signal.shape[-1]`. When `norm='ortho'`, we have: `signal == idct(dct(signal, norm='ortho'), norm='ortho')`. | Args | | `input` | A `[..., samples]` `float32`/`float64` `Tensor` containing the signals to take the DCT of. | | `type` | The IDCT type to perform. Must be 1, 2, 3 or 4. | | `n` | For future expansion. The length of the transform. Must be `None`. | | `axis` | For future expansion. The axis to compute the DCT along. Must be `-1`. | | `norm` | The normalization to apply. `None` for no normalization or `'ortho'` for orthonormal normalization. | | `name` | An optional name for the operation. | | Returns | | A `[..., samples]` `float32`/`float64` `Tensor` containing the IDCT of `input`. | | Raises | | `ValueError` | If `type` is not `1`, `2`, `3` or `4`, `n` is not `None`, `axis` is not `-1`, or `norm` is not `None` or `'ortho'`. 
| scipy compatibility ------------------- Equivalent to [scipy.fftpack.idct](https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.idct.html) for Type-I, Type-II, Type-III and Type-IV DCT. tensorflow tf.signal.inverse_mdct tf.signal.inverse\_mdct ======================= Computes the inverse modified DCT of `mdcts`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.inverse_mdct`](https://www.tensorflow.org/api_docs/python/tf/signal/inverse_mdct) ``` tf.signal.inverse_mdct( mdcts, window_fn=tf.signal.vorbis_window, norm=None, name=None ) ``` To reconstruct an original waveform, the same window function should be used with `mdct` and `inverse_mdct`. #### Example usage: ``` @tf.function def compare_round_trip(): samples = 1000 frame_length = 400 halflen = frame_length // 2 waveform = tf.random.normal(dtype=tf.float32, shape=[samples]) waveform_pad = tf.pad(waveform, [[halflen, 0],]) mdct = tf.signal.mdct(waveform_pad, frame_length, pad_end=True, window_fn=tf.signal.vorbis_window) inverse_mdct = tf.signal.inverse_mdct(mdct, window_fn=tf.signal.vorbis_window) inverse_mdct = inverse_mdct[halflen: halflen + samples] return waveform, inverse_mdct waveform, inverse_mdct = compare_round_trip() np.allclose(waveform.numpy(), inverse_mdct.numpy(), rtol=1e-3, atol=1e-4) True ``` Implemented with TPU/GPU-compatible ops and supports gradients. | Args | | `mdcts` | A `float32`/`float64` `[..., frames, frame_length // 2]` `Tensor` of MDCT bins representing a batch of `frame_length // 2`-point MDCTs. | | `window_fn` | A callable that takes a frame\_length and a `dtype` keyword argument and returns a `[frame_length]` `Tensor` of samples in the provided datatype. If set to `None`, a rectangular window with a scale of 1/sqrt(2) is used. For perfect reconstruction of a signal from `mdct` followed by `inverse_mdct`, please use [`tf.signal.vorbis_window`](vorbis_window), [`tf.signal.kaiser_bessel_derived_window`](kaiser_bessel_derived_window) or `None`. If using another window function, make sure that w[n]^2 + w[n + frame\_length // 2]^2 = 1 and w[n] = w[frame\_length - n - 1] for n = 0,...,frame\_length // 2 - 1 to achieve perfect reconstruction. | | `norm` | If `'ortho'`, an orthonormal inverse DCT4 is performed; if `None`, a regular DCT4 followed by scaling of `1/frame_length` is performed. | | `name` | An optional name for the operation. | | Returns | | A `[..., samples]` `Tensor` of `float32`/`float64` signals representing the inverse MDCT for each input MDCT in `mdcts` where `samples` is `(frames - 1) * (frame_length // 2) + frame_length`. | | Raises | | `ValueError` | If `mdcts` is not at least rank 2. | tensorflow tf.signal.inverse_stft tf.signal.inverse\_stft ======================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/spectral_ops.py#L155-L274) | Computes the inverse [Short-time Fourier Transform](https://en.wikipedia.org/wiki/Short-time_Fourier_transform) of `stfts`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. 
[`tf.compat.v1.signal.inverse_stft`](https://www.tensorflow.org/api_docs/python/tf/signal/inverse_stft) ``` tf.signal.inverse_stft( stfts, frame_length, frame_step, fft_length=None, window_fn=tf.signal.hann_window, name=None ) ``` To reconstruct an original waveform, a complementary window function should be used with `inverse_stft`. Such a window function can be constructed with [`tf.signal.inverse_stft_window_fn`](inverse_stft_window_fn). Example: ``` frame_length = 400 frame_step = 160 waveform = tf.random.normal(dtype=tf.float32, shape=[1000]) stft = tf.signal.stft(waveform, frame_length, frame_step) inverse_stft = tf.signal.inverse_stft( stft, frame_length, frame_step, window_fn=tf.signal.inverse_stft_window_fn(frame_step)) ``` If a custom `window_fn` is used with [`tf.signal.stft`](stft), it must be passed to [`tf.signal.inverse_stft_window_fn`](inverse_stft_window_fn): ``` frame_length = 400 frame_step = 160 window_fn = tf.signal.hamming_window waveform = tf.random.normal(dtype=tf.float32, shape=[1000]) stft = tf.signal.stft( waveform, frame_length, frame_step, window_fn=window_fn) inverse_stft = tf.signal.inverse_stft( stft, frame_length, frame_step, window_fn=tf.signal.inverse_stft_window_fn( frame_step, forward_window_fn=window_fn)) ``` Implemented with TPU/GPU-compatible ops and supports gradients. | Args | | `stfts` | A `complex64`/`complex128` `[..., frames, fft_unique_bins]` `Tensor` of STFT bins representing a batch of `fft_length`-point STFTs where `fft_unique_bins` is `fft_length // 2 + 1` | | `frame_length` | An integer scalar `Tensor`. The window length in samples. | | `frame_step` | An integer scalar `Tensor`. The number of samples to step. | | `fft_length` | An integer scalar `Tensor`. The size of the FFT that produced `stfts`. If not provided, uses the smallest power of 2 enclosing `frame_length`. | | `window_fn` | A callable that takes a window length and a `dtype` keyword argument and returns a `[window_length]` `Tensor` of samples in the provided datatype. If set to `None`, no windowing is used. | | `name` | An optional name for the operation. | | Returns | | A `[..., samples]` `Tensor` of `float32`/`float64` signals representing the inverse STFT for each input STFT in `stfts`. | | Raises | | `ValueError` | If `stfts` is not at least rank 2, `frame_length` is not scalar, `frame_step` is not scalar, or `fft_length` is not scalar. | tensorflow tf.signal.kaiser_bessel_derived_window tf.signal.kaiser\_bessel\_derived\_window ========================================= Generate a [Kaiser Bessel derived window](https://en.wikipedia.org/wiki/Kaiser_window#Kaiser%E2%80%93Bessel-derived_(KBD)_window). #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.kaiser_bessel_derived_window`](https://www.tensorflow.org/api_docs/python/tf/signal/kaiser_bessel_derived_window) ``` tf.signal.kaiser_bessel_derived_window( window_length, beta=12.0, dtype=tf.dtypes.float32, name=None ) ``` | Args | | `window_length` | A scalar `Tensor` indicating the window length to generate. | | `beta` | Beta parameter for Kaiser window. | | `dtype` | The data type to produce. Must be a floating point type. | | `name` | An optional name for the operation. | | Returns | | A `Tensor` of shape `[window_length]` of type `dtype`. | tensorflow tf.signal.ifft2d tf.signal.ifft2d ================ Inverse 2D fast Fourier transform. 
#### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.ifft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft2d), [`tf.compat.v1.signal.ifft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft2d), [`tf.compat.v1.spectral.ifft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft2d) ``` tf.signal.ifft2d( input, name=None ) ``` Computes the inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of `input`. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `input`. | tensorflow tf.signal.fft2d tf.signal.fft2d =============== 2D fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.fft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/fft2d), [`tf.compat.v1.signal.fft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/fft2d), [`tf.compat.v1.spectral.fft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/fft2d) ``` tf.signal.fft2d( input, name=None ) ``` Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of `input`. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `input`. | tensorflow tf.signal.mfccs_from_log_mel_spectrograms tf.signal.mfccs\_from\_log\_mel\_spectrograms ============================================= [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/mfcc_ops.py#L25-L107) | Computes [MFCCs](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum) of `log_mel_spectrograms`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.mfccs_from_log_mel_spectrograms`](https://www.tensorflow.org/api_docs/python/tf/signal/mfccs_from_log_mel_spectrograms) ``` tf.signal.mfccs_from_log_mel_spectrograms( log_mel_spectrograms, name=None ) ``` Implemented with GPU-compatible ops and supports gradients. [Mel-Frequency Cepstral Coefficient (MFCC)](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum) calculation consists of taking the DCT-II of a log-magnitude mel-scale spectrogram. [HTK](https://en.wikipedia.org/wiki/HTK_(software))'s MFCCs use a particular scaling of the DCT-II which is almost orthogonal normalization. We follow this convention. All `num_mel_bins` MFCCs are returned and it is up to the caller to select a subset of the MFCCs based on their application. For example, it is typical to only use the first few for speech recognition, as this results in an approximately pitch-invariant representation of the signal. #### For example: ``` batch_size, num_samples, sample_rate = 32, 32000, 16000.0 # A Tensor of [batch_size, num_samples] mono PCM samples in the range [-1, 1]. pcm = tf.random.normal([batch_size, num_samples], dtype=tf.float32) # A 1024-point STFT with frames of 64 ms and 75% overlap. stfts = tf.signal.stft(pcm, frame_length=1024, frame_step=256, fft_length=1024) spectrograms = tf.abs(stfts) # Warp the linear scale spectrograms into the mel-scale. 
num_spectrogram_bins = stfts.shape[-1] lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80 linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix( num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz, upper_edge_hertz) mel_spectrograms = tf.tensordot( spectrograms, linear_to_mel_weight_matrix, 1) mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate( linear_to_mel_weight_matrix.shape[-1:])) # Compute a stabilized log to get log-magnitude mel-scale spectrograms. log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6) # Compute MFCCs from log_mel_spectrograms and take the first 13. mfccs = tf.signal.mfccs_from_log_mel_spectrograms( log_mel_spectrograms)[..., :13] ``` | Args | | `log_mel_spectrograms` | A `[..., num_mel_bins]` `float32`/`float64` `Tensor` of log-magnitude mel-scale spectrograms. | | `name` | An optional name for the operation. | | Returns | | A `[..., num_mel_bins]` `float32`/`float64` `Tensor` of the MFCCs of `log_mel_spectrograms`. | | Raises | | `ValueError` | If `num_mel_bins` is not positive. |
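Tying together the `tf.signal.dct` and `tf.signal.idct` entries above, the following sketch numerically checks the two round-trip identities they quote (the length-8 random input is an arbitrary choice for illustration):

```
import tensorflow as tf

x = tf.random.normal([8])
n = tf.cast(tf.shape(x)[-1], x.dtype)

# An unnormalized DCT-II / DCT-III round trip picks up a factor of 2n.
y = tf.signal.idct(tf.signal.dct(x)) * 0.5 / n

# The orthonormal round trip needs no rescaling.
z = tf.signal.idct(tf.signal.dct(x, norm='ortho'), norm='ortho')

print(tf.reduce_max(tf.abs(x - y)).numpy())  # ~0, up to float error
print(tf.reduce_max(tf.abs(x - z)).numpy())  # ~0, up to float error
```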
tensorflow tf.signal.fftshift tf.signal.fftshift ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/fft_ops.py#L370-L412) | Shift the zero-frequency component to the center of the spectrum. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.fftshift`](https://www.tensorflow.org/api_docs/python/tf/signal/fftshift) ``` tf.signal.fftshift( x, axes=None, name=None ) ``` This function swaps half-spaces for all axes listed (defaults to all). Note that `y[0]` is the Nyquist component only if `len(x)` is even. #### For example: ``` x = tf.signal.fftshift([ 0., 1., 2., 3., 4., -5., -4., -3., -2., -1.]) x.numpy() # array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.]) ``` | Args | | `x` | `Tensor`, input tensor. | | `axes` | `int` or shape `tuple`, optional. Axes over which to shift. Defaults to None, which shifts all axes. | | `name` | An optional name for the operation. | | Returns | | A `Tensor`, the shifted tensor. | numpy compatibility ------------------- Equivalent to numpy.fft.fftshift. <https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftshift.html> tensorflow tf.signal.ifft tf.signal.ifft ============== Inverse fast Fourier transform. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.ifft`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft), [`tf.compat.v1.signal.ifft`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft), [`tf.compat.v1.spectral.ifft`](https://www.tensorflow.org/api_docs/python/tf/signal/ifft) ``` tf.signal.ifft( input, name=None ) ``` Computes the inverse 1-dimensional discrete Fourier transform over the inner-most dimension of `input`. | Args | | `input` | A `Tensor`. Must be one of the following types: `complex64`, `complex128`. A complex tensor. | | `name` | A name for the operation (optional). | | Returns | | A `Tensor`. Has the same type as `input`. | tensorflow tf.signal.inverse_stft_window_fn tf.signal.inverse\_stft\_window\_fn =================================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/signal/spectral_ops.py#L95-L152) | Generates a window function that can be used in `inverse_stft`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.signal.inverse_stft_window_fn`](https://www.tensorflow.org/api_docs/python/tf/signal/inverse_stft_window_fn) ``` tf.signal.inverse_stft_window_fn( frame_step, forward_window_fn=tf.signal.hann_window, name=None ) ``` Constructs a window that is equal to the forward window with a further pointwise amplitude correction. `inverse_stft_window_fn` is equivalent to `forward_window_fn` in the case where it would produce an exact inverse. See examples in `inverse_stft` documentation for usage. | Args | | `frame_step` | An integer scalar `Tensor`. The number of samples to step. | | `forward_window_fn` | The window\_fn used in the forward transform, `stft`. | | `name` | An optional name for the operation. | | Returns | | A callable that takes a window length and a `dtype` keyword argument and returns a `[window_length]` `Tensor` of samples in the provided datatype. The returned window is suitable for reconstructing the original waveform in `inverse_stft`. 
| tensorflow tf.nest.is_nested tf.nest.is\_nested ================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/util/nest.py#L314-L327) | Returns true if its input is a nested structure. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.nest.is_nested`](https://www.tensorflow.org/api_docs/python/tf/nest/is_nested) ``` tf.nest.is_nested( seq ) ``` Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest) for the definition of a nested structure. | Args | | `seq` | the value to test. | | Returns | | True if the input is a nested structure. | tensorflow tf.nest.assert_same_structure tf.nest.assert\_same\_structure =============================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/util/nest.py#L475-L575) | Asserts that two structures are nested in the same way. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.nest.assert_same_structure`](https://www.tensorflow.org/api_docs/python/tf/nest/assert_same_structure) ``` tf.nest.assert_same_structure( nest1, nest2, check_types=True, expand_composites=False ) ``` Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest) for the definition of a structure. Note the method does not check the types of atoms inside the structures. #### Examples: * These atom vs. atom comparisons will pass: ``` tf.nest.assert_same_structure(1.5, tf.Variable(1, tf.uint32)) tf.nest.assert_same_structure("abc", np.array([1, 2])) ``` * These nested structure vs. nested structure comparisons will pass: ``` structure1 = (((1, 2), 3), 4, (5, 6)) structure2 = ((("foo1", "foo2"), "foo3"), "foo4", ("foo5", "foo6")) structure3 = [(("a", "b"), "c"), "d", ["e", "f"]] tf.nest.assert_same_structure(structure1, structure2) tf.nest.assert_same_structure(structure1, structure3, check_types=False) ``` ``` import collections tf.nest.assert_same_structure( collections.namedtuple("bar", "a b")(1, 2), collections.namedtuple("foo", "a b")(2, 3), check_types=False) ``` ``` tf.nest.assert_same_structure( collections.namedtuple("bar", "a b")(1, 2), { "a": 1, "b": 2 }, check_types=False) ``` ``` tf.nest.assert_same_structure( { "a": 1, "b": 2, "c": 3 }, { "c": 6, "b": 5, "a": 4 }) ``` ``` ragged_tensor1 = tf.RaggedTensor.from_row_splits( values=[3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8, 8]) ragged_tensor2 = tf.RaggedTensor.from_row_splits( values=[3, 1, 4], row_splits=[0, 3]) tf.nest.assert_same_structure( ragged_tensor1, ragged_tensor2, expand_composites=True) ``` * These examples will raise exceptions: ``` tf.nest.assert_same_structure([0, 1], np.array([0, 1])) Traceback (most recent call last): ValueError: The two structures don't have the same nested structure ``` ``` tf.nest.assert_same_structure( collections.namedtuple('bar', 'a b')(1, 2), collections.namedtuple('foo', 'a b')(2, 3)) Traceback (most recent call last): TypeError: The two structures don't have the same nested structure ``` | Args | | `nest1` | an atom or a nested structure. | | `nest2` | an atom or a nested structure. | | `check_types` | if `True` (default) types of structures are checked as well, including the keys of dictionaries. If set to `False`, for example a list and a tuple of objects will look the same if they have the same size. 
Note that namedtuples with identical name and fields are always considered to have the same shallow structure. Two types will also be considered the same if they are both list subtypes (which allows "list" and "\_ListWrapper" from trackable dependency tracking to compare equal). `check_types=True` only checks the type of sub-structures. The types of atoms are not checked. | | `expand_composites` | If true, then composite tensors such as [`tf.sparse.SparseTensor`](../sparse/sparsetensor) and [`tf.RaggedTensor`](../raggedtensor) are expanded into their component tensors. | | Raises | | `ValueError` | If the two structures do not have the same number of atoms or if the two structures are not nested in the same way. | | `TypeError` | If the two structures differ in the type of sequence in any of their substructures. Only possible if `check_types` is `True`. | tensorflow tf.nest.flatten tf.nest.flatten =============== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/util/nest.py#L355-L453) | Returns a flat list from a given structure. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.nest.flatten`](https://www.tensorflow.org/api_docs/python/tf/nest/flatten) ``` tf.nest.flatten( structure, expand_composites=False ) ``` Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest) for the definition of a structure. If the structure is an atom, then returns a single-item list: [structure]. This is the inverse of the [`nest.pack_sequence_as`](pack_sequence_as) method that takes in a flattened list and re-packs it into the nested structure. In the case of dict instances, the sequence consists of the values, sorted by key to ensure deterministic behavior. This is true also for OrderedDict instances: their sequence order is ignored; the sorting order of keys is used instead. The same convention is followed in [`nest.pack_sequence_as`](pack_sequence_as). This correctly repacks dicts and OrderedDicts after they have been flattened, and also allows flattening an OrderedDict and then repacking it back using a corresponding plain dict, or vice-versa. Dictionaries with non-sortable keys cannot be flattened. Users must not modify any collections used in nest while this function is running. #### Examples: 1. Python dict (ordered by key): ``` dict = { "key3": "value3", "key1": "value1", "key2": "value2" } tf.nest.flatten(dict) ['value1', 'value2', 'value3'] ``` 2. For a nested python tuple: ``` tuple = ((1.0, 2.0), (3.0, 4.0, 5.0), 6.0) tf.nest.flatten(tuple) [1.0, 2.0, 3.0, 4.0, 5.0, 6.0] ``` 3. For a nested dictionary of dictionaries: ``` dict = { "key3": {"c": (1.0, 2.0), "a": (3.0)}, "key1": {"m": "val1", "g": "val2"} } tf.nest.flatten(dict) ['val2', 'val1', 3.0, 1.0, 2.0] ``` 4. Numpy array (will not flatten): ``` array = np.array([[1, 2], [3, 4]]) tf.nest.flatten(array) [array([[1, 2], [3, 4]])] ``` 5. [`tf.Tensor`](../tensor) (will not flatten): ``` tensor = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]]) tf.nest.flatten(tensor) [<tf.Tensor: shape=(3, 3), dtype=float32, numpy= array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]], dtype=float32)>] ``` 6. [`tf.RaggedTensor`](../raggedtensor): This is a composite tensor whose representation consists of a flattened list of 'values' and a list of 'row\_splits' which indicate how to chop up the flattened list into different rows. 
For more details on [`tf.RaggedTensor`](../raggedtensor), please visit https://www.tensorflow.org/api\_docs/python/tf/RaggedTensor. With `expand_composites=False`, we just return the RaggedTensor as is. ``` tensor = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2]]) tf.nest.flatten(tensor, expand_composites=False) [<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2]]>] ``` With `expand_composites=True`, we return the component Tensors that make up the RaggedTensor representation (the values and row\_splits tensors). ``` tensor = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2]]) tf.nest.flatten(tensor, expand_composites=True) [<tf.Tensor: shape=(7,), dtype=int32, numpy=array([3, 1, 4, 1, 5, 9, 2], dtype=int32)>, <tf.Tensor: shape=(4,), dtype=int64, numpy=array([0, 4, 4, 7])>] ``` | Args | | `structure` | an atom or a nested structure. Note, numpy arrays are considered atoms and are not flattened. | | `expand_composites` | If true, then composite tensors such as [`tf.sparse.SparseTensor`](../sparse/sparsetensor) and [`tf.RaggedTensor`](../raggedtensor) are expanded into their component tensors. | | Returns | | A Python list, the flattened version of the input. | | Raises | | `TypeError` | The nest is or contains a dict with non-sortable keys. | tensorflow tf.nest.pack_sequence_as tf.nest.pack\_sequence\_as ========================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/util/nest.py#L691-L805) | Returns a given flattened sequence packed into a given structure. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.nest.pack_sequence_as`](https://www.tensorflow.org/api_docs/python/tf/nest/pack_sequence_as) ``` tf.nest.pack_sequence_as( structure, flat_sequence, expand_composites=False ) ``` Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest) for the definition of a structure. If `structure` is an atom, `flat_sequence` must be a single-item list; in this case the return value is `flat_sequence[0]`. If `structure` is or contains a dict instance, the keys will be sorted to pack the flat sequence in deterministic order. This is true also for `OrderedDict` instances: their sequence order is ignored; the sorting order of keys is used instead. The same convention is followed in `flatten`. This correctly repacks dicts and `OrderedDict`s after they have been flattened, and also allows flattening an `OrderedDict` and then repacking it back using a corresponding plain dict, or vice-versa. Dictionaries with non-sortable keys cannot be flattened. #### Examples: 1. Python dict: ``` structure = { "key3": "", "key1": "", "key2": "" } flat_sequence = ["value1", "value2", "value3"] tf.nest.pack_sequence_as(structure, flat_sequence) {'key3': 'value3', 'key1': 'value1', 'key2': 'value2'} ``` 2. For a nested python tuple: ``` structure = (('a','b'), ('c','d','e'), 'f') flat_sequence = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0] tf.nest.pack_sequence_as(structure, flat_sequence) ((1.0, 2.0), (3.0, 4.0, 5.0), 6.0) ``` 3. For a nested dictionary of dictionaries: ``` structure = { "key3": {"c": ('alpha', 'beta'), "a": ('gamma')}, "key1": {"e": "val1", "d": "val2"} } flat_sequence = ['val2', 'val1', 3.0, 1.0, 2.0] tf.nest.pack_sequence_as(structure, flat_sequence) {'key3': {'c': (1.0, 2.0), 'a': 3.0}, 'key1': {'e': 'val1', 'd': 'val2'} } ``` 4. 
Numpy array (considered a scalar): ``` structure = ['a'] flat_sequence = [np.array([[1, 2], [3, 4]])] tf.nest.pack_sequence_as(structure, flat_sequence) [array([[1, 2], [3, 4]])] ``` 5. tf.Tensor (considered a scalar): ``` structure = ['a'] flat_sequence = [tf.constant([[1., 2., 3.], [4., 5., 6.]])] tf.nest.pack_sequence_as(structure, flat_sequence) [<tf.Tensor: shape=(2, 3), dtype=float32, numpy= array([[1., 2., 3.], [4., 5., 6.]], dtype=float32)>] ``` 6. [`tf.RaggedTensor`](../raggedtensor): This is a composite tensor whose representation consists of a flattened list of 'values' and a list of 'row\_splits' which indicate how to chop up the flattened list into different rows. For more details on [`tf.RaggedTensor`](../raggedtensor), please visit https://www.tensorflow.org/api\_docs/python/tf/RaggedTensor. With `expand_composites=False`, we treat RaggedTensor as a scalar. ``` structure = { "foo": tf.ragged.constant([[1, 2], [3]]), "bar": tf.constant([[5]]) } flat_sequence = [ "one", "two" ] tf.nest.pack_sequence_as(structure, flat_sequence, expand_composites=False) {'foo': 'two', 'bar': 'one'} ``` With `expand_composites=True`, we expect that the flattened input contains the tensors making up the ragged tensor, i.e. the values and row\_splits tensors. ``` structure = { "foo": tf.ragged.constant([[1., 2.], [3.]]), "bar": tf.constant([[5.]]) } tensors = tf.nest.flatten(structure, expand_composites=True) print(tensors) [<tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[5.]], dtype=float32)>, <tf.Tensor: shape=(3,), dtype=float32, numpy=array([1., 2., 3.], dtype=float32)>, <tf.Tensor: shape=(3,), dtype=int64, numpy=array([0, 2, 3])>] verified_tensors = [tf.debugging.check_numerics(t, 'invalid tensor: ') if t.dtype==tf.float32 else t for t in tensors] tf.nest.pack_sequence_as(structure, verified_tensors, expand_composites=True) {'foo': <tf.RaggedTensor [[1.0, 2.0], [3.0]]>, 'bar': <tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[5.]], dtype=float32)>} ``` | Args | | `structure` | Nested structure, whose structure is given by nested lists, tuples, and dicts. Note: numpy arrays and strings are considered scalars. | | `flat_sequence` | flat sequence to pack. | | `expand_composites` | If true, then composite tensors such as [`tf.sparse.SparseTensor`](../sparse/sparsetensor) and [`tf.RaggedTensor`](../raggedtensor) are expanded into their component tensors. | | Returns | | `packed` | `flat_sequence` converted to have the same recursive structure as `structure`. | | Raises | | `ValueError` | If `flat_sequence` and `structure` have different atom counts. | | `TypeError` | `structure` is or contains a dict with non-sortable keys. | tensorflow tf.nest.map_structure tf.nest.map\_structure ====================== [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/util/nest.py#L808-L917) | Creates a new structure by applying `func` to each atom in `structure`. #### View aliases **Compat aliases for migration** See [Migration guide](https://www.tensorflow.org/guide/migrate) for more details. [`tf.compat.v1.nest.map_structure`](https://www.tensorflow.org/api_docs/python/tf/nest/map_structure) ``` tf.nest.map_structure( func, *structure, **kwargs ) ``` Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest) for the definition of a structure. Applies `func(x[0], x[1], ...)` where x[i] enumerates all atoms in `structure[i]`. 
All items in `structure` must have the same arity, and the return value will contain results with the same structure layout. #### Examples: * A single Python dict: ``` a = {"hello": 24, "world": 76} tf.nest.map_structure(lambda p: p * 2, a) {'hello': 48, 'world': 152} ``` * Multiple Python dictionaries: ``` d1 = {"hello": 24, "world": 76} d2 = {"hello": 36, "world": 14} tf.nest.map_structure(lambda p1, p2: p1 + p2, d1, d2) {'hello': 60, 'world': 90} ``` * A single Python list: ``` a = [24, 76, "ab"] tf.nest.map_structure(lambda p: p * 2, a) [48, 152, 'abab'] ``` * Scalars: ``` tf.nest.map_structure(lambda x, y: x + y, 3, 4) 7 ``` * Empty structures: ``` tf.nest.map_structure(lambda x: x + 1, ()) () ``` * Check the types of iterables: ``` s1 = (((1, 2), 3), 4, (5, 6)) s1_list = [[[1, 2], 3], 4, [5, 6]] tf.nest.map_structure(lambda x, y: None, s1, s1_list) Traceback (most recent call last): TypeError: The two structures don't have the same nested structure ``` * Type check is set to False: ``` s1 = (((1, 2), 3), 4, (5, 6)) s1_list = [[[1, 2], 3], 4, [5, 6]] tf.nest.map_structure(lambda x, y: None, s1, s1_list, check_types=False) (((None, None), None), None, (None, None)) ``` | Args | | `func` | A callable that accepts as many arguments as there are structures. | | `*structure` | atom or nested structure. | | `**kwargs` | Valid keyword args are: * `check_types`: If set to `True` (default) the types of iterables within the structures have to be same (e.g. `map_structure(func, [1], (1,))` raises a `TypeError` exception). To allow this set this argument to `False`. Note that namedtuples with identical name and fields are always considered to have the same shallow structure. * `expand_composites`: If set to `True`, then composite tensors such as [`tf.sparse.SparseTensor`](../sparse/sparsetensor) and [`tf.RaggedTensor`](../raggedtensor) are expanded into their component tensors. If `False` (the default), then composite tensors are not expanded. | | Returns | | A new structure with the same arity as `structure[0]`, whose atoms correspond to `func(x[0], x[1], ...)` where `x[i]` is the atom in the corresponding location in `structure[i]`. If there are different structure types and `check_types` is `False` the structure types of the first structure will be used. | | Raises | | `TypeError` | If `func` is not callable or if the structures do not match each other by depth tree. | | `ValueError` | If no structure is provided or if the structures do not match each other by type. | | `ValueError` | If wrong keyword arguments are provided. |
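Beyond the scalar examples above, a common use of `tf.nest.map_structure` is applying a tensor op uniformly across a nested structure of Tensors; here is a minimal sketch (the parameter names and shapes are illustrative only):

```
import tensorflow as tf

# A hypothetical nested structure of tensors (e.g. model parameters).
params = {
    "dense": {"w": tf.ones([2, 2]), "b": tf.zeros([2])},
    "scale": tf.constant(3.0),
}

# Halve every tensor; the nested dict layout is preserved in the result.
halved = tf.nest.map_structure(lambda t: t * 0.5, params)

print(halved["scale"].numpy())  # 1.5
```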
tensorflow tf.summary.scalar tf.summary.scalar ================= [View source on GitHub](https://github.com/tensorflow/tensorboard/tree/2.9.0/tensorboard/plugins/scalar/summary_v2.py#L30-L94) | Write a scalar summary. ``` tf.summary.scalar( name, data, step=None, description=None ) ``` See also [`tf.summary.image`](image), [`tf.summary.histogram`](histogram), [`tf.summary.SummaryWriter`](summarywriter). Writes simple numeric values for later analysis in TensorBoard. Writes go to the current default summary writer. Each summary point is associated with an integral `step` value. This enables the incremental logging of time series data. A common usage of this API is to log loss during training to produce a loss curve. #### For example: ``` test_summary_writer = tf.summary.create_file_writer('test/logdir') with test_summary_writer.as_default(): tf.summary.scalar('loss', 0.345, step=1) tf.summary.scalar('loss', 0.234, step=2) tf.summary.scalar('loss', 0.123, step=3) ``` Multiple independent time series may be logged by giving each series a unique `name` value. See [Get started with TensorBoard](https://www.tensorflow.org/tensorboard/get_started) for more examples of effective usage of [`tf.summary.scalar`](scalar). In general, this API expects that data points are logged with a monotonically increasing step value. Duplicate points for a single step or points logged out of order by step are not guaranteed to display as desired in TensorBoard. | Arguments | | `name` | A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes. | | `data` | A real numeric scalar value, convertible to a `float32` Tensor. | | `step` | Explicit `int64`-castable monotonic step value for this summary. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step), which must not be None. | | `description` | Optional long-form description for this summary, as a constant `str`. Markdown is supported. Defaults to empty. | | Returns | | True on success, or false if no summary was written because no default summary writer was available. | | Raises | | `ValueError` | if a default writer exists, but no step was provided and [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step) is None. | tensorflow tf.summary.trace_on tf.summary.trace\_on ==================== Starts a trace to record computation graphs and profiling information. ``` tf.summary.trace_on( graph=True, profiler=False ) ``` Must be invoked in eager mode. When enabled, the TensorFlow runtime will collect information that can later be exported and consumed by TensorBoard. The trace is activated across the entire TensorFlow runtime and affects all threads of execution. To stop the trace and export the collected information, use [`tf.summary.trace_export`](trace_export). To stop the trace without exporting, use [`tf.summary.trace_off`](trace_off). | Args | | `graph` | If True, enables collection of executed graphs. It includes ones from tf.function invocation and ones from the legacy graph mode. The default is True. | | `profiler` | If True, enables the advanced profiler. Enabling profiler implicitly enables the graph collection. The profiler may incur a high memory overhead. The default is False. 
| tensorflow tf.summary.should_record_summaries tf.summary.should\_record\_summaries ==================================== Returns a boolean Tensor which is True if summaries will be recorded. ``` tf.summary.should_record_summaries() ``` If no default summary writer is currently registered, this always returns False. Otherwise, this reflects the recording condition that has been set via [`tf.summary.record_if()`](record_if) (except that it may return False for some replicas when using [`tf.distribute.Strategy`](../distribute/strategy)). If no recording condition is active, it defaults to True. tensorflow tf.summary.text tf.summary.text =============== [View source on GitHub](https://github.com/tensorflow/tensorboard/tree/2.9.0/tensorboard/plugins/text/summary_v2.py#L26-L97) | Write a text summary. ``` tf.summary.text( name, data, step=None, description=None ) ``` See also [`tf.summary.scalar`](scalar), [`tf.summary.SummaryWriter`](summarywriter), [`tf.summary.image`](image). Writes text Tensor values for later visualization and analysis in TensorBoard. Writes go to the current default summary writer. Like [`tf.summary.scalar`](scalar) points, text points are each associated with a `step` and a `name`. All the points with the same `name` constitute a time series of text values. #### For example: ``` test_summary_writer = tf.summary.create_file_writer('test/logdir') with test_summary_writer.as_default(): tf.summary.text('first_text', 'hello world!', step=0) tf.summary.text('first_text', 'nice to meet you!', step=1) ``` The text summary can also contain Markdown, and TensorBoard will render the text as such. ``` with test_summary_writer.as_default(): text_data = ''' | *hello* | *there* | |---------|---------| | this | is | | a | table | ''' text_data = '\n'.join(l.strip() for l in text_data.splitlines()) tf.summary.text('markdown_text', text_data, step=0) ``` Since text is Tensor valued, each text point may be a Tensor of string values. Rank-1 and rank-2 Tensors are rendered as tables in TensorBoard. For higher ranked Tensors, you'll see just a 2D slice of the data. To avoid this, reshape the Tensor to at most rank-2 prior to passing it to this function. Demo notebook at ["Displaying text data in TensorBoard"](https://www.tensorflow.org/tensorboard/text_summaries). | Arguments | | `name` | A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes. | | `data` | A UTF-8 string Tensor value. | | `step` | Explicit `int64`-castable monotonic step value for this summary. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step), which must not be None. | | `description` | Optional long-form description for this summary, as a constant `str`. Markdown is supported. Defaults to empty. | | Returns | | True on success, or false if no summary was emitted because no default summary writer was available. | | Raises | | `ValueError` | if a default writer exists, but no step was provided and [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step) is None. | tensorflow tf.summary.image tf.summary.image ================ [View source on GitHub](https://github.com/tensorflow/tensorboard/tree/2.9.0/tensorboard/plugins/image/summary_v2.py#L27-L142) | Write an image summary. 
``` tf.summary.image( name, data, step=None, max_outputs=3, description=None ) ``` See also [`tf.summary.scalar`](scalar), [`tf.summary.SummaryWriter`](summarywriter). Writes a collection of images to the current default summary writer. Data appears in TensorBoard's 'Images' dashboard. Like [`tf.summary.scalar`](scalar) points, each collection of images is associated with a `step` and a `name`. All the image collections with the same `name` constitute a time series of image collections. This example writes 2 random grayscale images: ``` w = tf.summary.create_file_writer('test/logs') with w.as_default(): image1 = tf.random.uniform(shape=[8, 8, 1]) image2 = tf.random.uniform(shape=[8, 8, 1]) tf.summary.image("grayscale_noise", [image1, image2], step=0) ``` To avoid clipping, data should be converted to one of the following: * floating point values in the range [0,1], or * uint8 values in the range [0,255] ``` # Convert the original dtype=int32 `Tensor` into `dtype=float64`. rgb_image_float = tf.constant([ [[1000, 0, 0], [0, 500, 1000]], ]) / 1000 tf.summary.image("picture", [rgb_image_float], step=0) # Convert original dtype=uint8 `Tensor` into proper range. rgb_image_uint8 = tf.constant([ [[1, 1, 0], [0, 0, 1]], ], dtype=tf.uint8) * 255 tf.summary.image("picture", [rgb_image_uint8], step=1) ``` | Arguments | | `name` | A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes. | | `data` | A `Tensor` representing pixel data with shape `[k, h, w, c]`, where `k` is the number of images, `h` and `w` are the height and width of the images, and `c` is the number of channels, which should be 1, 2, 3, or 4 (grayscale, grayscale with alpha, RGB, RGBA). Any of the dimensions may be statically unknown (i.e., `None`). Floating point data will be clipped to the range [0,1]. Other data types will be clipped into an allowed range for safe casting to uint8, using [`tf.image.convert_image_dtype`](../image/convert_image_dtype). | | `step` | Explicit `int64`-castable monotonic step value for this summary. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step), which must not be None. | | `max_outputs` | Optional `int` or rank-0 integer `Tensor`. At most this many images will be emitted at each step. When more than `max_outputs` many images are provided, the first `max_outputs` many images will be used and the rest silently discarded. | | `description` | Optional long-form description for this summary, as a constant `str`. Markdown is supported. Defaults to empty. | | Returns | | True on success, or false if no summary was emitted because no default summary writer was available. | | Raises | | `ValueError` | if a default writer exists, but no step was provided and [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step) is None. | tensorflow tf.summary.create_noop_writer tf.summary.create\_noop\_writer =============================== Returns a summary writer that does nothing. ``` tf.summary.create_noop_writer() ``` This is useful as a placeholder in code that expects a context manager. tensorflow tf.summary.audio tf.summary.audio ================ [View source on GitHub](https://github.com/tensorflow/tensorboard/tree/2.9.0/tensorboard/plugins/audio/summary_v2.py#L32-L125) | Write an audio summary. 
``` tf.summary.audio( name, data, sample_rate, step=None, max_outputs=3, encoding=None, description=None ) ``` | Arguments | | `name` | A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes. | | `data` | A `Tensor` representing audio data with shape `[k, t, c]`, where `k` is the number of audio clips, `t` is the number of frames, and `c` is the number of channels. Elements should be floating-point values in `[-1.0, 1.0]`. Any of the dimensions may be statically unknown (i.e., `None`). | | `sample_rate` | An `int` or rank-0 `int32` `Tensor` that represents the sample rate, in Hz. Must be positive. | | `step` | Explicit `int64`-castable monotonic step value for this summary. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step), which must not be None. | | `max_outputs` | Optional `int` or rank-0 integer `Tensor`. At most this many audio clips will be emitted at each step. When more than `max_outputs` many clips are provided, the first `max_outputs` many clips will be used and the rest silently discarded. | | `encoding` | Optional constant `str` for the desired encoding. Only "wav" is currently supported, but this is not guaranteed to remain the default, so if you want "wav" in particular, set this explicitly. | | `description` | Optional long-form description for this summary, as a constant `str`. Markdown is supported. Defaults to empty. | | Returns | | True on success, or false if no summary was emitted because no default summary writer was available. | | Raises | | `ValueError` | if a default writer exists, but no step was provided and [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step) is None. | tensorflow tf.summary.graph tf.summary.graph ================ Writes a TensorFlow graph summary. ``` tf.summary.graph( graph_data ) ``` Writes an instance of [`tf.Graph`](../graph) or [`tf.compat.v1.GraphDef`](../compat/v1/graphdef) as a summary; this is supported only in eager mode. Please prefer to use the trace APIs ([`tf.summary.trace_on`](trace_on), [`tf.summary.trace_off`](trace_off), and [`tf.summary.trace_export`](trace_export)) when using [`tf.function`](../function), which can automatically collect and record graphs from executions. #### Usage Example: ``` writer = tf.summary.create_file_writer("/tmp/mylogs") @tf.function def f(): x = tf.constant(2) y = tf.constant(3) return x**y with writer.as_default(): tf.summary.graph(f.get_concrete_function().graph) # Another example: in a very rare use case, when you are dealing with a TF v1 # graph. graph = tf.Graph() with graph.as_default(): c = tf.constant(30.0) with writer.as_default(): tf.summary.graph(graph) ``` | Args | | `graph_data` | The TensorFlow graph to write, as a [`tf.Graph`](../graph) or a [`tf.compat.v1.GraphDef`](../compat/v1/graphdef). | | Returns | | True on success, or False if no summary was written because no default summary writer was available. | | Raises | | `ValueError` | If the `graph` summary API is invoked in graph mode. | tensorflow tf.summary.create_file_writer tf.summary.create\_file\_writer =============================== Creates a summary file writer for the given log directory. 
``` tf.summary.create_file_writer( logdir, max_queue=None, flush_millis=None, filename_suffix=None, name=None, experimental_trackable=False ) ``` | Args | | `logdir` | a string specifying the directory in which to write an event file. | | `max_queue` | the largest number of summaries to keep in a queue; will flush once the queue gets bigger than this. Defaults to 10. | | `flush_millis` | the largest interval between flushes. Defaults to 120,000. | | `filename_suffix` | optional suffix for the event file name. Defaults to `.v2`. | | `name` | a name for the op that creates the writer. | | `experimental_trackable` | a boolean that controls whether the returned writer will be a `TrackableResource`, which makes it compatible with SavedModel when used as a [`tf.Module`](../module) property. | | Returns | | A SummaryWriter object. | tensorflow tf.summary.trace_off tf.summary.trace\_off ===================== Stops the current trace and discards any collected information. ``` tf.summary.trace_off() ``` tensorflow tf.summary.flush tf.summary.flush ================ Forces the summary writer to send any buffered data to storage. ``` tf.summary.flush( writer=None, name=None ) ``` This operation blocks until the flush finishes. | Args | | `writer` | The [`tf.summary.SummaryWriter`](summarywriter) to flush. If None, the current default writer will be used instead; if there is no current writer, this returns [`tf.no_op`](../no_op). | | `name` | Ignored legacy argument for a name for the operation. | | Returns | | The created [`tf.Operation`](../operation). | tensorflow tf.summary.record_if tf.summary.record\_if ===================== Sets summary recording on or off per the provided boolean value. ``` @tf_contextlib.contextmanager tf.summary.record_if( condition ) ``` The provided value can be a Python boolean, a scalar boolean Tensor, or a callable providing such a value; if a callable is passed it will be invoked on-demand to determine whether summary writing will occur. Note that when calling record\_if() in an eager mode context, if you intend to provide a varying condition like `step % 100 == 0`, you must wrap this in a callable to avoid immediate eager evaluation of the condition. In particular, using a callable is the only way to have your condition evaluated as part of the traced body of an @tf.function that is invoked from within the `record_if()` context. A minimal sketch of this pattern appears at the end of this page. | Args | | `condition` | can be True, False, a bool Tensor, or a callable providing such. | | Yields | | Returns a context manager that sets this value on enter and restores the previous value on exit. | tensorflow tf.summary.write tf.summary.write ================ Writes a generic summary to the default SummaryWriter if one exists. ``` tf.summary.write( tag, tensor, step=None, metadata=None, name=None ) ``` This exists primarily to support the definition of type-specific summary ops like scalar() and image(), and is not intended for direct use unless defining a new type-specific summary op. | Args | | `tag` | string tag used to identify the summary (e.g. in TensorBoard), usually generated with `tf.summary.summary_scope` | | `tensor` | the Tensor holding the summary data to write or a callable that returns this Tensor. If a callable is passed, it will only be called when a default SummaryWriter exists and the recording condition specified by `record_if()` is met. | | `step` | Explicit `int64`-castable monotonic step value for this summary. 
If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step), which must not be None. | | `metadata` | Optional SummaryMetadata, as a proto or serialized bytes | | `name` | Optional string name for this op. | | Returns | | True on success, or false if no summary was written because no default summary writer was available. | | Raises | | `ValueError` | if a default writer exists, but no step was provided and [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step) is None. | tensorflow tf.summary.histogram tf.summary.histogram ==================== [View source on GitHub](https://github.com/tensorflow/tensorboard/tree/2.9.0/tensorboard/plugins/histogram/summary_v2.py#L103-L199) | Write a histogram summary. ``` tf.summary.histogram( name, data, step=None, buckets=None, description=None ) ``` See also [`tf.summary.scalar`](scalar), [`tf.summary.SummaryWriter`](summarywriter). Writes a histogram to the current default summary writer, for later analysis in TensorBoard's 'Histograms' and 'Distributions' dashboards (data written using this API will appear in both places). Like [`tf.summary.scalar`](scalar) points, each histogram is associated with a `step` and a `name`. All the histograms with the same `name` constitute a time series of histograms. The histogram is calculated over all the elements of the given `Tensor` without regard to its shape or rank. This example writes 2 histograms: ``` w = tf.summary.create_file_writer('test/logs') with w.as_default(): tf.summary.histogram("activations", tf.random.uniform([100, 50]), step=0) tf.summary.histogram("initial_weights", tf.random.normal([1000]), step=0) ``` A common use case is to examine the changing activation patterns (or lack thereof) at specific layers in a neural network, over time. ``` w = tf.summary.create_file_writer('test/logs') with w.as_default(): for step in range(100): # Generate fake "activations". activations = [ tf.random.normal([1000], mean=step, stddev=1), tf.random.normal([1000], mean=step, stddev=10), tf.random.normal([1000], mean=step, stddev=100), ] tf.summary.histogram("layer1/activate", activations[0], step=step) tf.summary.histogram("layer2/activate", activations[1], step=step) tf.summary.histogram("layer3/activate", activations[2], step=step) ``` | Arguments | | `name` | A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes. | | `data` | A `Tensor` of any shape. The histogram is computed over its elements, which must be castable to `float64`. | | `step` | Explicit `int64`-castable monotonic step value for this summary. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step), which must not be None. | | `buckets` | Optional positive `int`. The output will have this many buckets, except in two edge cases. If there is no data, then there are no buckets. If there is data but all points have the same value, then all buckets' left and right endpoints are the same and only the last bucket has nonzero count. | | `description` | Optional long-form description for this summary, as a constant `str`. Markdown is supported. Defaults to empty. | | Returns | | True on success, or false if no summary was emitted because no default summary writer was available. 
| | Raises | | `ValueError` | if a default writer exists, but no step was provided and [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step) is None. |
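The `buckets` argument is the main lever over histogram resolution. A minimal sketch (the log directory and tag name here are arbitrary):

```
w = tf.summary.create_file_writer('test/logs')
with w.as_default():
  # Summarize 1000 samples into only 5 buckets for a coarser view.
  tf.summary.histogram("coarse_activations", tf.random.normal([1000]), step=0, buckets=5)
```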
tensorflow tf.summary.trace_export tf.summary.trace\_export ======================== Stops and exports the active trace as a Summary and/or profile file. ``` tf.summary.trace_export( name, step=None, profiler_outdir=None ) ``` Stops the trace and exports all metadata collected during the trace to the default SummaryWriter, if one has been set. | Args | | `name` | A name for the summary to be written. | | `step` | Explicit `int64`-castable monotonic step value for this summary. If omitted, this defaults to [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step), which must not be None. | | `profiler_outdir` | Output directory for the profiler. It is required if the profiler was enabled when the trace started; otherwise, it is ignored. | | Raises | | `ValueError` | if a default writer exists, but no step was provided and [`tf.summary.experimental.get_step()`](https://www.tensorflow.org/api_docs/python/tf/summary/experimental/get_step) is None. | tensorflow tf.summary.SummaryWriter tf.summary.SummaryWriter ======================== Interface representing a stateful summary writer object. Methods ------- ### `as_default` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/summary_ops_v2.py#L259-L291) ``` as_default( step=None ) ``` Returns a context manager that enables summary writing. For convenience, if `step` is not None, this function also sets a default value for the `step` parameter used in summary-writing functions elsewhere in the API so that it need not be explicitly passed in every such invocation. The value can be a constant or a variable. > > **Note:** when setting `step` in a @tf.function, the step value will be captured at the time the function is traced, so changes to the step outside the function will not be reflected inside the function unless using a [`tf.Variable`](../variable) step. > For example, `step` can be used as: ``` with writer_a.as_default(step=10): tf.summary.scalar(tag, value) # Logged to writer_a with step 10 with writer_b.as_default(step=20): tf.summary.scalar(tag, value) # Logged to writer_b with step 20 tf.summary.scalar(tag, value) # Logged to writer_a with step 10 ``` | Args | | `step` | An `int64`-castable default step value, or `None`. When not `None`, the current step is captured, replaced by the given one, and the original one is restored when the context manager exits. When `None`, the current step is not modified (and not restored when the context manager exits). | | Returns | | The context manager. | ### `close` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/summary_ops_v2.py#L301-L303) ``` close() ``` Flushes and closes the summary writer. ### `flush` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/summary_ops_v2.py#L297-L299) ``` flush() ``` Flushes any buffered data. ### `init` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/summary_ops_v2.py#L293-L295) ``` init() ``` Initializes the summary writer. ### `set_as_default` [View source](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/ops/summary_ops_v2.py#L239-L257) ``` set_as_default( step=None ) ``` Enables this summary writer for the current thread.
For convenience, if `step` is not None, this function also sets a default value for the `step` parameter used in summary-writing functions elsewhere in the API so that it need not be explicitly passed in every such invocation. The value can be a constant or a variable. > > **Note:** when setting `step` in a @tf.function, the step value will be captured at the time the function is traced, so changes to the step outside the function will not be reflected inside the function unless using a [`tf.Variable`](../variable) step. > | Args | | `step` | An `int64`-castable default step value, or `None`. When not `None`, the current step is modified to the given value. When `None`, the current step is not modified. | elixir Task.Supervisor Task.Supervisor ================ A task supervisor. This module defines a supervisor which can be used to dynamically supervise tasks. A task supervisor is started with no children, often under a supervisor and a name: ``` children = [ {Task.Supervisor, name: MyApp.TaskSupervisor} ] Supervisor.start_link(children, strategy: :one_for_one) ``` The options given in the child specification are documented in [`start_link/1`](#start_link/1). See the [`Task`](task) module for more examples. Name registration ------------------ A [`Task.Supervisor`](#content) is bound to the same name registration rules as a [`GenServer`](genserver). Read more about them in the [`GenServer`](genserver) docs. Summary ======== Types ------ [option()](#t:option/0) Option values used by `start_link` Functions ---------- [async(supervisor, fun, options \\ [])](#async/3) Starts a task that can be awaited on. [async(supervisor, module, fun, args, options \\ [])](#async/5) Starts a task that can be awaited on. [async\_nolink(supervisor, fun, options \\ [])](#async_nolink/3) Starts a task that can be awaited on. [async\_nolink(supervisor, module, fun, args, options \\ [])](#async_nolink/5) Starts a task that can be awaited on. [async\_stream(supervisor, enumerable, fun, options \\ [])](#async_stream/4) Returns a stream that runs the given function `fun` concurrently on each element in `enumerable`. [async\_stream(supervisor, enumerable, module, function, args, options \\ [])](#async_stream/6) Returns a stream where the given function (`module` and `function`) is mapped concurrently on each element in `enumerable`. [async\_stream\_nolink(supervisor, enumerable, fun, options \\ [])](#async_stream_nolink/4) Returns a stream that runs the given `function` concurrently on each element in `enumerable`. [async\_stream\_nolink(supervisor, enumerable, module, function, args, options \\ [])](#async_stream_nolink/6) Returns a stream where the given function (`module` and `function`) is mapped concurrently on each element in `enumerable`. [children(supervisor)](#children/1) Returns all children PIDs. [start\_child(supervisor, fun, options \\ [])](#start_child/3) Starts a task as a child of the given `supervisor`. [start\_child(supervisor, module, fun, args, options \\ [])](#start_child/5) Starts a task as a child of the given `supervisor`. [start\_link(options \\ [])](#start_link/1) Starts a new supervisor. [terminate\_child(supervisor, pid)](#terminate_child/2) Terminates the child with the given `pid`. 
Types ====== ### option() #### Specs ``` option() :: Supervisor.option() | {:restart, :supervisor.restart()} | {:shutdown, :supervisor.shutdown()} ``` Option values used by `start_link` Functions ========== ### async(supervisor, fun, options \\ []) #### Specs ``` async(Supervisor.supervisor(), (() -> any()), Keyword.t()) :: Task.t() ``` Starts a task that can be awaited on. The `supervisor` must be a reference as defined in [`Supervisor`](supervisor). The task will still be linked to the caller, see [`Task.async/3`](task#async/3) for more information and [`async_nolink/2`](#async_nolink/2) for a non-linked variant. Raises an error if `supervisor` has reached the maximum number of children. #### Options * `:shutdown` - `:brutal_kill` if the tasks must be killed directly on shutdown or an integer indicating the timeout value, defaults to 5000 milliseconds. ### async(supervisor, module, fun, args, options \\ []) #### Specs ``` async(Supervisor.supervisor(), module(), atom(), [term()], Keyword.t()) :: Task.t() ``` Starts a task that can be awaited on. The `supervisor` must be a reference as defined in [`Supervisor`](supervisor). The task will still be linked to the caller, see [`Task.async/3`](task#async/3) for more information and [`async_nolink/2`](#async_nolink/2) for a non-linked variant. Raises an error if `supervisor` has reached the maximum number of children. #### Options * `:shutdown` - `:brutal_kill` if the tasks must be killed directly on shutdown or an integer indicating the timeout value, defaults to 5000 milliseconds. ### async\_nolink(supervisor, fun, options \\ []) #### Specs ``` async_nolink(Supervisor.supervisor(), (() -> any()), Keyword.t()) :: Task.t() ``` Starts a task that can be awaited on. The `supervisor` must be a reference as defined in [`Supervisor`](supervisor). The task won't be linked to the caller, see [`Task.async/3`](task#async/3) for more information. Raises an error if `supervisor` has reached the maximum number of children. #### Options * `:shutdown` - `:brutal_kill` if the tasks must be killed directly on shutdown or an integer indicating the timeout value, defaults to 5000 milliseconds. #### Compatibility with OTP behaviours If you create a task using `async_nolink` inside an OTP behaviour like [`GenServer`](genserver), you should match on the message coming from the task inside your [`GenServer.handle_info/2`](genserver#c:handle_info/2) callback. The reply sent by the task will be in the format `{ref, result}`, where `ref` is the monitor reference held by the task struct and `result` is the return value of the task function. Keep in mind that, regardless of how the task created with `async_nolink` terminates, the caller's process will always receive a `:DOWN` message with the same `ref` value that is held by the task struct. If the task terminates normally, the reason in the `:DOWN` message will be `:normal`. #### Examples Typically, you use [`async_nolink/3`](#async_nolink/3) when there is a reasonable expectation that the task may fail, and you don't want it to take down the caller. Let's see an example where a [`GenServer`](genserver) is meant to run a single task and track its status: ``` defmodule MyApp.Server do use GenServer # ... def start_task do GenServer.call(__MODULE__, :start_task) end # In this case the task is already running, so we just return :ok. def handle_call(:start_task, _from, %{ref: ref} = state) when is_reference(ref) do {:reply, :ok, state} end # The task is not running yet, so let's start it. 
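# async_nolink/2 returns a %Task{} struct; we store its monitor ref in the state
# so the handle_info/2 clauses below can match this task's reply and :DOWN message.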
def handle_call(:start_task, _from, %{ref: nil} = state) do task = Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fn -> ... end) # We return :ok and the server will continue running {:reply, :ok, %{state | ref: task.ref}} end # The task completed successfully def handle_info({ref, answer}, %{ref: ref} = state) do # We don't care about the DOWN message now, so let's demonitor and flush it Process.demonitor(ref, [:flush]) # Do something with the result and then return {:noreply, %{state | ref: nil}} end # The task failed def handle_info({:DOWN, ref, :process, _pid, _reason}, %{ref: ref} = state) do # Log and possibly restart the task... {:noreply, %{state | ref: nil}} end end ``` ### async\_nolink(supervisor, module, fun, args, options \\ []) #### Specs ``` async_nolink(Supervisor.supervisor(), module(), atom(), [term()], Keyword.t()) :: Task.t() ``` Starts a task that can be awaited on. The `supervisor` must be a reference as defined in [`Supervisor`](supervisor). The task won't be linked to the caller, see [`Task.async/3`](task#async/3) for more information. Raises an error if `supervisor` has reached the maximum number of children. Note this function requires the task supervisor to have `:temporary` as the `:restart` option (the default), as [`async_nolink/4`](#async_nolink/4) keeps a direct reference to the task which is lost if the task is restarted. ### async\_stream(supervisor, enumerable, fun, options \\ []) #### Specs ``` async_stream( Supervisor.supervisor(), Enumerable.t(), (term() -> term()), keyword() ) :: Enumerable.t() ``` Returns a stream that runs the given function `fun` concurrently on each element in `enumerable`. Each element in `enumerable` is passed as argument to the given function `fun` and processed by its own task. The tasks will be spawned under the given `supervisor` and linked to the current process, similarly to [`async/2`](#async/2). See [`async_stream/6`](#async_stream/6) for discussion, options, and examples. ### async\_stream(supervisor, enumerable, module, function, args, options \\ []) #### Specs ``` async_stream( Supervisor.supervisor(), Enumerable.t(), module(), atom(), [term()], keyword() ) :: Enumerable.t() ``` Returns a stream where the given function (`module` and `function`) is mapped concurrently on each element in `enumerable`. Each element will be prepended to the given `args` and processed by its own task. The tasks will be spawned under the given `supervisor` and linked to the current process, similarly to [`async/4`](#async/4). When streamed, each task will emit `{:ok, value}` upon successful completion or `{:exit, reason}` if the caller is trapping exits. The order of results depends on the value of the `:ordered` option. The level of concurrency and the time tasks are allowed to run can be controlled via options (see the "Options" section below). If you find yourself trapping exits to handle exits inside the async stream, consider using [`async_stream_nolink/6`](#async_stream_nolink/6) to start tasks that are not linked to the calling process. #### Options * `:max_concurrency` - sets the maximum number of tasks to run at the same time. Defaults to [`System.schedulers_online/0`](system#schedulers_online/0). * `:ordered` - whether the results should be returned in the same order as the input stream. This option is useful when you have large streams and don't want to buffer results before they are delivered. This is also useful when you're using the tasks for side effects. Defaults to `true`. 
* `:timeout` - the maximum amount of time to wait (in milliseconds) without receiving a task reply (across all running tasks). Defaults to `5000`. * `:on_timeout` - what to do when a task times out. The possible values are: + `:exit` (default) - the process that spawned the tasks exits. + `:kill_task` - the task that timed out is killed. The value emitted for that task is `{:exit, :timeout}`. * `:shutdown` - `:brutal_kill` if the tasks must be killed directly on shutdown or an integer indicating the timeout value. Defaults to `5000` milliseconds. #### Examples Let's build a stream and then enumerate it: ``` stream = Task.Supervisor.async_stream(MySupervisor, collection, Mod, :expensive_fun, []) Enum.to_list(stream) ``` ### async\_stream\_nolink(supervisor, enumerable, fun, options \\ []) #### Specs ``` async_stream_nolink( Supervisor.supervisor(), Enumerable.t(), (term() -> term()), keyword() ) :: Enumerable.t() ``` Returns a stream that runs the given `function` concurrently on each element in `enumerable`. Each element in `enumerable` is passed as argument to the given function `fun` and processed by its own task. The tasks will be spawned under the given `supervisor` and will not be linked to the current process, similarly to [`async_nolink/2`](#async_nolink/2). See [`async_stream/6`](#async_stream/6) for discussion and examples. ### async\_stream\_nolink(supervisor, enumerable, module, function, args, options \\ []) #### Specs ``` async_stream_nolink( Supervisor.supervisor(), Enumerable.t(), module(), atom(), [term()], keyword() ) :: Enumerable.t() ``` Returns a stream where the given function (`module` and `function`) is mapped concurrently on each element in `enumerable`. Each element in `enumerable` will be prepended to the given `args` and processed by its own task. The tasks will be spawned under the given `supervisor` and will not be linked to the current process, similarly to [`async_nolink/4`](#async_nolink/4). See [`async_stream/6`](#async_stream/6) for discussion, options, and examples. ### children(supervisor) #### Specs ``` children(Supervisor.supervisor()) :: [pid()] ``` Returns all children PIDs. ### start\_child(supervisor, fun, options \\ []) #### Specs ``` start_child(Supervisor.supervisor(), (() -> any()), keyword()) :: DynamicSupervisor.on_start_child() ``` Starts a task as a child of the given `supervisor`. Note that the spawned process is not linked to the caller, but only to the supervisor. This command is useful in case the task needs to perform side-effects (like I/O) and does not need to report back to the caller. #### Options * `:restart` - the restart strategy, may be `:temporary` (the default), `:transient` or `:permanent`. `:temporary` means the task is never restarted, `:transient` means it is restarted if the exit is not `:normal`, `:shutdown` or `{:shutdown, reason}`. A `:permanent` restart strategy means it is always restarted. * `:shutdown` - `:brutal_kill` if the tasks must be killed directly on shutdown or an integer indicating the timeout value, defaults to 5000 milliseconds. ### start\_child(supervisor, module, fun, args, options \\ []) #### Specs ``` start_child(Supervisor.supervisor(), module(), atom(), [term()], keyword()) :: DynamicSupervisor.on_start_child() ``` Starts a task as a child of the given `supervisor`. Similar to [`start_child/2`](#start_child/2) except the task is specified by the given `module`, `fun` and `args`.
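#### Examples

A minimal sketch in the module-function-arguments form (the supervisor name assumes a registered supervisor as in the earlier examples):

```
Task.Supervisor.start_child(MyApp.TaskSupervisor, IO, :puts, ["hello from a supervised task"])
```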
### start\_link(options \\ []) #### Specs ``` start_link([option()]) :: Supervisor.on_start() ``` Starts a new supervisor. #### Examples A task supervisor is typically started under a supervision tree using the tuple format: ``` {Task.Supervisor, name: MyApp.TaskSupervisor} ``` You can also start it by calling [`start_link/1`](#start_link/1) directly: ``` Task.Supervisor.start_link(name: MyApp.TaskSupervisor) ``` But this is recommended only for scripting and should be avoided in production code. Generally speaking, processes should always be started inside supervision trees. #### Options * `:name` - used to register a supervisor name, the supported values are described under the `Name Registration` section in the [`GenServer`](genserver) module docs; * `:max_restarts`, `:max_seconds` and `:max_children` - as specified in [`DynamicSupervisor`](dynamicsupervisor); This function could also receive `:restart` and `:shutdown` as options but those two options have been deprecated and it is now preferred to give them directly to `start_child` and `async`. ### terminate\_child(supervisor, pid) #### Specs ``` terminate_child(Supervisor.supervisor(), pid()) :: :ok | {:error, :not_found} ``` Terminates the child with the given `pid`. elixir List.Chars protocol List.Chars protocol ==================== The [`List.Chars`](#content) protocol is responsible for converting a structure to a charlist (only if applicable). The only function required to be implemented is [`to_charlist/1`](#to_charlist/1) which does the conversion. The [`to_charlist/1`](#to_charlist/1) function automatically imported by [`Kernel`](kernel) invokes this protocol. Summary ======== Types ------ [t()](#t:t/0) Functions ---------- [to\_charlist(term)](#to_charlist/1) Converts `term` to a charlist. Types ====== ### t() #### Specs ``` t() :: term() ``` Functions ========== ### to\_charlist(term) #### Specs ``` to_charlist(t()) :: charlist() ``` Converts `term` to a charlist. elixir IEx.Helpers IEx.Helpers ============ Welcome to Interactive Elixir. You are currently seeing the documentation for the module [`IEx.Helpers`](#content) which provides many helpers to make Elixir's shell more joyful to work with. This message was triggered by invoking the helper `h()`, usually referred to as [`h/0`](#h/0) (since it expects 0 arguments). 
You can use the [`h/1`](#h/1) function to invoke the documentation for any Elixir module or function: ``` iex> h(Enum) iex> h(Enum.map) iex> h(Enum.reverse/1) ``` You can also use the [`i/1`](#i/1) function to introspect any value you have in the shell: ``` iex> i("hello") ``` There are many other helpers available, here are some examples: * [`b/1`](#b/1) - prints callbacks info and docs for a given module * [`c/1`](#c/1) - compiles a file * [`c/2`](#c/2) - compiles a file and writes bytecode to the given path * [`cd/1`](#cd/1) - changes the current directory * [`clear/0`](#clear/0) - clears the screen * [`exports/1`](#exports/1) - shows all exports (functions + macros) in a module * [`flush/0`](#flush/0) - flushes all messages sent to the shell * [`h/0`](#h/0) - prints this help message * [`h/1`](#h/1) - prints help for the given module, function or macro * [`i/0`](#i/0) - prints information about the last value * [`i/1`](#i/1) - prints information about the given term * [`ls/0`](#ls/0) - lists the contents of the current directory * [`ls/1`](#ls/1) - lists the contents of the specified directory * [`open/1`](#open/1) - opens the source for the given module or function in your editor * [`pid/1`](#pid/1) - creates a PID from a string * [`pid/3`](#pid/3) - creates a PID with the 3 integer arguments passed * [`port/1`](#port/1) - creates a port from a string * [`port/2`](#port/2) - creates a port with the 2 non-negative integers passed * [`ref/1`](#ref/1) - creates a reference from a string * [`ref/4`](#ref/4) - creates a reference with the 4 integer arguments passed * [`pwd/0`](#pwd/0) - prints the current working directory * [`r/1`](#r/1) - recompiles the given module's source file * [`recompile/0`](#recompile/0) - recompiles the current project * [`runtime_info/0`](#runtime_info/0) - prints runtime info (versions, memory usage, stats) * [`v/0`](#v/0) - retrieves the last value from the history * [`v/1`](#v/1) - retrieves the nth value from the history Help for all of those functions can be consulted directly from the command line using the [`h/1`](#h/1) helper itself. Try: ``` iex> h(v/0) ``` To list all IEx helpers available, which is effectively all exports (functions and macros) in the [`IEx.Helpers`](#content) module: ``` iex> exports(IEx.Helpers) ``` This module also includes helpers for debugging purposes, see [`IEx.break!/4`](iex#break!/4) for more information. To learn more about IEx as a whole, type `h(IEx)`. Summary ======== Functions ---------- [b(term)](#b/1) Prints the documentation for the given callback function. [break!(ast, stops \\ 1)](#break!/2) Macro-based shortcut for [`IEx.break!/4`](iex#break!/4). [break!(module, function, arity, stops \\ 1)](#break!/4) Sets up a breakpoint in `module`, `function` and `arity` with the given number of `stops`. [breaks()](#breaks/0) Prints all breakpoints to the terminal. [c(files, path \\ :in\_memory)](#c/2) Compiles the given files. [cd(directory)](#cd/1) Changes the current working directory to the given path. [clear()](#clear/0) Clears the console screen. [continue()](#continue/0) Continues execution of the current process. [exports(module \\ Kernel)](#exports/1) Prints a list of all the functions and macros exported by the given module. [flush()](#flush/0) Clears out all messages sent to the shell's inbox and prints them out. [h()](#h/0) Prints the documentation for [`IEx.Helpers`](#content). [h(term)](#h/1) Prints the documentation for the given module or for the given `function/arity` pair. 
[i(term \\ v(-1))](#i/1) Prints information about the data type of any given term. [import\_file(path)](#import_file/1) Injects the contents of the file at `path` as if it was typed into the shell. [import\_file\_if\_available(path)](#import_file_if_available/1) Similar to `import_file` but only imports the file if it is available. [import\_if\_available(quoted\_module, opts \\ [])](#import_if_available/2) Calls [`import/2`](https://hexdocs.pm/elixir/Kernel.SpecialForms.html#import/2) with the given arguments, but only if the module is available. [l(module)](#l/1) Loads the given module's BEAM code (and ensures any previous old version was properly purged beforehand). [ls(path \\ ".")](#ls/1) Prints a list of the given directory's contents. [nl(nodes \\ Node.list(), module)](#nl/2) Deploys a given module's BEAM code to a list of nodes. [open()](#open/0) Opens the current prying location. [open(term)](#open/1) Opens the given `module`, `module.function/arity`, or `{file, line}`. [pid(string)](#pid/1) Creates a PID from `string`. [pid(x, y, z)](#pid/3) Creates a PID with 3 non-negative integers passed as arguments to the function. [port(string)](#port/1) Creates a Port from `string`. [port(major, minor)](#port/2) Creates a Port from two non-negative integers. [pwd()](#pwd/0) Prints the current working directory. [r(module)](#r/1) Recompiles and reloads the given `module`. [recompile(options \\ [])](#recompile/1) Recompiles the current Mix application. [ref(string)](#ref/1) Creates a Reference from `string`. [ref(w, x, y, z)](#ref/4) Creates a Reference from its 4 non-negative integer components. [remove\_breaks()](#remove_breaks/0) Removes all breakpoints and instrumentation from all modules. [remove\_breaks(module)](#remove_breaks/1) Removes all breakpoints and instrumentation from `module`. [reset\_break(id)](#reset_break/1) Sets the number of pending stops in the breakpoint with the given `id` to zero. [reset\_break(module, function, arity)](#reset_break/3) Sets the number of pending stops in the given module, function and arity to zero. [respawn()](#respawn/0) Respawns the current shell by starting a new shell process. [runtime\_info()](#runtime_info/0) Prints VM/runtime information such as versions, memory usage and statistics. Additional topics are available via [`runtime_info/1`](#runtime_info/1). [runtime\_info(topic)](#runtime_info/1) Just like [`runtime_info/0`](#runtime_info/0), except it accepts a topic or a list of topics. For example, the topic `:applications` will list the applications loaded. [t(term)](#t/1) Prints the types for the given module or for the given function/arity pair. [use\_if\_available(quoted\_module, opts \\ [])](#use_if_available/2) Calls [`use/2`](https://hexdocs.pm/elixir/Kernel.html#use/2) with the given arguments, but only if the module is available. [v(n \\ -1)](#v/1) Returns the value of the `n`th expression in the history. [whereami(radius \\ 2)](#whereami/1) Prints the current location and stacktrace in a pry session. Functions ========== ### b(term) Prints the documentation for the given callback function. It also accepts a single module argument to list all available behaviour callbacks. #### Examples ``` iex> b(Mix.Task.run/1) iex> b(Mix.Task.run) iex> b(GenServer) ``` ### break!(ast, stops \\ 1) Macro-based shortcut for [`IEx.break!/4`](iex#break!/4). ### break!(module, function, arity, stops \\ 1) Sets up a breakpoint in `module`, `function` and `arity` with the given number of `stops`.
See [`IEx.break!/4`](iex#break!/4) for a complete description of breakpoints in IEx. ### breaks() Prints all breakpoints to the terminal. ### c(files, path \\ :in\_memory) Compiles the given files. It expects a list of files to compile and an optional path to write the compiled code to. By default files are in-memory compiled. To write compiled files to the current directory, an empty string can be given. It returns the names of the compiled modules. If you want to recompile an existing module, check [`r/1`](#r/1) instead. #### Examples In the example below, we pass the directory where the [`c/2`](#c/2) function will write the compiled `.beam` files. This directory is typically named "ebin" in Erlang/Elixir systems: ``` iex> c(["foo.ex", "bar.ex"], "ebin") [Foo, Bar] ``` When compiling one file, there is no need to wrap it in a list: ``` iex> c("baz.ex") [Baz] ``` ### cd(directory) Changes the current working directory to the given path. ### clear() Clears the console screen. This function only works if ANSI escape codes are enabled on the shell, which means this function is by default unavailable on Windows machines. ### continue() Continues execution of the current process. This is usually called by sessions started with [`IEx.pry/0`](iex#pry/0) or [`IEx.break!/4`](iex#break!/4). This allows the current process to execute until the next breakpoint, which will automatically yield control back to IEx without requesting permission to pry. If the running process terminates, a new IEx session is started. While the process executes, the user will no longer have control of the shell. If you would rather start a new shell, use [`respawn/0`](#respawn/0) instead. ### exports(module \\ Kernel) Prints a list of all the functions and macros exported by the given module. ### flush() Clears out all messages sent to the shell's inbox and prints them out. ### h() Prints the documentation for [`IEx.Helpers`](#content). ### h(term) Prints the documentation for the given module or for the given `function/arity` pair. #### Examples ``` iex> h(Enum) ``` It also accepts functions in the format `function/arity` and `module.function/arity`, for example: ``` iex> h(receive/1) iex> h(Enum.all?/2) iex> h(Enum.all?) ``` ### i(term \\ v(-1)) Prints information about the data type of any given term. If no argument is given, the value of the previous expression is used. #### Examples ``` iex> i(1..5) ``` Will print: ``` Term 1..5 Data type Range Description This is a struct. Structs are maps with a __struct__ key. Reference modules Range, Map ``` ### import\_file(path) Injects the contents of the file at `path` as if it was typed into the shell. This would be the equivalent of getting all of the file contents and packing it all into a single line in IEx and executing it. By default, the contents of a `.iex.exs` file in the same directory as you are starting IEx are automatically imported. See the section for ".iex.exs" in the [`IEx`](iex) module docs for more information. `path` has to be a literal string and is automatically expanded via [`Path.expand/1`](https://hexdocs.pm/elixir/Path.html#expand/1). #### Examples ``` # ~/file.exs value = 13 # in the shell iex(1)> import_file("~/file.exs") 13 iex(2)> value 13 ``` ### import\_file\_if\_available(path) Similar to `import_file` but only imports the file if it is available. By default, [`import_file/1`](#import_file/1) fails when the given file does not exist.
However, since [`import_file/1`](#import_file/1) is expanded at compile-time, it's not possible to conditionally import a file since the macro is always expanded: ``` # This raises a File.Error if ~/.iex.exs doesn't exist. if "~/.iex.exs" |> Path.expand() |> File.exists?() do import_file("~/.iex.exs") end ``` This macro addresses this issue by checking if the file exists or not on behalf of the user. ### import\_if\_available(quoted\_module, opts \\ []) Calls [`import/2`](https://hexdocs.pm/elixir/Kernel.SpecialForms.html#import/2) with the given arguments, but only if the module is available. This lets you put imports in `.iex.exs` files (including `~/.iex.exs`) without getting compile errors if you open a console where the module is not available. #### Example ``` # In ~/.iex.exs import_if_available(Ecto.Query) ``` ### l(module) Loads the given module's BEAM code (and ensures any previous old version was properly purged beforehand). This function is useful when you know the bytecode for the module has been updated in the file system and you want to tell the VM to load it. ### ls(path \\ ".") Prints a list of the given directory's contents. If `path` points to a file, prints its full path. ### nl(nodes \\ Node.list(), module) Deploys a given module's BEAM code to a list of nodes. This function is useful for development and debugging when you have code that has been compiled or updated locally that you want to run on other nodes. The node list defaults to a list of all connected nodes. Returns `{:error, :nofile}` if the object code (i.e. ".beam" file) for the module could not be found locally. #### Examples ``` iex> nl(HelloWorld) {:ok, [ {:node1@easthost, :loaded, HelloWorld}, {:node1@westhost, :loaded, HelloWorld} ]} iex> nl(NoSuchModuleExists) {:error, :nofile} ``` ### open() Opens the current prying location. This command only works inside a pry session started manually via [`IEx.pry/0`](iex#pry/0) or a breakpoint set via [`IEx.break!/4`](iex#break!/4). Calling this function during a regular [`IEx`](iex) session will print an error. Keep in mind the [`open/0`](#open/0) location may not exist when prying precompiled source code, such as Elixir itself. For more information and to open any module or function, see [`open/1`](#open/1). ### open(term) Opens the given `module`, `module.function/arity`, or `{file, line}`. This function uses the `ELIXIR_EDITOR` environment variable and falls back to `EDITOR` if the former is not available. By default, it attempts to open the file and line using the `file:line` notation. For example, if your editor is called `subl`, it will open the file as: ``` subl path/to/file:line ``` It is important that you choose an editor command that does not block and does not attempt to run the editor directly in the terminal. Command-line based editors likely need extra configuration so they open up the given file and line in a separate window. Custom editors are supported by using the `__FILE__` and `__LINE__` notations, for example: ``` ELIXIR_EDITOR="my_editor +__LINE__ __FILE__" ``` and Elixir will properly interpolate values. Since this function prints the result returned by the editor, `ELIXIR_EDITOR` can be set to "echo" if you prefer to display the location rather than opening it. Keep in mind the location may not exist when opening precompiled source code. #### Examples ``` iex> open(MyApp) iex> open(MyApp.fun/2) iex> open({"path/to/file", 1}) ``` ### pid(string) Creates a PID from `string`.
#### Examples ``` iex> pid("0.21.32") #PID<0.21.32> ``` ### pid(x, y, z) Creates a PID with 3 non-negative integers passed as arguments to the function. #### Examples ``` iex> pid(0, 21, 32) #PID<0.21.32> iex> pid(0, 64, 2048) #PID<0.64.2048> ``` ### port(string) Creates a Port from `string`. #### Examples ``` iex> port("0.4") #Port<0.4> ``` ### port(major, minor) Creates a Port from two non-negative integers. #### Examples ``` iex> port(0, 8080) #Port<0.8080> iex> port(0, 443) #Port<0.443> ``` ### pwd() Prints the current working directory. ### r(module) Recompiles and reloads the given `module`. Please note that all the modules defined in the same file as `module` are recompiled and reloaded. This function is meant to be used for development and debugging purposes. Do not depend on it in production code. #### In-memory reloading When we reload the module in IEx, we recompile the module source code, updating its contents in memory. The original `.beam` file on disk, probably the one where the first definition of the module came from, does not change at all. Since typespecs and docs are loaded from the .beam file (they are not loaded in memory with the module because there is no need for them to be in memory), they are not reloaded when you reload the module. ### recompile(options \\ []) Recompiles the current Mix application. This helper only works when IEx is started with a Mix project, for example, `iex -S mix`. The application is not restarted after compilation, which means any long-running process may crash as any changed module will be temporarily removed and recompiled, without going through the proper code change callback. If you want to reload a single module, consider using `r(ModuleName)` instead. This function is meant to be used for development and debugging purposes. Do not depend on it in production code. #### Options * `:force` - when `true`, forces the application to recompile ### ref(string) Creates a Reference from `string`. #### Examples ``` iex> ref("0.1.2.3") #Reference<0.1.2.3> ``` ### ref(w, x, y, z) Creates a Reference from its 4 non-negative integer components. #### Examples ``` iex> ref(0, 1, 2, 3) #Reference<0.1.2.3> ``` ### remove\_breaks() Removes all breakpoints and instrumentation from all modules. ### remove\_breaks(module) Removes all breakpoints and instrumentation from `module`. ### reset\_break(id) Sets the number of pending stops in the breakpoint with the given `id` to zero. Returns `:ok` if there is such a breakpoint ID, `:not_found` otherwise. Note the module remains "instrumented" on reset. If you would like to effectively remove all breakpoints and instrumentation code from a module, use [`remove_breaks/1`](#remove_breaks/1) instead. ### reset\_break(module, function, arity) Sets the number of pending stops in the given module, function and arity to zero. If the module is not instrumented or if the given function does not have a breakpoint, it is a no-op and it returns `:not_found`. Otherwise it returns `:ok`. Note the module remains "instrumented" on reset. If you would like to effectively remove all breakpoints and instrumentation code from a module, use [`remove_breaks/1`](#remove_breaks/1) instead. ### respawn() Respawns the current shell by starting a new shell process. ### runtime\_info() Prints VM/runtime information such as versions, memory usage and statistics. Additional topics are available via [`runtime_info/1`](#runtime_info/1).
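#### Examples

A quick way to inspect the VM from the shell (the report's exact contents vary by system, so the output is elided):

```
iex> runtime_info()
```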
### runtime\_info(topic) Just like [`runtime_info/0`](#runtime_info/0), except it accepts a topic or a list of topics. For example, the topic `:applications` will list the applications loaded. ### t(term) Prints the types for the given module or for the given function/arity pair. #### Examples ``` iex> t(Enum) @type t() :: Enumerable.t() @type acc() :: any() @type element() :: any() @type index() :: integer() @type default() :: any() iex> t(Enum.t/0) @type t() :: Enumerable.t() iex> t(Enum.t) @type t() :: Enumerable.t() ``` ### use\_if\_available(quoted\_module, opts \\ []) Calls [`use/2`](https://hexdocs.pm/elixir/Kernel.html#use/2) with the given arguments, but only if the module is available. This lets you use the module in `.iex.exs` files (including `~/.iex.exs`) without getting compile errors if you open a console where the module is not available. #### Example ``` # In ~/.iex.exs use_if_available(Phoenix.HTML) ``` ### v(n \\ -1) Returns the value of the `n`th expression in the history. `n` can be a negative value: if it is, the corresponding expression value relative to the current one is returned. For example, `v(-2)` returns the value of the expression evaluated before the last evaluated expression. In particular, `v(-1)` returns the result of the last evaluated expression and `v()` does the same. #### Examples ``` iex(1)> "hello" <> " world" "hello world" iex(2)> 40 + 2 42 iex(3)> v(-2) "hello world" iex(4)> v(2) 42 iex(5)> v() 42 ``` ### whereami(radius \\ 2) Prints the current location and stacktrace in a pry session. It expects a `radius` which chooses how many lines before and after the current line we should print. By default the `radius` is two lines: ``` Location: lib/iex/lib/iex/helpers.ex:79 77: 78: def recompile do 79: require IEx; IEx.pry() 80: if mix_started?() do 81: config = Mix.Project.config (IEx.Helpers) lib/iex/lib/iex/helpers.ex:78: IEx.Helpers.recompile/0 ``` This command only works inside a pry session started manually via [`IEx.pry/0`](iex#pry/0) or a breakpoint set via [`IEx.break!/4`](iex#break!/4). Calling this function during a regular [`IEx`](iex) session will print an error. Keep in mind the [`whereami/1`](#whereami/1) location may not exist when prying precompiled source code, such as Elixir itself.
elixir Atom Atom ===== Convenience functions for working with atoms. See also [`Kernel.is_atom/1`](kernel#is_atom/1). Summary ======== Functions ---------- [to\_charlist(atom)](#to_charlist/1) Converts an atom to a charlist. [to\_string(atom)](#to_string/1) Converts an atom to a string. Functions ========== ### to\_charlist(atom) #### Specs ``` to_charlist(atom()) :: charlist() ``` Converts an atom to a charlist. Inlined by the compiler. #### Examples ``` iex> Atom.to_charlist(:"An atom") 'An atom' ``` ### to\_string(atom) #### Specs ``` to_string(atom()) :: String.t() ``` Converts an atom to a string. Inlined by the compiler. #### Examples ``` iex> Atom.to_string(:foo) "foo" ``` elixir mix compile mix compile ============ A meta task that compiles source files. It simply runs the compilers registered in your project and returns a tuple with the compilation status and a list of diagnostics. Configuration -------------- * `:compilers` - compilers to run, defaults to [`Mix.compilers/0`](mix#compilers/0), which are `[:yecc, :leex, :erlang, :elixir, :xref, :app]`. * `:consolidate_protocols` - when `true`, runs protocol consolidation via the `compile.protocols` task. The default value is `true`. * `:build_embedded` - when `true`, embeds all code and priv content in the `_build` directory instead of using symlinks. * `:build_path` - the directory where build artifacts should be written to. This option is intended only for child apps within a larger umbrella application so that each child app can use the common `_build` directory of the parent umbrella. In a non-umbrella context, configuring this has undesirable side-effects (such as skipping some compiler checks) and should be avoided. Compilers ---------- To see documentation for each specific compiler, you must invoke `help` directly for the compiler command: ``` mix help compile.elixir mix help compile.erlang ``` You can get a list of all compilers by running: ``` mix compile --list ``` Command line options --------------------- * `--list` - lists all enabled compilers * `--no-archives-check` - skips checking of archives * `--no-deps-check` - skips checking of dependencies * `--no-protocol-consolidation` - skips protocol consolidation * `--force` - forces compilation * `--return-errors` - returns error status and diagnostics instead of exiting on error * `--erl-config` - path to an Erlang term file that will be loaded as Mix config Summary ======== Functions ---------- [compilers()](#compilers/0) Returns all compilers. Functions ========== ### compilers() Returns all compilers. elixir IEx.Server IEx.Server =========== The IEx.Server. The server responsibilities include: * reading input from the group leader and writing to the group leader * sending messages to the evaluator * taking over the evaluator process when using [`IEx.pry/0`](iex#pry/0) or setting up breakpoints Summary ======== Functions ---------- [run(opts)](#run/1) Starts a new IEx server session. Functions ========== ### run(opts) #### Specs ``` run(keyword()) :: :ok ``` Starts a new IEx server session. 
The accepted options are: * `:prefix` - the IEx prefix * `:env` - the [`Macro.Env`](https://hexdocs.pm/elixir/Macro.Env.html) used for the evaluator * `:binding` - an initial set of variables for the evaluator * `:on_eof` - if it should `:stop_evaluator` (default) or `:halt` the system elixir alias, require, and import Getting Started alias, require, and import ========================== In order to facilitate software reuse, Elixir provides three directives (`alias`, `require` and `import`) plus a macro called `use` summarized below: ``` # Alias the module so it can be called as Bar instead of Foo.Bar alias Foo.Bar, as: Bar # Require the module in order to use its macros require Foo # Import functions from Foo so they can be called without the `Foo.` prefix import Foo # Invokes the custom code defined in Foo as an extension point use Foo ``` We are going to explore them in detail now. Keep in mind the first three are called directives because they have **lexical scope**, while `use` is a common extension point that allows the used module to inject code. alias ----- `alias` allows you to set up aliases for any given module name. Imagine a module uses a specialized list implemented in `Math.List`. The `alias` directive allows referring to `Math.List` just as `List` within the module definition: ``` defmodule Stats do alias Math.List, as: List # In the remaining module definition List expands to Math.List. end ``` The original `List` can still be accessed within `Stats` by the fully-qualified name `Elixir.List`. > Note: All modules defined in Elixir are defined inside the main `Elixir` namespace. However, for convenience, you can omit “Elixir.” when referencing them. > > Aliases are frequently used to define shortcuts. In fact, calling `alias` without an `:as` option sets the alias automatically to the last part of the module name, for example: ``` alias Math.List ``` Is the same as: ``` alias Math.List, as: List ``` Note that `alias` is **lexically scoped**, which allows you to set aliases inside specific functions: ``` defmodule Math do def plus(a, b) do alias Math.List # ... end def minus(a, b) do # ... end end ``` In the example above, since we are invoking `alias` inside the function `plus/2`, the alias will be valid only inside the function `plus/2`. `minus/2` won’t be affected at all. require ------- Elixir provides macros as a mechanism for meta-programming (writing code that generates code). Macros are expanded at compile time. Public functions in modules are globally available, but in order to use macros, you need to opt-in by requiring the module they are defined in. ``` iex> Integer.is_odd(3) ** (CompileError) iex:1: you must require Integer before invoking the macro Integer.is_odd/1 (elixir) src/elixir_dispatch.erl:97: :elixir_dispatch.dispatch_require/6 iex> require Integer Integer iex> Integer.is_odd(3) true ``` In Elixir, `Integer.is_odd/1` is defined as a macro so that it can be used as a guard. This means that, in order to invoke `Integer.is_odd/1`, we need to first require the `Integer` module. Note that like the `alias` directive, `require` is also lexically scoped. We will talk more about macros in a later chapter. import ------ We use `import` whenever we want to access functions or macros from other modules without using the fully-qualified name. Note we can only import public functions, as private functions are never accessible externally. 
For example, if we want to use the `duplicate/2` function from the `List` module several times, we can import it: ``` iex> import List, only: [duplicate: 2] List iex> duplicate :ok, 3 [:ok, :ok, :ok] ``` We imported only the function `duplicate` (with arity 2) from `List`. Although `:only` is optional, its usage is recommended in order to avoid importing all the functions of a given module inside the current scope. `:except` could also be given as an option in order to import everything in a module *except* a list of functions. Note that `import` is **lexically scoped** too. This means that we can import specific macros or functions inside function definitions: ``` defmodule Math do def some_function do import List, only: [duplicate: 2] duplicate(:ok, 10) end end ``` In the example above, the imported `List.duplicate/2` is only visible within that specific function. `duplicate/2` won’t be available in any other function in that module (or any other module for that matter). Note that `import`ing a module automatically `require`s it. use --- The `use` macro is frequently used as an extension point. This means that, when you `use` a module `FooBar`, you allow that module to inject *any* code in the current module, such as importing itself or other modules, defining new functions, setting a module state, etc. For example, in order to write tests using the ExUnit framework, a developer should use the `ExUnit.Case` module: ``` defmodule AssertionTest do use ExUnit.Case, async: true test "always pass" do assert true end end ``` Behind the scenes, `use` requires the given module and then calls the `__using__/1` callback on it allowing the module to inject some code into the current context. Some modules (for example, the above `ExUnit.Case`, but also `Supervisor` and `GenServer`) use this mechanism to populate your module with some basic behaviour, which your module is intended to override or complete. Generally speaking, the following module: ``` defmodule Example do use Feature, option: :value end ``` is compiled into ``` defmodule Example do require Feature Feature.__using__(option: :value) end ``` Since `use` allows any code to run, we can’t really know the side-effects of using a module without reading its documentation. For this reason, `import` and `alias` are often preferred, as their semantics are defined by the language. Understanding Aliases --------------------- At this point, you may be wondering: what exactly is an Elixir alias and how is it represented? An alias in Elixir is a capitalized identifier (like `String`, `Keyword`, etc) which is converted to an atom during compilation. For instance, the `String` alias translates by default to the atom `:"Elixir.String"`: ``` iex> is_atom(String) true iex> to_string(String) "Elixir.String" iex> :"Elixir.String" == String true ``` By using the `alias/2` directive, we are changing the atom the alias expands to. Aliases expand to atoms because in the Erlang VM (and consequently Elixir) modules are always represented by atoms. For example, that’s the mechanism we use to call Erlang modules: ``` iex> :lists.flatten([1, [2], 3]) [1, 2, 3] ``` Module nesting -------------- Now that we have talked about aliases, we can talk about nesting and how it works in Elixir. Consider the following example: ``` defmodule Foo do defmodule Bar do end end ``` The example above will define two modules: `Foo` and `Foo.Bar`. The second can be accessed as `Bar` inside `Foo` as long as they are in the same lexical scope. 
The code above is exactly the same as: ``` defmodule Elixir.Foo do defmodule Elixir.Foo.Bar do end alias Elixir.Foo.Bar, as: Bar end ``` If, later, the `Bar` module is moved outside the `Foo` module definition, it must be referenced by its full name (`Foo.Bar`) or an alias must be set using the `alias` directive discussed above. **Note**: in Elixir, you don't have to define the `Foo` module before being able to define the `Foo.Bar` module, as the language translates all module names to atoms. You can define arbitrarily-nested modules without defining any module in the chain (e.g., `Foo.Bar.Baz` without defining `Foo` or `Foo.Bar` first). As we will see in later chapters, aliases also play a crucial role in macros, to guarantee they are hygienic. Multi alias/import/require/use ------------------------------ From Elixir v1.2, it is possible to alias, import or require multiple modules at once. This is particularly useful once we start nesting modules, which is very common when building Elixir applications. For example, imagine you have an application where all modules are nested under `MyApp`; you can alias the modules `MyApp.Foo`, `MyApp.Bar` and `MyApp.Baz` at once as follows: ``` alias MyApp.{Foo, Bar, Baz} ``` With this, we have finished our tour of Elixir modules. The last topic to cover is module attributes. elixir Structs Getting Started Structs ======= In [chapter 7](keywords-and-maps) we learned about maps: ``` iex> map = %{a: 1, b: 2} %{a: 1, b: 2} iex> map[:a] 1 iex> %{map | a: 3} %{a: 3, b: 2} ``` Structs are extensions built on top of maps that provide compile-time checks and default values. Defining structs ---------------- To define a struct, the `defstruct` construct is used: ``` iex> defmodule User do ...> defstruct name: "John", age: 27 ...> end ``` The keyword list used with `defstruct` defines what fields the struct will have along with their default values. Structs take the name of the module they're defined in. In the example above, we defined a struct named `User`. We can now create `User` structs by using a syntax similar to the one used to create maps (if you have defined the struct in a separate file, you can compile the file inside IEx before proceeding by running `c "file.exs"`; be aware that you may get an error saying `the struct was not yet defined` if you try the example below directly in a file, because of when struct definitions are resolved at compile time): ``` iex> %User{} %User{age: 27, name: "John"} iex> %User{name: "Jane"} %User{age: 27, name: "Jane"} ``` Structs provide *compile-time* guarantees that only the fields (and *all* of them) defined through `defstruct` will be allowed to exist in a struct: ``` iex> %User{oops: :field} ** (KeyError) key :oops not found in: %User{age: 27, name: "John"} ``` Accessing and updating structs ------------------------------ When we discussed maps, we showed how we can access and update the fields of a map. The same techniques (and the same syntax) apply to structs as well: ``` iex> john = %User{} %User{age: 27, name: "John"} iex> john.name "John" iex> jane = %{john | name: "Jane"} %User{age: 27, name: "Jane"} iex> %{jane | oops: :field} ** (KeyError) key :oops not found in: %User{age: 27, name: "Jane"} ``` When using the update syntax (`|`), the VM is aware that no new keys will be added to the struct, allowing the maps underneath to share their structure in memory. In the example above, both `john` and `jane` share the same key structure in memory.
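To make this concrete, here is a small sketch reusing the `User` struct and the `john` variable from above; only the updated field is new, the rest of the struct is shared:

```
iex> bump_age = fn %User{} = user -> %{user | age: user.age + 1} end
iex> bump_age.(john)
%User{age: 28, name: "John"}
```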
Structs can also be used in pattern matching, both for matching on the value of specific keys as well as for ensuring that the matching value is a struct of the same type as the matched value. ``` iex> %User{name: name} = john %User{age: 27, name: "John"} iex> name "John" iex> %User{} = %{} ** (MatchError) no match of right hand side value: %{} ``` Structs are bare maps underneath -------------------------------- In the example above, pattern matching works because underneath structs are bare maps with a fixed set of fields. As maps, structs store a “special” field named `__struct__` that holds the name of the struct: ``` iex> is_map(john) true iex> john.__struct__ User ``` Notice that we referred to structs as **bare** maps because none of the protocols implemented for maps are available for structs. For example, you can neither enumerate nor access a struct: ``` iex> john = %User{} %User{age: 27, name: "John"} iex> john[:name] ** (UndefinedFunctionError) function User.fetch/2 is undefined (User does not implement the Access behaviour) User.fetch(%User{age: 27, name: "John"}, :name) iex> Enum.each john, fn({field, value}) -> IO.puts(value) end ** (Protocol.UndefinedError) protocol Enumerable not implemented for %User{age: 27, name: "John"} ``` However, since structs are just maps, they work with the functions from the `Map` module: ``` iex> jane = Map.put(%User{}, :name, "Jane") %User{age: 27, name: "Jane"} iex> Map.merge(jane, %User{name: "John"}) %User{age: 27, name: "John"} iex> Map.keys(jane) [:__struct__, :age, :name] ``` Structs alongside protocols provide one of the most important features for Elixir developers: data polymorphism. That’s what we will explore in the next chapter. Default values and required keys -------------------------------- If you don’t specify a default key value when defining a struct, `nil` will be assumed: ``` iex> defmodule Product do ...> defstruct [:name] ...> end iex> %Product{} %Product{name: nil} ``` You can define a structure combining both fields with explicit default values, and implicit `nil` values. In this case you must first specify the fields which implicitly default to nil: ``` iex> defmodule User do ...> defstruct [:email, name: "John", age: 27] ...> end iex> %User{} %User{age: 27, email: nil, name: "John"} ``` Doing it in reverse order will raise a syntax error: ``` iex> defmodule User do ...> defstruct [name: "John", age: 27, :email] ...> end ** (SyntaxError) iex:107: syntax error before: email ``` You can also enforce that certain keys have to be specified when creating the struct: ``` iex> defmodule Car do ...> @enforce_keys [:make] ...> defstruct [:model, :make] ...> end iex> %Car{} ** (ArgumentError) the following keys must also be given when building struct Car: [:make] expanding struct: Car.__struct__/1 ``` elixir Erlang libraries Getting Started Erlang libraries ================ Elixir provides excellent interoperability with Erlang libraries. In fact, Elixir discourages simply wrapping Erlang libraries in favor of directly interfacing with Erlang code. In this section, we will present some of the most common and useful Erlang functionality that is not found in Elixir. As you grow more proficient in Elixir, you may want to explore the Erlang [STDLIB Reference Manual](http://erlang.org/doc/apps/stdlib/index.html) in more detail. The binary module ----------------- The built-in Elixir String module handles binaries that are UTF-8 encoded. 
[The binary module](http://erlang.org/doc/man/binary.html) is useful when you are dealing with binary data that is not necessarily UTF-8 encoded. ``` iex> String.to_charlist "Ø" [216] iex> :binary.bin_to_list "Ø" [195, 152] ``` The above example shows the difference; the `String` module returns Unicode codepoints, while `:binary` deals with raw data bytes. Formatted text output --------------------- Elixir does not contain a function similar to `printf` found in C and other languages. Luckily, the Erlang standard library functions `:io.format/2` and `:io_lib.format/2` may be used. The first formats to terminal output, while the second formats to an iolist. The format specifiers differ from `printf`; [refer to the Erlang documentation for details](http://erlang.org/doc/man/io.html#format-1). ``` iex> :io.format("Pi is approximately given by:~10.3f~n", [:math.pi]) Pi is approximately given by: 3.142 :ok iex> to_string :io_lib.format("Pi is approximately given by:~10.3f~n", [:math.pi]) "Pi is approximately given by: 3.142\n" ``` Also note that Erlang’s formatting functions require special attention to Unicode handling. The crypto module ----------------- [The crypto module](http://erlang.org/doc/man/crypto.html) contains hashing functions, digital signatures, encryption and more: ``` iex> Base.encode16(:crypto.hash(:sha256, "Elixir")) "3315715A7A3AD57428298676C5AE465DADA38D951BDFAC9348A8A31E9C7401CB" ``` The `:crypto` module is not part of the Erlang standard library, but is included with the Erlang distribution. This means you must list `:crypto` in your project’s applications list whenever you use it. To do this, edit your `mix.exs` file to include: ``` def application do [extra_applications: [:crypto]] end ``` The digraph module ------------------ [The digraph module](http://erlang.org/doc/man/digraph.html) (as well as [digraph\_utils](http://erlang.org/doc/man/digraph_utils.html)) contains functions for dealing with directed graphs built of vertices and edges. After constructing the graph, the algorithms in there will help find, for instance, the shortest path between two vertices, or loops in the graph. Given three vertices, find the shortest path from the first to the last. ``` iex> digraph = :digraph.new() iex> coords = [{0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}] iex> [v0, v1, v2] = (for c <- coords, do: :digraph.add_vertex(digraph, c)) iex> :digraph.add_edge(digraph, v0, v1) iex> :digraph.add_edge(digraph, v1, v2) iex> :digraph.get_short_path(digraph, v0, v2) [{0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}] ``` Note that the functions in `:digraph` alter the graph structure in-place; this is possible because they are implemented as ETS tables, which are explained next. Erlang Term Storage ------------------- The modules [`ets`](http://erlang.org/doc/man/ets.html) and [`dets`](http://erlang.org/doc/man/dets.html) handle storage of large data structures in memory or on disk respectively. ETS lets you create a table containing tuples. By default, ETS tables are protected, which means only the owner process may write to the table but any other process can read. ETS has some functionality to allow a table to be used as a simple database, a key-value store or as a cache mechanism. The functions in the `ets` module will modify the state of the table as a side-effect.
``` iex> table = :ets.new(:ets_test, []) # Store as tuples with {name, population} iex> :ets.insert(table, {"China", 1_374_000_000}) iex> :ets.insert(table, {"India", 1_284_000_000}) iex> :ets.insert(table, {"USA", 322_000_000}) iex> :ets.i(table) <1 > {<<"India">>,1284000000} <2 > {<<"USA">>,322000000} <3 > {<<"China">>,1374000000} ``` The math module --------------- [The `math` module](http://erlang.org/doc/man/math.html) contains common mathematical operations covering trigonometry, exponential, and logarithmic functions. ``` iex> angle_45_deg = :math.pi() * 45.0 / 180.0 iex> :math.sin(angle_45_deg) 0.7071067811865475 iex> :math.exp(55.0) 7.694785265142018e23 iex> :math.log(7.694785265142018e23) 55.0 ``` The queue module ---------------- The [`queue` is a data structure](http://erlang.org/doc/man/queue.html) that implements (double-ended) FIFO (first-in first-out) queues efficiently: ``` iex> q = :queue.new iex> q = :queue.in("A", q) iex> q = :queue.in("B", q) iex> {value, q} = :queue.out(q) iex> value {:value, "A"} iex> {value, q} = :queue.out(q) iex> value {:value, "B"} iex> {value, q} = :queue.out(q) iex> value :empty ``` The rand module --------------- [`rand` has functions](http://erlang.org/doc/man/rand.html) for returning random values and setting the random seed. ``` iex> :rand.uniform() 0.8175669086010815 iex> _ = :rand.seed(:exs1024, {123, 123534, 345345}) iex> :rand.uniform() 0.5820506340260994 iex> :rand.uniform(6) 6 ``` The zip and zlib modules ------------------------ [The `zip` module](http://erlang.org/doc/man/zip.html) lets you read and write ZIP files to and from disk or memory, as well as extract file information. This code counts the number of files in a ZIP file: ``` iex> :zip.foldl(fn _, _, _, acc -> acc + 1 end, 0, :binary.bin_to_list("file.zip")) {:ok, 633} ``` [The `zlib` module](http://erlang.org/doc/man/zlib.html) deals with data compression in zlib format, as found in the `gzip` command. ``` iex> song = " ...> Mary had a little lamb, ...> His fleece was white as snow, ...> And everywhere that Mary went, ...> The lamb was sure to go." iex> compressed = :zlib.compress(song) iex> byte_size song 110 iex> byte_size compressed 99 iex> :zlib.uncompress(compressed) "\nMary had a little lamb,\nHis fleece was white as snow,\nAnd everywhere that Mary went,\nThe lamb was sure to go." ```
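If you need output that the `gzip` command line tool can read directly, the same `:zlib` module also exposes gzip helpers, `:zlib.gzip/1` and `:zlib.gunzip/1`. A quick sketch of the round trip, reusing the `song` binary from the example above: ``` iex> gzipped = :zlib.gzip(song) iex> :zlib.gunzip(gzipped) == song true ```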
elixir ExUnit.Test ExUnit.Test ============ A struct that keeps information about the test. It is received by formatters and contains the following fields: * `:name` - the test name * `:module` - the test module * `:state` - the finished test state (see [`ExUnit.state/0`](exunit#t:state/0)) * `:time` - the duration in microseconds of the test's runtime * `:tags` - the test tags * `:logs` - the captured logs Summary ======== Types ------ [t()](#t:t/0) Types ====== ### t() #### Specs ``` t() :: %ExUnit.Test{ case: module(), logs: String.t(), module: module(), name: atom(), state: ExUnit.state(), tags: map(), time: non_neg_integer() } ``` elixir System System ======= The [`System`](#content) module provides functions that interact directly with the VM or the host system. Time ----- The [`System`](#content) module also provides functions that work with time, returning different times kept by the system with support for different time units. One of the complexities in relying on system times is that they may be adjusted. For example, when you enter and leave daylight saving time, the system clock will be adjusted, often adding or removing one hour. We call such changes "time warps". In order to understand how such changes may be harmful, imagine the following code: ``` ## DO NOT DO THIS prev = System.os_time() # ... execute some code ... next = System.os_time() diff = next - prev ``` If, while the code is executing, the system clock changes, some code that executed in 1 second may be reported as taking over 1 hour! To address such concerns, the VM provides a monotonic time via [`System.monotonic_time/0`](system#monotonic_time/0) which never decreases and does not leap: ``` ## DO THIS prev = System.monotonic_time() # ... execute some code ... next = System.monotonic_time() diff = next - prev ``` Generally speaking, the VM provides three time measurements: * [`os_time/0`](#os_time/0) - the time reported by the operating system (OS). This time may be adjusted forwards or backwards in time with no limitation; * [`system_time/0`](#system_time/0) - the VM view of the [`os_time/0`](#os_time/0). The system time and operating system time may not match in case of time warps although the VM works towards aligning them. This time is not monotonic (i.e., it may decrease) as its behaviour is configured [by the VM time warp mode](http://www.erlang.org/doc/apps/erts/time_correction.html#Time_Warp_Modes); * [`monotonic_time/0`](#monotonic_time/0) - a monotonically increasing time provided by the Erlang VM. The time functions in this module work in the `:native` unit (unless specified otherwise), which is operating system dependent. Most of the time, all calculations are done in the `:native` unit, to avoid loss of precision, with [`convert_time_unit/3`](#convert_time_unit/3) being invoked at the end to convert to a specific time unit like `:millisecond` or `:microsecond`. See the [`time_unit/0`](#t:time_unit/0) type for more information. For a more complete rundown on the VM support for different times, see the [chapter on time and time correction](http://www.erlang.org/doc/apps/erts/time_correction.html) in the Erlang docs. Summary ======== Types ------ [time\_unit()](#t:time_unit/0) The time unit to be passed to functions like [`monotonic_time/1`](#monotonic_time/1) and others. Functions ---------- [argv()](#argv/0) Lists command line arguments. [argv(args)](#argv/1) Modifies command line arguments. [at\_exit(fun)](#at_exit/1) Registers a program exit handler function. 
[build\_info()](#build_info/0) Elixir build information. [cmd(command, args, opts \\ [])](#cmd/3) Executes the given `command` with `args`. [compiled\_endianness()](#compiled_endianness/0) Returns the endianness the system was compiled with. [convert\_time\_unit(time, from\_unit, to\_unit)](#convert_time_unit/3) Converts `time` from time unit `from_unit` to time unit `to_unit`. [cwd()](#cwd/0) deprecated Current working directory. [cwd!()](#cwd!/0) deprecated Current working directory, exception on error. [delete\_env(varname)](#delete_env/1) Deletes an environment variable. [endianness()](#endianness/0) Returns the endianness. [fetch\_env(varname)](#fetch_env/1) Returns the value of the given environment variable or `:error` if not found. [fetch\_env!(varname)](#fetch_env!/1) Returns the value of the given environment variable or raises if not found. [find\_executable(program)](#find_executable/1) Locates an executable on the system. [get\_env()](#get_env/0) Returns all system environment variables. [get\_env(varname, default \\ nil)](#get_env/2) Returns the value of the given environment variable. [get\_pid()](#get_pid/0) Erlang VM process identifier. [halt(status \\ 0)](#halt/1) Immediately halts the Erlang runtime system. [monotonic\_time()](#monotonic_time/0) Returns the current monotonic time in the `:native` time unit. [monotonic\_time(unit)](#monotonic_time/1) Returns the current monotonic time in the given time unit. [no\_halt()](#no_halt/0) Checks if the system will halt or not at the end of ARGV processing. [no\_halt(boolean)](#no_halt/1) Marks if the system should halt or not at the end of ARGV processing. [os\_time()](#os_time/0) Returns the current operating system (OS) time. [os\_time(unit)](#os_time/1) Returns the current operating system (OS) time in the given time `unit`. [otp\_release()](#otp_release/0) Returns the Erlang/OTP release number. [pid()](#pid/0) Returns the operating system PID for the current Erlang runtime system instance. [put\_env(enum)](#put_env/1) Sets multiple environment variables. [put\_env(varname, value)](#put_env/2) Sets an environment variable value. [restart()](#restart/0) Restarts all applications in the Erlang runtime system. [schedulers()](#schedulers/0) Returns the number of schedulers in the VM. [schedulers\_online()](#schedulers_online/0) Returns the number of schedulers online in the VM. [stacktrace()](#stacktrace/0) Deprecated mechanism to retrieve the last exception stacktrace. [stop(status \\ 0)](#stop/1) Carefully stops the Erlang runtime system. [system\_time()](#system_time/0) Returns the current system time in the `:native` time unit. [system\_time(unit)](#system_time/1) Returns the current system time in the given time unit. [time\_offset()](#time_offset/0) Returns the current time offset between the Erlang VM monotonic time and the Erlang VM system time. [time\_offset(unit)](#time_offset/1) Returns the current time offset between the Erlang VM monotonic time and the Erlang VM system time. [tmp\_dir()](#tmp_dir/0) Writable temporary directory. [tmp\_dir!()](#tmp_dir!/0) Writable temporary directory, exception on error. [unique\_integer(modifiers \\ [])](#unique_integer/1) Generates and returns an integer that is unique in the current runtime instance. [user\_home()](#user_home/0) User home directory. [user\_home!()](#user_home!/0) User home directory, exception on error. [version()](#version/0) Elixir version information. 
Types ====== ### time\_unit() #### Specs ``` time_unit() :: :second | :millisecond | :microsecond | :nanosecond | pos_integer() ``` The time unit to be passed to functions like [`monotonic_time/1`](#monotonic_time/1) and others. The `:second`, `:millisecond`, `:microsecond` and `:nanosecond` time units control the return value of the functions that accept a time unit. A time unit can also be a strictly positive integer. In this case, it represents the "parts per second": the time will be returned in `1 / parts_per_second` seconds. For example, using the `:millisecond` time unit is equivalent to using `1000` as the time unit (as the time will be returned in 1/1000 seconds - milliseconds). Functions ========== ### argv() #### Specs ``` argv() :: [String.t()] ``` Lists command line arguments. Returns the list of command line arguments passed to the program. ### argv(args) #### Specs ``` argv([String.t()]) :: :ok ``` Modifies command line arguments. Changes the list of command line arguments. Use it with caution, as it destroys any previous argv information. ### at\_exit(fun) #### Specs ``` at_exit((non_neg_integer() -> any())) :: :ok ``` Registers a program exit handler function. Registers a function that will be invoked at the end of program execution. Useful for invoking a hook in "script" mode. The handler always executes in a different process from the one it was registered in. As a consequence, any resources managed by the calling process (ETS tables, open files, etc.) won't be available by the time the handler function is invoked. The function must receive the exit status code as an argument. ### build\_info() #### Specs ``` build_info() :: %{ build: String.t(), date: String.t(), revision: String.t(), version: String.t(), otp_release: String.t() } ``` Elixir build information. Returns a map with the Elixir version, the Erlang/OTP release it was compiled with, a short Git revision hash and the date and time it was built. Every value in the map is a string, and these are: * `:build` - the Elixir version, short Git revision hash and Erlang/OTP release it was compiled with * `:date` - a string representation of the ISO8601 date and time it was built * `:otp_release` - OTP release it was compiled with * `:revision` - short Git revision hash. If Git was not available at building time, it is set to `""` * `:version` - the Elixir version One should not rely on the specific formats returned by each of those fields. Instead one should use specialized functions, such as [`version/0`](#version/0) to retrieve the Elixir version and [`otp_release/0`](#otp_release/0) to retrieve the Erlang/OTP release. #### Examples ``` iex> System.build_info() %{ build: "1.9.0-dev (772a00a0c) (compiled with Erlang/OTP 21)", date: "2018-12-24T01:09:21Z", otp_release: "21", revision: "772a00a0c", version: "1.9.0-dev" } ``` ### cmd(command, args, opts \\ []) #### Specs ``` cmd(binary(), [binary()], keyword()) :: {Collectable.t(), exit_status :: non_neg_integer()} ``` Executes the given `command` with `args`. `command` is expected to be an executable available in PATH unless an absolute path is given. `args` must be a list of binaries which the executable will receive as its arguments as is. This means that: * environment variables will not be interpolated * wildcard expansion will not happen (unless [`Path.wildcard/2`](path#wildcard/2) is used explicitly) * arguments do not need to be escaped or quoted for shell safety This function returns a tuple containing the collected result and the command exit status.
Internally, this function uses a [`Port`](port) for interacting with the outside world. However, if you plan to run a long-running program, ports guarantee stdin/stdout devices will be closed, but they do not automatically terminate the program. The documentation for the [`Port`](port) module describes this problem and possible solutions under the "Zombie processes" section. #### Examples ``` iex> System.cmd("echo", ["hello"]) {"hello\n", 0} iex> System.cmd("echo", ["hello"], env: [{"MIX_ENV", "test"}]) {"hello\n", 0} iex> System.cmd("echo", ["hello"], into: IO.stream(:stdio, :line)) hello {%IO.Stream{}, 0} ``` #### Options * `:into` - injects the result into the given collectable, defaults to `""` * `:cd` - the directory to run the command in * `:env` - an enumerable of tuples containing environment key-value as binary * `:arg0` - sets the command arg0 * `:stderr_to_stdout` - redirects stderr to stdout when `true` * `:parallelism` - when `true`, the VM will schedule port tasks to improve parallelism in the system. If set to `false`, the VM will try to perform commands immediately, improving latency at the expense of parallelism. The default can be set on system startup by passing the "+spp" argument to `--erl`. #### Error reasons If invalid arguments are given, [`ArgumentError`](argumenterror) is raised by [`System.cmd/3`](system#cmd/3). [`System.cmd/3`](system#cmd/3) also expects a strict set of options and will raise if unknown or invalid options are given. Furthermore, [`System.cmd/3`](system#cmd/3) may fail with one of the POSIX reasons detailed below: * `:system_limit` - all available ports in the Erlang emulator are in use * `:enomem` - there was not enough memory to create the port * `:eagain` - there are no more available operating system processes * `:enametoolong` - the external command given was too long * `:emfile` - there are no more available file descriptors (for the operating system process that the Erlang emulator runs in) * `:enfile` - the file table is full (for the entire operating system) * `:eacces` - the command does not point to an executable file * `:enoent` - the command does not point to an existing file #### Shell commands If you desire to execute a trusted command inside a shell, with pipes, redirecting and so on, please check [`:os.cmd/1`](http://www.erlang.org/doc/man/os.html#cmd-1). ### compiled\_endianness() #### Specs ``` compiled_endianness() :: :little | :big ``` Returns the endianness the system was compiled with. ### convert\_time\_unit(time, from\_unit, to\_unit) #### Specs ``` convert_time_unit(integer(), time_unit() | :native, time_unit() | :native) :: integer() ``` Converts `time` from time unit `from_unit` to time unit `to_unit`. The result is rounded via the floor function. [`convert_time_unit/3`](#convert_time_unit/3) accepts an additional time unit (other than the ones in the [`time_unit/0`](#t:time_unit/0) type) called `:native`. `:native` is the time unit used by the Erlang runtime system. It's determined when the runtime starts and stays the same until the runtime is stopped, but could differ the next time the runtime is started on the same machine. For this reason, you should use this function to convert `:native` time units to a predictable unit before you display them to humans. To determine how many seconds the `:native` unit represents in your current runtime, you can call this function to convert 1 second to the `:native` time unit: `System.convert_time_unit(1, :second, :native)`. ### cwd() This function is deprecated.
Use File.cwd/0 instead. #### Specs ``` cwd() :: String.t() | nil ``` Current working directory. Returns the current working directory or `nil` if one is not available. ### cwd!() This function is deprecated. Use File.cwd!/0 instead. #### Specs ``` cwd!() :: String.t() ``` Current working directory, exception on error. Returns the current working directory or raises [`RuntimeError`](runtimeerror). ### delete\_env(varname) #### Specs ``` delete_env(String.t()) :: :ok ``` Deletes an environment variable. Removes the variable `varname` from the environment. ### endianness() #### Specs ``` endianness() :: :little | :big ``` Returns the endianness. ### fetch\_env(varname) #### Specs ``` fetch_env(String.t()) :: {:ok, String.t()} | :error ``` Returns the value of the given environment variable or `:error` if not found. If the environment variable `varname` is set, then `{:ok, value}` is returned where `value` is a string. If `varname` is not set, `:error` is returned. #### Examples ``` iex> System.fetch_env("PORT") {:ok, "4000"} iex> System.fetch_env("NOT_SET") :error ``` ### fetch\_env!(varname) #### Specs ``` fetch_env!(String.t()) :: String.t() ``` Returns the value of the given environment variable or raises if not found. Same as [`get_env/1`](#get_env/1) but raises instead of returning `nil` when the variable is not set. #### Examples ``` iex> System.fetch_env!("PORT") "4000" iex> System.fetch_env!("NOT_SET") ** (ArgumentError) could not fetch environment variable "NOT_SET" because it is not set ``` ### find\_executable(program) #### Specs ``` find_executable(binary()) :: binary() | nil ``` Locates an executable on the system. This function looks up an executable program given its name using the environment variable PATH on Unix and Windows. It also considers the proper executable extension for each operating system, so for Windows it will try to lookup files with `.com`, `.cmd` or similar extensions. ### get\_env() #### Specs ``` get_env() :: %{optional(String.t()) => String.t()} ``` Returns all system environment variables. The returned value is a map containing name-value pairs. Variable names and their values are strings. ### get\_env(varname, default \\ nil) #### Specs ``` get_env(String.t(), String.t() | nil) :: String.t() | nil ``` Returns the value of the given environment variable. The returned value of the environment variable `varname` is a string. If the environment variable is not set, returns the string specified in `default` or `nil` if none is specified. #### Examples ``` iex> System.get_env("PORT") "4000" iex> System.get_env("NOT_SET") nil iex> System.get_env("NOT_SET", "4001") "4001" ``` ### get\_pid() #### Specs ``` get_pid() :: binary() ``` Erlang VM process identifier. Returns the process identifier of the current Erlang emulator in the format most commonly used by the operating system environment. For more information, see [`:os.getpid/0`](http://www.erlang.org/doc/man/os.html#getpid-0). ### halt(status \\ 0) #### Specs ``` halt(non_neg_integer() | binary() | :abort) :: no_return() ``` Immediately halts the Erlang runtime system. Terminates the Erlang runtime system without properly shutting down applications and ports. Please see [`stop/1`](#stop/1) for a careful shutdown of the system. `status` must be a non-negative integer, the atom `:abort` or a binary. * If an integer, the runtime system exits with the integer value which is returned to the operating system. * If `:abort`, the runtime system aborts producing a core dump, if that is enabled in the operating system. 
* If a string, an Erlang crash dump is produced with status as slogan, and then the runtime system exits with status code 1. Note that on many platforms, only the status codes 0-255 are supported by the operating system. For more information, see [`:erlang.halt/1`](http://www.erlang.org/doc/man/erlang.html#halt-1). #### Examples ``` System.halt(0) System.halt(1) System.halt(:abort) ``` ### monotonic\_time() #### Specs ``` monotonic_time() :: integer() ``` Returns the current monotonic time in the `:native` time unit. This time is monotonically increasing and starts in an unspecified point in time. Inlined by the compiler. ### monotonic\_time(unit) #### Specs ``` monotonic_time(time_unit()) :: integer() ``` Returns the current monotonic time in the given time unit. This time is monotonically increasing and starts in an unspecified point in time. ### no\_halt() #### Specs ``` no_halt() :: boolean() ``` Checks if the system will halt or not at the end of ARGV processing. ### no\_halt(boolean) #### Specs ``` no_halt(boolean()) :: :ok ``` Marks if the system should halt or not at the end of ARGV processing. ### os\_time() #### Specs ``` os_time() :: integer() ``` Returns the current operating system (OS) time. The result is returned in the `:native` time unit. This time may be adjusted forwards or backwards in time with no limitation and is not monotonic. Inlined by the compiler. ### os\_time(unit) #### Specs ``` os_time(time_unit()) :: integer() ``` Returns the current operating system (OS) time in the given time `unit`. This time may be adjusted forwards or backwards in time with no limitation and is not monotonic. ### otp\_release() #### Specs ``` otp_release() :: String.t() ``` Returns the Erlang/OTP release number. ### pid() #### Specs ``` pid() :: String.t() ``` Returns the operating system PID for the current Erlang runtime system instance. Returns a string containing the (usually) numerical identifier for a process. On UNIX, this is typically the return value of the `getpid()` system call. On Windows, the process ID as returned by the `GetCurrentProcessId()` system call is used. #### Examples ``` System.pid() ``` ### put\_env(enum) #### Specs ``` put_env(Enumerable.t()) :: :ok ``` Sets multiple environment variables. Sets a new value for each environment variable corresponding to each `{key, value}` pair in `enum`. ### put\_env(varname, value) #### Specs ``` put_env(binary(), binary()) :: :ok ``` Sets an environment variable value. Sets a new `value` for the environment variable `varname`. ### restart() #### Specs ``` restart() :: :ok ``` Restarts all applications in the Erlang runtime system. All applications are taken down smoothly, all code is unloaded, and all ports are closed before the system starts all applications once again. #### Examples ``` System.restart() ``` ### schedulers() #### Specs ``` schedulers() :: pos_integer() ``` Returns the number of schedulers in the VM. ### schedulers\_online() #### Specs ``` schedulers_online() :: pos_integer() ``` Returns the number of schedulers online in the VM. ### stacktrace() Deprecated mechanism to retrieve the last exception stacktrace. Accessing the stacktrace outside of a rescue/catch is deprecated. If you want to support only Elixir v1.7+, you must access [`__STACKTRACE__/0`](kernel.specialforms#__STACKTRACE__/0) inside a rescue/catch. If you want to support earlier Elixir versions, move [`System.stacktrace/0`](system#stacktrace/0) inside a rescue/catch. 
Note that the Erlang VM (and therefore this function) does not return the current stacktrace but rather the stacktrace of the latest exception. To retrieve the stacktrace of the current process, use `Process.info(self(), :current_stacktrace)` instead. ### stop(status \\ 0) #### Specs ``` stop(non_neg_integer() | binary()) :: no_return() ``` Carefully stops the Erlang runtime system. All applications are taken down smoothly, all code is unloaded, and all ports are closed before the system terminates by calling [`halt/1`](#halt/1). `status` must be a non-negative integer value which is returned by the runtime system to the operating system. Note that on many platforms, only the status codes 0-255 are supported by the operating system. #### Examples ``` System.stop(0) System.stop(1) ``` ### system\_time() #### Specs ``` system_time() :: integer() ``` Returns the current system time in the `:native` time unit. It is the VM view of the [`os_time/0`](#os_time/0). They may not match in case of time warps although the VM works towards aligning them. This time is not monotonic. Inlined by the compiler. ### system\_time(unit) #### Specs ``` system_time(time_unit()) :: integer() ``` Returns the current system time in the given time unit. It is the VM view of the [`os_time/0`](#os_time/0). They may not match in case of time warps although the VM works towards aligning them. This time is not monotonic. ### time\_offset() #### Specs ``` time_offset() :: integer() ``` Returns the current time offset between the Erlang VM monotonic time and the Erlang VM system time. The result is returned in the `:native` time unit. See [`time_offset/1`](#time_offset/1) for more information. Inlined by the compiler. ### time\_offset(unit) #### Specs ``` time_offset(time_unit()) :: integer() ``` Returns the current time offset between the Erlang VM monotonic time and the Erlang VM system time. The result is returned in the given time unit `unit`. The returned offset, added to an Erlang monotonic time (e.g., obtained with [`monotonic_time/1`](#monotonic_time/1)), gives the Erlang system time that corresponds to that monotonic time. ### tmp\_dir() #### Specs ``` tmp_dir() :: String.t() | nil ``` Writable temporary directory. Returns a writable temporary directory. Searches for directories in the following order: 1. the directory named by the TMPDIR environment variable 2. the directory named by the TEMP environment variable 3. the directory named by the TMP environment variable 4. `C:\TMP` on Windows or `/tmp` on Unix 5. as a last resort, the current working directory Returns `nil` if none of the above are writable. ### tmp\_dir!() #### Specs ``` tmp_dir!() :: String.t() ``` Writable temporary directory, exception on error. Same as [`tmp_dir/0`](#tmp_dir/0) but raises [`RuntimeError`](runtimeerror) instead of returning `nil` if no temp dir is set. ### unique\_integer(modifiers \\ []) #### Specs ``` unique_integer([:positive | :monotonic]) :: integer() ``` Generates and returns an integer that is unique in the current runtime instance. "Unique" means that this function, called with the same list of `modifiers`, will never return the same integer more than once on the current runtime instance. If `modifiers` is `[]`, then a unique integer (that can be positive or negative) is returned. Other modifiers can be passed to change the properties of the returned integer: * `:positive` - the returned integer is guaranteed to be positive. * `:monotonic` - the returned integer is monotonically increasing. 
This means that, on the same runtime instance (even on different processes), integers returned using the `:monotonic` modifier will always be strictly less than integers returned by successive calls with the `:monotonic` modifier. All modifiers listed above can be combined; repeated modifiers in `modifiers` will be ignored. Inlined by the compiler. ### user\_home() #### Specs ``` user_home() :: String.t() | nil ``` User home directory. Returns the user home directory (platform independent). ### user\_home!() #### Specs ``` user_home!() :: String.t() ``` User home directory, exception on error. Same as [`user_home/0`](#user_home/0) but raises [`RuntimeError`](runtimeerror) instead of returning `nil` if no user home is set. ### version() #### Specs ``` version() :: String.t() ``` Elixir version information. Returns Elixir's version as a binary.
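Putting the time functions above together, here is a minimal sketch of the pattern recommended in the Time section: compute in `:native` units and convert only when displaying the result (the `Process.sleep/1` call merely stands in for the code being measured): ``` prev = System.monotonic_time() Process.sleep(100) # stand-in for real work next = System.monotonic_time() # convert the :native difference to milliseconds only at the end elapsed_ms = System.convert_time_unit(next - prev, :native, :millisecond) IO.puts("took approximately #{elapsed_ms}ms") ```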
elixir Typespecs and behaviours Getting Started Typespecs and behaviours ======================== Types and specs --------------- Elixir is a dynamically typed language, so all types in Elixir are checked at runtime. Nonetheless, Elixir comes with **typespecs**, which are a notation used for: 1. declaring typed function signatures (also called specifications); 2. declaring custom types. ### Function specifications Elixir provides many [built-in types](https://hexdocs.pm/elixir/typespecs.html#built-in-types), such as `integer` or `pid`, that can be used to document function signatures. Take, for example, the `round/1` function, which rounds a number to its nearest integer. As you can see [in its documentation](https://hexdocs.pm/elixir/Kernel.html#round/1), `round/1`’s typed signature is written as: ``` round(number()) :: integer() ``` The syntax is to put the function and its input on the left side of the `::` and the return value’s type on the right side. Be aware that types *may* omit parentheses. In code, function specs are written with the `@spec` attribute, typically placed immediately before the function definition. Specs can describe both public and private functions. The function name and the number of arguments used in the `@spec` attribute must match the function it describes. Elixir supports compound types as well. For example, a list of integers has the type `[integer]`, and you can also describe maps that define specific keys and their types (see the example below). You can see all the built-in types provided by Elixir [in the typespecs docs](https://hexdocs.pm/elixir/typespecs.html). ### Defining custom types Defining custom types can help communicate the intention of your code and increase its readability. Custom types can be defined within modules via the `@type` attribute. A simple example of a custom type implementation is to provide a more descriptive alias of an existing type. For example, defining `year` as a type makes your function specs more descriptive than if they had simply used `integer`: ``` defmodule Person do @typedoc """ A 4 digit year, e.g. 1984 """ @type year :: integer @spec current_age(year) :: integer def current_age(year_of_birth), do: # implementation end ``` The `@typedoc` attribute, similar to the `@doc` and `@moduledoc` attributes, is used to document custom types. You may define compound custom types, e.g. maps: ``` @type error_map :: %{ message: String.t, line_number: integer } ``` [Structs](structs) offer similar functionality. Let’s look at another example to understand how to define more complex types. Say we have a `LousyCalculator` module, which performs the usual arithmetic operations (sum, product, and so on) but, instead of returning numbers, it returns tuples with the result of an operation as the first element and a random remark as the second element. ``` defmodule LousyCalculator do @spec add(number, number) :: {number, String.t} def add(x, y), do: {x + y, "You need a calculator to do that?!"} @spec multiply(number, number) :: {number, String.t} def multiply(x, y), do: {x * y, "Jeez, come on!"} end ``` Tuples are a compound type and each tuple is identified by the types inside it (in this case, a number and a string). To understand why `String.t` is not written as `string`, have another look at the [typespecs docs](https://hexdocs.pm/elixir/typespecs.html#the-string-type). Defining function specs this way works, but we end up repeating the type `{number, String.t}` over and over. We can use the `@type` attribute to declare our own custom type and cut down on the repetition.
``` defmodule LousyCalculator do @typedoc """ Just a number followed by a string. """ @type number_with_remark :: {number, String.t} @spec add(number, number) :: number_with_remark def add(x, y), do: {x + y, "You need a calculator to do that?"} @spec multiply(number, number) :: number_with_remark def multiply(x, y), do: {x * y, "It is like addition on steroids."} end ``` Custom types defined through `@type` are exported and are available outside the module they’re defined in: ``` defmodule QuietCalculator do @spec add(number, number) :: number def add(x, y), do: make_quiet(LousyCalculator.add(x, y)) @spec make_quiet(LousyCalculator.number_with_remark) :: number defp make_quiet({num, _remark}), do: num end ``` If you want to keep a custom type private, you can use the `@typep` attribute instead of `@type`. The visibility also affects whether or not documentation will be generated by tools like [ExDoc](https://hexdocs.pm/ex_doc/readme.html), Elixir’s documentation generator. ### Static code analysis Typespecs are not only useful to developers as additional documentation. The Erlang tool [Dialyzer](http://www.erlang.org/doc/man/dialyzer.html), for example, uses typespecs in order to perform static analysis of code. That’s why, in the `QuietCalculator` example, we wrote a spec for the `make_quiet/1` function even though it was defined as a private function. Behaviours ---------- Many modules share the same public API. Take a look at [Plug](https://github.com/elixir-lang/plug), which, as its description states, is a **specification** for composable modules in web applications. Each *plug* is a module which **has to** implement at least two public functions: `init/1` and `call/2`. Behaviours provide a way to: * define a set of functions that have to be implemented by a module; * ensure that a module implements all the functions in that set. If you have to, you can think of behaviours like interfaces in object oriented languages like Java: a set of function signatures that a module has to implement. ### Defining behaviours Say we want to implement a bunch of parsers, each parsing structured data: for example, a JSON parser and a MessagePack parser. Each of these two parsers will *behave* the same way: both will provide a `parse/1` function and an `extensions/0` function. The `parse/1` function will return an Elixir representation of the structured data, while the `extensions/0` function will return a list of file extensions that can be used for each type of data (e.g., `.json` for JSON files). We can create a `Parser` behaviour: ``` defmodule Parser do @callback parse(String.t) :: {:ok, term} | {:error, String.t} @callback extensions() :: [String.t] end ``` Modules adopting the `Parser` behaviour will have to implement all the functions defined with the `@callback` attribute. As you can see, `@callback` expects a function name but also a function specification like the ones used with the `@spec` attribute we saw above. Also note that the `term` type is used to represent the parsed value. In Elixir, the `term` type is a shortcut to represent any type. ### Adopting behaviours Adopting a behaviour is straightforward: ``` defmodule JSONParser do @behaviour Parser @impl Parser def parse(str), do: {:ok, "some json " <> str} # ... parse JSON @impl Parser def extensions, do: ["json"] end ``` ``` defmodule YAMLParser do @behaviour Parser @impl Parser def parse(str), do: {:ok, "some yaml " <> str} # ... 
parse YAML @impl Parser def extensions, do: ["yml"] end ``` If a module adopting a given behaviour doesn’t implement one of the callbacks required by that behaviour, a compile-time warning will be generated. Furthermore, with `@impl` you can also make sure that you are implementing the **correct** callbacks from the given behaviour in an explicit manner. For example, the following parser implements both `parse` and `extensions`; however, thanks to a typo, `BADParser` implements `parse/0` instead of `parse/1`. ``` defmodule BADParser do @behaviour Parser @impl Parser def parse, do: {:ok, "something bad"} @impl Parser def extensions, do: ["bad"] end ``` This code generates a warning letting you know that you are mistakenly implementing `parse/0` instead of `parse/1`. You can read more about `@impl` in the [module documentation](https://hexdocs.pm/elixir/master/Module.html#module-impl). ### Dynamic dispatch Behaviours are frequently used with dynamic dispatching. For example, we could add a `parse!` function to the `Parser` module that dispatches to the given implementation and returns the `:ok` result or raises in cases of `:error`: ``` defmodule Parser do @callback parse(String.t) :: {:ok, term} | {:error, String.t} @callback extensions() :: [String.t] def parse!(implementation, contents) do case implementation.parse(contents) do {:ok, data} -> data {:error, error} -> raise ArgumentError, "parsing error: #{error}" end end end ``` Note you don’t need to define a behaviour in order to dynamically dispatch on a module, but those features often go hand in hand. elixir mix xref mix xref ========= Performs cross reference checks between modules. The unreachable and deprecated checks below happen every time your project is compiled via [`mix compile.xref`](mix.tasks.compile.xref). See [`Mix.Tasks.Compile.Xref`](mix.tasks.compile.xref) for more information. This task is automatically reenabled, so you can perform multiple cross reference checks in the same Mix invocation. Xref modes ----------- The `xref` task expects a mode as first argument: ``` mix xref MODE ``` All available modes are discussed below. ### unreachable Prints all unreachable "file:line: module.function/arity" entries: ``` mix xref unreachable ``` The "file:line" represents the file and line a call to an unknown "module.function/arity" is made. The option `--abort-if-any` can be used for the command to fail if unreachable calls exist. ### deprecated Prints all deprecated "file:line: module.function/arity" entries: ``` mix xref deprecated ``` The "file:line" represents the file and line a call to a deprecated "module.function/arity" is made. This operation does not show deprecated local calls (a call to a deprecated function or macro in the same module) nor calls to deprecated functionality in Elixir itself. The option `--abort-if-any` can be used for the command to fail if deprecated calls exist. ### callers CALLEE Prints all callers of the given `CALLEE`, which can be one of: [`Module`](https://hexdocs.pm/elixir/Module.html), `Module.function`, or `Module.function/arity`. Examples: ``` mix xref callers MyMod mix xref callers MyMod.fun mix xref callers MyMod.fun/3 ``` ### graph Prints a file dependency graph where an edge from `A` to `B` indicates that `A` (source) depends on `B` (sink).
``` mix xref graph --format stats ``` The following options are accepted: * `--exclude` - paths to exclude * `--label` - only shows relationships with the given label. The labels are "compile", "struct" and "runtime" * `--only-nodes` - only shows the node names (no edges) * `--source` - displays all files that the given source file references (directly or indirectly) * `--sink` - displays all files that reference the given file (directly or indirectly) * `--format` - can be set to one of: + `pretty` - prints the graph to the terminal using Unicode characters. Each file is printed followed by the files it depends on. This is the default except on Windows; + `plain` - the same as `pretty`, except ASCII characters are used instead of Unicode characters. This is the default on Windows; + `stats` - prints general statistics about the graph; + `dot` - produces a DOT graph description in `xref_graph.dot` in the current directory. Warning: this will overwrite any previously generated file The `--source` and `--sink` options are particularly useful when trying to understand how the modules in a particular file interact with the whole system. You can combine those options with `--label` and `--only-nodes` to get all files that exhibit a certain property, for example: ``` # To get all files that depend on lib/foo.ex mix xref graph --sink lib/foo.ex --only-nodes # To get all files that depend on lib/foo.ex at compile time mix xref graph --label compile --sink lib/foo.ex --only-nodes # To show general statistics about the graph mix xref graph --format stats # To limit statistics only to certain labels mix xref graph --format stats --label compile ``` Shared options --------------- Those options are shared across all modes: * `--include-siblings` - includes dependencies that have `:in_umbrella` set to true in the current project in the reports. This can be used to find callers or to analyze graphs between projects * `--no-compile` - does not compile even if files require compilation * `--no-deps-check` - does not check dependencies * `--no-archives-check` - does not check archives * `--no-elixir-version-check` - does not check the Elixir version from mix.exs Configuration -------------- All configuration for Xref should be placed under the key `:xref`. * `:exclude` - a list of modules and `{module, function, arity}` tuples to ignore when checking cross references. For example: `[MissingModule, {MissingModule2, :missing_func, 2}]` Summary ======== Functions ---------- [calls(opts \\ [])](#calls/1) Returns a list of information of all the function calls in the project. Functions ========== ### calls(opts \\ []) #### Specs ``` calls(keyword()) :: [ %{ callee: {module(), atom(), arity()}, caller_module: module(), line: integer(), file: String.t() } ] ``` Returns a list of information of all the function calls in the project. Each item in the list is a map with the following keys: * `:callee` - a tuple containing the module, function, and arity of the call * `:line` - an integer representing the line where the function is called * `:file` - a binary representing the file where the function is called * `:caller_module` - the module where the function is called This function returns an empty list when used at the root of an umbrella project because there is no compile manifest to extract the function call information from. To get the function calls of each child in an umbrella, execute the function at the root of each individual application. elixir Map Map ==== A set of functions for working with maps.
Many functions for maps, which implement the [`Enumerable`](enumerable) protocol, are found in the [`Enum`](enum) module. Additionally, the following functions for maps are found in [`Kernel`](kernel): * [`map_size/1`](kernel#map_size/1) Maps are the "go to" key-value data structure in Elixir. Maps can be created with the `%{}` syntax, and key-value pairs can be expressed as `key => value`: ``` iex> %{} %{} iex> %{"one" => :two, 3 => "four"} %{3 => "four", "one" => :two} ``` Key-value pairs in a map do not follow any order (that's why the printed map in the example above has a different order than the map that was created). Maps do not impose any restriction on the key type: anything can be a key in a map. As a key-value structure, maps do not allow duplicated keys. Keys are compared using the exact-equality operator ([`===/2`](kernel#===/2)). If colliding keys are defined in a map literal, the last one prevails. When the key in a key-value pair is an atom, the `key: value` shorthand syntax can be used (as in many other special forms), provided key-value pairs are put at the end: ``` iex> %{"hello" => "world", a: 1, b: 2} %{:a => 1, :b => 2, "hello" => "world"} ``` Keys in maps can be accessed through some of the functions in this module (such as [`Map.get/3`](map#get/3) or [`Map.fetch/2`](map#fetch/2)) or through the `map[]` syntax provided by the [`Access`](access) module: ``` iex> map = %{a: 1, b: 2} iex> Map.fetch(map, :a) {:ok, 1} iex> map[:b] 2 iex> map["non_existing_key"] nil ``` For accessing atom keys, one may also use `map.key`. Note that while `map[key]` will return `nil` if `map` doesn't contain `key`, `map.key` will raise if `map` doesn't contain the key `:key`. ``` iex> map = %{foo: "bar", baz: "bong"} iex> map.foo "bar" iex> map.non_existing_key ** (KeyError) key :non_existing_key not found in: %{baz: "bong", foo: "bar"} ``` The two syntaxes for accessing keys reveal the dual nature of maps. The `map[key]` syntax is used for dynamically created maps that may have any key, of any type. `map.key` is used with maps that hold a predetermined set of atom keys, which are expected to always be present. Structs, defined via [`defstruct/1`](kernel#defstruct/1), are one example of such "static maps", where the keys can also be checked during compile time. Maps can be pattern matched on. When a map is on the left-hand side of a pattern match, it will match if the map on the right-hand side contains all the keys on the left-hand side, and the corresponding values match. This means that an empty map matches every map. ``` iex> %{} = %{foo: "bar"} %{foo: "bar"} iex> %{a: a} = %{:a => 1, "b" => 2, [:c, :e, :e] => 3} iex> a 1 iex> %{:c => 3} = %{:a => 1, 2 => :b} ** (MatchError) no match of right hand side value: %{2 => :b, :a => 1} ``` Variables can be used as map keys both when writing map literals as well as when matching: ``` iex> n = 1 1 iex> %{n => :one} %{1 => :one} iex> %{^n => :one} = %{1 => :one, 2 => :two, 3 => :three} %{1 => :one, 2 => :two, 3 => :three} ``` Maps also support a specific update syntax to update the value stored under *existing* keys: ``` iex> map = %{one: 1, two: 2} iex> %{map | one: "one"} %{one: "one", two: 2} iex> %{map | three: 3} ** (KeyError) key :three not found ``` The functions in this module that need to find a specific key work in logarithmic time. This means that the time it takes to find keys grows as the map grows, but it's not directly proportional to the map size.
In comparison to finding an element in a list, it performs better because lists have a linear time complexity. Some functions, such as [`keys/1`](#keys/1) and [`values/1`](#values/1), run in linear time because they need to get to every element in the map. Summary ======== Types ------ [key()](#t:key/0) [value()](#t:value/0) Functions ---------- [delete(map, key)](#delete/2) Deletes the entry in `map` for a specific `key`. [drop(map, keys)](#drop/2) Drops the given `keys` from `map`. [equal?(map1, map2)](#equal?/2) Checks if two maps are equal. [fetch(map, key)](#fetch/2) Fetches the value for a specific `key` in the given `map`. [fetch!(map, key)](#fetch!/2) Fetches the value for a specific `key` in the given `map`, erroring out if `map` doesn't contain `key`. [from\_struct(struct)](#from_struct/1) Converts a `struct` to map. [get(map, key, default \\ nil)](#get/3) Gets the value for a specific `key` in `map`. [get\_and\_update(map, key, fun)](#get_and_update/3) Gets the value from `key` and updates it, all in one pass. [get\_and\_update!(map, key, fun)](#get_and_update!/3) Gets the value from `key` and updates it. Raises if there is no `key`. [get\_lazy(map, key, fun)](#get_lazy/3) Gets the value for a specific `key` in `map`. [has\_key?(map, key)](#has_key?/2) Returns whether the given `key` exists in the given `map`. [keys(map)](#keys/1) Returns all keys from `map`. [merge(map1, map2)](#merge/2) Merges two maps into one. [merge(map1, map2, fun)](#merge/3) Merges two maps into one, resolving conflicts through the given `fun`. [new()](#new/0) Returns a new empty map. [new(enumerable)](#new/1) Creates a map from an `enumerable`. [new(enumerable, transform)](#new/2) Creates a map from an `enumerable` via the given transformation function. [pop(map, key, default \\ nil)](#pop/3) Returns and removes the value associated with `key` in `map`. [pop\_lazy(map, key, fun)](#pop_lazy/3) Lazily returns and removes the value associated with `key` in `map`. [put(map, key, value)](#put/3) Puts the given `value` under `key` in `map`. [put\_new(map, key, value)](#put_new/3) Puts the given `value` under `key` unless the entry `key` already exists in `map`. [put\_new\_lazy(map, key, fun)](#put_new_lazy/3) Evaluates `fun` and puts the result under `key` in `map` unless `key` is already present. [replace!(map, key, value)](#replace!/3) Alters the value stored under `key` to `value`, but only if the entry `key` already exists in `map`. [split(map, keys)](#split/2) Takes all entries corresponding to the given `keys` in `map` and extracts them into a separate map. [take(map, keys)](#take/2) Returns a new map with all the key-value pairs in `map` where the key is in `keys`. [to\_list(map)](#to_list/1) Converts `map` to a list. [update(map, key, initial, fun)](#update/4) Updates the `key` in `map` with the given function. [update!(map, key, fun)](#update!/3) Updates `key` with the given function. [values(map)](#values/1) Returns all values from `map`. Types ====== ### key() #### Specs ``` key() :: any() ``` ### value() #### Specs ``` value() :: any() ``` Functions ========== ### delete(map, key) #### Specs ``` delete(map(), key()) :: map() ``` Deletes the entry in `map` for a specific `key`. If the `key` does not exist, returns `map` unchanged. Inlined by the compiler. #### Examples ``` iex> Map.delete(%{a: 1, b: 2}, :a) %{b: 2} iex> Map.delete(%{b: 2}, :a) %{b: 2} ``` ### drop(map, keys) #### Specs ``` drop(map(), [key()]) :: map() ``` Drops the given `keys` from `map`. 
If `keys` contains keys that are not in `map`, they're simply ignored. #### Examples ``` iex> Map.drop(%{a: 1, b: 2, c: 3}, [:b, :d]) %{a: 1, c: 3} ``` ### equal?(map1, map2) #### Specs ``` equal?(map(), map()) :: boolean() ``` Checks if two maps are equal. Two maps are considered to be equal if they contain the same keys and those keys contain the same values. #### Examples ``` iex> Map.equal?(%{a: 1, b: 2}, %{b: 2, a: 1}) true iex> Map.equal?(%{a: 1, b: 2}, %{b: 1, a: 2}) false ``` ### fetch(map, key) #### Specs ``` fetch(map(), key()) :: {:ok, value()} | :error ``` Fetches the value for a specific `key` in the given `map`. If `map` contains the given `key` with value `value`, then `{:ok, value}` is returned. If `map` doesn't contain `key`, `:error` is returned. Inlined by the compiler. #### Examples ``` iex> Map.fetch(%{a: 1}, :a) {:ok, 1} iex> Map.fetch(%{a: 1}, :b) :error ``` ### fetch!(map, key) #### Specs ``` fetch!(map(), key()) :: value() ``` Fetches the value for a specific `key` in the given `map`, erroring out if `map` doesn't contain `key`. If `map` contains the given `key`, the corresponding value is returned. If `map` doesn't contain `key`, a [`KeyError`](keyerror) exception is raised. Inlined by the compiler. #### Examples ``` iex> Map.fetch!(%{a: 1}, :a) 1 iex> Map.fetch!(%{a: 1}, :b) ** (KeyError) key :b not found in: %{a: 1} ``` ### from\_struct(struct) #### Specs ``` from_struct(atom() | struct()) :: map() ``` Converts a `struct` to map. It accepts the struct module or a struct itself and simply removes the `__struct__` field from the given struct or from a new struct generated from the given module. #### Example ``` defmodule User do defstruct [:name] end Map.from_struct(User) #=> %{name: nil} Map.from_struct(%User{name: "john"}) #=> %{name: "john"} ``` ### get(map, key, default \\ nil) #### Specs ``` get(map(), key(), value()) :: value() ``` Gets the value for a specific `key` in `map`. If `key` is present in `map` with value `value`, then `value` is returned. Otherwise, `default` is returned. If `default` is not provided, `nil` is used. #### Examples ``` iex> Map.get(%{}, :a) nil iex> Map.get(%{a: 1}, :a) 1 iex> Map.get(%{a: 1}, :b) nil iex> Map.get(%{a: 1}, :b, 3) 3 ``` ### get\_and\_update(map, key, fun) #### Specs ``` get_and_update(map(), key(), (value() -> {get, value()} | :pop)) :: {get, map()} when get: term() ``` Gets the value from `key` and updates it, all in one pass. `fun` is called with the current value under `key` in `map` (or `nil` if `key` is not present in `map`) and must return a two-element tuple: the "get" value (the retrieved value, which can be operated on before being returned) and the new value to be stored under `key` in the resulting new map. `fun` may also return `:pop`, which means the current value shall be removed from `map` and returned (making this function behave like `Map.pop(map, key)`). The returned value is a tuple with the "get" value returned by `fun` and a new map with the updated value under `key`. 
#### Examples ``` iex> Map.get_and_update(%{a: 1}, :a, fn current_value -> ...> {current_value, "new value!"} ...> end) {1, %{a: "new value!"}} iex> Map.get_and_update(%{a: 1}, :b, fn current_value -> ...> {current_value, "new value!"} ...> end) {nil, %{b: "new value!", a: 1}} iex> Map.get_and_update(%{a: 1}, :a, fn _ -> :pop end) {1, %{}} iex> Map.get_and_update(%{a: 1}, :b, fn _ -> :pop end) {nil, %{a: 1}} ``` ### get\_and\_update!(map, key, fun) #### Specs ``` get_and_update!(map(), key(), (value() -> {get, value()} | :pop)) :: {get, map()} when get: term() ``` Gets the value from `key` and updates it. Raises if there is no `key`. Behaves exactly like [`get_and_update/3`](#get_and_update/3), but raises a [`KeyError`](keyerror) exception if `key` is not present in `map`. #### Examples ``` iex> Map.get_and_update!(%{a: 1}, :a, fn current_value -> ...> {current_value, "new value!"} ...> end) {1, %{a: "new value!"}} iex> Map.get_and_update!(%{a: 1}, :b, fn current_value -> ...> {current_value, "new value!"} ...> end) ** (KeyError) key :b not found in: %{a: 1} iex> Map.get_and_update!(%{a: 1}, :a, fn _ -> ...> :pop ...> end) {1, %{}} ``` ### get\_lazy(map, key, fun) #### Specs ``` get_lazy(map(), key(), (() -> value())) :: value() ``` Gets the value for a specific `key` in `map`. If `key` is present in `map` with value `value`, then `value` is returned. Otherwise, `fun` is evaluated and its result is returned. This is useful if the default value is very expensive to calculate or generally difficult to setup and teardown again. #### Examples ``` iex> map = %{a: 1} iex> fun = fn -> ...> # some expensive operation here ...> 13 ...> end iex> Map.get_lazy(map, :a, fun) 1 iex> Map.get_lazy(map, :b, fun) 13 ``` ### has\_key?(map, key) #### Specs ``` has_key?(map(), key()) :: boolean() ``` Returns whether the given `key` exists in the given `map`. Inlined by the compiler. #### Examples ``` iex> Map.has_key?(%{a: 1}, :a) true iex> Map.has_key?(%{a: 1}, :b) false ``` ### keys(map) #### Specs ``` keys(map()) :: [key()] ``` Returns all keys from `map`. Inlined by the compiler. #### Examples ``` iex> Map.keys(%{a: 1, b: 2}) [:a, :b] ``` ### merge(map1, map2) #### Specs ``` merge(map(), map()) :: map() ``` Merges two maps into one. All keys in `map2` will be added to `map1`, overriding any existing one (i.e., the keys in `map2` "have precedence" over the ones in `map1`). If you have a struct and you would like to merge a set of keys into the struct, do not use this function, as it would merge all keys on the right side into the struct, even if the key is not part of the struct. Instead, use [`Kernel.struct/2`](kernel#struct/2). Inlined by the compiler. #### Examples ``` iex> Map.merge(%{a: 1, b: 2}, %{a: 3, d: 4}) %{a: 3, b: 2, d: 4} ``` ### merge(map1, map2, fun) #### Specs ``` merge(map(), map(), (key(), value(), value() -> value())) :: map() ``` Merges two maps into one, resolving conflicts through the given `fun`. All keys in `map2` will be added to `map1`. The given function will be invoked when there are duplicate keys; its arguments are `key` (the duplicate key), `value1` (the value of `key` in `map1`), and `value2` (the value of `key` in `map2`). The value returned by `fun` is used as the value under `key` in the resulting map. #### Examples ``` iex> Map.merge(%{a: 1, b: 2}, %{a: 3, d: 4}, fn _k, v1, v2 -> ...> v1 + v2 ...> end) %{a: 4, b: 2, d: 4} ``` ### new() #### Specs ``` new() :: map() ``` Returns a new empty map. 
#### Examples ``` iex> Map.new() %{} ``` ### new(enumerable) #### Specs ``` new(Enumerable.t()) :: map() ``` Creates a map from an `enumerable`. Duplicated keys are removed; the latest one prevails. #### Examples ``` iex> Map.new([{:b, 1}, {:a, 2}]) %{a: 2, b: 1} iex> Map.new(a: 1, a: 2, a: 3) %{a: 3} ``` ### new(enumerable, transform) #### Specs ``` new(Enumerable.t(), (term() -> {key(), value()})) :: map() ``` Creates a map from an `enumerable` via the given transformation function. Duplicated keys are removed; the latest one prevails. #### Examples ``` iex> Map.new([:a, :b], fn x -> {x, x} end) %{a: :a, b: :b} ``` ### pop(map, key, default \\ nil) #### Specs ``` pop(map(), key(), value()) :: {value(), map()} ``` Returns and removes the value associated with `key` in `map`. If `key` is present in `map` with value `value`, `{value, new_map}` is returned where `new_map` is the result of removing `key` from `map`. If `key` is not present in `map`, `{default, map}` is returned. #### Examples ``` iex> Map.pop(%{a: 1}, :a) {1, %{}} iex> Map.pop(%{a: 1}, :b) {nil, %{a: 1}} iex> Map.pop(%{a: 1}, :b, 3) {3, %{a: 1}} ``` ### pop\_lazy(map, key, fun) #### Specs ``` pop_lazy(map(), key(), (() -> value())) :: {value(), map()} ``` Lazily returns and removes the value associated with `key` in `map`. If `key` is present in `map` with value `value`, `{value, new_map}` is returned where `new_map` is the result of removing `key` from `map`. If `key` is not present in `map`, `{fun_result, map}` is returned, where `fun_result` is the result of applying `fun`. This is useful if the default value is very expensive to calculate or generally difficult to set up and tear down again. #### Examples ``` iex> map = %{a: 1} iex> fun = fn -> ...> # some expensive operation here ...> 13 ...> end iex> Map.pop_lazy(map, :a, fun) {1, %{}} iex> Map.pop_lazy(map, :b, fun) {13, %{a: 1}} ``` ### put(map, key, value) #### Specs ``` put(map(), key(), value()) :: map() ``` Puts the given `value` under `key` in `map`. Inlined by the compiler. #### Examples ``` iex> Map.put(%{a: 1}, :b, 2) %{a: 1, b: 2} iex> Map.put(%{a: 1, b: 2}, :a, 3) %{a: 3, b: 2} ``` ### put\_new(map, key, value) #### Specs ``` put_new(map(), key(), value()) :: map() ``` Puts the given `value` under `key` unless the entry `key` already exists in `map`. #### Examples ``` iex> Map.put_new(%{a: 1}, :b, 2) %{a: 1, b: 2} iex> Map.put_new(%{a: 1, b: 2}, :a, 3) %{a: 1, b: 2} ``` ### put\_new\_lazy(map, key, fun) #### Specs ``` put_new_lazy(map(), key(), (() -> value())) :: map() ``` Evaluates `fun` and puts the result under `key` in `map` unless `key` is already present. This function is useful in case you want to compute the value to put under `key` only if `key` is not already present (e.g., the value is expensive to calculate or generally difficult to set up and tear down again). #### Examples ``` iex> map = %{a: 1} iex> fun = fn -> ...> # some expensive operation here ...> 3 ...> end iex> Map.put_new_lazy(map, :a, fun) %{a: 1} iex> Map.put_new_lazy(map, :b, fun) %{a: 1, b: 3} ``` ### replace!(map, key, value) #### Specs ``` replace!(map(), key(), value()) :: map() ``` Alters the value stored under `key` to `value`, but only if the entry `key` already exists in `map`. If `key` is not present in `map`, a [`KeyError`](keyerror) exception is raised. Inlined by the compiler.
#### Examples ``` iex> Map.replace!(%{a: 1, b: 2}, :a, 3) %{a: 3, b: 2} iex> Map.replace!(%{a: 1}, :b, 2) ** (KeyError) key :b not found in: %{a: 1} ``` ### split(map, keys) #### Specs ``` split(map(), [key()]) :: {map(), map()} ``` Takes all entries corresponding to the given `keys` in `map` and extracts them into a separate map. Returns a tuple with the new map and the old map with removed keys. Keys for which there are no entries in `map` are ignored. #### Examples ``` iex> Map.split(%{a: 1, b: 2, c: 3}, [:a, :c, :e]) {%{a: 1, c: 3}, %{b: 2}} ``` ### take(map, keys) #### Specs ``` take(map(), [key()]) :: map() ``` Returns a new map with all the key-value pairs in `map` where the key is in `keys`. If `keys` contains keys that are not in `map`, they're simply ignored. #### Examples ``` iex> Map.take(%{a: 1, b: 2, c: 3}, [:a, :c, :e]) %{a: 1, c: 3} ``` ### to\_list(map) #### Specs ``` to_list(map()) :: [{term(), term()}] ``` Converts `map` to a list. Each key-value pair in the map is converted to a two-element tuple `{key, value}` in the resulting list. Inlined by the compiler. #### Examples ``` iex> Map.to_list(%{a: 1}) [a: 1] iex> Map.to_list(%{1 => 2}) [{1, 2}] ``` ### update(map, key, initial, fun) #### Specs ``` update(map(), key(), value(), (value() -> value())) :: map() ``` Updates the `key` in `map` with the given function. If `key` is present in `map` with value `value`, `fun` is invoked with argument `value` and its result is used as the new value of `key`. If `key` is not present in `map`, `initial` is inserted as the value of `key`. The initial value will not be passed through the update function. #### Examples ``` iex> Map.update(%{a: 1}, :a, 13, &(&1 * 2)) %{a: 2} iex> Map.update(%{a: 1}, :b, 11, &(&1 * 2)) %{a: 1, b: 11} ``` ### update!(map, key, fun) #### Specs ``` update!(map(), key(), (value() -> value())) :: map() ``` Updates `key` with the given function. If `key` is present in `map` with value `value`, `fun` is invoked with argument `value` and its result is used as the new value of `key`. If `key` is not present in `map`, a [`KeyError`](keyerror) exception is raised. #### Examples ``` iex> Map.update!(%{a: 1}, :a, &(&1 * 2)) %{a: 2} iex> Map.update!(%{a: 1}, :b, &(&1 * 2)) ** (KeyError) key :b not found in: %{a: 1} ``` ### values(map) #### Specs ``` values(map()) :: [value()] ``` Returns all values from `map`. Inlined by the compiler. #### Examples ``` iex> Map.values(%{a: 1, b: 2}) [1, 2] ```
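To tie several of the functions above together, here is a small word-frequency sketch built on `Map.update/4`; the word list and variable names are illustrative only:

```
iex> words = ~w(the cat and the hat)
iex> Enum.reduce(words, %{}, fn word, acc ->
...>   Map.update(acc, word, 1, &(&1 + 1))
...> end)
%{"and" => 1, "cat" => 1, "hat" => 1, "the" => 2}
```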
elixir Regex Regex ====== Provides regular expressions for Elixir. Regex is based on PCRE (Perl Compatible Regular Expressions) and built on top of Erlang's `:re` module. More information can be found in the [`:re` module documentation](http://www.erlang.org/doc/man/re.html). Regular expressions in Elixir can be created using the sigils `~r` (see [`Kernel.sigil_r/2`](kernel#sigil_r/2)) or `~R` (see [`Kernel.sigil_R/2`](kernel#sigil_R/2)): ``` # A simple regular expression that matches foo anywhere in the string ~r/foo/ # A regular expression with case insensitive and Unicode options ~r/foo/iu ``` Regular expressions created via sigils are pre-compiled and stored in the `.beam` file. Notice this may be a problem if you are precompiling Elixir; see the "Precompilation" section for more information. A Regex is represented internally as the [`Regex`](#content) struct. Therefore, `%Regex{}` can be used whenever there is a need to match on them. Keep in mind it is not guaranteed that two regular expressions from the same source are equal, for example: ``` ~r/(?<foo>.)(?<bar>.)/ == ~r/(?<foo>.)(?<bar>.)/ ``` may return `true` or `false` depending on your machine, endianness, available optimizations, and other factors. You can, however, retrieve the source of a compiled regular expression by accessing the `source` field, and then compare those directly: ``` ~r/(?<foo>.)(?<bar>.)/.source == ~r/(?<foo>.)(?<bar>.)/.source ``` Modifiers ---------- The modifiers available when creating a Regex are: * `unicode` (u) - enables Unicode-specific patterns like `\p` and changes modifiers like `\w`, `\W`, `\s` and friends to also match on Unicode. It expects valid Unicode strings to be given on match * `caseless` (i) - adds case insensitivity * `dotall` (s) - causes dot to match newlines and also sets newline to anycrlf; the newline setting can be overridden by setting `(*CR)` or `(*LF)` or `(*CRLF)` or `(*ANY)` according to the `:re` documentation * `multiline` (m) - causes `^` and `$` to mark the beginning and end of each line; use `\A` and `\z` to match the end or beginning of the string * `extended` (x) - whitespace characters are ignored except when escaped, and `#` can be used to delimit comments * `firstline` (f) - forces the unanchored pattern to match before or at the first newline, though the matched text may continue over the newline * `ungreedy` (U) - inverts the "greediness" of the regexp (the previous `r` option is deprecated in favor of `U`) The options not available are: * `anchored` - not available, use `^` or `\A` instead * `dollar_endonly` - not available, use `\z` instead * `no_auto_capture` - not available, use `?:` instead * `newline` - not available, use `(*CR)` or `(*LF)` or `(*CRLF)` or `(*ANYCRLF)` or `(*ANY)` at the beginning of the regexp according to the `:re` documentation Captures --------- Many functions in this module handle what to capture in a regex match via the `:capture` option. The supported values are: * `:all` - all captured subpatterns including the complete matching string (this is the default) * `:first` - only the first captured subpattern, which is always the complete matching part of the string; all explicitly captured subpatterns are discarded * `:all_but_first` - all but the first matching subpattern, i.e.
all explicitly captured subpatterns, but not the complete matching part of the string * `:none` - does not return matching subpatterns at all * `:all_names` - captures all names in the Regex * `list(binary)` - a list of named captures to capture Character classes ------------------ Regex supports several built-in named character classes. These are used by enclosing the class name in `[: :]` inside a group. For example: ``` iex> String.match?("123", ~r/^[[:alnum:]]+$/) true iex> String.match?("123 456", ~r/^[[:alnum:][:blank:]]+$/) true ``` The supported class names are: * alnum - Letters and digits * alpha - Letters * ascii - Character codes 0-127 * blank - Space or tab only * cntrl - Control characters * digit - Decimal digits (same as \d) * graph - Printing characters, excluding space * lower - Lowercase letters * print - Printing characters, including space * punct - Printing characters, excluding letters, digits, and space * space - Whitespace (the same as \s from PCRE 8.34) * upper - Uppercase letters * word - "Word" characters (same as \w) * xdigit - Hexadecimal digits Note that the behaviour of those classes may change depending on the Unicode and other modifiers: ``` iex> String.match?("josé", ~r/^[[:lower:]]+$/) false iex> String.match?("josé", ~r/^[[:lower:]]+$/u) true ``` Precompilation --------------- Regular expressions built with sigils are precompiled and stored in `.beam` files. Precompiled regexes will be checked at runtime and may work slower across operating systems and OTP releases. This is rarely a problem, as most Elixir code shared during development is compiled on the target (such as dependencies, archives, and escripts) and, when running in production, the code must either be compiled on the target (via [`mix compile`](https://hexdocs.pm/mix/Mix.Tasks.Compile.html) or similar) or released on the host (via `mix release` or similar) with the same OTP, OS and architecture as the target. If you know you are running on a different system than the current one and you are doing multiple matches with the regex, you can manually invoke [`Regex.recompile/1`](regex#recompile/1) or [`Regex.recompile!/1`](regex#recompile!/1) to perform a runtime version check and recompile the regex if necessary. Summary ======== Types ------ [t()](#t:t/0) Functions ---------- [compile(source, options \\ "")](#compile/2) Compiles the regular expression. [compile!(source, options \\ "")](#compile!/2) Compiles the regular expression and raises [`Regex.CompileError`](regex.compileerror) in case of errors. [escape(string)](#escape/1) Escapes a string to be literally matched in a regex. [match?(regex, string)](#match?/2) Returns a boolean indicating whether there was a match or not. [named\_captures(regex, string, options \\ [])](#named_captures/3) Returns the given captures as a map or `nil` if no captures are found. [names(regex)](#names/1) Returns a list of names in the regex. [opts(regex)](#opts/1) Returns the regex options as a string. [re\_pattern(regex)](#re_pattern/1) Returns the underlying `re_pattern` in the regular expression. [recompile(regex)](#recompile/1) Recompiles the existing regular expression if necessary. [recompile!(regex)](#recompile!/1) Recompiles the existing regular expression and raises [`Regex.CompileError`](regex.compileerror) in case of errors. [regex?(term)](#regex?/1) Returns `true` if the given `term` is a regex. Otherwise returns `false`.
[replace(regex, string, replacement, options \\ [])](#replace/4) Receives a regex, a binary and a replacement, returns a new binary where all matches are replaced by the replacement. [run(regex, string, options \\ [])](#run/3) Runs the regular expression against the given string until the first match. It returns a list with all captures or `nil` if no match occurred. [scan(regex, string, options \\ [])](#scan/3) Same as [`run/3`](#run/3), but scans the target several times collecting all matches of the regular expression. [source(regex)](#source/1) Returns the regex source as a binary. [split(regex, string, options \\ [])](#split/3) Splits the given target based on the given pattern and in the given number of parts. [version()](#version/0) Returns the version of the underlying Regex engine. Types ====== ### t() #### Specs ``` t() :: %Regex{ opts: binary(), re_pattern: term(), re_version: term(), source: binary() } ``` Functions ========== ### compile(source, options \\ "") #### Specs ``` compile(binary(), binary() | [term()]) :: {:ok, t()} | {:error, any()} ``` Compiles the regular expression. The given options can either be a binary with the characters representing the same regex options given to the `~r` (see [`Kernel.sigil_r/2`](kernel#sigil_r/2)) sigil, or a list of options, as expected by Erlang's `:re` module. It returns `{:ok, regex}` in case of success, `{:error, reason}` otherwise. #### Examples ``` iex> Regex.compile("foo") {:ok, ~r/foo/} iex> Regex.compile("*foo") {:error, {'nothing to repeat', 0}} ``` ### compile!(source, options \\ "") #### Specs ``` compile!(binary(), binary() | [term()]) :: t() ``` Compiles the regular expression and raises [`Regex.CompileError`](regex.compileerror) in case of errors. ### escape(string) #### Specs ``` escape(String.t()) :: String.t() ``` Escapes a string to be literally matched in a regex. #### Examples ``` iex> Regex.escape(".") "\\." iex> Regex.escape("\\what if") "\\\\what\\ if" ``` ### match?(regex, string) #### Specs ``` match?(t(), String.t()) :: boolean() ``` Returns a boolean indicating whether there was a match or not. #### Examples ``` iex> Regex.match?(~r/foo/, "foo") true iex> Regex.match?(~r/foo/, "bar") false ``` ### named\_captures(regex, string, options \\ []) #### Specs ``` named_captures(t(), String.t(), [term()]) :: map() | nil ``` Returns the given captures as a map or `nil` if no captures are found. #### Options * `:return` - set to `:index` to return byte index and match length. Defaults to `:binary`. #### Examples ``` iex> Regex.named_captures(~r/c(?<foo>d)/, "abcd") %{"foo" => "d"} iex> Regex.named_captures(~r/a(?<foo>b)c(?<bar>d)/, "abcd") %{"bar" => "d", "foo" => "b"} iex> Regex.named_captures(~r/a(?<foo>b)c(?<bar>d)/, "efgh") nil ``` ### names(regex) #### Specs ``` names(t()) :: [String.t()] ``` Returns a list of names in the regex. #### Examples ``` iex> Regex.names(~r/(?<foo>bar)/) ["foo"] ``` ### opts(regex) #### Specs ``` opts(t()) :: String.t() ``` Returns the regex options as a string. #### Examples ``` iex> Regex.opts(~r(foo)m) "m" ``` ### re\_pattern(regex) #### Specs ``` re_pattern(t()) :: term() ``` Returns the underlying `re_pattern` in the regular expression. ### recompile(regex) #### Specs ``` recompile(t()) :: t() ``` Recompiles the existing regular expression if necessary. This checks the version stored in the regular expression and recompiles the regex in case of version mismatch.
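As a sketch of how recompilation can be triggered by hand (using `Regex.recompile!/1`, documented next; the pattern and input string are arbitrary):

```
iex> regex = ~r/\d+/
iex> regex = Regex.recompile!(regex)
iex> Regex.match?(regex, "version 42")
true
```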
### recompile!(regex) #### Specs ``` recompile!(t()) :: t() ``` Recompiles the existing regular expression and raises [`Regex.CompileError`](regex.compileerror) in case of errors. ### regex?(term) #### Specs ``` regex?(any()) :: boolean() ``` Returns `true` if the given `term` is a regex. Otherwise returns `false`. #### Examples ``` iex> Regex.regex?(~r/foo/) true iex> Regex.regex?(0) false ``` ### replace(regex, string, replacement, options \\ []) #### Specs ``` replace(t(), String.t(), String.t() | (... -> String.t()), [term()]) :: String.t() ``` Receives a regex, a binary and a replacement, returns a new binary where all matches are replaced by the replacement. The replacement can be either a string or a function. The string is used as a replacement for every match and it allows specific captures to be accessed via `\N` or `\g{N}`, where `N` is the capture. In case `\0` is used, the whole match is inserted. Note that in regexes the backslash needs to be escaped, hence in practice you'll need to use `\\N` and `\\g{N}`. When the replacement is a function, the function may have arity N where each argument maps to a capture, with the first argument being the whole match. If the function expects more arguments than captures found, the remaining arguments will receive `""`. #### Options * `:global` - when `false`, replaces only the first occurrence (defaults to `true`) #### Examples ``` iex> Regex.replace(~r/d/, "abc", "d") "abc" iex> Regex.replace(~r/b/, "abc", "d") "adc" iex> Regex.replace(~r/b/, "abc", "[\\0]") "a[b]c" iex> Regex.replace(~r/a(b|d)c/, "abcadc", "[\\1]") "[b][d]" iex> Regex.replace(~r/\.(\d)$/, "500.5", ".\\g{1}0") "500.50" iex> Regex.replace(~r/a(b|d)c/, "abcadc", fn _, x -> "[#{x}]" end) "[b][d]" iex> Regex.replace(~r/a/, "abcadc", "A", global: false) "Abcadc" ``` ### run(regex, string, options \\ []) #### Specs ``` run(t(), binary(), [term()]) :: nil | [binary()] | [{integer(), integer()}] ``` Runs the regular expression against the given string until the first match. It returns a list with all captures or `nil` if no match occurred. #### Options * `:return` - set to `:index` to return byte index and match length. Defaults to `:binary`. * `:capture` - what to capture in the result. Check the moduledoc for [`Regex`](#content) to see the possible capture values. #### Examples ``` iex> Regex.run(~r/c(d)/, "abcd") ["cd", "d"] iex> Regex.run(~r/e/, "abcd") nil iex> Regex.run(~r/c(d)/, "abcd", return: :index) [{2, 2}, {3, 1}] ``` ### scan(regex, string, options \\ []) #### Specs ``` scan(t(), String.t(), [term()]) :: [[String.t()]] ``` Same as [`run/3`](#run/3), but scans the target several times collecting all matches of the regular expression. A list of lists is returned, where each entry in the primary list represents a match and each entry in the secondary list represents the captured contents. #### Options * `:return` - set to `:index` to return byte index and match length. Defaults to `:binary`. * `:capture` - what to capture in the result. Check the moduledoc for [`Regex`](#content) to see the possible capture values. #### Examples ``` iex> Regex.scan(~r/c(d|e)/, "abcd abce") [["cd", "d"], ["ce", "e"]] iex> Regex.scan(~r/c(?:d|e)/, "abcd abce") [["cd"], ["ce"]] iex> Regex.scan(~r/e/, "abcd") [] iex> Regex.scan(~r/\p{Sc}/u, "$, £, and €") [["$"], ["£"], ["€"]] iex> Regex.scan(~r/=+/, "=ü†ƒ8===", return: :index) [[{0, 1}], [{9, 3}]] ``` ### source(regex) #### Specs ``` source(t()) :: String.t() ``` Returns the regex source as a binary. 
#### Examples ``` iex> Regex.source(~r(foo)) "foo" ``` ### split(regex, string, options \\ []) #### Specs ``` split(t(), String.t(), [term()]) :: [String.t()] ``` Splits the given target based on the given pattern and in the given number of parts. #### Options * `:parts` - when specified, splits the string into the given number of parts. If not specified, `:parts` defaults to `:infinity`, which will split the string into the maximum number of parts possible based on the given pattern. * `:trim` - when `true`, removes empty strings (`""`) from the result. Defaults to `false`. * `:on` - specifies which captures to split the string on, and in what order. Defaults to `:first` which means captures inside the regex do not affect the splitting process. * `:include_captures` - when `true`, includes in the result the matches of the regular expression. Defaults to `false`. #### Examples ``` iex> Regex.split(~r{-}, "a-b-c") ["a", "b", "c"] iex> Regex.split(~r{-}, "a-b-c", parts: 2) ["a", "b-c"] iex> Regex.split(~r{-}, "abc") ["abc"] iex> Regex.split(~r{}, "abc") ["", "a", "b", "c", ""] iex> Regex.split(~r{a(?<second>b)c}, "abc") ["", ""] iex> Regex.split(~r{a(?<second>b)c}, "abc", on: [:second]) ["a", "c"] iex> Regex.split(~r{(x)}, "Elixir", include_captures: true) ["Eli", "x", "ir"] iex> Regex.split(~r{a(?<second>b)c}, "abc", on: [:second], include_captures: true) ["a", "b", "c"] ``` ### version() #### Specs ``` version() :: term() ``` Returns the version of the underlying Regex engine. elixir StringIO StringIO ========= Controls an IO device process that wraps a string. A [`StringIO`](#content) IO device can be passed as a "device" to most of the functions in the [`IO`](io) module. Examples --------- ``` iex> {:ok, pid} = StringIO.open("foo") iex> IO.read(pid, 2) "fo" ``` Summary ======== Functions ---------- [child\_spec(init\_arg)](#child_spec/1) Returns a specification to start this module under a supervisor. [close(pid)](#close/1) Stops the IO device and returns the remaining input/output buffers. [contents(pid)](#contents/1) Returns the current input/output buffers for the given IO device. [flush(pid)](#flush/1) Flushes the output buffer and returns its current contents. [open(string, options\_or\_function \\ [])](#open/2) Creates an IO device. [open(string, options, function)](#open/3) Creates an IO device. Functions ========== ### child\_spec(init\_arg) Returns a specification to start this module under a supervisor. See [`Supervisor`](supervisor). ### close(pid) #### Specs ``` close(pid()) :: {:ok, {binary(), binary()}} ``` Stops the IO device and returns the remaining input/output buffers. #### Examples ``` iex> {:ok, pid} = StringIO.open("in") iex> IO.write(pid, "out") iex> StringIO.close(pid) {:ok, {"in", "out"}} ``` ### contents(pid) #### Specs ``` contents(pid()) :: {binary(), binary()} ``` Returns the current input/output buffers for the given IO device. #### Examples ``` iex> {:ok, pid} = StringIO.open("in") iex> IO.write(pid, "out") iex> StringIO.contents(pid) {"in", "out"} ``` ### flush(pid) #### Specs ``` flush(pid()) :: binary() ``` Flushes the output buffer and returns its current contents. #### Examples ``` iex> {:ok, pid} = StringIO.open("in") iex> IO.write(pid, "out") iex> StringIO.flush(pid) "out" iex> StringIO.contents(pid) {"in", ""} ``` ### open(string, options\_or\_function \\ []) #### Specs ``` open(binary(), keyword()) :: {:ok, pid()} ``` ``` open(binary(), (pid() -> res)) :: {:ok, res} when res: var ``` Creates an IO device. 
`string` will be the initial input of the newly created device. `options_or_function` can be a keyword list of options or a function. If options are provided, the result will be `{:ok, pid}`, returning the IO device created. The option `:capture_prompt`, when set to `true`, causes prompts (which are specified as arguments to `IO.get*` functions) to be included in the device's output. If a function is provided, the device will be created and sent to the function. When the function returns, the device will be closed. The final result will be a tuple with `:ok` and the result of the function. #### Examples ``` iex> {:ok, pid} = StringIO.open("foo") iex> IO.gets(pid, ">") "foo" iex> StringIO.contents(pid) {"", ""} iex> {:ok, pid} = StringIO.open("foo", capture_prompt: true) iex> IO.gets(pid, ">") "foo" iex> StringIO.contents(pid) {"", ">"} iex> StringIO.open("foo", fn pid -> ...> input = IO.gets(pid, ">") ...> IO.write(pid, "The input was #{input}") ...> StringIO.contents(pid) ...> end) {:ok, {"", "The input was foo"}} ``` ### open(string, options, function) #### Specs ``` open(binary(), keyword(), (pid() -> res)) :: {:ok, res} when res: var ``` Creates an IO device. `string` will be the initial input of the newly created device. If the `:capture_prompt` option is set to `true`, prompts (specified as arguments to `IO.get*` functions) are captured in the output. The device will be created and sent to the function given. When the function returns, the device will be closed. The final result will be a tuple with `:ok` and the result of the function. #### Examples ``` iex> StringIO.open("foo", [], fn pid -> ...> input = IO.gets(pid, ">") ...> IO.write(pid, "The input was #{input}") ...> StringIO.contents(pid) ...> end) {:ok, {"", "The input was foo"}} iex> StringIO.open("foo", [capture_prompt: true], fn pid -> ...> input = IO.gets(pid, ">") ...> IO.write(pid, "The input was #{input}") ...> StringIO.contents(pid) ...> end) {:ok, {"", ">The input was foo"}} ``` elixir GenEvent behaviour GenEvent behaviour =================== This behaviour is deprecated. Use Erlang/OTP's :gen\_event module instead. An event manager with event handlers behaviour. If you are interested in implementing an event manager, please read the "Alternatives" section below. If you have to implement an event handler to integrate with an existing system, such as Elixir's Logger, please use `:gen_event` instead. Alternatives ------------- There are a few suitable alternatives to replace GenEvent. Each of them can be the most beneficial depending on the use case. ### Supervisor and GenServers One alternative to GenEvent is a very minimal solution consisting of using a supervisor and multiple GenServers started under it. The supervisor acts as the "event manager" and the children GenServers act as the "event handlers". This approach has some shortcomings (it provides no backpressure, for example) but can still replace GenEvent for low-profile usages of it. [This blog post by José Valim](http://blog.plataformatec.com.br/2016/11/replacing-genevent-by-a-supervisor-genserver/) has more detailed information on this approach. ### GenStage If the use case where you were using GenEvent requires more complex logic, [GenStage](https://github.com/elixir-lang/gen_stage) provides a great alternative. GenStage is an external Elixir library maintained by the Elixir team; it provides a tool to implement systems that exchange events in a demand-driven way with built-in support for backpressure.
See the [GenStage documentation](https://hexdocs.pm/gen_stage) for more information. ### `:gen_event` If your use case requires exactly what GenEvent provided, or you have to integrate with an existing `:gen_event`-based system, you can still use the [`:gen_event`](http://erlang.org/doc/man/gen_event.html) Erlang module. Summary ======== Types ------ [handler()](#t:handler/0) [manager()](#t:manager/0) [name()](#t:name/0) [on\_start()](#t:on_start/0) [options()](#t:options/0) Callbacks ---------- [code\_change(old\_vsn, state, extra)](#c:code_change/3) [handle\_call(request, state)](#c:handle_call/2) [handle\_event(event, state)](#c:handle_event/2) [handle\_info(msg, state)](#c:handle_info/2) [init(args)](#c:init/1) [terminate(reason, state)](#c:terminate/2) Types ====== ### handler() #### Specs ``` handler() :: atom() | {atom(), term()} ``` ### manager() #### Specs ``` manager() :: pid() | name() | {atom(), node()} ``` ### name() #### Specs ``` name() :: atom() | {:global, term()} | {:via, module(), term()} ``` ### on\_start() #### Specs ``` on_start() :: {:ok, pid()} | {:error, {:already_started, pid()}} ``` ### options() #### Specs ``` options() :: [{:name, name()}] ``` Callbacks ========== ### code\_change(old\_vsn, state, extra) #### Specs ``` code_change(old_vsn, state :: term(), extra :: term()) :: {:ok, new_state :: term()} when old_vsn: term() | {:down, term()} ``` ### handle\_call(request, state) #### Specs ``` handle_call(request :: term(), state :: term()) :: {:ok, reply, new_state} | {:ok, reply, new_state, :hibernate} | {:remove_handler, reply} when reply: term(), new_state: term() ``` ### handle\_event(event, state) #### Specs ``` handle_event(event :: term(), state :: term()) :: {:ok, new_state} | {:ok, new_state, :hibernate} | :remove_handler when new_state: term() ``` ### handle\_info(msg, state) #### Specs ``` handle_info(msg :: term(), state :: term()) :: {:ok, new_state} | {:ok, new_state, :hibernate} | :remove_handler when new_state: term() ``` ### init(args) #### Specs ``` init(args :: term()) :: {:ok, state} | {:ok, state, :hibernate} | {:error, reason :: any()} when state: any() ``` ### terminate(reason, state) #### Specs ``` terminate(reason, state :: term()) :: term() when reason: :stop | {:stop, term()} | :remove_handler | {:error, term()} | term() ```
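To make the "Supervisor and GenServers" alternative described above concrete, here is a minimal sketch; the module name and event shape are invented for illustration, and this is not a drop-in GenEvent replacement:

```
defmodule MyApp.LoggerHandler do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil)

  @impl true
  def init(state), do: {:ok, state}

  # Each "event handler" is a plain GenServer reacting to casts.
  @impl true
  def handle_cast({:event, event}, state) do
    IO.inspect(event, label: "handled event")
    {:noreply, state}
  end
end

# The supervisor plays the role of the event manager; "notifying"
# is casting the event to every child.
{:ok, sup} = Supervisor.start_link([MyApp.LoggerHandler], strategy: :one_for_one)

for {_id, pid, _type, _modules} <- Supervisor.which_children(sup) do
  GenServer.cast(pid, {:event, :user_signed_up})
end
```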
elixir Node Node ===== Functions related to VM nodes. Some of the functions in this module are inlined by the compiler, similar to functions in the [`Kernel`](kernel) module and they are explicitly marked in their docs as "inlined by the compiler". For more information about inlined functions, check out the [`Kernel`](kernel) module. Summary ======== Types ------ [state()](#t:state/0) [t()](#t:t/0) Functions ---------- [alive?()](#alive?/0) Returns `true` if the local node is alive. [connect(node)](#connect/1) Establishes a connection to `node`. [disconnect(node)](#disconnect/1) Forces the disconnection of a node. [get\_cookie()](#get_cookie/0) Returns the magic cookie of the local node. [list()](#list/0) Returns a list of all visible nodes in the system, excluding the local node. [list(args)](#list/1) Returns a list of nodes according to argument given. [monitor(node, flag)](#monitor/2) Monitors the status of the node. [monitor(node, flag, options)](#monitor/3) Behaves as [`monitor/2`](#monitor/2) except that it allows an extra option to be given, namely `:allow_passive_connect`. [ping(node)](#ping/1) Tries to set up a connection to node. [self()](#self/0) Returns the current node. [set\_cookie(node \\ Node.self(), cookie)](#set_cookie/2) Sets the magic cookie of `node` to the atom `cookie`. [spawn(node, fun)](#spawn/2) Returns the PID of a new process started by the application of `fun` on `node`. If `node` does not exist, a useless PID is returned. [spawn(node, fun, opts)](#spawn/3) Returns the PID of a new process started by the application of `fun` on `node`. [spawn(node, module, fun, args)](#spawn/4) Returns the PID of a new process started by the application of `module.function(args)` on `node`. [spawn(node, module, fun, args, opts)](#spawn/5) Returns the PID of a new process started by the application of `module.function(args)` on `node`. [spawn\_link(node, fun)](#spawn_link/2) Returns the PID of a new linked process started by the application of `fun` on `node`. [spawn\_link(node, module, fun, args)](#spawn_link/4) Returns the PID of a new linked process started by the application of `module.function(args)` on `node`. [start(name, type \\ :longnames, tick\_time \\ 15000)](#start/3) Turns a non-distributed node into a distributed node. [stop()](#stop/0) Turns a distributed node into a non-distributed node. Types ====== ### state() #### Specs ``` state() :: :visible | :hidden | :connected | :this | :known ``` ### t() #### Specs ``` t() :: node() ``` Functions ========== ### alive?() #### Specs ``` alive?() :: boolean() ``` Returns `true` if the local node is alive. That is, if the node can be part of a distributed system. ### connect(node) #### Specs ``` connect(t()) :: boolean() | :ignored ``` Establishes a connection to `node`. Returns `true` if successful, `false` if not, and the atom `:ignored` if the local node is not alive. For more information, see [`:net_kernel.connect_node/1`](http://www.erlang.org/doc/man/net_kernel.html#connect_node-1). ### disconnect(node) #### Specs ``` disconnect(t()) :: boolean() | :ignored ``` Forces the disconnection of a node. This will appear to the `node` as if the local node has crashed. This function is mainly used in the Erlang network authentication protocols. Returns `true` if disconnection succeeds, otherwise `false`. If the local node is not alive, the function returns `:ignored`. For more information, see [`:erlang.disconnect_node/1`](http://www.erlang.org/doc/man/erlang.html#disconnect_node-1). 
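A hedged sketch of the connection life cycle using the functions above (`Node.start/3` and `Node.set_cookie/2` are documented below); the node names and cookie are placeholders, and a peer node `:"b@127.0.0.1"` is assumed to already be running with the same cookie:

```
# Turn the current VM into a distributed node.
{:ok, _pid} = Node.start(:"a@127.0.0.1")
Node.set_cookie(:my_secret_cookie)

# Connect to the hypothetical peer, then force a disconnection.
true = Node.connect(:"b@127.0.0.1")
Node.list() #=> [:"b@127.0.0.1"]
true = Node.disconnect(:"b@127.0.0.1")
```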
### get\_cookie() #### Specs ``` get_cookie() :: atom() ``` Returns the magic cookie of the local node. Returns the cookie if the node is alive, otherwise `:nocookie`. ### list() #### Specs ``` list() :: [t()] ``` Returns a list of all visible nodes in the system, excluding the local node. Same as `list(:visible)`. Inlined by the compiler. ### list(args) #### Specs ``` list(state() | [state()]) :: [t()] ``` Returns a list of nodes according to argument given. The result returned when the argument is a list, is the list of nodes satisfying the disjunction(s) of the list elements. For more information, see [`:erlang.nodes/1`](http://www.erlang.org/doc/man/erlang.html#nodes-1). Inlined by the compiler. ### monitor(node, flag) #### Specs ``` monitor(t(), boolean()) :: true ``` Monitors the status of the node. If `flag` is `true`, monitoring is turned on. If `flag` is `false`, monitoring is turned off. For more information, see [`:erlang.monitor_node/2`](http://www.erlang.org/doc/man/erlang.html#monitor_node-2). For monitoring status changes of all nodes, see [`:net_kernel.monitor_nodes/3`](http://www.erlang.org/doc/man/net_kernel.html#monitor_nodes-3). ### monitor(node, flag, options) #### Specs ``` monitor(t(), boolean(), [:allow_passive_connect]) :: true ``` Behaves as [`monitor/2`](#monitor/2) except that it allows an extra option to be given, namely `:allow_passive_connect`. For more information, see [`:erlang.monitor_node/3`](http://www.erlang.org/doc/man/erlang.html#monitor_node-3). For monitoring status changes of all nodes, see [`:net_kernel.monitor_nodes/3`](http://www.erlang.org/doc/man/net_kernel.html#monitor_nodes-3). ### ping(node) #### Specs ``` ping(t()) :: :pong | :pang ``` Tries to set up a connection to node. Returns `:pang` if it fails, or `:pong` if it is successful. #### Examples ``` iex> Node.ping(:unknown_node) :pang ``` ### self() #### Specs ``` self() :: t() ``` Returns the current node. It returns the same as the built-in `node()`. ### set\_cookie(node \\ Node.self(), cookie) #### Specs ``` set_cookie(t(), atom()) :: true ``` Sets the magic cookie of `node` to the atom `cookie`. The default node is [`Node.self/0`](node#self/0), the local node. If `node` is the local node, the function also sets the cookie of all other unknown nodes to `cookie`. This function will raise [`FunctionClauseError`](functionclauseerror) if the given `node` is not alive. ### spawn(node, fun) #### Specs ``` spawn(t(), (() -> any())) :: pid() ``` Returns the PID of a new process started by the application of `fun` on `node`. If `node` does not exist, a useless PID is returned. For the list of available options, see [`:erlang.spawn/2`](http://www.erlang.org/doc/man/erlang.html#spawn-2). Inlined by the compiler. ### spawn(node, fun, opts) #### Specs ``` spawn(t(), (() -> any()), Process.spawn_opts()) :: pid() | {pid(), reference()} ``` Returns the PID of a new process started by the application of `fun` on `node`. If `node` does not exist, a useless PID is returned. For the list of available options, see [`:erlang.spawn_opt/3`](http://www.erlang.org/doc/man/erlang.html#spawn_opt-3). Inlined by the compiler. ### spawn(node, module, fun, args) #### Specs ``` spawn(t(), module(), atom(), [any()]) :: pid() ``` Returns the PID of a new process started by the application of `module.function(args)` on `node`. If `node` does not exist, a useless PID is returned. For the list of available options, see [`:erlang.spawn/4`](http://www.erlang.org/doc/man/erlang.html#spawn-4). Inlined by the compiler. 
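For instance, a minimal sketch of running code on a peer with the spawn variants above (assuming the node `:"b@127.0.0.1"` from the earlier sketch is still connected):

```
# Anonymous-function variant: Node.self/0 evaluates on the peer.
Node.spawn(:"b@127.0.0.1", fn -> IO.puts("hello from #{Node.self()}") end)

# Module/function/args variant.
Node.spawn(:"b@127.0.0.1", IO, :puts, ["hello again"])
```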
### spawn(node, module, fun, args, opts) #### Specs ``` spawn(t(), module(), atom(), [any()], Process.spawn_opts()) :: pid() | {pid(), reference()} ``` Returns the PID of a new process started by the application of `module.function(args)` on `node`. If `node` does not exist, a useless PID is returned. For the list of available options, see [`:erlang.spawn/5`](http://www.erlang.org/doc/man/erlang.html#spawn-5). Inlined by the compiler. ### spawn\_link(node, fun) #### Specs ``` spawn_link(t(), (() -> any())) :: pid() ``` Returns the PID of a new linked process started by the application of `fun` on `node`. A link is created between the calling process and the new process, atomically. If `node` does not exist, a useless PID is returned (and due to the link, an exit signal with exit reason `:noconnection` will be received). Inlined by the compiler. ### spawn\_link(node, module, fun, args) #### Specs ``` spawn_link(t(), module(), atom(), [any()]) :: pid() ``` Returns the PID of a new linked process started by the application of `module.function(args)` on `node`. A link is created between the calling process and the new process, atomically. If `node` does not exist, a useless PID is returned (and due to the link, an exit signal with exit reason `:noconnection` will be received). Inlined by the compiler. ### start(name, type \\ :longnames, tick\_time \\ 15000) #### Specs ``` start(node(), :longnames | :shortnames, non_neg_integer()) :: {:ok, pid()} | {:error, term()} ``` Turns a non-distributed node into a distributed node. This functionality starts the `:net_kernel` and other related processes. ### stop() #### Specs ``` stop() :: :ok | {:error, :not_allowed | :not_found} ``` Turns a distributed node into a non-distributed node. For other nodes in the network, this is the same as the node going down. Only possible when the node was started with [`Node.start/3`](node#start/3), otherwise returns `{:error, :not_allowed}`. Returns `{:error, :not_found}` if the local node is not alive. elixir Calendar.TimeZoneDatabase behaviour Calendar.TimeZoneDatabase behaviour ==================================== This module defines a behaviour for providing time zone data. IANA provides time zone data that includes data about different UTC offsets and standard offsets for time zones. Summary ======== Types ------ [time\_zone\_period()](#t:time_zone_period/0) A period where a certain combination of UTC offset, standard offset and zone abbreviation is in effect. [time\_zone\_period\_limit()](#t:time_zone_period_limit/0) Limit for when a certain time zone period begins or ends. Callbacks ---------- [time\_zone\_period\_from\_utc\_iso\_days(arg1, arg2)](#c:time_zone_period_from_utc_iso_days/2) Time zone period for a point in time in UTC for a specific time zone. [time\_zone\_periods\_from\_wall\_datetime(arg1, arg2)](#c:time_zone_periods_from_wall_datetime/2) Possible time zone periods for a certain time zone and wall clock date and time. Types ====== ### time\_zone\_period() #### Specs ``` time_zone_period() :: %{ optional(any()) => any(), :utc_offset => Calendar.utc_offset(), :std_offset => Calendar.std_offset(), :zone_abbr => Calendar.zone_abbr() } ``` A period where a certain combination of UTC offset, standard offset and zone abbreviation is in effect. For instance one period could be the summer of 2018 in "Europe/London" where summer time / daylight saving time is in effect and lasts from spring to autumn. 
In autumn, the `std_offset` changes along with the `zone_abbr`, so a different period is needed during winter. ### time\_zone\_period\_limit() #### Specs ``` time_zone_period_limit() :: Calendar.naive_datetime() ``` Limit for when a certain time zone period begins or ends. A beginning is inclusive. An ending is exclusive. E.g., if a period runs from 2015-03-29 01:00:00 until 2015-10-25 01:00:00, the period includes and begins from the beginning of 2015-03-29 01:00:00 and lasts until just before 2015-10-25 01:00:00. A beginning or end for certain periods can be infinite; for instance, the latest period for time zones without DST or plans to change. However, for the purpose of this behaviour, they are only used for gaps in wall time where the needed period limits are at a certain time. Callbacks ========== ### time\_zone\_period\_from\_utc\_iso\_days(arg1, arg2) #### Specs ``` time_zone_period_from_utc_iso_days(Calendar.iso_days(), Calendar.time_zone()) :: {:ok, time_zone_period()} | {:error, :time_zone_not_found | :utc_only_time_zone_database} ``` Time zone period for a point in time in UTC for a specific time zone. Takes a time zone name and a point in time for UTC and returns a `time_zone_period` for that point in time. ### time\_zone\_periods\_from\_wall\_datetime(arg1, arg2) #### Specs ``` time_zone_periods_from_wall_datetime( Calendar.naive_datetime(), Calendar.time_zone() ) :: {:ok, time_zone_period()} | {:ambiguous, time_zone_period(), time_zone_period()} | {:gap, {time_zone_period(), time_zone_period_limit()}, {time_zone_period(), time_zone_period_limit()}} | {:error, :time_zone_not_found | :utc_only_time_zone_database} ``` Possible time zone periods for a certain time zone and wall clock date and time. When the provided `datetime` is ambiguous, a tuple with `:ambiguous` and the two possible periods is returned. The periods are sorted with the first element being the one that begins first. When the provided `datetime` is in a gap - for instance during the "spring forward" when going from winter time to summer time - a tuple with `:gap` and two periods with limits is returned in a nested tuple. The first nested two-tuple is the period before the gap and a naive datetime with a limit for when the period ends (wall time). The second nested two-tuple is the period just after the gap and a datetime (wall time) for when the period begins just after the gap. If there is only a single possible period for the provided `datetime`, a tuple with `:ok` and the `time_zone_period` is returned. elixir Comprehensions Getting Started Comprehensions ============== In Elixir, it is common to loop over an Enumerable, often filtering out some results and mapping values into another list. Comprehensions are syntactic sugar for such constructs: they group those common tasks into the `for` special form. For example, we can map a list of integers into their squared values: ``` iex> for n <- [1, 2, 3, 4], do: n * n [1, 4, 9, 16] ``` A comprehension is made of three parts: generators, filters, and collectables. Generators and filters ---------------------- In the expression above, `n <- [1, 2, 3, 4]` is the **generator**. It is literally generating values to be used in the comprehension. Any enumerable can be passed on the right-hand side of the generator expression: ``` iex> for n <- 1..4, do: n * n [1, 4, 9, 16] ``` Generator expressions also support pattern matching on their left-hand side; all non-matching patterns are *ignored*.
Imagine that, instead of a range, we have a keyword list where the key is the atom `:good` or `:bad` and we only want to compute the square of the `:good` values: ``` iex> values = [good: 1, good: 2, bad: 3, good: 4] iex> for {:good, n} <- values, do: n * n [1, 4, 16] ``` Alternatively to pattern matching, filters can be used to select some particular elements. For example, we can select the multiples of 3 and discard all others: ``` iex> multiple_of_3? = fn(n) -> rem(n, 3) == 0 end iex> for n <- 0..5, multiple_of_3?.(n), do: n * n [0, 9] ``` Comprehensions discard all elements for which the filter expression returns `false` or `nil`; all other values are selected. Comprehensions generally provide a much more concise representation than using the equivalent functions from the `Enum` and `Stream` modules. Furthermore, comprehensions also allow multiple generators and filters to be given. Here is an example that receives a list of directories and gets the size of each file in those directories: ``` dirs = ['/home/mikey', '/home/james'] for dir <- dirs, file <- File.ls!(dir), path = Path.join(dir, file), File.regular?(path) do File.stat!(path).size end ``` Multiple generators can also be used to calculate the cartesian product of two lists: ``` iex> for i <- [:a, :b, :c], j <- [1, 2], do: {i, j} [a: 1, a: 2, b: 1, b: 2, c: 1, c: 2] ``` Finally, keep in mind that variable assignments inside the comprehension, be it in generators, filters or inside the block, are not reflected outside of the comprehension. Bitstring generators -------------------- Bitstring generators are also supported and are very useful when you need to comprehend over bitstring streams. The example below receives a list of pixels from a binary with their respective red, green and blue values and converts them into tuples of three elements each: ``` iex> pixels = <<213, 45, 132, 64, 76, 32, 76, 0, 0, 234, 32, 15>> iex> for <<r::8, g::8, b::8 <- pixels>>, do: {r, g, b} [{213, 45, 132}, {64, 76, 32}, {76, 0, 0}, {234, 32, 15}] ``` A bitstring generator can be mixed with “regular” enumerable generators, and supports filters as well. The `:into` option ------------------ In the examples above, all the comprehensions returned lists as their result. However, the result of a comprehension can be inserted into different data structures by passing the `:into` option to the comprehension. For example, a bitstring generator can be used with the `:into` option in order to easily remove all spaces in a string: ``` iex> for <<c <- " hello world ">>, c != ?\s, into: "", do: <<c>> "helloworld" ``` Sets, maps, and other dictionaries can also be given to the `:into` option. In general, `:into` accepts any structure that implements the [`Collectable`](https://hexdocs.pm/elixir/Collectable.html) protocol. A common use case of `:into` can be transforming values in a map, without touching the keys: ``` iex> for {key, val} <- %{"a" => 1, "b" => 2}, into: %{}, do: {key, val * val} %{"a" => 1, "b" => 4} ``` Let’s make another example using streams. Since the `IO` module provides streams (that are both `Enumerable`s and `Collectable`s), an echo terminal that echoes back the upcased version of whatever is typed can be implemented using comprehensions: ``` iex> stream = IO.stream(:stdio, :line) iex> for line <- stream, into: stream do ...> String.upcase(line) <> "\n" ...> end ``` Now type any string into the terminal and you will see that the same value will be printed in upper-case. 
Unfortunately, this example also got your IEx shell stuck in the comprehension, so you will need to hit `Ctrl+C` twice to get out of it. :) elixir Integer Integer ======== Functions for working with integers. Some functions that work on integers are found in [`Kernel`](kernel): * [`abs/1`](kernel#abs/1) * [`div/2`](kernel#div/2) * [`max/2`](kernel#max/2) * [`min/2`](kernel#min/2) * [`rem/2`](kernel#rem/2) Summary ======== Guards ------- [is\_even(integer)](#is_even/1) Determines if an `integer` is even. [is\_odd(integer)](#is_odd/1) Determines if `integer` is odd. Functions ---------- [digits(integer, base \\ 10)](#digits/2) Returns the ordered digits for the given `integer`. [floor\_div(dividend, divisor)](#floor_div/2) Performs a floored integer division. [gcd(integer1, integer2)](#gcd/2) Returns the greatest common divisor of the two given integers. [mod(dividend, divisor)](#mod/2) Computes the modulo remainder of an integer division. [parse(binary, base \\ 10)](#parse/2) Parses a text representation of an integer. [to\_charlist(integer)](#to_charlist/1) Returns a charlist which corresponds to the text representation of the given `integer`. [to\_charlist(integer, base)](#to_charlist/2) Returns a charlist which corresponds to the text representation of `integer` in the given `base`. [to\_string(integer)](#to_string/1) Returns a binary which corresponds to the text representation of `integer`. [to\_string(integer, base)](#to_string/2) Returns a binary which corresponds to the text representation of `integer` in the given `base`. [undigits(digits, base \\ 10)](#undigits/2) Returns the integer represented by the ordered `digits`. Guards ======= ### is\_even(integer) Determines if an `integer` is even. Returns `true` if the given `integer` is an even number, otherwise it returns `false`. Allowed in guard clauses. #### Examples ``` iex> Integer.is_even(10) true iex> Integer.is_even(5) false iex> Integer.is_even(-10) true iex> Integer.is_even(0) true ``` ### is\_odd(integer) Determines if `integer` is odd. Returns `true` if the given `integer` is an odd number, otherwise it returns `false`. Allowed in guard clauses. #### Examples ``` iex> Integer.is_odd(5) true iex> Integer.is_odd(6) false iex> Integer.is_odd(-5) true iex> Integer.is_odd(0) false ``` Functions ========== ### digits(integer, base \\ 10) #### Specs ``` digits(integer(), pos_integer()) :: [integer(), ...] ``` Returns the ordered digits for the given `integer`. An optional `base` value may be provided representing the radix for the returned digits. This one must be an integer >= 2. #### Examples ``` iex> Integer.digits(123) [1, 2, 3] iex> Integer.digits(170, 2) [1, 0, 1, 0, 1, 0, 1, 0] iex> Integer.digits(-170, 2) [-1, 0, -1, 0, -1, 0, -1, 0] ``` ### floor\_div(dividend, divisor) #### Specs ``` floor_div(integer(), neg_integer() | pos_integer()) :: integer() ``` Performs a floored integer division. Raises an [`ArithmeticError`](arithmeticerror) exception if one of the arguments is not an integer, or when the `divisor` is `0`. [`Integer.floor_div/2`](integer#floor_div/2) performs *floored* integer division. This means that the result is always rounded towards negative infinity. If you want to perform truncated integer division (rounding towards zero), use [`Kernel.div/2`](kernel#div/2) instead. 
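The contrast with truncated division is easiest to see with a negative dividend; a small illustrative comparison:

```
iex> Integer.floor_div(-5, 2)
-3
iex> div(-5, 2)
-2
```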
#### Examples ``` iex> Integer.floor_div(5, 2) 2 iex> Integer.floor_div(6, -4) -2 iex> Integer.floor_div(-99, 2) -50 ``` ### gcd(integer1, integer2) #### Specs ``` gcd(integer(), integer()) :: non_neg_integer() ``` Returns the greatest common divisor of the two given integers. The greatest common divisor (GCD) of `integer1` and `integer2` is the largest positive integer that divides both `integer1` and `integer2` without leaving a remainder. By convention, `gcd(0, 0)` returns `0`. #### Examples ``` iex> Integer.gcd(2, 3) 1 iex> Integer.gcd(8, 12) 4 iex> Integer.gcd(8, -12) 4 iex> Integer.gcd(10, 0) 10 iex> Integer.gcd(7, 7) 7 iex> Integer.gcd(0, 0) 0 ``` ### mod(dividend, divisor) #### Specs ``` mod(integer(), neg_integer() | pos_integer()) :: integer() ``` Computes the modulo remainder of an integer division. [`Integer.mod/2`](integer#mod/2) uses floored division, which means that the result will always have the sign of the `divisor`. Raises an [`ArithmeticError`](arithmeticerror) exception if one of the arguments is not an integer, or when the `divisor` is `0`. #### Examples ``` iex> Integer.mod(5, 2) 1 iex> Integer.mod(6, -4) -2 ``` ### parse(binary, base \\ 10) #### Specs ``` parse(binary(), 2..36) :: {integer(), binary()} | :error ``` Parses a text representation of an integer. An optional `base` for the text representation can be provided; if `base` is not given, 10 will be used. If successful, returns a tuple in the form of `{integer, remainder_of_binary}`. Otherwise `:error`. Raises an error if `base` is less than 2 or more than 36. If you want to convert a string-formatted integer directly to an integer, [`String.to_integer/1`](string#to_integer/1) or [`String.to_integer/2`](string#to_integer/2) can be used instead. #### Examples ``` iex> Integer.parse("34") {34, ""} iex> Integer.parse("34.5") {34, ".5"} iex> Integer.parse("three") :error iex> Integer.parse("34", 10) {34, ""} iex> Integer.parse("f4", 16) {244, ""} iex> Integer.parse("Awww++", 36) {509216, "++"} iex> Integer.parse("fab", 10) :error iex> Integer.parse("a2", 38) ** (ArgumentError) invalid base 38 ``` ### to\_charlist(integer) #### Specs ``` to_charlist(integer()) :: charlist() ``` Returns a charlist which corresponds to the text representation of the given `integer`. Inlined by the compiler. #### Examples ``` iex> Integer.to_charlist(123) '123' iex> Integer.to_charlist(+456) '456' iex> Integer.to_charlist(-789) '-789' iex> Integer.to_charlist(0123) '123' ``` ### to\_charlist(integer, base) #### Specs ``` to_charlist(integer(), 2..36) :: charlist() ``` Returns a charlist which corresponds to the text representation of `integer` in the given `base`. `base` can be an integer between 2 and 36. Inlined by the compiler. #### Examples ``` iex> Integer.to_charlist(100, 16) '64' iex> Integer.to_charlist(-100, 16) '-64' iex> Integer.to_charlist(882_681_651, 36) 'ELIXIR' ``` ### to\_string(integer) #### Specs ``` to_string(integer()) :: String.t() ``` Returns a binary which corresponds to the text representation of `integer`. Inlined by the compiler. #### Examples ``` iex> Integer.to_string(123) "123" iex> Integer.to_string(+456) "456" iex> Integer.to_string(-789) "-789" iex> Integer.to_string(0123) "123" ``` ### to\_string(integer, base) #### Specs ``` to_string(integer(), 2..36) :: String.t() ``` Returns a binary which corresponds to the text representation of `integer` in the given `base`. `base` can be an integer between 2 and 36. Inlined by the compiler.
#### Examples ``` iex> Integer.to_string(100, 16) "64" iex> Integer.to_string(-100, 16) "-64" iex> Integer.to_string(882_681_651, 36) "ELIXIR" ``` ### undigits(digits, base \\ 10) #### Specs ``` undigits([integer()], pos_integer()) :: integer() ``` Returns the integer represented by the ordered `digits`. An optional `base` value may be provided representing the radix for the `digits`. Base has to be an integer greater than or equal to `2`. #### Examples ``` iex> Integer.undigits([1, 2, 3]) 123 iex> Integer.undigits([1, 4], 16) 20 iex> Integer.undigits([]) 0 ```
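As a round trip between `digits/2` and `undigits/2` above, here is a small base-conversion sketch; the values are arbitrary:

```
iex> digits = Integer.digits(255, 16)
iex> digits
[15, 15]
iex> Integer.undigits(digits, 16)
255
```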
elixir Mix.Project Mix.Project ============ Defines and manipulates Mix projects. A Mix project is defined by calling `use Mix.Project` in a module, usually placed in `mix.exs`: ``` defmodule MyApp.MixProject do use Mix.Project def project do [ app: :my_app, version: "1.0.0" ] end end ``` Configuration -------------- In order to configure Mix, the module that `use`s [`Mix.Project`](#content) should export a `project/0` function that returns a keyword list representing configuration for the project. This configuration can be read using [`Mix.Project.config/0`](mix.project#config/0). Note that [`config/0`](#config/0) won't fail if a project is not defined; this allows many Mix tasks to work without a project. If a task requires a project to be defined or needs to access a special function within the project, the task can call [`Mix.Project.get!/0`](mix.project#get!/0) which fails with [`Mix.NoProjectError`](mix.noprojecterror) in the case a project is not defined. There isn't a comprehensive list of all the options that can be returned by `project/0` since many Mix tasks define their own options that they read from this configuration. For example, look at the "Configuration" section in the documentation for the [`Mix.Tasks.Compile`](mix.tasks.compile) task. These are a few options that are not used by just one Mix task (and will thus be documented here): * `:build_per_environment` - if `true`, builds will be *per-environment*. If `false`, builds will go in `_build/shared` regardless of the Mix environment. Defaults to `true`. * `:aliases` - a list of task aliases. For more information, check out the "Aliases" section in the documentation for the [`Mix`](mix) module. Defaults to `[]`. * `:config_path` - a string representing the path of the main config file. See [`config_files/0`](#config_files/0) for more information. Defaults to `"config/config.exs"`. * `:default_task` - a string representing the default task to be run by `mix` when no task is specified. Defaults to `"run"`. * `:deps` - a list of dependencies of this project. Refer to the documentation for the [`Mix.Tasks.Deps`](mix.tasks.deps) task for more information. Defaults to `[]`. * `:deps_path` - directory where dependencies are stored. Also see [`deps_path/1`](#deps_path/1). Defaults to `"deps"`. * `:lockfile` - the name of the lockfile used by the `mix deps.*` family of tasks. Defaults to `"mix.lock"`. * `:preferred_cli_env` - a keyword list of `{task, env}` tuples where `task` is the task name as an atom (for example, `:"deps.get"`) and `env` is the preferred environment (for example, `:test`). This option overrides what is specified by the tasks with the `@preferred_cli_env` attribute (see the docs for [`Mix.Task`](mix.task)). Defaults to `[]`. * `:preferred_cli_target` - a keyword list of `{task, target}` tuples where `task` is the task name as an atom (for example, `:test`) and `target` is the preferred target (for example, `:host`). Defaults to `[]`. For more options, keep an eye on the documentation for single Mix tasks; good examples are the [`Mix.Tasks.Compile`](mix.tasks.compile) task and all the specific compiler tasks (such as [`Mix.Tasks.Compile.Elixir`](mix.tasks.compile.elixir) or [`Mix.Tasks.Compile.Erlang`](mix.tasks.compile.erlang)). 
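Putting a few of these options together, a minimal `mix.exs` sketch could look like the following; the application name, version, and alias are placeholders:

```
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "1.0.0",
      build_per_environment: true,
      default_task: "run",
      aliases: [setup: ["deps.get", "compile"]],
      deps: []
    ]
  end
end
```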
Note that sometimes the same configuration option is mentioned in the documentation for different tasks; this is just because it's common for many tasks to read and use the same configuration option (for example, `:erlc_paths` is used by [`mix compile.erlang`](mix.tasks.compile.erlang), [`mix compile.yecc`](mix.tasks.compile.yecc), and other tasks). Erlang projects ---------------- Mix can be used to manage Erlang projects that don't have any Elixir code. To ensure Mix tasks work correctly for an Erlang project, `language: :erlang` has to be part of the configuration returned by `project/0`. This setting also makes sure Elixir is not added as a dependency to the generated `.app` file or to the escript generated with [`mix escript.build`](mix.tasks.escript.build), and so on. Summary ======== Functions ---------- [app\_path(config \\ config())](#app_path/1) Returns the application path inside the build. [apps\_paths(config \\ config())](#apps_paths/1) Returns a map with the umbrella child applications paths. [build\_path(config \\ config())](#build_path/1) Returns the build path for the given project. [build\_structure(config \\ config(), opts \\ [])](#build_structure/2) Builds the project structure for the given application. [clear\_deps\_cache()](#clear_deps_cache/0) Clears the dependency cache for the current environment. [compile(args, config \\ [])](#compile/2) Compiles the given project. [compile\_path(config \\ config())](#compile_path/1) Returns the paths the given project compiles to. [config()](#config/0) Returns the project configuration. [config\_files()](#config_files/0) Returns a list of project configuration files for this project. [config\_mtime()](#config_mtime/0) Returns the latest modification time from config files. [consolidation\_path(config \\ config())](#consolidation_path/1) Returns the path where protocol consolidations are stored. [deps\_path(config \\ config())](#deps_path/1) Returns the path where dependencies are stored for the given project. [deps\_paths(opts \\ [])](#deps_paths/1) Returns the full path of all dependencies as a map. [ensure\_structure(config \\ config(), opts \\ [])](#ensure_structure/2) Ensures the project structure for the given project exists. [get()](#get/0) Retrieves the current project if there is one. [get!()](#get!/0) Same as [`get/0`](#get/0), but raises an exception if there is no current project. [in\_project(app, path, post\_config \\ [], fun)](#in_project/4) Runs the given `fun` inside the given project. [load\_paths(config \\ config())](#load_paths/1) deprecated [manifest\_path(config \\ config())](#manifest_path/1) Returns the path where manifests are stored. [umbrella?(config \\ config())](#umbrella?/1) Returns `true` if `config` is the configuration for an umbrella project. Functions ========== ### app\_path(config \\ config()) #### Specs ``` app_path(keyword()) :: Path.t() ``` Returns the application path inside the build. The returned path will be expanded. #### Examples ``` Mix.Project.app_path() #=> "/path/to/project/_build/shared/lib/app" ``` ### apps\_paths(config \\ config()) #### Specs ``` apps_paths(keyword()) :: %{optional(atom()) => Path.t()} | nil ``` Returns a map with the umbrella child applications paths. These paths are based on the `:apps_path` and `:apps` configurations. If the given project configuration identifies an umbrella project, the return value is a map of `app => path` where `app` is a child app of the umbrella and `path` is its path relative to the root of the umbrella project.
If the given project configuration does not identify an umbrella project, `nil` is returned. #### Examples ``` Mix.Project.apps_paths() #=> %{my_app1: "apps/my_app1", my_app2: "apps/my_app2"} ``` ### build\_path(config \\ config()) #### Specs ``` build_path(keyword()) :: Path.t() ``` Returns the build path for the given project. If no configuration is given, the one for the current project is used. The returned path will be expanded. #### Examples ``` Mix.Project.build_path() #=> "/path/to/project/_build/shared" ``` If `:build_per_environment` is set to `true`, it will create a new build per environment: ``` Mix.env() #=> :dev Mix.Project.build_path() #=> "/path/to/project/_build/dev" ``` ### build\_structure(config \\ config(), opts \\ []) #### Specs ``` build_structure(keyword(), keyword()) :: :ok ``` Builds the project structure for the given application. #### Options * `:symlink_ebin` - symlink ebin instead of copying it ### clear\_deps\_cache() #### Specs ``` clear_deps_cache() :: :ok ``` Clears the dependency cache for the current environment. Useful when dependencies need to be reloaded due to a change of global state. ### compile(args, config \\ []) #### Specs ``` compile([term()], keyword()) :: term() ``` Compiles the given project. ### compile\_path(config \\ config()) #### Specs ``` compile_path(keyword()) :: Path.t() ``` Returns the paths the given project compiles to. If no configuration is given, the one for the current project will be used. The returned path will be expanded. #### Examples ``` Mix.Project.compile_path() #=> "/path/to/project/_build/dev/lib/app/ebin" ``` ### config() #### Specs ``` config() :: keyword() ``` Returns the project configuration. If there is no project defined, it still returns a keyword list with default values. This allows many Mix tasks to work without the need for an underlying project. Note this configuration is cached once the project is pushed onto the stack. Calling it multiple times won't cause it to be recomputed. Do not use [`Mix.Project.config/0`](mix.project#config/0) to find the runtime configuration. Use it only to configure aspects of your project (like compilation directories) and not your application runtime. ### config\_files() #### Specs ``` config_files() :: [Path.t()] ``` Returns a list of project configuration files for this project. This function is usually used in compilation tasks to trigger a full recompilation whenever such configuration files change. It returns the `mix.exs` file, the lock manifest, and all config files in the `config` directory that do not start with a leading period (a file like `.my_config.exs` would be excluded). ### config\_mtime() #### Specs ``` config_mtime() :: posix_mtime when posix_mtime: integer() ``` Returns the latest modification time from config files. This function is usually used in compilation tasks to trigger a full recompilation whenever such configuration files change. For this reason, the mtime is cached to avoid file system lookups. ### consolidation\_path(config \\ config()) #### Specs ``` consolidation_path(keyword()) :: Path.t() ``` Returns the path where protocol consolidations are stored. The returned path will be expanded. #### Examples ``` Mix.Project.consolidation_path() #=> "/path/to/project/_build/dev/lib/my_app/consolidated" ``` Inside umbrellas: ``` Mix.Project.consolidation_path() #=> "/path/to/project/_build/dev/consolidated" ``` ### deps\_path(config \\ config()) #### Specs ``` deps_path(keyword()) :: Path.t() ``` Returns the path where dependencies are stored for the given project.
If no configuration is given, the one for the current project is used. The returned path will be expanded. #### Examples ``` Mix.Project.deps_path() #=> "/path/to/project/deps" ``` ### deps\_paths(opts \\ []) #### Specs ``` deps_paths(keyword()) :: %{optional(atom()) => Path.t()} ``` Returns the full path of all dependencies as a map. #### Options * `:depth` - only returns dependencies to the given depth level; a depth of `1` will only return top-level dependencies * `:parents` - starts the dependency traversal from the given parents instead of the application root #### Examples ``` Mix.Project.deps_paths() #=> %{foo: "deps/foo", bar: "custom/path/dep"} ``` ### ensure\_structure(config \\ config(), opts \\ []) #### Specs ``` ensure_structure(keyword(), keyword()) :: :ok ``` Ensures the project structure for the given project exists. In case it does exist, it is a no-op. Otherwise, it is built. ### get() #### Specs ``` get() :: module() | nil ``` Retrieves the current project if there is one. If there is no current project, `nil` is returned. This may happen when there is no `mix.exs` in the current directory. If you expect a project to be defined, i.e., it is a requirement of the current task, you should call [`get!/0`](#get!/0) instead. ### get!() #### Specs ``` get!() :: module() ``` Same as [`get/0`](#get/0), but raises an exception if there is no current project. This is usually called by tasks that need additional functions on the project to be defined. Since such tasks usually depend on a project being defined, this function raises a [`Mix.NoProjectError`](mix.noprojecterror) exception in case no project is available. ### in\_project(app, path, post\_config \\ [], fun) #### Specs ``` in_project(atom(), Path.t(), keyword(), (module() -> result)) :: result when result: term() ``` Runs the given `fun` inside the given project. This function changes the current working directory and loads the project at the given directory onto the project stack. A `post_config` can be passed that will be merged into the project configuration. `fun` is called with the module name of the given [`Mix.Project`](#content). The return value of this function is the return value of `fun`. #### Examples ``` Mix.Project.in_project(:my_app, "/path/to/my_app", fn module -> "Mix project is: #{inspect(module)}" end) #=> "Mix project is: MyApp.MixProject" ``` ### load\_paths(config \\ config()) This function is deprecated. Use Mix.Project.compile\_path/1 instead. ### manifest\_path(config \\ config()) #### Specs ``` manifest_path(keyword()) :: Path.t() ``` Returns the path where manifests are stored. By default they are stored in the app path inside the build directory. Umbrella applications have the manifest path set to the root of the build directory. Directories may be changed in future releases. The returned path will be expanded. #### Examples ``` Mix.Project.manifest_path() #=> "/path/to/project/_build/shared/lib/app/.mix" ``` ### umbrella?(config \\ config()) #### Specs ``` umbrella?(keyword()) :: boolean() ``` Returns `true` if `config` is the configuration for an umbrella project. When called with no arguments, tells whether the current project is an umbrella project. elixir OptionParser OptionParser ============= Functions for parsing command line arguments. When calling a command, it's possible to pass command line options to modify what the command does. In this documentation, those are called "switches"; in other situations they may be called "flags" or simply "options".
A switch can be given a value, also called an "argument". The main function in this module is [`parse/2`](#parse/2), which parses a list of command line options and arguments into a keyword list: ``` iex> OptionParser.parse(["--debug"], strict: [debug: :boolean]) {[debug: true], [], []} ``` [`OptionParser`](#content) provides some conveniences out of the box, such as aliases and automatic handling of negation switches. The [`parse_head/2`](#parse_head/2) function is an alternative to [`parse/2`](#parse/2) which stops parsing as soon as it finds a value that is not a switch nor a value for a previous switch. This module also provides low-level functions, such as [`next/2`](#next/2), for parsing switches manually, as well as [`split/1`](#split/1) and [`to_argv/1`](#to_argv/1) for parsing from and converting switches to strings. Summary ======== Types ------ [argv()](#t:argv/0) [errors()](#t:errors/0) [options()](#t:options/0) [parsed()](#t:parsed/0) Functions ---------- [next(argv, opts \\ [])](#next/2) Low-level function that parses one option. [parse(argv, opts \\ [])](#parse/2) Parses `argv` into a keyword list. [parse!(argv, opts \\ [])](#parse!/2) The same as [`parse/2`](#parse/2) but raises an [`OptionParser.ParseError`](optionparser.parseerror) exception if any invalid options are given. [parse\_head(argv, opts \\ [])](#parse_head/2) Similar to [`parse/2`](#parse/2) but only parses the head of `argv`; as soon as it finds a non-switch, it stops parsing. [parse\_head!(argv, opts \\ [])](#parse_head!/2) The same as [`parse_head/2`](#parse_head/2) but raises an [`OptionParser.ParseError`](optionparser.parseerror) exception if any invalid options are given. [split(string)](#split/1) Splits a string into [`argv/0`](#t:argv/0) chunks. [to\_argv(enum, options \\ [])](#to_argv/2) Receives a key-value enumerable and converts it to [`argv/0`](#t:argv/0). Types ====== ### argv() #### Specs ``` argv() :: [String.t()] ``` ### errors() #### Specs ``` errors() :: [{String.t(), String.t() | nil}] ``` ### options() #### Specs ``` options() :: [switches: keyword(), strict: keyword(), aliases: keyword()] ``` ### parsed() #### Specs ``` parsed() :: keyword() ``` Functions ========== ### next(argv, opts \\ []) #### Specs ``` next(argv(), options()) :: {:ok, key :: atom(), value :: term(), argv()} | {:invalid, String.t(), String.t() | nil, argv()} | {:undefined, String.t(), String.t() | nil, argv()} | {:error, argv()} ``` Low-level function that parses one option. It accepts the same options as [`parse/2`](#parse/2) and [`parse_head/2`](#parse_head/2) as both functions are built on top of this function. This function may return: * `{:ok, key, value, rest}` - the option `key` with `value` was successfully parsed * `{:invalid, key, value, rest}` - the option `key` is invalid with `value` (returned when the value cannot be parsed according to the switch type) * `{:undefined, key, value, rest}` - the option `key` is undefined (returned in strict mode when the switch is unknown or on nonexistent atoms) * `{:error, rest}` - there are no switches at the head of the given `argv` ### parse(argv, opts \\ []) #### Specs ``` parse(argv(), options()) :: {parsed(), argv(), errors()} ``` Parses `argv` into a keyword list. 
It returns a three-element tuple with the form `{parsed, args, invalid}`, where: * `parsed` is a keyword list of parsed switches with `{switch_name, value}` tuples in it; `switch_name` is the atom representing the switch name while `value` is the value for that switch parsed according to `opts` (see the "Examples" section for more information) * `args` is a list of the remaining arguments in `argv` as strings * `invalid` is a list of invalid options as `{option_name, value}` where `option_name` is the raw option and `value` is `nil` if the option wasn't expected or the string value if the value didn't have the expected type for the corresponding option Elixir converts switches to underscored atoms, so `--source-path` becomes `:source_path`. This is done to better suit Elixir conventions. However, this means that switches can't contain underscores and switches that do contain underscores are always returned in the list of invalid switches. When parsing, it is common to list switches and their expected types: ``` iex> OptionParser.parse(["--debug"], strict: [debug: :boolean]) {[debug: true], [], []} iex> OptionParser.parse(["--source", "lib"], strict: [source: :string]) {[source: "lib"], [], []} iex> OptionParser.parse( ...> ["--source-path", "lib", "test/enum_test.exs", "--verbose"], ...> strict: [source_path: :string, verbose: :boolean] ...> ) {[source_path: "lib", verbose: true], ["test/enum_test.exs"], []} ``` We will explore the valid switches and operation modes of option parser below. #### Options The following options are supported: * `:switches` or `:strict` - see the "Switch definitions" section below * `:allow_nonexistent_atoms` - see the "Parsing unknown switches" section below * `:aliases` - see the "Aliases" section below #### Switch definitions Switches can be specified via one of two options: * `:strict` - defines strict switches and their types. Any switch in `argv` that is not specified in the list is returned in the invalid options list. This is the preferred way to parse options. * `:switches` - defines switches and their types. This function still attempts to parse switches that are not in this list. Both these options accept a keyword list where the key is an atom defining the name of the switch and the value is the `type` of the switch (see the "Types" section below for more information). Note that you should only supply the `:switches` or the `:strict` option. If you supply both, an [`ArgumentError`](argumenterror) exception will be raised. ### Types Switches parsed by [`OptionParser`](#content) may take zero or one arguments. The following switch types take no arguments: * `:boolean` - sets the value to `true` when given (see also the "Negation switches" section below) * `:count` - counts the number of times the switch is given The following switch types take one argument: * `:integer` - parses the value as an integer * `:float` - parses the value as a float * `:string` - parses the value as a string If a switch can't be parsed according to the given type, it is returned in the invalid options list. ### Modifiers Switches can be specified with modifiers, which change how they behave. The following modifiers are supported: * `:keep` - keeps duplicated elements instead of overriding them; works with all types except `:count`. Specifying `switch_name: :keep` assumes the type of `:switch_name` will be `:string`. To use `:keep` with a type other than `:string`, use a list as the type for the switch. For example: `[foo: [:integer, :keep]]`.
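For instance, based on the description above, `:keep` combined with `:integer` should collect every occurrence of the switch rather than keeping only the last one (an illustrative sketch):

```
iex> OptionParser.parse(
...>   ["--port", "4000", "--port", "4001"],
...>   strict: [port: [:integer, :keep]]
...> )
{[port: 4000, port: 4001], [], []}
```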
### Negation switches In case a switch `SWITCH` is specified to have type `:boolean`, it may be passed as `--no-SWITCH` as well which will set the option to `false`: ``` iex> OptionParser.parse(["--no-op", "path/to/file"], switches: [op: :boolean]) {[op: false], ["path/to/file"], []} ``` ### Parsing unknown switches When the `:switches` option is given, [`OptionParser`](#content) will attempt to parse unknown switches: ``` iex> OptionParser.parse(["--debug"], switches: [key: :string]) {[debug: true], [], []} ``` Even though we haven't specified `--debug` in the list of switches, it is part of the returned options. This would also work: ``` iex> OptionParser.parse(["--debug", "value"], switches: [key: :string]) {[debug: "value"], [], []} ``` Switches followed by a value will be assigned the value, as a string. Switches without an argument will be set automatically to `true`. Since we cannot assert the type of the switch value, it is preferred to use the `:strict` option that accepts only known switches and always verify their types. If you do want to parse unknown switches, remember that Elixir converts switches to atoms. Since atoms are not garbage-collected, OptionParser will only parse switches that translate to atoms used by the runtime to avoid leaking atoms. For instance, the code below will discard the `--option-parser-example` switch because the `:option_parser_example` atom is never used anywhere: ``` OptionParser.parse(["--option-parser-example"], switches: [debug: :boolean]) # The :option_parser_example atom is not used anywhere below ``` However, the code below would work as long as `:option_parser_example` atom is used at some point later (or earlier) **in the same module**. For example: ``` {opts, _, _} = OptionParser.parse(["--option-parser-example"], switches: [debug: :boolean]) # ... then somewhere in the same module you access it ... opts[:option_parser_example] ``` In other words, Elixir will only parse options that are used by the runtime, ignoring all others. If you would like to parse all switches, regardless if they exist or not, you can force creation of atoms by passing `allow_nonexistent_atoms: true` as option. Use this option with care. It is only useful when you are building command-line applications that receive dynamically-named arguments and must be avoided in long-running systems. 
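As a sketch of that escape hatch (assuming `--some-option` translates to an atom that is not referenced anywhere else in the program):

```
iex> OptionParser.parse(
...>   ["--some-option", "value"],
...>   switches: [],
...>   allow_nonexistent_atoms: true
...> )
{[some_option: "value"], [], []}
```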
#### Aliases A set of aliases can be specified in the `:aliases` option: ``` iex> OptionParser.parse(["-d"], aliases: [d: :debug], strict: [debug: :boolean]) {[debug: true], [], []} ``` #### Examples Here are some examples of working with different types and modifiers: ``` iex> OptionParser.parse(["--unlock", "path/to/file"], strict: [unlock: :boolean]) {[unlock: true], ["path/to/file"], []} iex> OptionParser.parse( ...> ["--unlock", "--limit", "0", "path/to/file"], ...> strict: [unlock: :boolean, limit: :integer] ...> ) {[unlock: true, limit: 0], ["path/to/file"], []} iex> OptionParser.parse(["--limit", "3"], strict: [limit: :integer]) {[limit: 3], [], []} iex> OptionParser.parse(["--limit", "xyz"], strict: [limit: :integer]) {[], [], [{"--limit", "xyz"}]} iex> OptionParser.parse(["--verbose"], switches: [verbose: :count]) {[verbose: 1], [], []} iex> OptionParser.parse(["-v", "-v"], aliases: [v: :verbose], strict: [verbose: :count]) {[verbose: 2], [], []} iex> OptionParser.parse(["--unknown", "xyz"], strict: []) {[], ["xyz"], [{"--unknown", nil}]} iex> OptionParser.parse( ...> ["--limit", "3", "--unknown", "xyz"], ...> switches: [limit: :integer] ...> ) {[limit: 3, unknown: "xyz"], [], []} iex> OptionParser.parse( ...> ["--unlock", "path/to/file", "--unlock", "path/to/another/file"], ...> strict: [unlock: :keep] ...> ) {[unlock: "path/to/file", unlock: "path/to/another/file"], [], []} ``` ### parse!(argv, opts \\ []) #### Specs ``` parse!(argv(), options()) :: {parsed(), argv()} ``` The same as [`parse/2`](#parse/2) but raises an [`OptionParser.ParseError`](optionparser.parseerror) exception if any invalid options are given. If there are no errors, returns a `{parsed, rest}` tuple where: * `parsed` is the list of parsed switches (same as in [`parse/2`](#parse/2)) * `rest` is the list of arguments (same as in [`parse/2`](#parse/2)) #### Examples ``` iex> OptionParser.parse!(["--debug", "path/to/file"], strict: [debug: :boolean]) {[debug: true], ["path/to/file"]} iex> OptionParser.parse!(["--limit", "xyz"], strict: [limit: :integer]) ** (OptionParser.ParseError) 1 error found! --limit : Expected type integer, got "xyz" iex> OptionParser.parse!(["--unknown", "xyz"], strict: []) ** (OptionParser.ParseError) 1 error found! --unknown : Unknown option iex> OptionParser.parse!( ...> ["-l", "xyz", "-f", "bar"], ...> switches: [limit: :integer, foo: :integer], ...> aliases: [l: :limit, f: :foo] ...> ) ** (OptionParser.ParseError) 2 errors found! -l : Expected type integer, got "xyz" -f : Expected type integer, got "bar" ``` ### parse\_head(argv, opts \\ []) #### Specs ``` parse_head(argv(), options()) :: {parsed(), argv(), errors()} ``` Similar to [`parse/2`](#parse/2) but only parses the head of `argv`; as soon as it finds a non-switch, it stops parsing. See [`parse/2`](#parse/2) for more information. 
#### Example ``` iex> OptionParser.parse_head( ...> ["--source", "lib", "test/enum_test.exs", "--verbose"], ...> switches: [source: :string, verbose: :boolean] ...> ) {[source: "lib"], ["test/enum_test.exs", "--verbose"], []} iex> OptionParser.parse_head( ...> ["--verbose", "--source", "lib", "test/enum_test.exs", "--unlock"], ...> switches: [source: :string, verbose: :boolean, unlock: :boolean] ...> ) {[verbose: true, source: "lib"], ["test/enum_test.exs", "--unlock"], []} ``` ### parse\_head!(argv, opts \\ []) #### Specs ``` parse_head!(argv(), options()) :: {parsed(), argv()} ``` The same as [`parse_head/2`](#parse_head/2) but raises an [`OptionParser.ParseError`](optionparser.parseerror) exception if any invalid options are given. If there are no errors, returns a `{parsed, rest}` tuple where: * `parsed` is the list of parsed switches (same as in [`parse_head/2`](#parse_head/2)) * `rest` is the list of arguments (same as in [`parse_head/2`](#parse_head/2)) #### Examples ``` iex> OptionParser.parse_head!( ...> ["--source", "lib", "path/to/file", "--verbose"], ...> switches: [source: :string, verbose: :boolean] ...> ) {[source: "lib"], ["path/to/file", "--verbose"]} iex> OptionParser.parse_head!( ...> ["--number", "lib", "test/enum_test.exs", "--verbose"], ...> strict: [number: :integer] ...> ) ** (OptionParser.ParseError) 1 error found! --number : Expected type integer, got "lib" iex> OptionParser.parse_head!( ...> ["--verbose", "--source", "lib", "test/enum_test.exs", "--unlock"], ...> strict: [verbose: :integer, source: :integer] ...> ) ** (OptionParser.ParseError) 2 errors found! --verbose : Missing argument of type integer --source : Expected type integer, got "lib" ``` ### split(string) #### Specs ``` split(String.t()) :: argv() ``` Splits a string into [`argv/0`](#t:argv/0) chunks. This function splits the given `string` into a list of strings in a similar way to many shells. #### Examples ``` iex> OptionParser.split("foo bar") ["foo", "bar"] iex> OptionParser.split("foo \"bar baz\"") ["foo", "bar baz"] ``` ### to\_argv(enum, options \\ []) #### Specs ``` to_argv(Enumerable.t(), options()) :: argv() ``` Receives a key-value enumerable and converts it to [`argv/0`](#t:argv/0). Keys must be atoms. Keys with `nil` value are discarded, boolean values are converted to `--key` or `--no-key` (if the value is `true` or `false`, respectively), and all other values are converted using [`Kernel.to_string/1`](kernel#to_string/1). It is advised to pass to [`to_argv/2`](#to_argv/2) the same set of `options` given to [`parse/2`](#parse/2). Some switches can only be reconstructed correctly with the `:switches` information in hand. #### Examples ``` iex> OptionParser.to_argv(foo_bar: "baz") ["--foo-bar", "baz"] iex> OptionParser.to_argv(bool: true, bool: false, discarded: nil) ["--bool", "--no-bool"] ``` Some switches will output different values based on the switches types: ``` iex> OptionParser.to_argv([number: 2], switches: []) ["--number", "2"] iex> OptionParser.to_argv([number: 2], switches: [number: :count]) ["--number", "--number"] ```
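Tying [`parse/2`](#parse/2) and [`to_argv/2`](#to_argv/2) together, a round trip might look like this (an illustrative sketch):

```
iex> {parsed, [], []} = OptionParser.parse(["--limit", "3"], strict: [limit: :integer])
iex> OptionParser.to_argv(parsed, switches: [limit: :integer])
["--limit", "3"]
```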
elixir Kernel.SpecialForms Kernel.SpecialForms ==================== Special forms are the basic building blocks of Elixir, and therefore cannot be overridden by the developer. We define them in this module. Some of these forms are lexical (like [`alias/2`](#alias/2), [`case/2`](#case/2), etc.). The macros [`{}/1`](#%7B%7D/1) and [`<<>>/1`](#%3C%3C%3E%3E/1) are also special forms used to define tuple and binary data structures respectively. This module also documents macros that return information about Elixir's compilation environment, such as ([`__ENV__/0`](#__ENV__/0), [`__MODULE__/0`](#__MODULE__/0), [`__DIR__/0`](#__DIR__/0) and [`__CALLER__/0`](#__CALLER__/0)). Finally, it also documents two special forms, [`__block__/1`](#__block__/1) and [`__aliases__/1`](#__aliases__/1), which are not intended to be called directly by the developer but they appear in quoted contents since they are essential in Elixir's constructs. Summary ======== Functions ---------- [%struct{}](#%25/2) Matches on or builds a struct. [%{}](#%25%7B%7D/1) Creates a map. [&(expr)](#&/1) Captures or creates an anonymous function. [left . right](#./2) Defines a remote call, a call to an anonymous function, or an alias. [left :: right](#::/2) Used by types and bitstrings to specify types. [<<args>>](#%3C%3C%3E%3E/1) Defines a new bitstring. [left = right](#=/2) Matches the value on the right against the pattern on the left. [^var](#%5E/1) Accesses an already bound variable in match clauses. Also known as the pin operator. [\_\_CALLER\_\_](#__CALLER__/0) Returns the current calling environment as a [`Macro.Env`](macro.env) struct. [\_\_DIR\_\_](#__DIR__/0) Returns the absolute path of the directory of the current file as a binary. [\_\_ENV\_\_](#__ENV__/0) Returns the current environment information as a [`Macro.Env`](macro.env) struct. [\_\_MODULE\_\_](#__MODULE__/0) Returns the current module name as an atom or `nil` otherwise. [\_\_STACKTRACE\_\_](#__STACKTRACE__/0) Returns the stacktrace for the currently handled exception. [\_\_aliases\_\_(args)](#__aliases__/1) Internal special form to hold aliases information. [\_\_block\_\_(args)](#__block__/1) Internal special form for block expressions. [alias(module, opts)](#alias/2) [`alias/2`](#alias/2) is used to set up aliases, often useful with modules' names. [case(condition, clauses)](#case/2) Matches the given expression against the given clauses. [cond(clauses)](#cond/1) Evaluates the expression corresponding to the first clause that evaluates to a truthy value. [fn](#fn/1) Defines an anonymous function. [for(args)](#for/1) Comprehensions allow you to quickly build a data structure from an enumerable or a bitstring. [import(module, opts)](#import/2) Imports functions and macros from other modules. [quote(opts, block)](#quote/2) Gets the representation of any expression. [receive(args)](#receive/1) Checks if there is a message matching the given clauses in the current process mailbox. [require(module, opts)](#require/2) Requires a module in order to use its macros. [super(args)](#super/1) Calls the overridden function when overriding it with [`Kernel.defoverridable/1`](kernel#defoverridable/1). [try(args)](#try/1) Evaluates the given expressions and handles any error, exit, or throw that may have happened. [unquote(expr)](#unquote/1) Unquotes the given expression inside a quoted expression. [unquote\_splicing(expr)](#unquote_splicing/1) Unquotes the given list expanding its arguments. [with(args)](#with/1) Used to combine matching clauses. 
[{args}](#%7B%7D/1) Creates a tuple. Functions ========== ### %struct{} Matches on or builds a struct. A struct is a tagged map that allows developers to provide default values for keys, tags to be used in polymorphic dispatches and compile time assertions. Structs are usually defined with the [`Kernel.defstruct/1`](kernel#defstruct/1) macro: ``` defmodule User do defstruct name: "john", age: 27 end ``` Now a struct can be created as follows: ``` %User{} ``` Underneath a struct is just a map with a `:__struct__` key pointing to the `User` module: ``` %User{} == %{__struct__: User, name: "john", age: 27} ``` The struct fields can be given when building the struct: ``` %User{age: 31} #=> %{__struct__: User, name: "john", age: 31} ``` Or also on pattern matching to extract values out: ``` %User{age: age} = user ``` An update operation specific for structs is also available: ``` %User{user | age: 28} ``` The advantage of structs is that they validate that the given keys are part of the defined struct. The example below will fail because there is no key `:full_name` in the `User` struct: ``` %User{full_name: "john doe"} ``` The syntax above will guarantee the given keys are valid at compilation time and it will guarantee at runtime the given argument is a struct, failing with [`BadStructError`](badstructerror) otherwise. Although structs are maps, by default structs do not implement any of the protocols implemented for maps. Check [`Kernel.defprotocol/2`](kernel#defprotocol/2) for more information on how structs can be used with protocols for polymorphic dispatch. Also see [`Kernel.struct/2`](kernel#struct/2) and [`Kernel.struct!/2`](kernel#struct!/2) for examples on how to create and update structs dynamically. #### Pattern matching on struct names Besides allowing pattern matching on struct fields, such as: ``` %User{age: age} = user ``` Structs also allow pattern matching on the struct name: ``` %struct_name{} = user struct_name #=> User ``` You can also assign the struct name to `_` when you want to check if something is a struct but you are not interested in its name: ``` %_{} = user ``` ### %{} Creates a map. See the [`Map`](map) module for more information about maps, their syntax, and ways to access and manipulate them. #### AST representation Regardless of whether `=>` or the keyword syntax is used, key-value pairs in maps are always represented internally as a list of two-element tuples for simplicity: ``` iex> quote do ...> %{"a" => :b, c: :d} ...> end {:%{}, [], [{"a", :b}, {:c, :d}]} ``` ### &(expr) Captures or creates an anonymous function. #### Capture The capture operator is most commonly used to capture a function with given name and arity from a module: ``` iex> fun = &Kernel.is_atom/1 iex> fun.(:atom) true iex> fun.("string") false ``` In the example above, we captured [`Kernel.is_atom/1`](kernel#is_atom/1) as an anonymous function and then invoked it. The capture operator can also be used to capture local functions, including private ones, and imported functions by omitting the module name: ``` &local_function/1 ``` See also [`Function.capture/3`](function#capture/3). #### Anonymous functions The capture operator can also be used to partially apply functions, where `&1`, `&2` and so on can be used as value placeholders. For example: ``` iex> double = &(&1 * 2) iex> double.(2) 4 ``` In other words, `&(&1 * 2)` is equivalent to `fn x -> x * 2 end`. 
We can partially apply a remote function with a placeholder: ``` iex> take_five = &Enum.take(&1, 5) iex> take_five.(1..10) [1, 2, 3, 4, 5] ``` Another example while using an imported or local function: ``` iex> first_elem = &elem(&1, 0) iex> first_elem.({0, 1}) 0 ``` The `&` operator can be used with more complex expressions: ``` iex> fun = &(&1 + &2 + &3) iex> fun.(1, 2, 3) 6 ``` As well as with lists and tuples: ``` iex> fun = &{&1, &2} iex> fun.(1, 2) {1, 2} iex> fun = &[&1 | &2] iex> fun.(1, [2, 3]) [1, 2, 3] ``` The only restrictions when creating anonymous functions are that at least one placeholder must be present, i.e. the expression must contain at least `&1`, and that block expressions are not supported: ``` # No placeholder, fails to compile. &(:foo) # Block expression, fails to compile. &(&1; &2) ``` ### left . right Defines a remote call, a call to an anonymous function, or an alias. The dot (`.`) in Elixir can be used for remote calls: ``` iex> String.downcase("FOO") "foo" ``` In the example above, we have used `.` to invoke `downcase` in the [`String`](string) module, passing `"FOO"` as argument. The dot may be used to invoke anonymous functions too: ``` iex> (fn n -> n end).(7) 7 ``` in which case there is a function on the left hand side. We can also use the dot for creating aliases: ``` iex> Hello.World Hello.World ``` This time, we have joined two aliases, defining the final alias `Hello.World`. #### Syntax The right side of `.` may be a word starting with an uppercase letter, which represents an alias, a word starting with lowercase or underscore, any valid language operator or any name wrapped in single- or double-quotes. Those are all valid examples: ``` iex> Kernel.Sample Kernel.Sample iex> Kernel.length([1, 2, 3]) 3 iex> Kernel.+(1, 2) 3 iex> Kernel."+"(1, 2) 3 ``` Wrapping the function name in single- or double-quotes is always a remote call. Therefore `Kernel."Foo"` will attempt to call the function "Foo" and not return the alias `Kernel.Foo`. This is done by design as module names are more strict than function names. When the dot is used to invoke an anonymous function there is only one operand, but it is still written using a postfix notation: ``` iex> negate = fn n -> -n end iex> negate.(7) -7 ``` #### Quoted expression When `.` is used, the quoted expression may take two distinct forms. When the right side starts with a lowercase letter (or underscore): ``` iex> quote do ...> String.downcase("FOO") ...> end {{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}, [], ["FOO"]} ``` Notice we have an inner tuple, containing the atom `:.` representing the dot as first element: ``` {:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]} ``` This tuple follows the general quoted expression structure in Elixir, with the name as first argument, some keyword list as metadata as second, and the list of arguments as third. In this case, the arguments are the alias [`String`](string) and the atom `:downcase`. The second argument in a remote call is **always** an atom. In the case of calls to anonymous functions, the inner tuple with the dot special form has only one argument, reflecting the fact that the operator is unary: ``` iex> quote do ...> negate.(0) ...> end {{:., [], [{:negate, [], __MODULE__}]}, [], [0]} ``` When the right side is an alias (i.e.
starts with uppercase), we get instead: ``` iex> quote do ...> Hello.World ...> end {:__aliases__, [alias: false], [:Hello, :World]} ``` We go into more details about aliases in the [`__aliases__/1`](#__aliases__/1) special form documentation. #### Unquoting We can also use unquote to generate a remote call in a quoted expression: ``` iex> x = :downcase iex> quote do ...> String.unquote(x)("FOO") ...> end {{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}, [], ["FOO"]} ``` Similar to `Kernel."FUNCTION_NAME"`, `unquote(x)` will always generate a remote call, independent of the value of `x`. To generate an alias via the quoted expression, one needs to rely on [`Module.concat/2`](module#concat/2): ``` iex> x = Sample iex> quote do ...> Module.concat(String, unquote(x)) ...> end {{:., [], [{:__aliases__, [alias: false], [:Module]}, :concat]}, [], [{:__aliases__, [alias: false], [:String]}, Sample]} ``` ### left :: right Used by types and bitstrings to specify types. This operator is used in two distinct occasions in Elixir. It is used in typespecs to specify the type of a variable, function or of a type itself: ``` @type number :: integer | float @spec add(number, number) :: number ``` It may also be used in bit strings to specify the type of a given bit segment: ``` <<int::integer-little, rest::bits>> = bits ``` Read the documentation on the `Typespec` page and [`<<>>/1`](#%3C%3C%3E%3E/1) for more information on typespecs and bitstrings respectively. ### <<args>> Defines a new bitstring. #### Examples ``` iex> <<1, 2, 3>> <<1, 2, 3>> ``` #### Types A bitstring is made of many segments and each segment has a type. There are 9 types used in bitstrings: * `integer` * `float` * `bits` (alias for `bitstring`) * `bitstring` * `binary` * `bytes` (alias for `binary`) * `utf8` * `utf16` * `utf32` When no type is specified, the default is `integer`: ``` iex> <<1, 2, 3>> <<1, 2, 3>> ``` Elixir also accepts by default the segment to be a literal string or a literal charlist, which are by default expanded to integers: ``` iex> <<0, "foo">> <<0, 102, 111, 111>> ``` Variables or any other type need to be explicitly tagged: ``` iex> rest = "oo" iex> <<102, rest>> ** (ArgumentError) argument error ``` We can solve this by explicitly tagging it as `binary`: ``` iex> rest = "oo" iex> <<102, rest::binary>> "foo" ``` The `utf8`, `utf16`, and `utf32` types are for Unicode code points. They can also be applied to literal strings and charlists: ``` iex> <<"foo"::utf16>> <<0, 102, 0, 111, 0, 111>> iex> <<"foo"::utf32>> <<0, 0, 0, 102, 0, 0, 0, 111, 0, 0, 0, 111>> ``` #### Options Many options can be given by using `-` as separator. Order is arbitrary, so the following are all equivalent: ``` <<102::integer-native, rest::binary>> <<102::native-integer, rest::binary>> <<102::unsigned-big-integer, rest::binary>> <<102::unsigned-big-integer-size(8), rest::binary>> <<102::unsigned-big-integer-8, rest::binary>> <<102::8-integer-big-unsigned, rest::binary>> <<102, rest::binary>> ``` ### Unit and Size The length of the match is equal to the `unit` (a number of bits) times the `size` (the number of repeated segments of length `unit`). | Type | Default Unit | | --- | --- | | `integer` | 1 bit | | `float` | 1 bit | | `binary` | 8 bits | Sizes for types are a bit more nuanced. The default size for integers is 8. For floats, it is 64. For floats, `size * unit` must result in 32 or 64, corresponding to [IEEE 754](https://en.wikipedia.org/wiki/IEEE_floating_point) binary32 and binary64, respectively. 
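To make those defaults concrete, here is a quick sketch checking the resulting sizes with `bit_size/1`:

```
iex> bit_size(<<1::integer>>)
8
iex> bit_size(<<1.0::float>>)
64
iex> bit_size(<<1.0::float-size(32)>>)
32
```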
For binaries, the default is the size of the binary. Only the last binary in a match can use the default size. All others must have their size specified explicitly, even if the match is unambiguous. For example: ``` iex> <<name::binary-size(5), " the ", species::binary>> = <<"Frank the Walrus">> "Frank the Walrus" iex> {name, species} {"Frank", "Walrus"} ``` The size can be a variable: ``` iex> name_size = 5 iex> <<name::binary-size(name_size), " the ", species::binary>> = <<"Frank the Walrus">> iex> {name, species} {"Frank", "Walrus"} ``` And the variable can be defined in the match itself (prior to its use): ``` iex> <<name_size::size(8), name::binary-size(name_size), " the ", species::binary>> = <<5, "Frank the Walrus">> iex> {name, species} {"Frank", "Walrus"} ``` However, the size cannot be defined in the match outside the binary/bitstring match: ``` {name_size, <<name::binary-size(name_size), _rest::binary>>} = {5, <<"Frank the Walrus">>} ** (CompileError): undefined variable "name_size" in bitstring segment ``` Failing to specify the size for a non-last binary segment causes compilation to fail: ``` <<name::binary, " the ", species::binary>> = <<"Frank the Walrus">> ** (CompileError): a binary field without size is only allowed at the end of a binary pattern ``` #### Shortcut Syntax Size and unit can also be specified using a syntax shortcut when passing integer values: ``` iex> x = 1 iex> <<x::8>> == <<x::size(8)>> true iex> <<x::8*4>> == <<x::size(8)-unit(4)>> true ``` This syntax reflects the fact that the effective size is given by multiplying the size by the unit. ### Modifiers Some types have associated modifiers to clear up ambiguity in byte representation. | Modifier | Relevant Type(s) | | --- | --- | | `signed` | `integer` | | `unsigned` (default) | `integer` | | `little` | `integer`, `float`, `utf16`, `utf32` | | `big` (default) | `integer`, `float`, `utf16`, `utf32` | | `native` | `integer`, `utf16`, `utf32` | ### Sign Integers can be `signed` or `unsigned`, defaulting to `unsigned`. ``` iex> <<int::integer>> = <<-100>> <<156>> iex> int 156 iex> <<int::integer-signed>> = <<-100>> <<156>> iex> int -100 ``` `signed` and `unsigned` apply only to integers and are only used when matching binaries (see below). ``` iex> <<-100::signed, _rest::binary>> = <<-100, "foo">> <<156, 102, 111, 111>> ``` ### Endianness Elixir has three options for endianness: `big`, `little`, and `native`. The default is `big`: ``` iex> <<number::little-integer-size(16)>> = <<0, 1>> <<0, 1>> iex> number 256 iex> <<number::big-integer-size(16)>> = <<0, 1>> <<0, 1>> iex> number 1 ``` `native` is determined by the VM at startup and will depend on the host operating system. #### Binary/Bitstring Matching Binary matching is a powerful feature in Elixir that is useful for extracting information from binaries as well as pattern matching. Binary matching can be used by itself to extract information from binaries: ``` iex> <<"Hello, ", place::binary>> = "Hello, World" "Hello, World" iex> place "World" ``` Or as a part of function definitions to pattern match: ``` defmodule ImageTyper do @png_signature <<137::size(8), 80::size(8), 78::size(8), 71::size(8), 13::size(8), 10::size(8), 26::size(8), 10::size(8)>> @jpg_signature <<255::size(8), 216::size(8)>> def type(<<@png_signature, rest::binary>>), do: :png def type(<<@jpg_signature, rest::binary>>), do: :jpg def type(_), do: :unknown end ``` ### Performance & Optimizations The Erlang compiler can provide a number of optimizations on binary creation and matching.
To see optimization output, set the `bin_opt_info` compiler option: ``` ERL_COMPILER_OPTIONS=bin_opt_info mix compile ``` To learn more about specific optimizations and performance considerations, check out [Erlang's Efficiency Guide on handling binaries](http://www.erlang.org/doc/efficiency_guide/binaryhandling.html). ### left = right Matches the value on the right against the pattern on the left. ### ^var Accesses an already bound variable in match clauses. Also known as the pin operator. #### Examples Elixir allows variables to be rebound via static single assignment: ``` iex> x = 1 iex> x = x + 1 iex> x 2 ``` However, in some situations, it is useful to match against an existing value, instead of rebinding. This can be done with the `^` special form, colloquially known as the pin operator: ``` iex> x = 1 iex> ^x = List.first([1]) iex> ^x = List.first([2]) ** (MatchError) no match of right hand side value: 2 ``` Note that `^x` always refers to the value of `x` prior to the match. The following example will match: ``` iex> x = 0 iex> {x, ^x} = {1, 0} iex> x 1 ``` ### \_\_CALLER\_\_ Returns the current calling environment as a [`Macro.Env`](macro.env) struct. In the environment you can access the filename, line numbers, set up aliases, the function and others. ### \_\_DIR\_\_ Returns the absolute path of the directory of the current file as a binary. Although the directory can be accessed as `Path.dirname(__ENV__.file)`, this macro is a convenient shortcut. ### \_\_ENV\_\_ Returns the current environment information as a [`Macro.Env`](macro.env) struct. In the environment you can access the current filename, line numbers, set up aliases, the current function and others. ### \_\_MODULE\_\_ Returns the current module name as an atom or `nil` otherwise. Although the module can be accessed in the [`__ENV__/0`](#__ENV__/0), this macro is a convenient shortcut. ### \_\_STACKTRACE\_\_ Returns the stacktrace for the currently handled exception. It is available only in the `catch` and `rescue` clauses of [`try/1`](#try/1) expressions. To retrieve the stacktrace of the current process, use `Process.info(self(), :current_stacktrace)` instead. ### \_\_aliases\_\_(args) Internal special form to hold aliases information. It is usually compiled to an atom: ``` iex> quote do ...> Foo.Bar ...> end {:__aliases__, [alias: false], [:Foo, :Bar]} ``` Elixir represents `Foo.Bar` as `__aliases__` so calls can be unambiguously identified by the operator `:.`. For example: ``` iex> quote do ...> Foo.bar ...> end {{:., [], [{:__aliases__, [alias: false], [:Foo]}, :bar]}, [], []} ``` Whenever an expression iterator sees a `:.` as the tuple key, it can be sure that it represents a call and the second argument in the list is an atom. On the other hand, aliases hold some properties: 1. The head element of aliases can be any term that must expand to an atom at compilation time. 2. The tail elements of aliases are guaranteed to always be atoms. 3. When the head element of aliases is the atom `:Elixir`, no expansion happens. ### \_\_block\_\_(args) Internal special form for block expressions. This is the special form used whenever we have a block of expressions in Elixir. This special form is private and should not be invoked directly: ``` iex> quote do ...> 1 ...> 2 ...> 3 ...> end {:__block__, [], [1, 2, 3]} ``` ### alias(module, opts) [`alias/2`](#alias/2) is used to set up aliases, often useful with modules' names. 
#### Examples [`alias/2`](#alias/2) can be used to set up an alias for any module: ``` defmodule Math do alias MyKeyword, as: Keyword end ``` In the example above, we have set up `MyKeyword` to be aliased as [`Keyword`](keyword). So now, any reference to [`Keyword`](keyword) will be automatically replaced by `MyKeyword`. In case one wants to access the original [`Keyword`](keyword), it can be done by accessing `Elixir`: ``` Keyword.values #=> uses MyKeyword.values Elixir.Keyword.values #=> uses Keyword.values ``` Notice that calling `alias` without the `:as` option automatically sets an alias based on the last part of the module. For example: ``` alias Foo.Bar.Baz ``` Is the same as: ``` alias Foo.Bar.Baz, as: Baz ``` We can also alias multiple modules in one line: ``` alias Foo.{Bar, Baz, Biz} ``` Is the same as: ``` alias Foo.Bar alias Foo.Baz alias Foo.Biz ``` #### Lexical scope [`import/2`](#import/2), [`require/2`](#require/2) and [`alias/2`](#alias/2) are called directives and all have lexical scope. This means you can set up aliases inside specific functions and it won't affect the overall scope. #### Warnings If you alias a module and you don't use the alias, Elixir is going to issue a warning implying the alias is not being used. In case the alias is generated automatically by a macro, Elixir won't emit any warnings though, since the alias was not explicitly defined. Both warning behaviours could be changed by explicitly setting the `:warn` option to `true` or `false`. ### case(condition, clauses) Matches the given expression against the given clauses. #### Examples ``` case thing do {:selector, i, value} when is_integer(i) -> value value -> value end ``` In the example above, we match `thing` against each clause "head" and execute the clause "body" corresponding to the first clause that matches. If no clause matches, an error is raised. For this reason, it may be necessary to add a final catch-all clause (like `_`) which will always match. ``` x = 10 case x do 0 -> "This clause won't match" _ -> "This clause would match any value (x = #{x})" end #=> "This clause would match any value (x = 10)" ``` #### Variable handling Notice that variables bound in a clause "head" do not leak to the outer context: ``` case data do {:ok, value} -> value :error -> nil end value #=> unbound variable value ``` However, variables explicitly bound in the clause "body" are accessible from the outer context: ``` value = 7 case lucky? do false -> value = 13 true -> true end value #=> 7 or 13 ``` In the example above, `value` is going to be `7` or `13` depending on the value of `lucky?`. In case `value` has no previous value before case, clauses that do not explicitly bind a value have the variable bound to `nil`. If you want to pattern match against an existing variable, you need to use the [`^/1`](#%5E/1) operator: ``` x = 1 case 10 do ^x -> "Won't match" _ -> "Will match" end #=> "Will match" ``` ### cond(clauses) Evaluates the expression corresponding to the first clause that evaluates to a truthy value. ``` cond do hd([1, 2, 3]) -> "1 is considered as true" end #=> "1 is considered as true" ``` Raises an error if all conditions evaluate to `nil` or `false`. For this reason, it may be necessary to add a final always-truthy condition (anything non-`false` and non-`nil`), which will always match. #### Examples ``` cond do 1 + 1 == 1 -> "This will never match" 2 * 2 != 4 -> "Nor this" true -> "This will" end #=> "This will" ``` ### fn Defines an anonymous function. 
#### Examples ``` iex> add = fn a, b -> a + b end iex> add.(1, 2) 3 ``` Anonymous functions can also have multiple clauses. All clauses should expect the same number of arguments: ``` iex> negate = fn ...> true -> false ...> false -> true ...> end iex> negate.(false) true ``` ### for(args) Comprehensions allow you to quickly build a data structure from an enumerable or a bitstring. Let's start with an example: ``` iex> for n <- [1, 2, 3, 4], do: n * 2 [2, 4, 6, 8] ``` A comprehension accepts many generators and filters. Enumerable generators are defined using `<-`: ``` # A list generator: iex> for n <- [1, 2, 3, 4], do: n * 2 [2, 4, 6, 8] # A comprehension with two generators iex> for x <- [1, 2], y <- [2, 3], do: x * y [2, 3, 4, 6] ``` Filters can also be given: ``` # A comprehension with a generator and a filter iex> for n <- [1, 2, 3, 4, 5, 6], rem(n, 2) == 0, do: n [2, 4, 6] ``` Generators can also be used to filter as it removes any value that doesn't match the pattern on the left side of `<-`: ``` iex> users = [user: "john", admin: "meg", guest: "barbara"] iex> for {type, name} when type != :guest <- users do ...> String.upcase(name) ...> end ["JOHN", "MEG"] ``` Bitstring generators are also supported and are very useful when you need to organize bitstring streams: ``` iex> pixels = <<213, 45, 132, 64, 76, 32, 76, 0, 0, 234, 32, 15>> iex> for <<r::8, g::8, b::8 <- pixels>>, do: {r, g, b} [{213, 45, 132}, {64, 76, 32}, {76, 0, 0}, {234, 32, 15}] ``` Variable assignments inside the comprehension, be it in generators, filters or inside the block, are not reflected outside of the comprehension. #### The `:into` and `:uniq` options In the examples above, the result returned by the comprehension was always a list. The returned result can be configured by passing an `:into` option, that accepts any structure as long as it implements the [`Collectable`](collectable) protocol. For example, we can use bitstring generators with the `:into` option to easily remove all spaces in a string: ``` iex> for <<c <- " hello world ">>, c != ?\s, into: "", do: <<c>> "helloworld" ``` The [`IO`](io) module provides streams, that are both [`Enumerable`](enumerable) and [`Collectable`](collectable), here is an upcase echo server using comprehensions: ``` for line <- IO.stream(:stdio, :line), into: IO.stream(:stdio, :line) do String.upcase(line) end ``` Similarly, `uniq: true` can also be given to comprehensions to guarantee the results are only added to the collection if they were not returned before. For example: ``` iex> for x <- [1, 1, 2, 3], uniq: true, do: x * 2 [2, 4, 6] iex> for <<x <- "abcabc">>, uniq: true, into: "", do: <<x - 32>> "ABC" ``` #### The `:reduce` option While the `:into` option allows us to customize the comprehension behaviour to a given data type, such as putting all of the values inside a map or inside a binary, it is not always enough. For example, imagine that you have a binary with letters where you want to count how many times each lowercase letter happens, ignoring all uppercase ones. For instance, for the string `"AbCabCABc"`, we want to return the map `%{"a" => 1, "b" => 2, "c" => 1}`. If we were to use `:into`, we would need a data type that computes the frequency of each element it holds. While there is no such data type in Elixir, you could implement one yourself. 
A simpler option would be to use comprehensions for the mapping and filtering of letters, and then we invoke [`Enum.reduce/3`](enum#reduce/3) to build a map, for example: ``` iex> letters = for <<x <- "AbCabCABc">>, x in ?a..?z, do: <<x>> iex> Enum.reduce(letters, %{}, fn x, acc -> Map.update(acc, x, 1, & &1 + 1) end) %{"a" => 1, "b" => 2, "c" => 1} ``` While the above is straight-forward, it has the downside of traversing the data at least twice. If you are expecting long strings as inputs, this can be quite expensive. Luckily, comprehensions also support the `:reduce` option, which would allow us to fuse both steps above into a single step: ``` iex> for <<x <- "AbCabCABc">>, x in ?a..?z, reduce: %{} do ...> acc -> Map.update(acc, <<x>>, 1, & &1 + 1) ...> end %{"a" => 1, "b" => 2, "c" => 1} ``` When the `:reduce` key is given, its value is used as the initial accumulator and the `do` block must be changed to use `->` clauses, where the left side of `->` receives the accumulated value of the previous iteration and the expression on the right side must return the new accumulator value. Once there are no more elements, the final accumulated value is returned. If there are no elements at all, then the initial accumulator value is returned. ### import(module, opts) Imports functions and macros from other modules. [`import/2`](#import/2) allows one to easily access functions or macros from other modules without using the qualified name. #### Examples If you are using several functions from a given module, you can import those functions and reference them as local functions, for example: ``` iex> import List iex> flatten([1, [2], 3]) [1, 2, 3] ``` #### Selector By default, Elixir imports functions and macros from the given module, except the ones starting with underscore (which are usually callbacks): ``` import List ``` A developer can filter to import only macros or functions via the only option: ``` import List, only: :functions import List, only: :macros ``` Alternatively, Elixir allows a developer to pass pairs of name/arities to `:only` or `:except` as a fine grained control on what to import (or not): ``` import List, only: [flatten: 1] import String, except: [split: 2] ``` Notice that calling `except` is always exclusive on a previously declared [`import/2`](#import/2). If there is no previous import, then it applies to all functions and macros in the module. For example: ``` import List, only: [flatten: 1, keyfind: 4] import List, except: [flatten: 1] ``` After the two import calls above, only [`List.keyfind/4`](list#keyfind/4) will be imported. #### Underscore functions By default functions starting with `_` are not imported. If you really want to import a function starting with `_` you must explicitly include it in the `:only` selector. ``` import File.Stream, only: [__build__: 3] ``` #### Lexical scope It is important to notice that [`import/2`](#import/2) is lexical. This means you can import specific macros inside specific functions: ``` defmodule Math do def some_function do # 1) Disable "if/2" from Kernel import Kernel, except: [if: 2] # 2) Require the new "if/2" macro from MyMacros import MyMacros # 3) Use the new macro if do_something, it_works end end ``` In the example above, we imported macros from `MyMacros`, replacing the original [`if/2`](kernel#if/2) implementation by our own within that specific function. All other functions in that module will still be able to use the original one. 
#### Warnings If you import a module and you don't use any of the imported functions or macros from this module, Elixir is going to issue a warning implying the import is not being used. In case the import is generated automatically by a macro, Elixir won't emit any warnings though, since the import was not explicitly defined. Both warning behaviours could be changed by explicitly setting the `:warn` option to `true` or `false`. #### Ambiguous function/macro names If two modules `A` and `B` are imported and they both contain a `foo` function with an arity of `1`, an error is only emitted if an ambiguous call to `foo/1` is actually made; that is, the errors are emitted lazily, not eagerly. ### quote(opts, block) Gets the representation of any expression. #### Examples ``` iex> quote do ...> sum(1, 2, 3) ...> end {:sum, [], [1, 2, 3]} ``` #### Elixir's AST (Abstract Syntax Tree) Any Elixir code can be represented using Elixir data structures. The building block of Elixir macros is a tuple with three elements, for example: ``` {:sum, [], [1, 2, 3]} ``` The tuple above represents a function call to `sum` passing 1, 2 and 3 as arguments. The tuple elements are: * The first element of the tuple is always an atom or another tuple in the same representation. * The second element of the tuple represents metadata. * The third element of the tuple are the arguments for the function call. The third argument may be an atom, which is usually a variable (or a local call). Besides the tuple described above, Elixir has a few literals that are also part of its AST. Those literals return themselves when quoted. They are: ``` :sum #=> Atoms 1 #=> Integers 2.0 #=> Floats [1, 2] #=> Lists "strings" #=> Strings {key, value} #=> Tuples with two elements ``` Any other value, such as a map or a four-element tuple, must be escaped ([`Macro.escape/1`](macro#escape/1)) before being introduced into an AST. #### Options * `:unquote` - when `false`, disables unquoting. This means any `unquote` call will be kept as is in the AST, instead of replaced by the `unquote` arguments. For example: ``` iex> quote do ...> unquote("hello") ...> end "hello" iex> quote unquote: false do ...> unquote("hello") ...> end {:unquote, [], ["hello"]} ``` * `:location` - when set to `:keep`, keeps the current line and file from quote. Read the Stacktrace information section below for more information. * `:line` - sets the quoted expressions to have the given line. * `:generated` - marks the given chunk as generated so it does not emit warnings. Currently it only works on special forms (for example, you can annotate a `case` but not an `if`). * `:context` - sets the resolution context. * `:bind_quoted` - passes a binding to the macro. Whenever a binding is given, [`unquote/1`](#unquote/1) is automatically disabled. #### Quote and macros [`quote/2`](#quote/2) is commonly used with macros for code generation. As an exercise, let's define a macro that multiplies a number by itself (squared). In practice, there is no reason to define such a macro (and it would actually be seen as a bad practice), but it is simple enough that it allows us to focus on the important aspects of quotes and macros: ``` defmodule Math do defmacro squared(x) do quote do unquote(x) * unquote(x) end end end ``` We can invoke it as: ``` import Math IO.puts("Got #{squared(5)}") ``` At first, there is nothing in this example that actually reveals it is a macro. But what is happening is that, at compilation time, `squared(5)` becomes `5 * 5`. 
The argument `5` is duplicated in the produced code. We can see this behaviour in practice because our macro actually has a bug: ``` import Math my_number = fn -> IO.puts("Returning 5") 5 end IO.puts("Got #{squared(my_number.())}") ``` The example above will print: ``` Returning 5 Returning 5 Got 25 ``` Notice how "Returning 5" was printed twice, instead of just once. This is because a macro receives an expression and not a value (which is what we would expect in a regular function). This means that: ``` squared(my_number.()) ``` Actually expands to: ``` my_number.() * my_number.() ``` Which invokes the function twice, explaining why we get the printed value twice! In the majority of cases, this is actually unexpected behaviour, and that's why one of the first things you need to keep in mind when it comes to macros is to **not unquote the same value more than once**. Let's fix our macro: ``` defmodule Math do defmacro squared(x) do quote do x = unquote(x) x * x end end end ``` Now invoking `squared(my_number.())` as before will print the value just once. In fact, this pattern is so common that most of the time you will want to use the `bind_quoted` option with [`quote/2`](#quote/2): ``` defmodule Math do defmacro squared(x) do quote bind_quoted: [x: x] do x * x end end end ``` `:bind_quoted` will translate to the same code as the example above. `:bind_quoted` can be used in many cases and is seen as good practice, not only because it helps prevent us from running into common mistakes, but also because it allows us to leverage other tools exposed by macros, such as unquote fragments discussed in some sections below. Before we finish this brief introduction, you will notice that, even though we defined a variable `x` inside our quote: ``` quote do x = unquote(x) x * x end ``` When we call: ``` import Math squared(5) x #=> ** (CompileError) undefined variable x or undefined function x/0 ``` We can see that `x` did not leak to the user context. This happens because Elixir macros are hygienic, a topic we will discuss at length in the next sections as well. #### Hygiene in variables Consider the following example: ``` defmodule Hygiene do defmacro no_interference do quote do a = 1 end end end require Hygiene a = 10 Hygiene.no_interference() a #=> 10 ``` In the example above, `a` returns 10 even though the macro apparently sets it to 1, because variables defined in the macro do not affect the context the macro is executed in. If you want to set or get a variable in the caller's context, you can do it with the help of the `var!` macro: ``` defmodule NoHygiene do defmacro interference do quote do var!(a) = 1 end end end require NoHygiene a = 10 NoHygiene.interference() a #=> 1 ``` You cannot even access variables defined in the same module unless you explicitly give them a context: ``` defmodule Hygiene do defmacro write do quote do a = 1 end end defmacro read do quote do a end end end Hygiene.write() Hygiene.read() #=> ** (RuntimeError) undefined variable a or undefined function a/0 ``` For such cases, you can explicitly pass the current module scope as argument: ``` defmodule ContextHygiene do defmacro write do quote do var!(a, ContextHygiene) = 1 end end defmacro read do quote do var!(a, ContextHygiene) end end end ContextHygiene.write() ContextHygiene.read() #=> 1 ``` #### Hygiene in aliases Aliases inside quote are hygienic by default.
Consider the following example: ``` defmodule Hygiene do alias Map, as: M defmacro no_interference do quote do M.new() end end end require Hygiene Hygiene.no_interference() #=> %{} ``` Notice that, even though the alias `M` is not available in the context where the macro is expanded, the code above works because `M` still expands to [`Map`](map). Similarly, even if we defined an alias with the same name before invoking a macro, it won't affect the macro's result: ``` defmodule Hygiene do alias Map, as: M defmacro no_interference do quote do M.new() end end end require Hygiene alias SomethingElse, as: M Hygiene.no_interference() #=> %{} ``` In some cases, you want to access an alias or a module defined in the caller. For such, you can use the `alias!` macro: ``` defmodule Hygiene do # This will expand to Elixir.Nested.hello() defmacro no_interference do quote do Nested.hello() end end # This will expand to Nested.hello() for # whatever is Nested in the caller defmacro interference do quote do alias!(Nested).hello() end end end defmodule Parent do defmodule Nested do def hello, do: "world" end require Hygiene Hygiene.no_interference() #=> ** (UndefinedFunctionError) ... Hygiene.interference() #=> "world" end ``` #### Hygiene in imports Similar to aliases, imports in Elixir are hygienic. Consider the following code: ``` defmodule Hygiene do defmacrop get_length do quote do length([1, 2, 3]) end end def return_length do import Kernel, except: [length: 1] get_length end end Hygiene.return_length() #=> 3 ``` Notice how `Hygiene.return_length/0` returns `3` even though the [`Kernel.length/1`](kernel#length/1) function is not imported. In fact, even if `return_length/0` imported a function with the same name and arity from another module, it wouldn't affect the function result: ``` def return_length do import String, only: [length: 1] get_length end ``` Calling this new `return_length/0` will still return `3` as its result. Elixir is smart enough to delay the resolution to the latest possible moment. So, if you call `length([1, 2, 3])` inside quote, but no [`length/1`](kernel#length/1) function is available, it is then expanded in the caller: ``` defmodule Lazy do defmacrop get_length do import Kernel, except: [length: 1] quote do length("hello") end end def return_length do import Kernel, except: [length: 1] import String, only: [length: 1] get_length end end Lazy.return_length() #=> 5 ``` #### Stacktrace information When defining functions via macros, developers have the option of choosing whether runtime errors will be reported from the caller or from inside the quote. Let's see an example: ``` # adder.ex defmodule Adder do @doc "Defines a function that adds two numbers" defmacro defadd do quote location: :keep do def add(a, b), do: a + b end end end # sample.ex defmodule Sample do import Adder defadd end require Sample Sample.add(:one, :two) #=> ** (ArithmeticError) bad argument in arithmetic expression #=> adder.ex:5: Sample.add/2 ``` When using `location: :keep` and invalid arguments are given to `Sample.add/2`, the stacktrace information will point to the file and line inside the quote. Without `location: :keep`, the error is reported to where `defadd` was invoked. `location: :keep` affects only definitions inside the quote. #### Binding and unquote fragments Elixir quote/unquote mechanisms provide a functionality called unquote fragments. Unquote fragments provide an easy way to generate functions on the fly.
Consider this example: ``` kv = [foo: 1, bar: 2] Enum.each(kv, fn {k, v} -> def unquote(k)(), do: unquote(v) end) ``` In the example above, we have generated the functions `foo/0` and `bar/0` dynamically. Now, imagine that we want to convert this functionality into a macro: ``` defmacro defkv(kv) do Enum.map(kv, fn {k, v} -> quote do def unquote(k)(), do: unquote(v) end end) end ``` We can invoke this macro as: ``` defkv [foo: 1, bar: 2] ``` However, we can't invoke it as follows: ``` kv = [foo: 1, bar: 2] defkv kv ``` This is because the macro is expecting its arguments to be a keyword list at **compilation** time. Since in the example above we are passing the representation of the variable `kv`, our code fails. This is actually a common pitfall when developing macros. We are assuming a particular shape in the macro. We can work around it by unquoting the variable inside the quoted expression: ``` defmacro defkv(kv) do quote do Enum.each(unquote(kv), fn {k, v} -> def unquote(k)(), do: unquote(v) end) end end ``` If you try to run our new macro, you will notice it won't even compile, complaining that the variables `k` and `v` do not exist. This is because of the ambiguity: `unquote(k)` can either be an unquote fragment, as previously, or a regular unquote as in `unquote(kv)`. One solution to this problem is to disable unquoting in the macro; however, doing that would make it impossible to inject the `kv` representation into the tree. That's when the `:bind_quoted` option comes to the rescue (again!). By using `:bind_quoted`, we can automatically disable unquoting while still injecting the desired variables into the tree: ``` defmacro defkv(kv) do quote bind_quoted: [kv: kv] do Enum.each(kv, fn {k, v} -> def unquote(k)(), do: unquote(v) end) end end ``` In fact, the `:bind_quoted` option is recommended every time one desires to inject a value into the quote. ### receive(args) Checks if there is a message matching the given clauses in the current process mailbox. If there is no such message, the current process waits until a matching message arrives or until the given timeout expires. #### Examples ``` receive do {:selector, number, name} when is_integer(number) -> name name when is_atom(name) -> name _ -> IO.puts(:stderr, "Unexpected message received") end ``` An optional `after` clause can be given in case the message was not received after the given timeout period, specified in milliseconds: ``` receive do {:selector, number, name} when is_integer(number) -> name name when is_atom(name) -> name _ -> IO.puts(:stderr, "Unexpected message received") after 5000 -> IO.puts(:stderr, "No message in 5 seconds") end ``` The `after` clause can be specified even if there are no match clauses. The timeout value given to `after` can be any expression evaluating to one of the allowed values: * `:infinity` - the process should wait indefinitely for a matching message, this is the same as not using the after clause * `0` - if there is no matching message in the mailbox, the timeout will occur immediately * positive integer smaller than or equal to `4_294_967_295` (`0xFFFFFFFF` in hexadecimal notation) - it should be possible to represent the timeout value as an unsigned 32-bit integer. #### Variable handling The [`receive/1`](#receive/1) special form handles variables exactly as the [`case/2`](#case/2) special macro. For more information, check the docs for [`case/2`](#case/2). ### require(module, opts) Requires a module in order to use its macros.
#### Examples Public functions in modules are globally available, but in order to use macros, you need to opt-in by requiring the module they are defined in. Let's suppose you created your own [`if/2`](kernel#if/2) implementation in the module `MyMacros`. If you want to invoke it, you need to first explicitly require the `MyMacros` module: ``` defmodule Math do require MyMacros MyMacros.if do_something, it_works end ``` An attempt to call a macro that was not loaded will raise an error. #### Alias shortcut [`require/2`](#require/2) also accepts `:as` as an option so it automatically sets up an alias. Please check [`alias/2`](#alias/2) for more information. ### super(args) Calls the overridden function when overriding it with [`Kernel.defoverridable/1`](kernel#defoverridable/1). See [`Kernel.defoverridable/1`](kernel#defoverridable/1) for more information and documentation. ### try(args) Evaluates the given expressions and handles any error, exit, or throw that may have happened. #### Examples ``` try do do_something_that_may_fail(some_arg) rescue ArgumentError -> IO.puts("Invalid argument given") catch value -> IO.puts("Caught #{inspect(value)}") else value -> IO.puts("Success! The result was #{inspect(value)}") after IO.puts("This is printed regardless if it failed or succeeded") end ``` The `rescue` clause is used to handle exceptions while the `catch` clause can be used to catch thrown values and exits. The `else` clause can be used to control flow based on the result of the expression. `catch`, `rescue`, and `else` clauses work based on pattern matching (similar to the `case` special form). Calls inside [`try/1`](#try/1) are not tail recursive since the VM needs to keep the stacktrace in case an exception happens. To retrieve the stacktrace, access [`__STACKTRACE__/0`](#__STACKTRACE__/0) inside the `rescue` or `catch` clause. #### `rescue` clauses Besides relying on pattern matching, `rescue` clauses provide some conveniences around exceptions that allow one to rescue an exception by its name. All the following formats are valid patterns in `rescue` clauses: ``` # Rescue a single exception without binding the exception # to a variable try do UndefinedModule.undefined_function rescue UndefinedFunctionError -> nil end # Rescue any of the given exceptions without binding try do UndefinedModule.undefined_function rescue [UndefinedFunctionError, ArgumentError] -> nil end # Rescue and bind the exception to the variable "x" try do UndefinedModule.undefined_function rescue x in [UndefinedFunctionError] -> nil end # Rescue all kinds of exceptions and bind the rescued exception # to the variable "x" try do UndefinedModule.undefined_function rescue x -> nil end ``` #### Erlang errors Erlang errors are transformed into Elixir ones when rescuing: ``` try do :erlang.error(:badarg) rescue ArgumentError -> :ok end #=> :ok ``` The most common Erlang errors will be transformed into their Elixir counterpart. Those which are not will be transformed into the more generic [`ErlangError`](erlangerror): ``` try do :erlang.error(:unknown) rescue ErlangError -> :ok end #=> :ok ``` In fact, [`ErlangError`](erlangerror) can be used to rescue any error that is not a proper Elixir error. For example, it can be used to rescue the earlier `:badarg` error too, prior to transformation: ``` try do :erlang.error(:badarg) rescue ErlangError -> :ok end #=> :ok ``` #### `catch` clauses The `catch` clause can be used to catch thrown values, exits, and errors.
#### Catching thrown values `catch` can be used to catch values thrown by [`Kernel.throw/1`](kernel#throw/1): ``` try do throw(:some_value) catch thrown_value -> IO.puts("A value was thrown: #{inspect(thrown_value)}") end ``` #### Catching values of any kind The `catch` clause also supports catching exits and errors. To do that, it allows matching on both the *kind* of the caught value as well as the value itself: ``` try do exit(:shutdown) catch :exit, value -> IO.puts("Exited with value #{inspect(value)}") end try do exit(:shutdown) catch kind, value when kind in [:exit, :throw] -> IO.puts("Caught exit or throw with value #{inspect(value)}") end ``` The `catch` clause also supports `:error` alongside `:exit` and `:throw` as in Erlang, although this is commonly avoided in favor of `raise`/`rescue` control mechanisms. One reason for this is that when catching `:error`, the error is not automatically transformed into an Elixir error: ``` try do :erlang.error(:badarg) catch :error, :badarg -> :ok end #=> :ok ``` #### `after` clauses An `after` clause allows you to define cleanup logic that will be invoked both when the block of code passed to [`try/1`](#try/1) succeeds and also when an error is raised. Note that the process will exit as usual when receiving an exit signal that causes it to exit abruptly, and so the `after` clause is not guaranteed to be executed. Luckily, most resources in Elixir (such as open files, ETS tables, ports, sockets, and so on) are linked to or monitor the owning process and will automatically clean themselves up if that process exits. ``` File.write!("tmp/story.txt", "Hello, World") try do do_something_with("tmp/story.txt") after File.rm("tmp/story.txt") end ``` #### `else` clauses `else` clauses allow the result of the body passed to [`try/1`](#try/1) to be pattern matched on: ``` x = 2 try do 1 / x rescue ArithmeticError -> :infinity else y when y < 1 and y > -1 -> :small _ -> :large end ``` If an `else` clause is not present and no exceptions are raised, the result of the expression will be returned: ``` x = 1 ^x = try do 1 / x rescue ArithmeticError -> :infinity end ``` However, when an `else` clause is present but the result of the expression does not match any of the patterns then an exception will be raised. This exception will not be caught by a `catch` or `rescue` in the same `try`: ``` x = 1 try do try do 1 / x rescue # The TryClauseError cannot be rescued here: TryClauseError -> :error_a else 0 -> :small end rescue # The TryClauseError is rescued here: TryClauseError -> :error_b end ``` Similarly, an exception inside an `else` clause is not caught or rescued inside the same `try`: ``` try do try do nil catch # The exit(1) call below cannot be caught here: :exit, _ -> :exit_a else _ -> exit(1) end catch # The exit is caught here: :exit, _ -> :exit_b end ``` This means the VM no longer needs to keep the stacktrace once inside an `else` clause and so tail recursion is possible when using a `try` with a tail call as the final call inside an `else` clause. The same is true for `rescue` and `catch` clauses. Only the result of the tried expression falls down to the `else` clause. If the `try` ends up in the `rescue` or `catch` clauses, their result will not fall down to `else`: ``` try do throw(:catch_this) catch :throw, :catch_this -> :it_was_caught else # :it_was_caught will not fall down to this "else" clause.
other -> {:else, other} end ``` #### Variable handling Since an expression inside `try` may not have been evaluated due to an exception, any variable created inside `try` cannot be accessed externally. For instance: ``` try do x = 1 do_something_that_may_fail(some_arg) :ok catch _, _ -> :failed end x #=> unbound variable "x" ``` In the example above, `x` cannot be accessed since it was defined inside the `try` clause. A common practice to address this issue is to return the variables defined inside `try`: ``` x = try do x = 1 do_something_that_may_fail(some_arg) x catch _, _ -> :failed end ``` ### unquote(expr) Unquotes the given expression inside a quoted expression. This function expects a valid Elixir AST, also known as a quoted expression, as argument. If you would like to `unquote` any value, such as a map or a four-element tuple, you should call [`Macro.escape/1`](macro#escape/1) before unquoting. #### Examples Imagine a situation where you have a quoted expression that you want to inject inside some other quote. The first attempt would be: ``` value = quote do 13 end quote do sum(1, value, 3) end ``` Which would then return: ``` {:sum, [], [1, {:value, [], Elixir}, 3]} ``` Which is not the expected result. For this, we use `unquote`: ``` iex> value = ...> quote do ...> 13 ...> end iex> quote do ...> sum(1, unquote(value), 3) ...> end {:sum, [], [1, 13, 3]} ``` If you want to unquote a value that is not a quoted expression, such as a map, you need to call [`Macro.escape/1`](macro#escape/1) before: ``` iex> value = %{foo: :bar} iex> quote do ...> process_map(unquote(Macro.escape(value))) ...> end {:process_map, [], [{:%{}, [], [foo: :bar]}]} ``` If you forget to escape it, Elixir will raise an error when compiling the code. ### unquote\_splicing(expr) Unquotes the given list, expanding its arguments. Similar to [`unquote/1`](#unquote/1). #### Examples ``` iex> values = [2, 3, 4] iex> quote do ...> sum(1, unquote_splicing(values), 5) ...> end {:sum, [], [1, 2, 3, 4, 5]} ``` ### with(args) Used to combine matching clauses. Let's start with an example: ``` iex> opts = %{width: 10, height: 15} iex> with {:ok, width} <- Map.fetch(opts, :width), ...> {:ok, height} <- Map.fetch(opts, :height) do ...> {:ok, width * height} ...> end {:ok, 150} ``` If all clauses match, the `do` block is executed, returning its result. Otherwise the chain is aborted and the non-matched value is returned: ``` iex> opts = %{width: 10} iex> with {:ok, width} <- Map.fetch(opts, :width), ...> {:ok, height} <- Map.fetch(opts, :height) do ...> {:ok, width * height} ...> end :error ``` Guards can be used in patterns as well: ``` iex> users = %{"melany" => "guest", "bob" => :admin} iex> with {:ok, role} when not is_binary(role) <- Map.fetch(users, "bob") do ...> {:ok, to_string(role)} ...> end {:ok, "admin"} ``` As in [`for/1`](#for/1), variables bound inside [`with/1`](#with/1) won't leak. Expressions without `<-` may also be used in clauses. For instance, you can perform regular matches with the `=` operator: ``` iex> width = nil iex> opts = %{width: 10, height: 15} iex> with {:ok, width} <- Map.fetch(opts, :width), ...> double_width = width * 2, ...> {:ok, height} <- Map.fetch(opts, :height) do ...> {:ok, double_width * height} ...> end {:ok, 300} iex> width nil ``` The behaviour of any expression in a clause is the same as outside.
For example, `=` will raise a [`MatchError`](matcherror) instead of returning the non-matched value: ``` with :foo = :bar, do: :ok #=> ** (MatchError) no match of right hand side value: :bar ``` As with any other function or macro call in Elixir, explicit parens can also be used around the arguments before the `do`/`end` block: ``` iex> opts = %{width: 10, height: 15} iex> with( ...> {:ok, width} <- Map.fetch(opts, :width), ...> {:ok, height} <- Map.fetch(opts, :height) ...> ) do ...> {:ok, width * height} ...> end {:ok, 150} ``` The choice between parens and no parens is a matter of preference. An `else` option can be given to modify what is being returned from `with` in the case of a failed match: ``` iex> opts = %{width: 10} iex> with {:ok, width} <- Map.fetch(opts, :width), ...> {:ok, height} <- Map.fetch(opts, :height) do ...> {:ok, width * height} ...> else ...> :error -> ...> {:error, :wrong_data} ...> end {:error, :wrong_data} ``` If an `else` block is used and there are no matching clauses, a [`WithClauseError`](withclauseerror) exception is raised. ### {args} Creates a tuple. More information about the tuple data type and about functions to manipulate tuples can be found in the [`Tuple`](tuple) module; some functions for working with tuples are also available in [`Kernel`](kernel) (such as [`Kernel.elem/2`](kernel#elem/2) or [`Kernel.tuple_size/1`](kernel#tuple_size/1)). #### AST representation Only two-element tuples are considered literals in Elixir and return themselves when quoted. Therefore, all other tuples are represented in the AST as calls to the `:{}` special form. ``` iex> quote do ...> {1, 2} ...> end {1, 2} iex> quote do ...> {1, 2, 3} ...> end {:{}, [], [1, 2, 3]} ```
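For completeness, a quick sketch of the [`Kernel`](kernel) helpers mentioned above:

```
iex> tuple = {:ok, "hello"}
iex> elem(tuple, 1)
"hello"
iex> tuple_size(tuple)
2
```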
elixir Elixir Elixir ====== Elixir v1.9.4 API Reference ============================ Modules -------- [Access](access) Key-based access to data structures. [Agent](agent) Agents are a simple abstraction around state. [Application](application) A module for working with applications and defining application callbacks. [ArgumentError](argumenterror) [ArithmeticError](arithmeticerror) [Atom](atom) Convenience functions for working with atoms. [BadArityError](badarityerror) [BadBooleanError](badbooleanerror) [BadFunctionError](badfunctionerror) [BadMapError](badmaperror) [BadStructError](badstructerror) [Base](base) This module provides data encoding and decoding functions according to [RFC 4648](https://tools.ietf.org/html/rfc4648). [Behaviour](behaviour) deprecated Mechanism for handling behaviours. [Bitwise](bitwise) A set of macros that perform calculations on bits. [Calendar](calendar) This module defines the responsibilities for working with calendars, dates, times and datetimes in Elixir. [Calendar.ISO](calendar.iso) A calendar implementation that follows ISO 8601. [Calendar.TimeZoneDatabase](calendar.timezonedatabase) This module defines a behaviour for providing time zone data. [Calendar.UTCOnlyTimeZoneDatabase](calendar.utconlytimezonedatabase) Built-in time zone database that works only in Etc/UTC. [CaseClauseError](caseclauseerror) [Code](code) Utilities for managing code compilation, code evaluation, and code loading. [Code.LoadError](code.loaderror) [Collectable](collectable) A protocol to traverse data structures. [CompileError](compileerror) [CondClauseError](condclauseerror) [Config](config) A simple keyword-based configuration API. [Config.Provider](config.provider) Specifies a provider API that loads configuration during boot. [Config.Reader](config.reader) API for reading config files defined with [`Config`](config). [Date](date) A Date struct and functions. [Date.Range](date.range) Returns an inclusive range between dates. [DateTime](datetime) A datetime implementation with a time zone. [Dict](dict) deprecated Generic API for dictionaries. [DynamicSupervisor](dynamicsupervisor) A supervisor that starts children dynamically. [Enum](enum) Provides a set of algorithms to work with enumerables. [Enum.EmptyError](enum.emptyerror) [Enum.OutOfBoundsError](enum.outofboundserror) [Enumerable](enumerable) Enumerable protocol used by [`Enum`](enum) and [`Stream`](stream) modules. [ErlangError](erlangerror) [Exception](exception) Functions to format throw/catch/exit and exceptions. [File](file) This module contains functions to manipulate files. [File.CopyError](file.copyerror) [File.Error](file.error) [File.LinkError](file.linkerror) [File.RenameError](file.renameerror) [File.Stat](file.stat) A struct that holds file information. [File.Stream](file.stream) Defines a [`File.Stream`](#content) struct returned by [`File.stream!/3`](file#stream!/3). [Float](float) Functions for working with floating-point numbers. [Function](function) A set of functions for working with functions. [FunctionClauseError](functionclauseerror) [GenEvent](genevent) deprecated An event manager with event handlers behaviour. [GenServer](genserver) A behaviour module for implementing the server of a client-server relation. [HashDict](hashdict) deprecated Tuple-based HashDict implementation. [HashSet](hashset) deprecated Tuple-based HashSet implementation. [IO](io) Functions handling input/output (IO). [IO.ANSI](io.ansi) Functionality to render ANSI escape sequences.
[IO.Stream](io.stream) Defines an [`IO.Stream`](#content) struct returned by [`IO.stream/2`](io#stream/2) and [`IO.binstream/2`](io#binstream/2). [IO.StreamError](io.streamerror) [Inspect](inspect) The [`Inspect`](#content) protocol converts an Elixir data structure into an algebra document. [Inspect.Algebra](inspect.algebra) A set of functions for creating and manipulating algebra documents. [Inspect.Error](inspect.error) Raised when a struct cannot be inspected. [Inspect.Opts](inspect.opts) Defines the options used by the [`Inspect`](inspect) protocol. [Integer](integer) Functions for working with integers. [Kernel](kernel) [`Kernel`](#content) is Elixir's default environment. [Kernel.ParallelCompiler](kernel.parallelcompiler) A module responsible for compiling and requiring files in parallel. [Kernel.SpecialForms](kernel.specialforms) Special forms are the basic building blocks of Elixir, and therefore cannot be overridden by the developer. [KeyError](keyerror) [Keyword](keyword) A set of functions for working with keywords. [List](list) Functions that work on (linked) lists. [List.Chars](list.chars) The [`List.Chars`](#content) protocol is responsible for converting a structure to a charlist (only if applicable). [Macro](macro) Conveniences for working with macros. [Macro.Env](macro.env) A struct that holds compile time environment information. [Map](map) A set of functions for working with maps. [MapSet](mapset) Functions that work on sets. [MatchError](matcherror) [Module](module) Provides functions to deal with modules during compilation time. [NaiveDateTime](naivedatetime) A NaiveDateTime struct (without a time zone) and functions. [Node](node) Functions related to VM nodes. [OptionParser](optionparser) Functions for parsing command line arguments. [OptionParser.ParseError](optionparser.parseerror) [Path](path) This module provides conveniences for manipulating or retrieving file system paths. [Port](port) Functions for interacting with the external world through ports. [Process](process) Conveniences for working with processes and the process dictionary. [Protocol](protocol) Reference and functions for working with protocols. [Protocol.UndefinedError](protocol.undefinederror) [Range](range) Defines a range. [Record](record) Module to work with, define, and import records. [Regex](regex) Provides regular expressions for Elixir. [Regex.CompileError](regex.compileerror) [Registry](registry) A local, decentralized and scalable key-value process storage. [RuntimeError](runtimeerror) [Set](set) deprecated Generic API for sets. [Stream](stream) Functions for creating and composing streams. [String](string) A String in Elixir is a UTF-8 encoded binary. [String.Chars](string.chars) The [`String.Chars`](#content) protocol is responsible for converting a structure to a binary (only if applicable). [StringIO](stringio) Controls an IO device process that wraps a string. [Supervisor](supervisor) A behaviour module for implementing supervisors. [Supervisor.Spec](supervisor.spec) deprecated Outdated functions for building child specifications. [SyntaxError](syntaxerror) [System](system) The [`System`](#content) module provides functions that interact directly with the VM or the host system. [SystemLimitError](systemlimiterror) [Task](task) Conveniences for spawning and awaiting tasks. [Task.Supervisor](task.supervisor) A task supervisor. [Time](time) A Time struct and functions. 
[TokenMissingError](tokenmissingerror) [TryClauseError](tryclauseerror) [Tuple](tuple) Functions for working with tuples. [URI](uri) Utilities for working with URIs. [UndefinedFunctionError](undefinedfunctionerror) [UnicodeConversionError](unicodeconversionerror) [Version](version) Functions for parsing and matching versions against requirements. [Version.InvalidRequirementError](version.invalidrequirementerror) [Version.InvalidVersionError](version.invalidversionerror) [Version.Requirement](version.requirement) A struct that holds version requirement information. [WithClauseError](withclauseerror) elixir mix help mix help ========= Lists all tasks and aliases or prints the documentation for a given task or alias. Arguments ---------- ``` mix help - prints all aliases, tasks and their short descriptions mix help ALIAS - prints the definition for the given alias mix help TASK - prints full docs for the given task mix help --search PATTERN - prints all tasks and aliases that contain PATTERN in the name mix help --names - prints all task names and aliases (useful for autocompleting) ``` Colors ------- When possible, [`mix help`](#content) is going to use coloring for formatting guides. The formatting can be customized by configuring the Mix application either inside your project (in `config/config.exs`) or by using the local config (in `~/.mix/config.exs`). For example, to disable color, one may use the configuration: ``` [mix: [colors: [enabled: false]]] ``` The available color options are: * `:enabled` - shows ANSI formatting (defaults to [`IO.ANSI.enabled?/0`](https://hexdocs.pm/elixir/IO.ANSI.html#enabled?/0)) * `:doc_code` - the attributes for code blocks (cyan, bright) * `:doc_inline_code` - inline code (cyan) * `:doc_headings` - h1 and h2 (yellow, bright) * `:doc_title` - the overall heading for the output (reverse, yellow, bright) * `:doc_bold` - (bright) * `:doc_underline` - (underline) elixir Enumerable protocol Enumerable protocol ==================== Enumerable protocol used by [`Enum`](enum) and [`Stream`](stream) modules. When you invoke a function in the [`Enum`](enum) module, the first argument is usually a collection that must implement this protocol. For example, the expression: ``` Enum.map([1, 2, 3], &(&1 * 2)) ``` invokes [`Enumerable.reduce/3`](enumerable#reduce/3) to perform the reducing operation that builds a mapped list by calling the mapping function `&(&1 * 2)` on every element in the collection and consuming the element with an accumulated list. Internally, [`Enum.map/2`](enum#map/2) is implemented as follows: ``` def map(enumerable, fun) do reducer = fn x, acc -> {:cont, [fun.(x) | acc]} end Enumerable.reduce(enumerable, {:cont, []}, reducer) |> elem(1) |> :lists.reverse() end ``` Notice the user-supplied function is wrapped into a [`reducer/0`](#t:reducer/0) function. The [`reducer/0`](#t:reducer/0) function must return a tagged tuple after each step, as described in the [`acc/0`](#t:acc/0) type. At the end, [`Enumerable.reduce/3`](enumerable#reduce/3) returns [`result/0`](#t:result/0). This protocol uses tagged tuples to exchange information between the reducer function and the data type that implements the protocol. This allows enumeration of resources, such as files, to be done efficiently while also guaranteeing the resource will be closed at the end of the enumeration. This protocol also allows suspension of the enumeration, which is useful when interleaving between many enumerables is required (as in the `zip/1` and `zip/2` functions). 
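As a small sketch of these tagged tuples in action, here is a reduce over a list that halts as soon as it sees an element greater than or equal to 3, relying on the built-in list implementation of the protocol:

```
iex> Enumerable.reduce([1, 2, 3, 4], {:cont, []}, fn
...>   x, acc when x < 3 -> {:cont, [x | acc]}
...>   _x, acc -> {:halt, acc}
...> end)
{:halted, [2, 1]}
```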
This protocol requires four functions to be implemented, [`reduce/3`](#reduce/3), [`count/1`](#count/1), [`member?/2`](#member?/2), and [`slice/1`](#slice/1). The core of the protocol is the [`reduce/3`](#reduce/3) function. All other functions exist as optimization paths for data structures that can implement certain operations in better than linear time. Summary ======== Types ------ [acc()](#t:acc/0) The accumulator value for each step. [continuation()](#t:continuation/0) A partially applied reduce function. [reducer()](#t:reducer/0) The reducer function. [result()](#t:result/0) The result of the reduce operation. [slicing\_fun()](#t:slicing_fun/0) A slicing function that receives the initial position and the number of elements in the slice. [t()](#t:t/0) Functions ---------- [count(enumerable)](#count/1) Retrieves the number of elements in the `enumerable`. [member?(enumerable, element)](#member?/2) Checks if an `element` exists within the `enumerable`. [reduce(enumerable, acc, fun)](#reduce/3) Reduces the `enumerable` into an element. [slice(enumerable)](#slice/1) Returns a function that slices the data structure contiguously. Types ====== ### acc() #### Specs ``` acc() :: {:cont, term()} | {:halt, term()} | {:suspend, term()} ``` The accumulator value for each step. It must be a tagged tuple with one of the following "tags": * `:cont` - the enumeration should continue * `:halt` - the enumeration should halt immediately * `:suspend` - the enumeration should be suspended immediately Depending on the accumulator value, the result returned by [`Enumerable.reduce/3`](enumerable#reduce/3) will change. Please check the [`result/0`](#t:result/0) type documentation for more information. In case a [`reducer/0`](#t:reducer/0) function returns a `:suspend` accumulator, it must be explicitly handled by the caller and never leak. ### continuation() #### Specs ``` continuation() :: (acc() -> result()) ``` A partially applied reduce function. The continuation is the closure returned as a result when the enumeration is suspended. When invoked, it expects a new accumulator and it returns the result. A continuation can be trivially implemented as long as the reduce function is defined in a tail recursive fashion. If the function is tail recursive, all the state is passed as arguments, so the continuation is the reducing function partially applied. ### reducer() #### Specs ``` reducer() :: (term(), term() -> acc()) ``` The reducer function. Should be called with the `enumerable` element and the accumulator contents. Returns the accumulator for the next enumeration step. ### result() #### Specs ``` result() :: {:done, term()} | {:halted, term()} | {:suspended, term(), continuation()} ``` The result of the reduce operation. It may be *done* when the enumeration is finished by reaching its end, or *halted*/*suspended* when the enumeration was halted or suspended by the [`reducer/0`](#t:reducer/0) function. In case a [`reducer/0`](#t:reducer/0) function returns the `:suspend` accumulator, the `:suspended` tuple must be explicitly handled by the caller and never leak. In practice, this means regular enumeration functions just need to be concerned about `:done` and `:halted` results. Furthermore, a `:suspend` call must always be followed by another call, eventually halting or continuing until the end.
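A sketch of what suspension looks like from the caller's side, using a reducer that suspends after every element (the continuation in the output is abbreviated here):

```
iex> {:suspended, acc, cont} =
...>   Enumerable.reduce([1, 2, 3], {:cont, []}, fn x, acc -> {:suspend, [x | acc]} end)
iex> acc
[1]
iex> cont.({:cont, acc})
{:suspended, [2, 1], #Function<...>}
```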
### slicing\_fun() #### Specs ``` slicing_fun() :: (start :: non_neg_integer(), length :: pos_integer() -> [term()]) ``` A slicing function that receives the initial position and the number of elements in the slice. The `start` position is a number `>= 0` and guaranteed to exist in the `enumerable`. The length is a number `>= 1` in a way that `start + length <= count`, where `count` is the maximum amount of elements in the enumerable. The function should return a non-empty list where the amount of elements is equal to `length`. ### t() #### Specs ``` t() :: term() ``` Functions ========== ### count(enumerable) #### Specs ``` count(t()) :: {:ok, non_neg_integer()} | {:error, module()} ``` Retrieves the number of elements in the `enumerable`. It should return `{:ok, count}` if you can count the number of elements in the `enumerable`. Otherwise it should return `{:error, __MODULE__}` and a default algorithm built on top of [`reduce/3`](#reduce/3) that runs in linear time will be used. ### member?(enumerable, element) #### Specs ``` member?(t(), term()) :: {:ok, boolean()} | {:error, module()} ``` Checks if an `element` exists within the `enumerable`. It should return `{:ok, boolean}` if you can check the membership of a given element in the `enumerable` with [`===/2`](kernel#===/2) without traversing the whole enumerable. Otherwise it should return `{:error, __MODULE__}` and a default algorithm built on top of [`reduce/3`](#reduce/3) that runs in linear time will be used. ### reduce(enumerable, acc, fun) #### Specs ``` reduce(t(), acc(), reducer()) :: result() ``` Reduces the `enumerable` into an element. Most of the operations in [`Enum`](enum) are implemented in terms of reduce. This function should apply the given [`reducer/0`](#t:reducer/0) function to each element in the `enumerable` and proceed as expected by the returned accumulator. See the documentation of the types [`result/0`](#t:result/0) and [`acc/0`](#t:acc/0) for more information. #### Examples As an example, here is the implementation of `reduce` for lists: ``` def reduce(_list, {:halt, acc}, _fun), do: {:halted, acc} def reduce(list, {:suspend, acc}, fun), do: {:suspended, acc, &reduce(list, &1, fun)} def reduce([], {:cont, acc}, _fun), do: {:done, acc} def reduce([head | tail], {:cont, acc}, fun), do: reduce(tail, fun.(head, acc), fun) ``` ### slice(enumerable) #### Specs ``` slice(t()) :: {:ok, size :: non_neg_integer(), slicing_fun()} | {:error, module()} ``` Returns a function that slices the data structure contiguously. It should return `{:ok, size, slicing_fun}` if the `enumerable` has a known bound and can access a position in the `enumerable` without traversing all previous elements. Otherwise it should return `{:error, __MODULE__}` and a default algorithm built on top of [`reduce/3`](#reduce/3) that runs in linear time will be used. #### Differences to [`count/1`](#count/1) The `size` value returned by this function is used for boundary checks, therefore it is extremely important that this function only returns `:ok` if retrieving the `size` of the `enumerable` is cheap, fast and takes constant time. Otherwise the simplest of operations, such as `Enum.at(enumerable, 0)`, will become too expensive. On the other hand, the [`count/1`](#count/1) function in this protocol should be implemented whenever you can count the number of elements in the collection.
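To tie the four functions together, here is a sketch of implementing the protocol for a hypothetical two-element `Pair` struct, falling back to the default linear-time algorithms for everything except [`reduce/3`](#reduce/3):

```
defmodule Pair do
  defstruct [:first, :second]
end

defimpl Enumerable, for: Pair do
  # There is no cheap way to answer these directly, so fall back
  # to the default algorithms built on top of reduce/3
  def count(_pair), do: {:error, __MODULE__}
  def member?(_pair, _value), do: {:error, __MODULE__}
  def slice(_pair), do: {:error, __MODULE__}

  # Delegate the actual enumeration to the list implementation
  def reduce(%Pair{first: first, second: second}, acc, fun) do
    Enumerable.reduce([first, second], acc, fun)
  end
end

Enum.map(%Pair{first: 1, second: 2}, &(&1 * 10))
#=> [10, 20]
```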
elixir Calendar behaviour Calendar behaviour =================== This module defines the responsibilities for working with calendars, dates, times and datetimes in Elixir. Currently it defines types and the minimal implementation for a calendar behaviour in Elixir. The goal of the Calendar features in Elixir is to provide a base for interoperability instead of a full-featured datetime API. For the actual date, time and datetime structures, see [`Date`](date), [`Time`](time), [`NaiveDateTime`](naivedatetime) and [`DateTime`](datetime). Note the year, month, day, etc. designations are overspecified (i.e. an integer instead of `1..12` for months) because different calendars may have a different number of days per month, months per year and so on. Summary ======== Types ------ [calendar()](#t:calendar/0) A calendar implementation [date()](#t:date/0) Any map/struct that contains the date fields [datetime()](#t:datetime/0) Any map/struct that contains the datetime fields [day()](#t:day/0) [day\_fraction()](#t:day_fraction/0) The internal time format is used when converting between calendars. [day\_of\_week()](#t:day_of_week/0) [era()](#t:era/0) [hour()](#t:hour/0) [iso\_days()](#t:iso_days/0) The internal date format that is used when converting between calendars. [microsecond()](#t:microsecond/0) Microseconds with stored precision. [minute()](#t:minute/0) [month()](#t:month/0) [naive\_datetime()](#t:naive_datetime/0) Any map/struct that contains the naive\_datetime fields [second()](#t:second/0) [std\_offset()](#t:std_offset/0) The time zone standard offset in seconds (not zero in summer times) [time()](#t:time/0) Any map/struct that contains the time fields [time\_zone()](#t:time_zone/0) The time zone ID according to the IANA tz database (e.g. Europe/Zurich) [time\_zone\_database()](#t:time_zone_database/0) Specifies the time zone database for calendar operations. [utc\_offset()](#t:utc_offset/0) The time zone UTC offset in seconds [week()](#t:week/0) [year()](#t:year/0) [zone\_abbr()](#t:zone_abbr/0) The time zone abbreviation (e.g. CET or CEST or BST etc.) Functions ---------- [compatible\_calendars?(calendar, calendar)](#compatible_calendars?/2) Returns `true` if two calendars have the same moment of starting a new day, `false` otherwise. [get\_time\_zone\_database()](#get_time_zone_database/0) Gets the current time zone database. [put\_time\_zone\_database(database)](#put_time_zone_database/1) Sets the current time zone database. [truncate(microsecond\_tuple, atom)](#truncate/2) Returns a microsecond tuple truncated to a given precision (`:microsecond`, `:millisecond` or `:second`). Callbacks ---------- [date\_to\_string(year, month, day)](#c:date_to_string/3) Converts the date into a string according to the calendar. [datetime\_to\_string(year, month, day, hour, minute, second, microsecond, time\_zone, zone\_abbr, utc\_offset, std\_offset)](#c:datetime_to_string/11) Converts the datetime (with time zone) into a string according to the calendar. [day\_of\_era(year, month, day)](#c:day_of_era/3) Calculates the day and era from the given `year`, `month`, and `day`. [day\_of\_week(year, month, day)](#c:day_of_week/3) Calculates the day of the week from the given `year`, `month`, and `day`. [day\_of\_year(year, month, day)](#c:day_of_year/3) Calculates the day of the year from the given `year`, `month`, and `day`. [day\_rollover\_relative\_to\_midnight\_utc()](#c:day_rollover_relative_to_midnight_utc/0) Define the rollover moment for the given calendar.
[days\_in\_month(year, month)](#c:days_in_month/2) Returns how many days there are in the given year-month. [leap\_year?(year)](#c:leap_year?/1) Returns `true` if the given year is a leap year. [months\_in\_year(year)](#c:months_in_year/1) Returns how many months there are in the given year. [naive\_datetime\_from\_iso\_days(iso\_days)](#c:naive_datetime_from_iso_days/1) Converts [`iso_days/0`](#t:iso_days/0) to the Calendar's datetime format. [naive\_datetime\_to\_iso\_days(year, month, day, hour, minute, second, microsecond)](#c:naive_datetime_to_iso_days/7) Converts the given datetime (without time zone) into the [`iso_days/0`](#t:iso_days/0) format. [naive\_datetime\_to\_string(year, month, day, hour, minute, second, microsecond)](#c:naive_datetime_to_string/7) Converts the datetime (without time zone) into a string according to the calendar. [quarter\_of\_year(year, month, day)](#c:quarter_of_year/3) Calculates the quarter of the year from the given `year`, `month`, and `day`. [time\_from\_day\_fraction(day\_fraction)](#c:time_from_day_fraction/1) Converts [`day_fraction/0`](#t:day_fraction/0) to the Calendar's time format. [time\_to\_day\_fraction(hour, minute, second, microsecond)](#c:time_to_day_fraction/4) Converts the given time to the [`day_fraction/0`](#t:day_fraction/0) format. [time\_to\_string(hour, minute, second, microsecond)](#c:time_to_string/4) Converts the time into a string according to the calendar. [valid\_date?(year, month, day)](#c:valid_date?/3) Should return `true` if the given date describes a proper date in the calendar. [valid\_time?(hour, minute, second, microsecond)](#c:valid_time?/4) Should return `true` if the given time describes a proper time in the calendar. [year\_of\_era(year)](#c:year_of_era/1) Calculates the year and era from the given `year`. Types ====== ### calendar() #### Specs ``` calendar() :: module() ``` A calendar implementation ### date() #### Specs ``` date() :: %{ optional(any()) => any(), :calendar => calendar(), :year => year(), :month => month(), :day => day() } ``` Any map/struct that contains the date fields ### datetime() #### Specs ``` datetime() :: %{ optional(any()) => any(), :calendar => calendar(), :year => year(), :month => month(), :day => day(), :hour => hour(), :minute => minute(), :second => second(), :microsecond => microsecond(), :time_zone => time_zone(), :zone_abbr => zone_abbr(), :utc_offset => utc_offset(), :std_offset => std_offset() } ``` Any map/struct that contains the datetime fields ### day() #### Specs ``` day() :: pos_integer() ``` ### day\_fraction() #### Specs ``` day_fraction() :: {parts_in_day :: non_neg_integer(), parts_per_day :: pos_integer()} ``` The internal time format is used when converting between calendars. It represents time as a fraction of a day (starting from midnight). `parts_in_day` specifies how much of the day is already passed, while `parts_per_day` signifies how many parts there fit in a day. ### day\_of\_week() #### Specs ``` day_of_week() :: non_neg_integer() ``` ### era() #### Specs ``` era() :: non_neg_integer() ``` ### hour() #### Specs ``` hour() :: non_neg_integer() ``` ### iso\_days() #### Specs ``` iso_days() :: {days :: integer(), day_fraction()} ``` The internal date format that is used when converting between calendars. This is the number of days including the fractional part that has passed of the last day since 0000-01-01+00:00T00:00.000000 in ISO 8601 notation (also known as midnight 1 January BC 1 of the proleptic Gregorian calendar). 
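For instance, noon on 2000-01-01 would be represented as follows in [`Calendar.ISO`](calendar.iso), which counts the day fraction in microseconds (a sketch; the exact values below assume that representation):

```
iex> Calendar.ISO.naive_datetime_to_iso_days(2000, 1, 1, 12, 0, 0, {0, 0})
{730485, {43200000000, 86400000000}}
```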
### microsecond() #### Specs ``` microsecond() :: {0..999_999, 0..6} ``` Microseconds with stored precision. The precision represents the number of digits that must be used when representing the microseconds in external format. If the precision is 0, it means microseconds must be skipped. ### minute() #### Specs ``` minute() :: non_neg_integer() ``` ### month() #### Specs ``` month() :: pos_integer() ``` ### naive\_datetime() #### Specs ``` naive_datetime() :: %{ optional(any()) => any(), :calendar => calendar(), :year => year(), :month => month(), :day => day(), :hour => hour(), :minute => minute(), :second => second(), :microsecond => microsecond() } ``` Any map/struct that contains the naive\_datetime fields ### second() #### Specs ``` second() :: non_neg_integer() ``` ### std\_offset() #### Specs ``` std_offset() :: integer() ``` The time zone standard offset in seconds (not zero in summer times) ### time() #### Specs ``` time() :: %{ optional(any()) => any(), :hour => hour(), :minute => minute(), :second => second(), :microsecond => microsecond() } ``` Any map/struct that contains the time fields ### time\_zone() #### Specs ``` time_zone() :: String.t() ``` The time zone ID according to the IANA tz database (e.g. Europe/Zurich) ### time\_zone\_database() #### Specs ``` time_zone_database() :: module() ``` Specifies the time zone database for calendar operations. Many functions in the [`DateTime`](datetime) module require a time zone database. By default, it uses the default time zone database returned by [`Calendar.get_time_zone_database/0`](calendar#get_time_zone_database/0), which defaults to [`Calendar.UTCOnlyTimeZoneDatabase`](calendar.utconlytimezonedatabase), which only handles "Etc/UTC" datetimes and returns `{:error, :utc_only_time_zone_database}` for any other time zone. Other time zone databases (including ones provided by packages) can be configured as the default either via configuration: ``` config :elixir, :time_zone_database, CustomTimeZoneDatabase ``` or by calling [`Calendar.put_time_zone_database/1`](calendar#put_time_zone_database/1). See [`Calendar.TimeZoneDatabase`](calendar.timezonedatabase) for more information on custom time zone databases. ### utc\_offset() #### Specs ``` utc_offset() :: integer() ``` The time zone UTC offset in seconds ### week() #### Specs ``` week() :: pos_integer() ``` ### year() #### Specs ``` year() :: integer() ``` ### zone\_abbr() #### Specs ``` zone_abbr() :: String.t() ``` The time zone abbreviation (e.g. CET or CEST or BST etc.) Functions ========== ### compatible\_calendars?(calendar, calendar) #### Specs ``` compatible_calendars?(Calendar.calendar(), Calendar.calendar()) :: boolean() ``` Returns `true` if two calendars have the same moment of starting a new day, `false` otherwise. If two calendars are not compatible, we can only convert datetimes and times between them. If they are compatible, this means that we can also convert dates as well as naive datetimes between them. ### get\_time\_zone\_database() #### Specs ``` get_time_zone_database() :: time_zone_database() ``` Gets the current time zone database. ### put\_time\_zone\_database(database) #### Specs ``` put_time_zone_database(time_zone_database()) :: :ok ``` Sets the current time zone database. ### truncate(microsecond\_tuple, atom) #### Specs ``` truncate(Calendar.microsecond(), :microsecond | :millisecond | :second) :: Calendar.microsecond() ``` Returns a microsecond tuple truncated to a given precision (`:microsecond`, `:millisecond` or `:second`).
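For example, a small sketch of how the precision is adjusted along with the value:

```
iex> Calendar.truncate({123456, 6}, :millisecond)
{123000, 3}
iex> Calendar.truncate({123456, 6}, :second)
{0, 0}
```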
Callbacks ========== ### date\_to\_string(year, month, day) #### Specs ``` date_to_string(year(), month(), day()) :: String.t() ``` Converts the date into a string according to the calendar. ### datetime\_to\_string(year, month, day, hour, minute, second, microsecond, time\_zone, zone\_abbr, utc\_offset, std\_offset) #### Specs ``` datetime_to_string( year(), month(), day(), hour(), minute(), second(), microsecond(), time_zone(), zone_abbr(), utc_offset(), std_offset() ) :: String.t() ``` Converts the datetime (with time zone) into a string according to the calendar. ### day\_of\_era(year, month, day) #### Specs ``` day_of_era(year(), month(), day()) :: {non_neg_integer(), era()} ``` Calculates the day and era from the given `year`, `month`, and `day`. ### day\_of\_week(year, month, day) #### Specs ``` day_of_week(year(), month(), day()) :: day_of_week() ``` Calculates the day of the week from the given `year`, `month`, and `day`. ### day\_of\_year(year, month, day) #### Specs ``` day_of_year(year(), month(), day()) :: non_neg_integer() ``` Calculates the day of the year from the given `year`, `month`, and `day`. ### day\_rollover\_relative\_to\_midnight\_utc() #### Specs ``` day_rollover_relative_to_midnight_utc() :: day_fraction() ``` Define the rollover moment for the given calendar. This is the moment, in your calendar, when the current day ends and the next day starts. The result of this function is used to check if two calendars roll over at the same time of day. If they do not, we can only convert datetimes and times between them. If they do, this means that we can also convert dates as well as naive datetimes between them. This day fraction should be in its most simplified form possible, to make comparisons fast. #### Examples * If, in your Calendar, a new day starts at midnight, return {0, 1}. * If, in your Calendar, a new day starts at sunrise, return {1, 4}. * If, in your Calendar, a new day starts at noon, return {1, 2}. * If, in your Calendar, a new day starts at sunset, return {3, 4}. ### days\_in\_month(year, month) #### Specs ``` days_in_month(year(), month()) :: day() ``` Returns how many days there are in the given year-month. ### leap\_year?(year) #### Specs ``` leap_year?(year()) :: boolean() ``` Returns `true` if the given year is a leap year. A leap year is a year of a longer length than normal. The exact meaning is up to the calendar. A calendar must return `false` if it does not support the concept of leap years. ### months\_in\_year(year) #### Specs ``` months_in_year(year()) :: month() ``` Returns how many months there are in the given year. ### naive\_datetime\_from\_iso\_days(iso\_days) #### Specs ``` naive_datetime_from_iso_days(iso_days()) :: {year(), month(), day(), hour(), minute(), second(), microsecond()} ``` Converts [`iso_days/0`](#t:iso_days/0) to the Calendar's datetime format. ### naive\_datetime\_to\_iso\_days(year, month, day, hour, minute, second, microsecond) #### Specs ``` naive_datetime_to_iso_days( year(), month(), day(), hour(), minute(), second(), microsecond() ) :: iso_days() ``` Converts the given datetime (without time zone) into the [`iso_days/0`](#t:iso_days/0) format. ### naive\_datetime\_to\_string(year, month, day, hour, minute, second, microsecond) #### Specs ``` naive_datetime_to_string( year(), month(), day(), hour(), minute(), second(), microsecond() ) :: String.t() ``` Converts the datetime (without time zone) into a string according to the calendar.
### quarter\_of\_year(year, month, day) #### Specs ``` quarter_of_year(year(), month(), day()) :: non_neg_integer() ``` Calculates the quarter of the year from the given `year`, `month`, and `day`. ### time\_from\_day\_fraction(day\_fraction) #### Specs ``` time_from_day_fraction(day_fraction()) :: {hour(), minute(), second(), microsecond()} ``` Converts [`day_fraction/0`](#t:day_fraction/0) to the Calendar's time format. ### time\_to\_day\_fraction(hour, minute, second, microsecond) #### Specs ``` time_to_day_fraction(hour(), minute(), second(), microsecond()) :: day_fraction() ``` Converts the given time to the [`day_fraction/0`](#t:day_fraction/0) format. ### time\_to\_string(hour, minute, second, microsecond) #### Specs ``` time_to_string(hour(), minute(), second(), microsecond()) :: String.t() ``` Converts the time into a string according to the calendar. ### valid\_date?(year, month, day) #### Specs ``` valid_date?(year(), month(), day()) :: boolean() ``` Should return `true` if the given date describes a proper date in the calendar. ### valid\_time?(hour, minute, second, microsecond) #### Specs ``` valid_time?(hour(), minute(), second(), microsecond()) :: boolean() ``` Should return `true` if the given time describes a proper time in the calendar. ### year\_of\_era(year) #### Specs ``` year_of_era(year()) :: {year(), era()} ``` Calculates the year and era from the given `year`.
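As a small sketch of this callback in [`Calendar.ISO`](calendar.iso), where era `1` is the current era ("CE") and era `0` is the era before it ("BCE"):

```
iex> Calendar.ISO.year_of_era(1984)
{1984, 1}
iex> Calendar.ISO.year_of_era(-1)
{2, 0}
```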
elixir Compatibility and Deprecations Compatibility and Deprecations ============================== Elixir is versioned according to a vMAJOR.MINOR.PATCH schema. Elixir is currently at major version v1. A new backwards compatible minor release happens every 6 months. Patch releases are not scheduled and are made whenever there are bug fixes or security patches. Elixir applies bug fixes only to the latest minor branch. Security patches are available for the last 5 minor branches: | Elixir version | Support | | --- | --- | | 1.9 | Bug fixes and security patches | | 1.8 | Security patches only | | 1.7 | Security patches only | | 1.6 | Security patches only | | 1.5 | Security patches only | New releases are announced in the read-only [announcements mailing list](https://groups.google.com/group/elixir-lang-ann). All security releases [will be tagged with `[security]`](https://groups.google.com/forum/#!searchin/elixir-lang-ann/%5Bsecurity%5D%7Csort:date). There are currently no plans for a major v2 release. Compatibility between non-major Elixir versions ------------------------------------------------ Elixir minor and patch releases are backwards compatible: well-defined behaviours and documented APIs in a given version will continue working on future versions. Although we expect the vast majority of programs to remain compatible over time, it is impossible to guarantee that no future change will break any program. Under some unlikely circumstances, we may introduce changes that break existing code: * Security: a security issue in the implementation may arise whose resolution requires backwards incompatible changes. We reserve the right to address such security issues. * Bugs: if an API has undesired behaviour, a program that depends on the buggy behaviour may break if the bug is fixed. We reserve the right to fix such bugs. * Compiler front-end: improvements may be done to the compiler, introducing new warnings for ambiguous modes and providing more detailed error messages. Those can lead to compilation errors (when running with `--warnings-as-errors`) or tooling failures when asserting on specific error messages (although one should avoid such). We reserve the right to do such improvements. * Imports: new functions may be added to the [`Kernel`](kernel) module, which is auto-imported. They may collide with local functions defined in your modules. Collisions can be resolved in a backwards compatible fashion using `import Kernel, except: [...]` with a list of all functions you don't want to be imported from [`Kernel`](kernel). We reserve the right to do such additions. In order to continue evolving the language without introducing breaking changes, Elixir will rely on deprecations to demote certain practices and promote new ones. Our deprecation policy is outlined in the ["Deprecations" section](#deprecations). The only exception to the compatibility guarantees above are experimental features, which will be explicitly marked as such, and do not provide any compatibility guarantee until they are stabilized. Compatibility between Elixir and Erlang/OTP -------------------------------------------- Erlang/OTP versioning is independent from the versioning of Elixir. Each version of Elixir supports a specific range of Erlang/OTP versions. The compatibility table is shown below.
| Elixir version | Supported Erlang/OTP versions | | --- | --- | | 1.0 | 17 - 17 (and Erlang/OTP 18 from v1.0.5) | | 1.1 | 17 - 18 | | 1.2 | 18 - 18 (and Erlang/OTP 19 from v1.2.6) | | 1.3 | 18 - 19 | | 1.4 | 18 - 19 (and Erlang/OTP 20 from v1.4.5) | | 1.5 | 18 - 20 | | 1.6 | 19 - 20 (and Erlang/OTP 21 from v1.6.6) | | 1.7 | 19 - 22 | | 1.8 | 20 - 22 | | 1.9 | 20 - 22 | While Elixir often adds compatibility to new Erlang/OTP versions on released branches, such as support for Erlang/OTP 20 in v1.4.5, those releases usually contain the minimum changes for Elixir to run without errors. Only the next minor release, in this case v1.5.0, effectively leverages the new features provided by the latest Erlang/OTP release. Deprecations ------------- ### Policy Elixir deprecations happen in 3 steps: 1. The feature is soft-deprecated. It means both CHANGELOG and documentation must list the feature as deprecated, but no warning is effectively emitted by running the code. There is no requirement to soft-deprecate a feature. 2. The feature is effectively deprecated by emitting warnings on usage. This is also known as hard-deprecation. In order to deprecate a feature, the proposed alternative MUST exist for AT LEAST THREE minor versions. For example, `Enum.uniq/2` was soft-deprecated in favor of [`Enum.uniq_by/2`](enum#uniq_by/2) in Elixir v1.1. This means a deprecation warning may only be emitted by Elixir v1.4 or later. 3. The feature is removed. This can only happen on major releases. This means deprecated features in Elixir v1.x shall only be removed by Elixir v2.x. ### Table of deprecations The first column is the version in which the feature was hard-deprecated. The second column briefly describes the deprecated feature, and the third column explains the replacement and the version from which the replacement is available. | Version | Deprecated feature | Replaced by (available since) | | --- | --- | --- | | [v1.9](https://github.com/elixir-lang/elixir/blob/v1.9/CHANGELOG.md#4-hard-deprecations) | Passing `:insert_replaced` to [`String.replace/4`](string#replace/4) | Use [`:binary.replace/4`](http://www.erlang.org/doc/man/binary.html#replace-4) (v1.0) | | [v1.9](https://github.com/elixir-lang/elixir/blob/v1.9/CHANGELOG.md#4-hard-deprecations) | Enumerable keys in [`Map.drop/2`](map#drop/2), [`Map.split/2`](map#split/2), and [`Map.take/2`](map#take/2) | Call [`Enum.to_list/1`](enum#to_list/1) on the second argument beforehand (v1.0) | | [v1.9](https://github.com/elixir-lang/elixir/blob/v1.9/CHANGELOG.md#4-hard-deprecations) | [`Mix.Project.load_paths/1`](https://hexdocs.pm/mix/Mix.Project.html#load_paths/1) | [`Mix.Project.compile_path/1`](https://hexdocs.pm/mix/Mix.Project.html#compile_path/1) (v1.0) | | [v1.9](https://github.com/elixir-lang/elixir/blob/v1.9/CHANGELOG.md#4-hard-deprecations) | `--detached` in CLI | `--erl "-detached"` (v1.0) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | Passing a non-empty list to [`Enum.into/2`](enum#into/2) | [`Kernel.++/2`](kernel#++/2) or [`Keyword.merge/2`](keyword#merge/2) (v1.0) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | Passing a non-empty list to `:into` in `for` | [`Kernel.++/2`](kernel#++/2) or [`Keyword.merge/2`](keyword#merge/2) (v1.0) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | `:seconds`, `:milliseconds`, etc. as time units | `:second`, `:millisecond`, etc. 
(v1.4) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | `Inspect.Algebra.surround/3` | [`Inspect.Algebra.concat/2`](inspect.algebra#concat/2) and [`Inspect.Algebra.nest/2`](inspect.algebra#nest/2) (v1.0) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | `Inspect.Algebra.surround_many/6` | [`Inspect.Algebra.container_doc/6`](inspect.algebra#container_doc/6) (v1.6) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | `Kernel.ParallelCompiler.files/2` | [`Kernel.ParallelCompiler.compile/2`](kernel.parallelcompiler#compile/2) (v1.6) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | `Kernel.ParallelCompiler.files_to_path/2` | [`Kernel.ParallelCompiler.compile_to_path/2`](kernel.parallelcompiler#compile_to_path/2) (v1.6) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | [`Kernel.ParallelRequire.files/2`](https://hexdocs.pm/elixir/Kernel.ParallelRequire.html#files/2) | [`Kernel.ParallelCompiler.require/2`](kernel.parallelcompiler#require/2) (v1.6) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | [`System.cwd/0`](system#cwd/0) and [`System.cwd!/0`](system#cwd!/0) | [`File.cwd/0`](file#cwd/0) and [`File.cwd!/0`](file#cwd!/0) (v1.0) | | [v1.8](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#4-hard-deprecations) | Returning `{:ok, contents}` or `:error` from [`Mix.Compilers.Erlang.compile/6`](https://hexdocs.pm/mix/Mix.Compilers.Erlang.html#compile/6)'s callback | Return `{:ok, contents, warnings}` or `{:error, errors, warnings}` (v1.6) | | [v1.7](https://github.com/elixir-lang/elixir/blob/v1.7/CHANGELOG.md#4-hard-deprecations) | [`Code.get_docs/2`](code#get_docs/2) | [`Code.fetch_docs/1`](code#fetch_docs/1) (v1.7) | | [v1.7](https://github.com/elixir-lang/elixir/blob/v1.7/CHANGELOG.md#4-hard-deprecations) | Calling [`super/1`](kernel.specialforms#super/1) on GenServer callbacks | Implementing the behaviour explicitly without calling [`super/1`](kernel.specialforms#super/1) (v1.0) | | [v1.7](https://github.com/elixir-lang/elixir/blob/v1.7/CHANGELOG.md#4-hard-deprecations) | `Enum.chunk/2/3/4` | [`Enum.chunk_every/2/3/4`](enum#chunk_every/2) (v1.5) | | [v1.7](https://github.com/elixir-lang/elixir/blob/v1.7/CHANGELOG.md#4-hard-deprecations) | `not left in right` | [`left not in right`](kernel#in/2) (v1.5) | | [v1.7](https://github.com/elixir-lang/elixir/blob/v1.7/CHANGELOG.md#4-hard-deprecations) | `Registry.start_link/3` | [`Registry.start_link/1`](registry#start_link/1) (v1.5) | | [v1.7](https://github.com/elixir-lang/elixir/blob/v1.7/CHANGELOG.md#4-hard-deprecations) | `Stream.chunk/2/3/4` | [`Stream.chunk_every/2/3/4`](stream#chunk_every/2) (v1.5) | | [v1.6](https://github.com/elixir-lang/elixir/blob/v1.6/CHANGELOG.md#4-deprecations) | `Enum.partition/2` | [`Enum.split_with/2`](enum#split_with/2) (v1.4) | | [v1.6](https://github.com/elixir-lang/elixir/blob/v1.6/CHANGELOG.md#4-deprecations) | `Keyword.replace/3` | [`Keyword.fetch/2`](keyword#fetch/2) + [`Keyword.put/3`](keyword#put/3) (v1.0) | | [v1.6](https://github.com/elixir-lang/elixir/blob/v1.6/CHANGELOG.md#4-deprecations) | `Macro.unescape_tokens/1/2` | Use [`Enum.map/2`](enum#map/2) to traverse the arguments (v1.0) | | [v1.6](https://github.com/elixir-lang/elixir/blob/v1.6/CHANGELOG.md#4-deprecations) | `Module.add_doc/6` | `@doc` module attribute (v1.0) | | [v1.6](https://github.com/elixir-lang/elixir/blob/v1.6/CHANGELOG.md#4-deprecations) | `Map.replace/3` | 
[`Map.fetch/2`](map#fetch/2) + [`Map.put/3`](map#put/3) (v1.0) | | [v1.6](https://github.com/elixir-lang/elixir/blob/v1.6/CHANGELOG.md#4-deprecations) | `Range.range?/1` | Pattern match on `_.._` (v1.0) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `Atom.to_char_list/1` | [`Atom.to_charlist/1`](atom#to_charlist/1) (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `Enum.filter_map/3` | [`Enum.filter/2`](enum#filter/2) + [`Enum.map/2`](enum#map/2) or [`for`](kernel.specialforms#for/1) comprehensions (v1.0) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `Float.to_char_list/1` | [`Float.to_charlist/1`](float#to_charlist/1) (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | [`GenEvent`](genevent) module | [`Supervisor`](supervisor) and [`GenServer`](genserver) (v1.0);[`GenStage`](https://hex.pm/packages/gen_stage) (v1.3);[`:gen_event`](http://www.erlang.org/doc/man/gen_event.html) (Erlang/OTP 17) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `Integer.to_char_list/1/2` | [`Integer.to_charlist/1`](integer#to_charlist/1) and [`Integer.to_charlist/2`](integer#to_charlist/2) (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `Kernel.to_char_list/1` | [`Kernel.to_charlist/1`](kernel#to_charlist/1) (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `List.Chars.to_char_list/1` | [`List.Chars.to_charlist/1`](list.chars#to_charlist/1) (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `Stream.filter_map/3` | [`Stream.filter/2`](stream#filter/2) + [`Stream.map/2`](stream#map/2) (v1.0) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `String.ljust/3` and `String.rjust/3` | Use [`String.pad_leading/3`](string#pad_leading/3) and [`String.pad_trailing/3`](string#pad_trailing/3) with a binary padding (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `String.strip/1` and `String.strip/2` | [`String.trim/1`](string#trim/1) and [`String.trim/2`](string#trim/2) (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `String.lstrip/1` and `String.rstrip/1` | [`String.trim_leading/1`](string#trim_leading/1) and [`String.trim_trailing/1`](string#trim_trailing/1) (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `String.lstrip/2` and `String.rstrip/2` | Use [`String.trim_leading/2`](string#trim_leading/2) and [`String.trim_trailing/2`](string#trim_trailing/2) with a binary as second argument (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `String.to_char_list/1` | [`String.to_charlist/1`](string#to_charlist/1) (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `()` to mean `nil` | `nil` (v1.0) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `char_list/0` type | [`charlist/0`](typespecs#built-in-types) type (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `:char_lists` key in [`Inspect.Opts.t/0`](inspect.opts#t:t/0) type | `:charlists` key (v1.3) | | 
[v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `:as_char_lists` value in [`Inspect.Opts.t/0`](inspect.opts#t:t/0) type | `:as_charlists` value (v1.3) | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | `@compile {:parse_transform, _}` in [`Module`](module) | *None* | | [v1.5](https://github.com/elixir-lang/elixir/blob/v1.5/CHANGELOG.md#4-deprecations) | EEx: `<%=` in middle and end expressions | Use `<%` (`<%=` is allowed only on start expressions) (v1.0) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | [`Access.key/1`](access#key/1) | [`Access.key/2`](access#key/2) (v1.3) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | [`Behaviour`](behaviour) module | `@callback` module attribute (v1.0) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | `Enum.uniq/2` | [`Enum.uniq_by/2`](enum#uniq_by/2) (v1.2) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | `Float.to_char_list/2` | [`:erlang.float_to_list/2`](http://www.erlang.org/doc/man/erlang.html#float_to_list-2) (Erlang/OTP 17) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | `Float.to_string/2` | [`:erlang.float_to_binary/2`](http://www.erlang.org/doc/man/erlang.html#float_to_binary-2) (Erlang/OTP 17) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | [`HashDict`](hashdict) module | [`Map`](map) (v1.2) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | [`HashSet`](hashset) module | [`MapSet`](mapset) (v1.1) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | Multi-letter aliases in [`OptionParser`](optionparser) | Use single-letter aliases (v1.0) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | [`Set`](set) module | [`MapSet`](mapset) (v1.1) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | `Stream.uniq/2` | [`Stream.uniq_by/2`](stream#uniq_by/2) (v1.2) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | [`IEx.Helpers.import_file/2`](https://hexdocs.pm/iex/IEx.Helpers.html#import_file/2) | [`IEx.Helpers.import_file_if_available/1`](https://hexdocs.pm/iex/IEx.Helpers.html#import_file_if_available/1) (v1.3) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | [`Mix.Utils.camelize/1`](https://hexdocs.pm/mix/Mix.Utils.html#camelize/1) | [`Macro.camelize/1`](macro#camelize/1) (v1.2) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | [`Mix.Utils.underscore/1`](https://hexdocs.pm/mix/Mix.Utils.html#underscore/1) | [`Macro.underscore/1`](macro#underscore/1) (v1.2) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | Variable used as function call | Use parentheses (v1.0) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | Anonymous functions with no expression after `->` | Use an expression or explicitly return `nil` (v1.0) | | [v1.4](https://github.com/elixir-lang/elixir/blob/v1.4/CHANGELOG.md#4-deprecations) | Support for making private functions overridable | Use public functions (v1.0) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | [`Dict`](dict) module | 
[`Keyword`](keyword) (v1.0) or [`Map`](map) (v1.2) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | `Keyword.size/1` | [`Kernel.length/1`](kernel#length/1) (v1.0) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | `Map.size/1` | [`Kernel.map_size/1`](kernel#map_size/1) (v1.0) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | [`Set`](set) behaviour | [`MapSet`](mapset) data structure (v1.1) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | `String.valid_character?/1` | [`String.valid?/1`](string#valid?/1) (v1.0) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | `Task.find/2` | Use direct message matching (v1.0) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | `:append_first` option in [`Kernel.defdelegate/2`](kernel#defdelegate/2) | Define the function explicitly (v1.0) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | `/r` option in [`Regex`](regex) | `/U` (v1.1) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | `\x{X*}` inside strings/sigils/charlists | `\uXXXX` or `\u{X*}` (v1.1) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | Map/dictionary as 2nd argument in [`Enum.group_by/3`](enum#group_by/3) | [`Enum.reduce/3`](enum#reduce/3) (v1.0) | | [v1.3](https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#4-deprecations) | Non-map as 2nd argument in [`URI.decode_query/2`](uri#decode_query/2) | Use a map (v1.0) | | [v1.2](https://github.com/elixir-lang/elixir/blob/v1.2/CHANGELOG.md#changelog-for-elixir-v12) | [`Dict`](dict) behaviour | [`MapSet`](mapset) data structure (v1.1) | | [v1.1](https://github.com/elixir-lang/elixir/blob/v1.1/CHANGELOG.md#4-deprecations) | [`Access`](access) protocol | [`Access`](access) behaviour (v1.1) | | [v1.1](https://github.com/elixir-lang/elixir/blob/v1.1/CHANGELOG.md#4-deprecations) | `as: true | false` in [`alias/2`](kernel.specialforms#alias/2) and [`require/2`](kernel.specialforms#require/2) | *None* | | [v1.1](https://github.com/elixir-lang/elixir/blob/v1.1/CHANGELOG.md#4-deprecations) | `?\xHEX` | `0xHEX` (v1.0) | elixir Application behaviour Application behaviour ====================== A module for working with applications and defining application callbacks. Applications are the idiomatic way to package software in Erlang/OTP. To get the idea, they are similar to the "library" concept common in other programming languages, but with some additional characteristics. An application is a component implementing some specific functionality, with a standardized directory structure, configuration, and lifecycle. Applications are *loaded*, *started*, and *stopped*. The application resource file ------------------------------ Applications are specified in their [*resource file*](http://erlang.org/doc/man/app.html), which is a file called `APP.app`, where `APP` is the application name. For example, the application resource file of the OTP application `ex_unit` is called `ex_unit.app`. You'll find the resource file of an application in its `ebin` directory; it is generated automatically by Mix. Some of its keys are taken from the keyword lists returned by the `project/0` and `application/0` functions defined in `mix.exs`, and others are generated by Mix itself. 
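For reference, a minimal `mix.exs` sketch showing the two functions those keys come from (the module and application names here are hypothetical):

```
defmodule MyApp.MixProject do
  use Mix.Project

  # Keys such as :app and :version end up in the application resource file.
  def project do
    [app: :my_app, version: "0.1.0", deps: []]
  end

  # Keys returned here, such as :env and :mod, are copied into my_app.app.
  def application do
    [env: [], extra_applications: [:logger]]
  end
end
```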
You can learn more about the generation of application resource files in the documentation of [`Mix.Tasks.Compile.App`](https://hexdocs.pm/mix/Mix.Tasks.Compile.App.html), available as well by running [`mix help compile.app`](https://hexdocs.pm/mix/Mix.Tasks.Compile.App.html). The application environment ---------------------------- The key `env` of an application resource file has a list of tuples that map atoms to terms, and its contents are known as the application *environment*. Note that this environment is unrelated to the operating system environment. By default, the environment of an application is an empty list. In a Mix project you can set that key in `application/0`: ``` def application do [env: [redis_host: "localhost"]] end ``` and the generated application resource file is going to have it included. The environment is available after loading the application, which is a process explained later: ``` Application.load(:APP_NAME) #=> :ok Application.get_env(:APP_NAME, :redis_host) #=> "localhost" ``` In Mix projects, the environment of the application and its dependencies can be overridden via the `config/config.exs` file. If you start the application with Mix, that configuration is available at compile time, and at runtime too, but take into account that it is not included in the generated application resource file, and it is not available if you start the application without Mix. For example, someone using your application can override its `:redis_host` environment variable as follows: ``` config :APP_NAME, redis_host: "redis.local" ``` The function [`put_env/3`](#put_env/3) allows dynamic configuration of the application environment, but as a rule of thumb each application is responsible for its own environment. Please do not use the functions in this module for directly accessing or modifying the environment of other applications. The application environment can be overridden via the `-config` option of `erl`, as well as command-line options, as we are going to see below. The application callback module -------------------------------- The `mod` key of an application resource file configures an application callback module and start argument: ``` def application do [mod: {MyApp, []}] end ``` This key is optional; it is only needed for applications that start a supervision tree. The `MyApp` module given to `:mod` needs to implement the [`Application`](#content) behaviour. This can be done by putting `use Application` in that module and implementing the [`start/2`](#c:start/2) callback, for example: ``` defmodule MyApp do use Application def start(_type, _args) do children = [] Supervisor.start_link(children, strategy: :one_for_one) end end ``` The [`start/2`](#c:start/2) callback has to spawn and link a supervisor and return `{:ok, pid}` or `{:ok, pid, state}`, where `pid` is the PID of the supervisor, and `state` is an optional application state. `args` is the second element of the tuple given to the `:mod` option. The `type` argument passed to [`start/2`](#c:start/2) is usually `:normal` unless in a distributed setup where application takeovers and failovers are configured. Distributed applications are beyond the scope of this documentation. When an application is shutting down, its [`stop/1`](#c:stop/1) callback is called after the supervision tree has been stopped by the runtime. This callback allows the application to do any final cleanup. The argument is the state returned by [`start/2`](#c:start/2), if it returned one, or `[]` otherwise. The return value of [`stop/1`](#c:stop/1) is ignored. 
By using [`Application`](#content), modules get a default implementation of [`stop/1`](#c:stop/1) that ignores its argument and returns `:ok`, but it can be overridden. Application callback modules may also implement the optional callback [`prep_stop/1`](#c:prep_stop/1). If present, [`prep_stop/1`](#c:prep_stop/1) is invoked before the supervision tree is terminated. Its argument is the state returned by [`start/2`](#c:start/2), if it returned one, or `[]` otherwise, and its return value is passed to [`stop/1`](#c:stop/1). The application lifecycle -------------------------- ### Loading applications Applications are *loaded*, which means that the runtime finds and processes their resource files: ``` Application.load(:ex_unit) #=> :ok ``` If an application has included applications, they are also loaded, and the procedure recurses if they in turn have included applications. Included applications are unrelated to applications in Mix umbrella projects; they are an Erlang/OTP concept that has to do with coordinated starts. When an application is loaded, the environment specified in its resource file is merged with any overrides from config files passed to `erl` via the `-config` option. It is worth highlighting that releases pass `sys.config` this way. The resulting environment can still be overridden again via specific `-Application` options passed to `erl`. Loading an application *does not* load its modules. In practice, you rarely load applications by hand because that is part of the start process, explained next. ### Starting applications Applications are also *started*: ``` Application.start(:ex_unit) #=> :ok ``` Once your application is compiled, running your system is a matter of starting your current application and its dependencies. Unlike other languages, Elixir does not have a `main` procedure that is responsible for starting your system. Instead, you start one or more applications, each with their own initialization and termination logic. When an application is started, the runtime loads it if it hasn't been loaded yet (in the technical sense described above). Then, it checks if the dependencies listed in the `applications` key of the resource file are already started. Having at least one dependency not started is an error condition, but when you start an application with [`mix run`](https://hexdocs.pm/mix/Mix.Tasks.Run.html), Mix takes care of starting all the dependencies for you, so in practice you don't need to worry about it unless you are starting applications manually with the API provided by this module. If the application does not have a callback module configured, starting is done at this point. Otherwise, its [`start/2`](#c:start/2) callback is invoked. The PID of the top-level supervisor returned by this function is stored by the runtime for later use, and the returned application state is saved too, if any. ### Stopping applications Started applications are, finally, *stopped*: ``` Application.stop(:ex_unit) #=> :ok ``` Stopping an application without a callback module is defined, but except for some system tracing, it is in practice a no-op. Stopping an application with a callback module has three steps: 1. If present, invoke the optional callback [`prep_stop/1`](#c:prep_stop/1). 2. Terminate the top-level supervisor. 3. Invoke the required callback [`stop/1`](#c:stop/1). The arguments passed to the callbacks are related to the state optionally returned by [`start/2`](#c:start/2), and are documented in the section about the callback module above. 
It is important to highlight that step 2 is a blocking one. Termination of a supervisor triggers a recursive chain of child terminations, thereby orderly shutting down all descendant processes. The [`stop/1`](#c:stop/1) callback is invoked only after termination of the whole supervision tree. Shutting down a live system cleanly can be done by calling [`System.stop/1`](system#stop/1). It will shut down every application in the opposite order in which they were started. By default, a SIGTERM from the operating system will automatically translate to [`System.stop/0`](system#stop/0). You can also have more explicit control over operating system signals via the [`:os.set_signal/2`](http://www.erlang.org/doc/man/os.html#set_signal-2) function. Tooling -------- The Mix build tool can also be used to start your applications. For example, [`mix test`](https://hexdocs.pm/mix/Mix.Tasks.Test.html) automatically starts your application dependencies and your application itself before your test runs. `mix run --no-halt` boots your current project and can be used to start a long running system. See [`mix help run`](https://hexdocs.pm/mix/Mix.Tasks.Run.html). Developers can also use tools like [Distillery](https://github.com/bitwalker/distillery) that build **releases**. Releases are able to package all of your source code as well as the Erlang VM into a single directory. Releases also give you explicit control over how each application is started and in which order. They also provide a more streamlined mechanism for starting and stopping systems, debugging, logging, as well as system monitoring. Finally, Elixir provides tools such as escripts and archives, which are different mechanisms for packaging your application. Those are typically used when tools must be shared between developers and not as deployment options. See [`mix help archive.build`](https://hexdocs.pm/mix/Mix.Tasks.Archive.Build.html) and [`mix help escript.build`](https://hexdocs.pm/mix/Mix.Tasks.Escript.Build.html) for more detail. Further information -------------------- For further details on applications please check the documentation of the [`application`](http://www.erlang.org/doc/man/application.html) Erlang module, and the [Applications](http://www.erlang.org/doc/design_principles/applications.html) section of the [OTP Design Principles User's Guide](http://erlang.org/doc/design_principles/users_guide.html). Summary ======== Types ------ [app()](#t:app/0) [application\_key()](#t:application_key/0) [key()](#t:key/0) [restart\_type()](#t:restart_type/0) [start\_type()](#t:start_type/0) [state()](#t:state/0) [value()](#t:value/0) Functions ---------- [app\_dir(app)](#app_dir/1) Gets the directory for app. [app\_dir(app, path)](#app_dir/2) Returns the given path inside [`app_dir/1`](#app_dir/1). [delete\_env(app, key, opts \\ [])](#delete_env/3) Deletes the `key` from the given `app` environment. [ensure\_all\_started(app, type \\ :temporary)](#ensure_all_started/2) Ensures the given `app` and its applications are started. [ensure\_started(app, type \\ :temporary)](#ensure_started/2) Ensures the given `app` is started. [fetch\_env(app, key)](#fetch_env/2) Returns the value for `key` in `app`'s environment in a tuple. [fetch\_env!(app, key)](#fetch_env!/2) Returns the value for `key` in `app`'s environment. [format\_error(reason)](#format_error/1) Formats the error reason returned by [`start/2`](#start/2), [`ensure_started/2`](#ensure_started/2), [`stop/1`](#stop/1), [`load/1`](#load/1) and [`unload/1`](#unload/1), returns a string. 
[get\_all\_env(app)](#get_all_env/1) Returns all key-value pairs for `app`. [get\_application(module)](#get_application/1) Gets the application for the given module. [get\_env(app, key, default \\ nil)](#get_env/3) Returns the value for `key` in `app`'s environment. [load(app)](#load/1) Loads the given `app`. [loaded\_applications()](#loaded_applications/0) Returns a list with information about the applications which have been loaded. [put\_all\_env(config, opts \\ [])](#put_all_env/2) Puts the environment for multiple apps at the same time. [put\_env(app, key, value, opts \\ [])](#put_env/4) Puts the `value` in `key` for the given `app`. [spec(app)](#spec/1) Returns the spec for `app`. [spec(app, key)](#spec/2) Returns the value for `key` in `app`'s specification. [start(app, type \\ :temporary)](#start/2) Starts the given `app`. [started\_applications(timeout \\ 5000)](#started_applications/1) Returns a list with information about the applications which are currently running. [stop(app)](#stop/1) Stops the given `app`. [unload(app)](#unload/1) Unloads the given `app`. Callbacks ---------- [config\_change(changed, new, removed)](#c:config_change/3) Callback invoked after code upgrade, if the application environment has changed. [prep\_stop(state)](#c:prep_stop/1) Called before stopping the application. [start(start\_type, start\_args)](#c:start/2) Called when an application is started. [start\_phase(phase, start\_type, phase\_args)](#c:start_phase/3) Starts an application in synchronous phases. [stop(state)](#c:stop/1) Called after an application has been stopped. Types ====== ### app() #### Specs ``` app() :: atom() ``` ### application\_key() #### Specs ``` application_key() :: :start_phases | :mod | :applications | :included_applications | :registered | :maxT | :maxP | :modules | :vsn | :id | :description ``` ### key() #### Specs ``` key() :: atom() ``` ### restart\_type() #### Specs ``` restart_type() :: :permanent | :transient | :temporary ``` ### start\_type() #### Specs ``` start_type() :: :normal | {:takeover, node()} | {:failover, node()} ``` ### state() #### Specs ``` state() :: term() ``` ### value() #### Specs ``` value() :: term() ``` Functions ========== ### app\_dir(app) #### Specs ``` app_dir(app()) :: String.t() ``` Gets the directory for app. This information is returned based on the code path. Here is an example: ``` File.mkdir_p!("foo/ebin") Code.prepend_path("foo/ebin") Application.app_dir(:foo) #=> "foo" ``` Even though the directory is empty and there is no `.app` file, it is considered the application directory based on the name "foo/ebin". The name may contain a dash `-`, which is considered to be the app version, and it is removed for lookup purposes: ``` File.mkdir_p!("bar-123/ebin") Code.prepend_path("bar-123/ebin") Application.app_dir(:bar) #=> "bar-123" ``` For more information on code paths, check the [`Code`](code) module in Elixir and also Erlang's [`:code` module](http://www.erlang.org/doc/man/code.html). ### app\_dir(app, path) #### Specs ``` app_dir(app(), String.t() | [String.t()]) :: String.t() ``` Returns the given path inside [`app_dir/1`](#app_dir/1). If `path` is a string, then it will be used as the path inside [`app_dir/1`](#app_dir/1). If `path` is a list of strings, it will be joined (see [`Path.join/1`](path#join/1)) and the result will be used as the path inside [`app_dir/1`](#app_dir/1). 
#### Examples ``` File.mkdir_p!("foo/ebin") Code.prepend_path("foo/ebin") Application.app_dir(:foo, "my_path") #=> "foo/my_path" Application.app_dir(:foo, ["my", "nested", "path"]) #=> "foo/my/nested/path" ``` ### delete\_env(app, key, opts \\ []) #### Specs ``` delete_env(app(), key(), timeout: timeout(), persistent: boolean()) :: :ok ``` Deletes the `key` from the given `app` environment. It receives the same options as [`put_env/4`](#put_env/4). Returns `:ok`. ### ensure\_all\_started(app, type \\ :temporary) #### Specs ``` ensure_all_started(app(), restart_type()) :: {:ok, [app()]} | {:error, {app(), term()}} ``` Ensures the given `app` and its applications are started. Same as [`start/2`](#start/2) but also starts the applications listed under `:applications` in the `.app` file in case they were not previously started. ### ensure\_started(app, type \\ :temporary) #### Specs ``` ensure_started(app(), restart_type()) :: :ok | {:error, term()} ``` Ensures the given `app` is started. Same as [`start/2`](#start/2) but returns `:ok` if the application was already started. This is useful in scripts and in test setup, where test applications need to be explicitly started: ``` :ok = Application.ensure_started(:my_test_dep) ``` ### fetch\_env(app, key) #### Specs ``` fetch_env(app(), key()) :: {:ok, value()} | :error ``` Returns the value for `key` in `app`'s environment in a tuple. If the configuration parameter does not exist, the function returns `:error`. ### fetch\_env!(app, key) #### Specs ``` fetch_env!(app(), key()) :: value() ``` Returns the value for `key` in `app`'s environment. If the configuration parameter does not exist, raises [`ArgumentError`](argumenterror). ### format\_error(reason) #### Specs ``` format_error(any()) :: String.t() ``` Formats the error reason returned by [`start/2`](#start/2), [`ensure_started/2`](#ensure_started/2), [`stop/1`](#stop/1), [`load/1`](#load/1) and [`unload/1`](#unload/1), returns a string. ### get\_all\_env(app) #### Specs ``` get_all_env(app()) :: [{key(), value()}] ``` Returns all key-value pairs for `app`. ### get\_application(module) #### Specs ``` get_application(atom()) :: atom() | nil ``` Gets the application for the given module. The application is located by analyzing the spec of all loaded applications. Returns `nil` if the module is not listed in any application spec. ### get\_env(app, key, default \\ nil) #### Specs ``` get_env(app(), key(), value()) :: value() ``` Returns the value for `key` in `app`'s environment. If the configuration parameter does not exist, the function returns the `default` value. #### Examples [`get_env/3`](#get_env/3) is commonly used to read the configuration of your OTP applications. Since Mix configurations are commonly used to configure applications, we will use this as a point of illustration. Consider a new application `:my_app`. `:my_app` contains a database engine which supports a pool of databases. The database engine needs to know the configuration for each of those databases, and that configuration is supplied by key-value pairs in the environment of `:my_app`. ``` config :my_app, Databases.RepoOne, # A database configuration ip: "localhost", port: 5433 config :my_app, Databases.RepoTwo, # Another database configuration (for the same OTP app) ip: "localhost", port: 20717 config :my_app, my_app_databases: [Databases.RepoOne, Databases.RepoTwo] ``` Our database engine used by `:my_app` needs to know what databases exist, and what the database configurations are. 
The database engine can make a call to `get_env(:my_app, :my_app_databases)` to retrieve the list of databases (specified by module names). Our database engine can then traverse each repository in the list and then call `get_env(:my_app, Databases.RepoOne)` and so forth to retrieve the configuration of each one. **Important:** if you are writing a library to be used by other developers, it is generally recommended to avoid the application environment, as the application environment is effectively a global storage. For more information, read our [library guidelines](library-guidelines). ### load(app) #### Specs ``` load(app()) :: :ok | {:error, term()} ``` Loads the given `app`. In order to be loaded, an `.app` file must be in the load paths. All `:included_applications` will also be loaded. Loading the application does not start it nor load its modules, but it does load its environment. ### loaded\_applications() #### Specs ``` loaded_applications() :: [{app(), description :: charlist(), vsn :: charlist()}] ``` Returns a list with information about the applications which have been loaded. ### put\_all\_env(config, opts \\ []) #### Specs ``` put_all_env([{app(), [{key(), value()}]}], timeout: timeout(), persistent: boolean() ) :: :ok ``` Puts the environment for multiple apps at the same time. The given config should not: * have the same application listed more than once * have the same key inside the same application listed more than once If those conditions are not met, the behaviour is undefined (on Erlang/OTP 21 and earlier) or will raise (on Erlang/OTP 22 and later). It receives the same options as [`put_env/4`](#put_env/4). Returns `:ok`. ### put\_env(app, key, value, opts \\ []) #### Specs ``` put_env(app(), key(), value(), timeout: timeout(), persistent: boolean()) :: :ok ``` Puts the `value` in `key` for the given `app`. #### Options * `:timeout` - the timeout for the change (defaults to `5_000` milliseconds) * `:persistent` - persists the given value on application load and reloads If [`put_env/4`](#put_env/4) is called before the application is loaded, the application environment values specified in the `.app` file will override the ones previously set. The `:persistent` option can be set to `true` when there is a need to guarantee parameters set with this function will not be overridden by the ones defined in the application resource file on load. This means persistent values will stick after the application is loaded and also on application reload. ### spec(app) #### Specs ``` spec(app()) :: [{application_key(), value()}] | nil ``` Returns the spec for `app`. The following keys are returned: * `:description` * `:id` * `:vsn` * `:modules` * `:maxP` * `:maxT` * `:registered` * `:included_applications` * `:applications` * `:mod` * `:start_phases` Note the environment is not returned as it can be accessed via [`fetch_env/2`](#fetch_env/2). Returns `nil` if the application is not loaded. ### spec(app, key) #### Specs ``` spec(app(), application_key()) :: value() | nil ``` Returns the value for `key` in `app`'s specification. See [`spec/1`](#spec/1) for the supported keys. If the given specification parameter does not exist, this function will raise. Returns `nil` if the application is not loaded. ### start(app, type \\ :temporary) #### Specs ``` start(app(), restart_type()) :: :ok | {:error, term()} ``` Starts the given `app`. If the `app` is not loaded, the application will first be loaded using [`load/1`](#load/1). 
Any included applications, defined in the `:included_applications` key of the `.app` file, will also be loaded, but they won't be started. Furthermore, all applications listed in the `:applications` key must be explicitly started before this application is. If not, `{:error, {:not_started, app}}` is returned, where `app` is the name of the missing application. In case you want to automatically load **and start** all of `app`'s dependencies, see [`ensure_all_started/2`](#ensure_all_started/2). The `type` argument specifies the type of the application: * `:permanent` - if `app` terminates, all other applications and the entire node are also terminated. * `:transient` - if `app` terminates with `:normal` reason, it is reported but no other applications are terminated. If a transient application terminates abnormally, all other applications and the entire node are also terminated. * `:temporary` - if `app` terminates, it is reported but no other applications are terminated (the default). Note that it is always possible to stop an application explicitly by calling [`stop/1`](#stop/1). Regardless of the type of the application, no other applications will be affected. Note also that the `:transient` type is of little practical use, since when a supervision tree terminates, the reason is set to `:shutdown`, not `:normal`. ### started\_applications(timeout \\ 5000) #### Specs ``` started_applications(timeout()) :: [ {app(), description :: charlist(), vsn :: charlist()} ] ``` Returns a list with information about the applications which are currently running. ### stop(app) #### Specs ``` stop(app()) :: :ok | {:error, term()} ``` Stops the given `app`. When stopped, the application is still loaded. ### unload(app) #### Specs ``` unload(app()) :: :ok | {:error, term()} ``` Unloads the given `app`. It will also unload all `:included_applications`. Note that the function does not purge the application modules. Callbacks ========== ### config\_change(changed, new, removed) #### Specs ``` config_change(changed, new, removed) :: :ok when changed: keyword(), new: keyword(), removed: [atom()] ``` Callback invoked after code upgrade, if the application environment has changed. `changed` is a keyword list of keys and their changed values in the application environment. `new` is a keyword list with all new keys and their values. `removed` is a list with all removed keys. ### prep\_stop(state) #### Specs ``` prep_stop(state()) :: state() ``` Called before stopping the application. This function is called before the top-level supervisor is terminated. It receives the state returned by [`start/2`](#c:start/2), if it returned one, or `[]` otherwise. The return value is later passed to [`stop/1`](#c:stop/1). ### start(start\_type, start\_args) #### Specs ``` start(start_type(), start_args :: term()) :: {:ok, pid()} | {:ok, pid(), state()} | {:error, reason :: term()} ``` Called when an application is started. This function is called when an application is started using [`Application.start/2`](application#start/2) (and functions on top of that, such as [`Application.ensure_started/2`](application#ensure_started/2)). This function should start the top-level process of the application (which should be the top supervisor of the application's supervision tree if the application follows the OTP design principles around supervision). 
`start_type` defines how the application is started: * `:normal` - used if the startup is a normal startup or if the application is distributed and is started on the current node because of a failover from another node and the application specification key `:start_phases` is `:undefined`. * `{:takeover, node}` - used if the application is distributed and is started on the current node because of a takeover from the node `node`. * `{:failover, node}` - used if the application is distributed and is started on the current node because of a failover on node `node`, and the application specification key `:start_phases` is not `:undefined`. `start_args` are the arguments passed to the application in the `:mod` specification key (e.g., `mod: {MyApp, [:my_args]}`). This function should either return `{:ok, pid}` or `{:ok, pid, state}` if startup is successful. `pid` should be the PID of the top supervisor. `state` can be an arbitrary term, and if omitted will default to `[]`; if the application is later stopped, `state` is passed to the [`stop/1`](#stop/1) callback (see the documentation for the [`stop/1`](#c:stop/1) callback for more information). `use Application` provides no default implementation for the [`start/2`](#start/2) callback. ### start\_phase(phase, start\_type, phase\_args) #### Specs ``` start_phase(phase :: term(), start_type(), phase_args :: term()) :: :ok | {:error, reason :: term()} ``` Starts an application in synchronous phases. This function is called after [`start/2`](#start/2) finishes but before [`Application.start/2`](application#start/2) returns. It will be called once for every start phase defined in the application's (and any included applications') specification, in the order in which they are listed. ### stop(state) #### Specs ``` stop(state()) :: term() ``` Called after an application has been stopped. This function is called after an application has been stopped, i.e., after its supervision tree has been stopped. It should do the opposite of what the [`start/2`](#c:start/2) callback did, and should perform any necessary cleanup. The return value of this callback is ignored. `state` is the state returned by [`start/2`](#c:start/2), if it returned one, or `[]` otherwise. If the optional callback [`prep_stop/1`](#c:prep_stop/1) is present, `state` is its return value instead. `use Application` defines a default implementation of this function which does nothing and just returns `:ok`.
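Putting the optional callbacks together, the following is a minimal sketch of a callback module that overrides both [`prep_stop/1`](#c:prep_stop/1) and [`stop/1`](#c:stop/1) (the module name is hypothetical and the bodies are placeholders):

```
defmodule MyApp do
  use Application

  def start(_type, _args) do
    Supervisor.start_link([], strategy: :one_for_one)
  end

  # Invoked before the supervision tree is terminated; whatever is
  # returned here becomes the state passed to stop/1.
  def prep_stop(state) do
    state
  end

  # Invoked after the supervision tree has been stopped.
  def stop(_state) do
    :ok
  end
end
```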
elixir Config.Provider behaviour Config.Provider behaviour ========================== Specifies a provider API that loads configuration during boot. Config providers are typically used during releases to load external configuration while the system boots. This is done by starting the VM with the minimum amount of applications running, then invoking all of the providers, and then restarting the system. This requires a mutable configuration file on disk, as the results of the providers are written to the file system. For more information on runtime configuration, see [`mix release`](https://hexdocs.pm/mix/Mix.Tasks.Release.html). Sample config provider ----------------------- For example, imagine you need to load some configuration from a JSON file and load that into the system. Said configuration provider would look like: ``` defmodule JSONConfigProvider do @behaviour Config.Provider # Let's pass the path to the JSON file as config def init(path) when is_binary(path), do: path def load(config, path) do # We need to start any app we may depend on. {:ok, _} = Application.ensure_all_started(:jason) json = path |> File.read!() |> Jason.decode!() Config.Reader.merge( config, my_app: [ some_value: json["my_app_some_value"], another_value: json["my_app_another_value"] ] ) end end ``` Then when specifying your release, you can specify the provider: ``` config_providers: [{JSONConfigProvider, "/etc/config.json"}] ``` Now once the system boots, it will invoke the provider early in the boot process, save the merged configuration to the disk, and reboot the system with the new values in place. Summary ======== Types ------ [config()](#t:config/0) [config\_path()](#t:config_path/0) A path pointing to a configuration file. [state()](#t:state/0) Functions ---------- [resolve\_config\_path!(path)](#resolve_config_path!/1) Resolves a [`config_path/0`](#t:config_path/0) to an actual path. [validate\_config\_path!(path)](#validate_config_path!/1) Validates a [`config_path/0`](#t:config_path/0). Callbacks ---------- [init(term)](#c:init/1) Invoked when initializing a config provider. [load(config, state)](#c:load/2) Loads configuration (typically during system boot). Types ====== ### config() #### Specs ``` config() :: keyword() ``` ### config\_path() #### Specs ``` config_path() :: {:system, binary(), binary()} | binary() ``` A path pointing to a configuration file. Since configuration files are often accessed on target machines, it can be expressed either as: * a binary representing an absolute path * a tuple `{:system, system_var, path}`, where the final path is the concatenation of the value of the `system_var` system environment variable with the given `path` ### state() #### Specs ``` state() :: term() ``` Functions ========== ### resolve\_config\_path!(path) #### Specs ``` resolve_config_path!(config_path()) :: binary() ``` Resolves a [`config_path/0`](#t:config_path/0) to an actual path. ### validate\_config\_path!(path) #### Specs ``` validate_config_path!(config_path()) :: :ok ``` Validates a [`config_path/0`](#t:config_path/0). Callbacks ========== ### init(term) #### Specs ``` init(term()) :: state() ``` Invoked when initializing a config provider. A config provider is typically initialized on the machine where the system is assembled and not on the target machine. The [`init/1`](#c:init/1) callback is useful to verify the arguments given to the provider and prepare the state that will be given to [`load/2`](#c:load/2). 
Furthermore, because the state returned by [`init/1`](#c:init/1) can be written to text-based config files, it should be restricted only to simple data types, such as integers, strings, atoms, tuples, maps, and lists. Entries such as PIDs, references, and functions cannot be serialized. ### load(config, state) #### Specs ``` load(config(), state()) :: config() ``` Loads configuration (typically during system boot). It receives the current `config` and the `state` returned by [`init/1`](#c:init/1). Then you typically read the extra configuration from an external source and merge it into the received `config`. Merging should be done with [`Config.Reader.merge/2`](config.reader#merge/2), as it performs deep merge. It should return the updated config. Note that [`load/2`](#c:load/2) is typically invoked very early in the boot process, therefore if you need to use an application in the provider, it is your responsibility to start it. elixir Process Process ======== Conveniences for working with processes and the process dictionary. Besides the functions available in this module, the [`Kernel`](kernel) module exposes and auto-imports some basic functionality related to processes available through the following functions: * [`Kernel.spawn/1`](kernel#spawn/1) and [`Kernel.spawn/3`](kernel#spawn/3) * [`Kernel.spawn_link/1`](kernel#spawn_link/1) and [`Kernel.spawn_link/3`](kernel#spawn_link/3) * [`Kernel.spawn_monitor/1`](kernel#spawn_monitor/1) and [`Kernel.spawn_monitor/3`](kernel#spawn_monitor/3) * [`Kernel.self/0`](kernel#self/0) * [`Kernel.send/2`](kernel#send/2) While this module provides low-level conveniences to work with processes, developers typically use abstractions such as [`Agent`](agent), [`GenServer`](genserver), [`Registry`](registry), [`Supervisor`](supervisor) and [`Task`](task) for building their systems and resort to this module for gathering information, trapping exits, links and monitoring. Summary ======== Types ------ [dest()](#t:dest/0) A process destination. [spawn\_opt()](#t:spawn_opt/0) [spawn\_opts()](#t:spawn_opts/0) Functions ---------- [alive?(pid)](#alive?/1) Tells whether the given process is alive on the local node. [cancel\_timer(timer\_ref, options \\ [])](#cancel_timer/2) Cancels a timer returned by [`send_after/3`](#send_after/3). [delete(key)](#delete/1) Deletes the given `key` from the process dictionary. [demonitor(monitor\_ref, options \\ [])](#demonitor/2) Demonitors the monitor identified by the given `reference`. [exit(pid, reason)](#exit/2) Sends an exit signal with the given `reason` to `pid`. [flag(flag, value)](#flag/2) Sets the given `flag` to `value` for the calling process. [flag(pid, flag, value)](#flag/3) Sets the given `flag` to `value` for the given process `pid`. [get()](#get/0) Returns all key-value pairs in the process dictionary. [get(key, default \\ nil)](#get/2) Returns the value for the given `key` in the process dictionary, or `default` if `key` is not set. [get\_keys()](#get_keys/0) Returns all keys in the process dictionary. [get\_keys(value)](#get_keys/1) Returns all keys in the process dictionary that have the given `value`. [group\_leader()](#group_leader/0) Returns the PID of the group leader for the calling process. [group\_leader(pid, leader)](#group_leader/2) Sets the group leader of the given `pid` to `leader`. [hibernate(mod, fun\_name, args)](#hibernate/3) Puts the calling process into a "hibernation" state. 
[info(pid)](#info/1) Returns information about the process identified by `pid`, or returns `nil` if the process is not alive. [info(pid, spec)](#info/2) Returns information about the process identified by `pid`, or returns `nil` if the process is not alive. [link(pid\_or\_port)](#link/1) Creates a link between the calling process and the given item (process or port). [list()](#list/0) Returns a list of PIDs corresponding to all the processes currently existing on the local node. [monitor(item)](#monitor/1) Starts monitoring the given `item` from the calling process. [put(key, value)](#put/2) Stores the given `key`-`value` pair in the process dictionary. [read\_timer(timer\_ref)](#read_timer/1) Reads a timer created by [`send_after/3`](#send_after/3). [register(pid\_or\_port, name)](#register/2) Registers the given `pid_or_port` under the given `name`. [registered()](#registered/0) Returns a list of names which have been registered using [`register/2`](#register/2). [send(dest, msg, options)](#send/3) Sends a message to the given `dest`. [send\_after(dest, msg, time, opts \\ [])](#send_after/4) Sends `msg` to `dest` after `time` milliseconds. [sleep(timeout)](#sleep/1) Sleeps the current process for the given `timeout`. [spawn(fun, opts)](#spawn/2) Spawns the given function according to the given options. [spawn(mod, fun, args, opts)](#spawn/4) Spawns the given function `fun` from module `mod`, passing the given `args` according to the given options. [unlink(pid\_or\_port)](#unlink/1) Removes the link between the calling process and the given item (process or port). [unregister(name)](#unregister/1) Removes the registered `name`, associated with a PID or a port identifier. [whereis(name)](#whereis/1) Returns the PID or port identifier registered under `name` or `nil` if the name is not registered. Types ====== ### dest() #### Specs ``` dest() :: pid() | port() | (registered_name :: atom()) | {registered_name :: atom(), node()} ``` A process destination. A remote or local PID, a local port, a locally registered name, or a tuple in the form of `{registered_name, node}` for a registered name at another node. ### spawn\_opt() #### Specs ``` spawn_opt() :: :link | :monitor | {:priority, :low | :normal | :high} | {:fullsweep_after, non_neg_integer()} | {:min_heap_size, non_neg_integer()} | {:min_bin_vheap_size, non_neg_integer()} ``` ### spawn\_opts() #### Specs ``` spawn_opts() :: [spawn_opt()] ``` Functions ========== ### alive?(pid) #### Specs ``` alive?(pid()) :: boolean() ``` Tells whether the given process is alive on the local node. If the process identified by `pid` is alive (that is, it's not exiting and has not exited yet), then this function returns `true`. Otherwise, it returns `false`. `pid` must refer to a process running on the local node or [`ArgumentError`](argumenterror) is raised. Inlined by the compiler. ### cancel\_timer(timer\_ref, options \\ []) #### Specs ``` cancel_timer(reference(), options) :: non_neg_integer() | false | :ok when options: [async: boolean(), info: boolean()] ``` Cancels a timer returned by [`send_after/3`](#send_after/3). When the result is an integer, it represents the time in milliseconds left until the timer would have expired. When the result is `false`, a timer corresponding to `timer_ref` could not be found. This can happen either because the timer expired, because it has already been canceled, or because `timer_ref` never corresponded to a timer. 
Even if the timer had expired and the message was sent, this function does not tell you if the timeout message has arrived at its destination yet. Inlined by the compiler. #### Options * `:async` - (boolean) when `false`, the request for cancellation is synchronous. When `true`, the request for cancellation is asynchronous, meaning that the request to cancel the timer is issued and `:ok` is returned right away. Defaults to `false`. * `:info` - (boolean) whether to return information about the timer being cancelled. When the `:async` option is `false` and `:info` is `true`, then either an integer or `false` (like described above) is returned. If `:async` is `false` and `:info` is `false`, `:ok` is returned. If `:async` is `true` and `:info` is `true`, a message in the form `{:cancel_timer, timer_ref, result}` (where `result` is an integer or `false` like described above) is sent to the caller of this function when the cancellation has been performed. If `:async` is `true` and `:info` is `false`, no message is sent. Defaults to `true`. ### delete(key) #### Specs ``` delete(term()) :: term() | nil ``` Deletes the given `key` from the process dictionary. Returns the value that was under `key` in the process dictionary, or `nil` if `key` was not stored in the process dictionary. #### Examples ``` iex> Process.put(:comments, ["comment", "other comment"]) iex> Process.delete(:comments) ["comment", "other comment"] iex> Process.delete(:comments) nil ``` ### demonitor(monitor\_ref, options \\ []) #### Specs ``` demonitor(reference(), options :: [:flush | :info]) :: boolean() ``` Demonitors the monitor identified by the given `reference`. If `monitor_ref` is a reference which the calling process obtained by calling [`monitor/1`](#monitor/1), that monitoring is turned off. If the monitoring is already turned off, nothing happens. See [`:erlang.demonitor/2`](http://www.erlang.org/doc/man/erlang.html#demonitor-2) for more information. Inlined by the compiler. #### Examples ``` pid = spawn(fn -> 1 + 2 end) ref = Process.monitor(pid) Process.demonitor(ref) #=> true ``` ### exit(pid, reason) #### Specs ``` exit(pid(), term()) :: true ``` Sends an exit signal with the given `reason` to `pid`. The following behaviour applies if `reason` is any term except `:normal` or `:kill`: 1. If `pid` is not trapping exits, `pid` will exit with the given `reason`. 2. If `pid` is trapping exits, the exit signal is transformed into a message `{:EXIT, from, reason}` and delivered to the message queue of `pid`. If `reason` is the atom `:normal`, `pid` will not exit (unless `pid` is the calling process, in which case it will exit with the reason `:normal`). If it is trapping exits, the exit signal is transformed into a message `{:EXIT, from, :normal}` and delivered to its message queue. If `reason` is the atom `:kill`, that is if `Process.exit(pid, :kill)` is called, an untrappable exit signal is sent to `pid` which will unconditionally exit with reason `:killed`. Inlined by the compiler. 
#### Examples ``` Process.exit(pid, :kill) #=> true ``` ### flag(flag, value) #### Specs ``` flag(:error_handler, module()) :: module() ``` ``` flag(:max_heap_size, heap_size()) :: heap_size() ``` ``` flag(:message_queue_data, :erlang.message_queue_data()) :: :erlang.message_queue_data() ``` ``` flag(:min_bin_vheap_size, non_neg_integer()) :: non_neg_integer() ``` ``` flag(:min_heap_size, non_neg_integer()) :: non_neg_integer() ``` ``` flag(:monitor_nodes, term()) :: term() ``` ``` flag({:monitor_nodes, term()}, term()) :: term() ``` ``` flag(:priority, priority_level()) :: priority_level() ``` ``` flag(:save_calls, 0..10000) :: 0..10000 ``` ``` flag(:sensitive, boolean()) :: boolean() ``` ``` flag(:trap_exit, boolean()) :: boolean() ``` Sets the given `flag` to `value` for the calling process. Returns the old value of `flag`. See [`:erlang.process_flag/2`](http://www.erlang.org/doc/man/erlang.html#process_flag-2) for more information. Inlined by the compiler. ### flag(pid, flag, value) #### Specs ``` flag(pid(), :save_calls, 0..10000) :: 0..10000 ``` Sets the given `flag` to `value` for the given process `pid`. Returns the old value of `flag`. It raises [`ArgumentError`](argumenterror) if `pid` is not a local process. The allowed values for `flag` are only a subset of those allowed in [`flag/2`](#flag/2), namely `:save_calls`. See [`:erlang.process_flag/3`](http://www.erlang.org/doc/man/erlang.html#process_flag-3) for more information. Inlined by the compiler. ### get() #### Specs ``` get() :: [{term(), term()}] ``` Returns all key-value pairs in the process dictionary. Inlined by the compiler. ### get(key, default \\ nil) #### Specs ``` get(term(), default :: term()) :: term() ``` Returns the value for the given `key` in the process dictionary, or `default` if `key` is not set. #### Examples ``` # Assuming :locale was not set iex> Process.get(:locale, "pt") "pt" iex> Process.put(:locale, "fr") nil iex> Process.get(:locale, "pt") "fr" ``` ### get\_keys() #### Specs ``` get_keys() :: [term()] ``` Returns all keys in the process dictionary. Inlined by the compiler. #### Examples ``` # Assuming :locale was not set iex> :locale in Process.get_keys() false iex> Process.put(:locale, "pt") nil iex> :locale in Process.get_keys() true ``` ### get\_keys(value) #### Specs ``` get_keys(term()) :: [term()] ``` Returns all keys in the process dictionary that have the given `value`. Inlined by the compiler. ### group\_leader() #### Specs ``` group_leader() :: pid() ``` Returns the PID of the group leader for the calling process. Inlined by the compiler. #### Examples ``` Process.group_leader() #=> #PID<0.53.0> ``` ### group\_leader(pid, leader) #### Specs ``` group_leader(pid(), leader :: pid()) :: true ``` Sets the group leader of the given `pid` to `leader`. Typically, this is used when a process started from a certain shell should have a group leader other than `:init`. Inlined by the compiler. ### hibernate(mod, fun\_name, args) #### Specs ``` hibernate(module(), atom(), list()) :: no_return() ``` Puts the calling process into a "hibernation" state. The calling process is put into a waiting state where its memory allocation has been reduced as much as possible, which is useful if the process does not expect to receive any messages in the near future. See [`:erlang.hibernate/3`](http://www.erlang.org/doc/man/erlang.html#hibernate-3) for more information. Inlined by the compiler. 
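#### Examples

A minimal sketch (not from the original docs; the module and function names are hypothetical) of a receive loop that hibernates between messages:

```
defmodule Idler do
  # Handles one message, then hibernates until the next one arrives.
  # Hibernation shrinks the process memory while it sits idle; when a
  # message is delivered, loop/1 is called again with the given argument.
  def loop(counter) do
    receive do
      {:add, n} -> Process.hibernate(__MODULE__, :loop, [counter + n])
      :stop -> :ok
    end
  end
end

pid = spawn(Idler, :loop, [0])
send(pid, {:add, 1})
```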
### info(pid) #### Specs ``` info(pid()) :: keyword() | nil ``` Returns information about the process identified by `pid`, or returns `nil` if the process is not alive. Use this only for debugging information. See [`:erlang.process_info/1`](http://www.erlang.org/doc/man/erlang.html#process_info-1) for more information. ### info(pid, spec) #### Specs ``` info(pid(), atom() | [atom()]) :: {atom(), term()} | [{atom(), term()}] | nil ``` Returns information about the process identified by `pid`, or returns `nil` if the process is not alive. See [`:erlang.process_info/2`](http://www.erlang.org/doc/man/erlang.html#process_info-2) for more information. ### link(pid\_or\_port) #### Specs ``` link(pid() | port()) :: true ``` Creates a link between the calling process and the given item (process or port). Links are bidirectional. Linked processes can be unlinked by using [`unlink/1`](#unlink/1). If such a link exists already, this function does nothing since there can only be one link between two given processes. If a process tries to create a link to itself, nothing will happen. When two processes are linked, each one receives exit signals from the other (see also [`exit/2`](#exit/2)). Let's assume `pid1` and `pid2` are linked. If `pid2` exits with a reason other than `:normal` (which is also the exit reason used when a process finishes its job) and `pid1` is not trapping exits (see [`flag/2`](#flag/2)), then `pid1` will exit with the same reason as `pid2` and in turn emit an exit signal to all its other linked processes. The behaviour when `pid1` is trapping exits is described in [`exit/2`](#exit/2). See [`:erlang.link/1`](http://www.erlang.org/doc/man/erlang.html#link-1) for more information. Inlined by the compiler. ### list() #### Specs ``` list() :: [pid()] ``` Returns a list of PIDs corresponding to all the processes currently existing on the local node. Note that if a process is exiting, it is considered to exist but not be alive. This means that for such a process, [`alive?/1`](#alive?/1) will return `false` but its PID will be part of the list of PIDs returned by this function. See [`:erlang.processes/0`](http://www.erlang.org/doc/man/erlang.html#processes-0) for more information. Inlined by the compiler. #### Examples ``` Process.list() #=> [#PID<0.0.0>, #PID<0.1.0>, #PID<0.2.0>, #PID<0.3.0>, ...] ``` ### monitor(item) #### Specs ``` monitor(pid() | {name, node()} | name) :: reference() when name: atom() ``` Starts monitoring the given `item` from the calling process. Once the monitored process dies, a message is delivered to the monitoring process in the shape of: ``` {:DOWN, ref, :process, object, reason} ``` where: * `ref` is a monitor reference returned by this function; * `object` is either a `pid` of the monitored process (if monitoring a PID) or `{name, node}` (if monitoring a remote or local name); * `reason` is the exit reason. If the process is already dead when calling [`Process.monitor/1`](process#monitor/1), a `:DOWN` message is delivered immediately. See [the need for monitoring](https://elixir-lang.org/getting-started/mix-otp/genserver.html#the-need-for-monitoring) for an example. See [`:erlang.monitor/2`](http://www.erlang.org/doc/man/erlang.html#monitor-2) for more information. Inlined by the compiler.
#### Examples ``` pid = spawn(fn -> 1 + 2 end) #=> #PID<0.118.0> Process.monitor(pid) #=> #Reference<0.906660723.3006791681.40191> Process.exit(pid, :kill) #=> true receive do msg -> msg end #=> {:DOWN, #Reference<0.906660723.3006791681.40191>, :process, #PID<0.118.0>, :noproc} ``` ### put(key, value) #### Specs ``` put(term(), term()) :: term() | nil ``` Stores the given `key`-`value` pair in the process dictionary. The return value of this function is the value that was previously stored under `key`, or `nil` in case no value was stored under it. #### Examples ``` # Assuming :locale was not set iex> Process.put(:locale, "en") nil iex> Process.put(:locale, "fr") "en" ``` ### read\_timer(timer\_ref) #### Specs ``` read_timer(reference()) :: non_neg_integer() | false ``` Reads a timer created by [`send_after/3`](#send_after/3). When the result is an integer, it represents the time in milliseconds left until the timer will expire. When the result is `false`, a timer corresponding to `timer_ref` could not be found. This can happen either because the timer expired, because it has already been canceled, or because `timer_ref` never corresponded to a timer. Even if the timer had expired and the message was sent, this function does not tell you if the timeout message has arrived at its destination yet. Inlined by the compiler. ### register(pid\_or\_port, name) #### Specs ``` register(pid() | port(), atom()) :: true ``` Registers the given `pid_or_port` under the given `name`. `name` must be an atom and can then be used instead of the PID/port identifier when sending messages with [`Kernel.send/2`](kernel#send/2). [`register/2`](#register/2) will fail with [`ArgumentError`](argumenterror) in any of the following cases: * the PID/port does not exist locally or is not alive * the name is already registered * the `pid_or_port` is already registered under a different `name` The following names are reserved and cannot be assigned to processes nor ports: * `nil` * `false` * `true` * `:undefined` #### Examples ``` Process.register(self(), :test) #=> true send(:test, :hello) #=> :hello send(:wrong_name, :hello) #=> ** (ArgumentError) argument error ``` ### registered() #### Specs ``` registered() :: [atom()] ``` Returns a list of names which have been registered using [`register/2`](#register/2). Inlined by the compiler. #### Examples ``` Process.register(self(), :test) Process.registered() #=> [:test, :elixir_config, :inet_db, ...] ``` ### send(dest, msg, options) #### Specs ``` send(dest, msg, [option]) :: :ok | :noconnect | :nosuspend when dest: dest(), msg: any(), option: :noconnect | :nosuspend ``` Sends a message to the given `dest`. `dest` may be a remote or local PID, a local port, a locally registered name, or a tuple in the form of `{registered_name, node}` for a registered name at another node. Inlined by the compiler. #### Options * `:noconnect` - when used, if sending the message would require an auto-connection to another node the message is not sent and `:noconnect` is returned. * `:nosuspend` - when used, if sending the message would cause the sender to be suspended the message is not sent and `:nosuspend` is returned. Otherwise the message is sent and `:ok` is returned. #### Examples ``` iex> Process.send({:name, :node_that_does_not_exist}, :hi, [:noconnect]) :noconnect ``` ### send\_after(dest, msg, time, opts \\ []) #### Specs ``` send_after(pid() | atom(), term(), non_neg_integer(), [option]) :: reference() when option: {:abs, boolean()} ``` Sends `msg` to `dest` after `time` milliseconds.
If `dest` is a PID, it must be the PID of a local process, dead or alive. If `dest` is an atom, it must be the name of a registered process which is looked up at the time of delivery. No error is produced if the name does not refer to a process. The message is not sent immediately. Therefore, `dest` can receive other messages in-between even when `time` is `0`. This function returns a timer reference, which can be read with [`read_timer/1`](#read_timer/1) or canceled with [`cancel_timer/1`](#cancel_timer/1). The timer will be automatically canceled if the given `dest` is a PID which is not alive or when the given PID exits. Note that timers will not be automatically canceled when `dest` is an atom (as the atom resolution is done on delivery). Inlined by the compiler. #### Options * `:abs` - (boolean) when `false`, `time` is treated as relative to the current monotonic time. When `true`, `time` is the absolute value of the Erlang monotonic time at which `msg` should be delivered to `dest`. To read more about Erlang monotonic time and other time-related concepts, look at the documentation for the [`System`](system) module. Defaults to `false`. #### Examples ``` timer_ref = Process.send_after(pid, :hi, 1000) ``` ### sleep(timeout) #### Specs ``` sleep(timeout()) :: :ok ``` Sleeps the current process for the given `timeout`. `timeout` is either the number of milliseconds to sleep as an integer or the atom `:infinity`. When `:infinity` is given, the current process will sleep forever, and not consume or reply to messages. **Use this function with extreme care**. For almost all situations where you would use [`sleep/1`](#sleep/1) in Elixir, there is likely a more correct, faster and precise way of achieving the same with message passing. For example, if you are waiting for a process to perform some action, it is better to communicate the progress of such action with messages. In other words, **do not**: ``` Task.start_link(fn -> do_something() ... end) # Wait until work is done Process.sleep(2000) ``` But **do**: ``` parent = self() Task.start_link(fn -> do_something() send(parent, :work_is_done) ... end) receive do :work_is_done -> :ok after # Optional timeout 30_000 -> :timeout end ``` For cases like the one above, [`Task.async/1`](task#async/1) and [`Task.await/2`](task#await/2) are preferred. Similarly, if you are waiting for a process to terminate, monitor that process instead of sleeping. **Do not**: ``` Task.start_link(fn -> ... end) # Wait until task terminates Process.sleep(2000) ``` Instead **do**: ``` {:ok, pid} = Task.start_link(fn -> ... end) ref = Process.monitor(pid) receive do {:DOWN, ^ref, _, _, _} -> :task_is_down after # Optional timeout 30_000 -> :timeout end ``` ### spawn(fun, opts) #### Specs ``` spawn((() -> any()), spawn_opts()) :: pid() | {pid(), reference()} ``` Spawns the given function according to the given options. The result depends on the given options. In particular, if `:monitor` is given as an option, it will return a tuple containing the PID and the monitoring reference, otherwise just the spawned process PID. More options are available; for the comprehensive list of available options check [`:erlang.spawn_opt/4`](http://www.erlang.org/doc/man/erlang.html#spawn_opt-4). Inlined by the compiler. 
#### Examples ``` Process.spawn(fn -> 1 + 2 end, [:monitor]) #=> {#PID<0.93.0>, #Reference<0.18808174.1939079169.202418>} Process.spawn(fn -> 1 + 2 end, [:link]) #=> #PID<0.95.0> ``` ### spawn(mod, fun, args, opts) #### Specs ``` spawn(module(), atom(), list(), spawn_opts()) :: pid() | {pid(), reference()} ``` Spawns the given function `fun` from module `mod`, passing the given `args` according to the given options. The result depends on the given options. In particular, if `:monitor` is given as an option, it will return a tuple containing the PID and the monitoring reference, otherwise just the spawned process PID. It also accepts extra options; for the list of available options, check [`:erlang.spawn_opt/4`](http://www.erlang.org/doc/man/erlang.html#spawn_opt-4). Inlined by the compiler. ### unlink(pid\_or\_port) #### Specs ``` unlink(pid() | port()) :: true ``` Removes the link between the calling process and the given item (process or port). If there is no such link, this function does nothing. If `pid_or_port` does not exist, this function does not produce any errors and simply does nothing. The return value of this function is always `true`. See [`:erlang.unlink/1`](http://www.erlang.org/doc/man/erlang.html#unlink-1) for more information. Inlined by the compiler. ### unregister(name) #### Specs ``` unregister(atom()) :: true ``` Removes the registered `name`, associated with a PID or a port identifier. Fails with [`ArgumentError`](argumenterror) if the name is not registered to any PID or port. Inlined by the compiler. #### Examples ``` Process.register(self(), :test) #=> true Process.unregister(:test) #=> true Process.unregister(:wrong_name) #=> ** (ArgumentError) argument error ``` ### whereis(name) #### Specs ``` whereis(atom()) :: pid() | port() | nil ``` Returns the PID or port identifier registered under `name` or `nil` if the name is not registered. See [`:erlang.whereis/1`](http://www.erlang.org/doc/man/erlang.html#whereis-1) for more information. #### Examples ``` Process.register(self(), :test) Process.whereis(:test) #=> #PID<0.84.0> Process.whereis(:wrong_name) #=> nil ```
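As a closing, hedged illustration (not part of the original docs), the interplay of [`flag/2`](#flag/2), links, and exit signals described in [`exit/2`](#exit/2) and [`link/1`](#link/1) can be observed directly:

```
# Trap exits: an exit signal from a linked process arrives as a
# message instead of terminating the current process.
Process.flag(:trap_exit, true)

pid = spawn_link(fn -> exit(:boom) end)

receive do
  {:EXIT, ^pid, reason} -> reason
end
#=> :boom
```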
elixir Exception behaviour Exception behaviour ==================== Functions to format throw/catch/exit and exceptions. Note that stacktraces in Elixir are only available inside catch and rescue by using the [`__STACKTRACE__/0`](kernel.specialforms#__STACKTRACE__/0) variable. Do not rely on the particular format returned by the `format*` functions in this module. They may be changed in future releases in order to better suit Elixir's tool chain. In other words, by using the functions in this module it is guaranteed you will format exceptions as in the current Elixir version being used. Summary ======== Types ------ [kind()](#t:kind/0) The kind handled by formatting functions [stacktrace()](#t:stacktrace/0) [stacktrace\_entry()](#t:stacktrace_entry/0) [t()](#t:t/0) The exception type Functions ---------- [blame(kind, error, stacktrace)](#blame/3) Attaches information to exceptions for extra debugging. [blame\_mfa(module, function, args)](#blame_mfa/3) Blames the invocation of the given module, function and arguments. [exception?(term)](#exception?/1) Returns `true` if the given `term` is an exception. [format(kind, payload, stacktrace \\ [])](#format/3) Normalizes and formats throw/errors/exits and stacktraces. [format\_banner(kind, exception, stacktrace \\ [])](#format_banner/3) Normalizes and formats any throw/error/exit. [format\_exit(reason)](#format_exit/1) Formats an exit. It returns a string. [format\_fa(fun, arity)](#format_fa/2) Receives an anonymous function and arity and formats it as shown in stacktraces. The arity may also be a list of arguments. [format\_file\_line(file, line, suffix \\ "")](#format_file_line/3) Formats the given `file` and `line` as shown in stacktraces. If any of the values are `nil`, they are omitted. [format\_mfa(module, fun, arity)](#format_mfa/3) Receives a module, fun and arity and formats it as shown in stacktraces. The arity may also be a list of arguments. [format\_stacktrace(trace \\ nil)](#format_stacktrace/1) Formats the stacktrace. [format\_stacktrace\_entry(entry)](#format_stacktrace_entry/1) Receives a stacktrace entry and formats it into a string. [message(exception)](#message/1) Gets the message for an `exception`. [normalize(kind, payload, stacktrace \\ [])](#normalize/3) Normalizes an exception, converting Erlang exceptions to Elixir exceptions. Callbacks ---------- [blame(t, stacktrace)](#c:blame/2) Called from [`Exception.blame/3`](exception#blame/3) to augment the exception struct. [exception(term)](#c:exception/1) [message(t)](#c:message/1) Types ====== ### kind() #### Specs ``` kind() :: :error | non_error_kind() ``` The kind handled by formatting functions ### stacktrace() #### Specs ``` stacktrace() :: [stacktrace_entry()] ``` ### stacktrace\_entry() #### Specs ``` stacktrace_entry() :: {module(), atom(), arity_or_args(), location()} | {(... -> any()), arity_or_args(), location()} ``` ### t() #### Specs ``` t() :: %module(){:__exception__ => true, optional(atom()) => any()} ``` The exception type Functions ========== ### blame(kind, error, stacktrace) #### Specs ``` blame(:error, any(), stacktrace()) :: {t(), stacktrace()} ``` ``` blame(non_error_kind(), payload, stacktrace()) :: {payload, stacktrace()} when payload: var ``` Attaches information to exceptions for extra debugging. This operation is potentially expensive, as it reads data from the file system, parses beam files, evaluates code and so on. If the exception module implements the optional [`blame/2`](#c:blame/2) callback, it will be invoked to perform the computation. 
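As a hedged sketch (not from the original docs), [`blame/3`](#blame/3) is typically called inside a `rescue` clause, where [`__STACKTRACE__/0`](kernel.specialforms#__STACKTRACE__/0) is available:

```
try do
  Enum.fetch!([], 5)
rescue
  exception ->
    {blamed, stacktrace} = Exception.blame(:error, exception, __STACKTRACE__)
    IO.puts(Exception.format(:error, blamed, stacktrace))
end
```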
### blame\_mfa(module, function, args) #### Specs ``` blame_mfa(module(), function(), args :: [term()]) :: {:ok, :def | :defp | :defmacro | :defmacrop, [{args :: [term()], guards :: [term()]}]} | :error ``` Blames the invocation of the given module, function and arguments. This function will retrieve the available clauses from bytecode and evaluate them against the given arguments. The clauses are returned as a list of `{args, guards}` pairs where each argument and each top-level condition in a guard separated by `and`/`or` is wrapped in a tuple with blame metadata. This function returns either `{:ok, definition, clauses}` or `:error`, where `definition` is `:def`, `:defp`, `:defmacro` or `:defmacrop`. ### exception?(term) Returns `true` if the given `term` is an exception. ### format(kind, payload, stacktrace \\ []) #### Specs ``` format(kind(), any(), stacktrace()) :: String.t() ``` Normalizes and formats throw/errors/exits and stacktraces. It relies on [`format_banner/3`](#format_banner/3) and [`format_stacktrace/1`](#format_stacktrace/1) to generate the final format. If `kind` is `{:EXIT, pid}`, it does not generate a stacktrace, as such exits are retrieved as messages without stacktraces. ### format\_banner(kind, exception, stacktrace \\ []) #### Specs ``` format_banner(kind(), any(), stacktrace()) :: String.t() ``` Normalizes and formats any throw/error/exit. The message is formatted and displayed in the same format as used by Elixir's CLI. The third argument is the stacktrace which is used to enrich a normalized error with more information. It is only used when the kind is an error. ### format\_exit(reason) #### Specs ``` format_exit(any()) :: String.t() ``` Formats an exit. It returns a string. Often there are errors/exceptions inside exits. Exits are often wrapped by the caller and provide stacktraces too. This function formats exits in a way to nicely show the exit reason, caller and stacktrace. ### format\_fa(fun, arity) Receives an anonymous function and arity and formats it as shown in stacktraces. The arity may also be a list of arguments. #### Examples ``` Exception.format_fa(fn -> nil end, 1) #=> "#Function<...>/1" ``` ### format\_file\_line(file, line, suffix \\ "") Formats the given `file` and `line` as shown in stacktraces. If any of the values are `nil`, they are omitted. #### Examples ``` iex> Exception.format_file_line("foo", 1) "foo:1:" iex> Exception.format_file_line("foo", nil) "foo:" iex> Exception.format_file_line(nil, nil) "" ``` ### format\_mfa(module, fun, arity) Receives a module, fun and arity and formats it as shown in stacktraces. The arity may also be a list of arguments. #### Examples ``` iex> Exception.format_mfa(Foo, :bar, 1) "Foo.bar/1" iex> Exception.format_mfa(Foo, :bar, []) "Foo.bar()" iex> Exception.format_mfa(nil, :bar, []) "nil.bar()" ``` Anonymous functions are reported as `-func/arity-anonfn-count-`, where `func` is the name of the enclosing function; they are formatted as "anonymous fn in func/arity". ### format\_stacktrace(trace \\ nil) Formats the stacktrace. A stacktrace must be given as an argument. If not, the stacktrace is retrieved from [`Process.info/2`](process#info/2). ### format\_stacktrace\_entry(entry) #### Specs ``` format_stacktrace_entry(stacktrace_entry()) :: String.t() ``` Receives a stacktrace entry and formats it into a string. ### message(exception) Gets the message for an `exception`.
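As a simple illustration (not from the original docs), for an exception struct this dispatches to its [`message/1`](#c:message/1) callback:

```
iex> Exception.message(%RuntimeError{message: "oops"})
"oops"
```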
### normalize(kind, payload, stacktrace \\ []) #### Specs ``` normalize(:error, any(), stacktrace()) :: t() ``` ``` normalize(non_error_kind(), payload, stacktrace()) :: payload when payload: var ``` Normalizes an exception, converting Erlang exceptions to Elixir exceptions. It takes the `kind` spilled by `catch` as an argument and normalizes only `:error`, returning the untouched payload for others. The third argument is the stacktrace which is used to enrich a normalized error with more information. It is only used when the kind is an error. Callbacks ========== ### blame(t, stacktrace) #### Specs ``` blame(t(), stacktrace()) :: {t(), stacktrace()} ``` Called from [`Exception.blame/3`](exception#blame/3) to augment the exception struct. Can be used to collect additional information about the exception or do some additional expensive computation. ### exception(term) #### Specs ``` exception(term()) :: t() ``` ### message(t) #### Specs ``` message(t()) :: String.t() ``` elixir Typespecs Typespecs ========= Elixir comes with a notation for declaring types and specifications. Elixir is a dynamically typed language, and as such, type specifications are never used by the compiler to optimize or modify code. Still, using type specifications is useful because: * they provide documentation (for example, tools such as [ExDoc](https://github.com/elixir-lang/ex_doc) show type specifications in the documentation) * they're used by tools such as [Dialyzer](http://www.erlang.org/doc/man/dialyzer.html), which can analyze code with typespecs to find type inconsistencies and possible bugs Type specifications (sometimes referred to as *typespecs*) are defined in different contexts using the following attributes: * `@type` * `@opaque` * `@typep` * `@spec` * `@callback` * `@macrocallback` See the "User-defined types" and "Defining a specification" sub-sections below for more information on defining types and typespecs. A simple example ----------------- ``` defmodule StringHelpers do @type word() :: String.t() @spec long_word?(word()) :: boolean() def long_word?(word) when is_binary(word) do String.length(word) > 8 end end ``` In the example above, this happens: * we declare a new type (`word()`) that is equivalent to the string type (`String.t()`); * we specify that the `long_word?/1` function takes an argument of type `word()` and returns a boolean (`boolean()`), that is, either `true` or `false`. Types and their syntax ----------------------- The syntax Elixir provides for type specifications is similar to [the one in Erlang](http://www.erlang.org/doc/reference_manual/typespec.html). Most of the built-in types provided in Erlang (for example, `pid()`) are expressed in the same way: `pid()` (or simply `pid`). Parameterized types (such as `list(integer)`) are supported as well and so are remote types (such as `Enum.t`). Integers and atom literals are allowed as types (e.g., `1`, `:atom`, or `false`). All other types are built out of unions of predefined types. Some shorthands are allowed, such as `[...]`, `<<>>`, and `{...}`. The notation to represent the union of types is the pipe `|`. For example, the typespec `type :: atom() | pid() | tuple()` creates a type `type` that can be either an `atom`, a `pid`, or a `tuple`.
This is usually called a [sum type](https://en.wikipedia.org/wiki/Tagged_union) in other languages. ### Basic types ``` type :: any() # the top type, the set of all terms | none() # the bottom type, contains no terms | atom() | map() # any map | pid() # process identifier | port() # port identifier | reference() | struct() # any struct | tuple() # tuple of any size ## Numbers | float() | integer() | neg_integer() # ..., -3, -2, -1 | non_neg_integer() # 0, 1, 2, 3, ... | pos_integer() # 1, 2, 3, ... ## Lists | list(type) # proper list ([]-terminated) | nonempty_list(type) # non-empty proper list | maybe_improper_list(type1, type2) # proper or improper list | nonempty_improper_list(type1, type2) # improper list | nonempty_maybe_improper_list(type1, type2) # non-empty proper or improper list | Literals # Described in section "Literals" | BuiltIn # Described in section "Built-in types" | Remotes # Described in section "Remote types" | UserDefined # Described in section "User-defined types" ``` ### Literals The following literals are also supported in typespecs: ``` type :: ## Atoms :atom # atoms: :foo, :bar, ... | true | false | nil # special atom literals ## Bitstrings | <<>> # empty bitstring | <<_::size>> # size is 0 or a positive integer | <<_::_*unit>> # unit is an integer from 1 to 256 | <<_::size, _::_*unit>> ## (Anonymous) Functions | (-> type) # 0-arity, returns type | (type1, type2 -> type) # 2-arity, returns type | (... -> type) # any arity, returns type ## Integers | 1 # integer | 1..10 # integer from 1 to 10 ## Lists | [type] # list with any number of type elements | [] # empty list | [...] # shorthand for nonempty_list(any()) | [type, ...] # shorthand for nonempty_list(type) | [key: value_type] # keyword list with key :key of value_type ## Maps | %{} # empty map | %{key: value_type} # map with required key :key of value_type | %{required(key_type) => value_type} # map with required pairs of key_type and value_type | %{optional(key_type) => value_type} # map with optional pairs of key_type and value_type | %SomeStruct{} # struct with all fields of any type | %SomeStruct{key: value_type} # struct with required key :key of value_type ## Tuples | {} # empty tuple | {:ok, type} # two-element tuple with an atom and any type ``` ### Built-in types The following types are also provided by Elixir as shortcuts on top of the basic and literal types described above. | Built-in type | Defined as | | --- | --- | | `term()` | `any()` | | `arity()` | `0..255` | | `as_boolean(t)` | `t` | | `binary()` | `<<_::_*8>>` | | `bitstring()` | `<<_::_*1>>` | | `boolean()` | `true` | `false` | | `byte()` | `0..255` | | `char()` | `0..0x10FFFF` | | `charlist()` | `[char()]` | | `nonempty_charlist()` | `[char(), ...]` | | `fun()` | `(... -> any)` |
| `function()` | `fun()` | | `identifier()` | `pid()` | `port()` | `reference()` | | `iodata()` | `iolist()` | `binary()` | | `iolist()` | `maybe_improper_list(byte() | binary() | iolist(), binary() | [])` | | `keyword()` | `[{atom(), any()}]` | | `keyword(t)` | `[{atom(), t}]` | | `list()` | `[any()]` | | `nonempty_list()` | `nonempty_list(any())` | | `maybe_improper_list()` | `maybe_improper_list(any(), any())` | | `nonempty_maybe_improper_list()` | `nonempty_maybe_improper_list(any(), any())` | | `mfa()` | `{module(), atom(), arity()}` | | `module()` | `atom()` | | `no_return()` | `none()` | | `node()` | `atom()` | | `number()` | `integer()` | `float()` | | `struct()` | `%{:__struct__ => atom(), optional(atom()) => any()}` | | `timeout()` | `:infinity` | `non_neg_integer()` | `as_boolean(t)` exists to signal users that the given value will be treated as a boolean, where `nil` and `false` will be evaluated as `false` and everything else is `true`. For example, [`Enum.filter/2`](enum#filter/2) has the following specification: `filter(t, (element -> as_boolean(term))) :: list`. ### Remote types Any module is also able to define its own types and the modules in Elixir are no exception. For example, the [`Range`](range) module defines a [`t/0`](t:Range.t/0) type that represents a range: this type can be referred to as [`Range.t/0`](range#t:t/0). In a similar fashion, a string is [`String.t/0`](string#t:t/0), any enumerable can be [`Enum.t/0`](enum#t:t/0), and so on. ### Maps The key types in maps are allowed to overlap, and if they do, the leftmost key takes precedence. A map value does not belong to this type if it contains a key that is not in the allowed map keys. If you want to denote that keys that were not previously defined in the map are allowed, it is common to end a map type with `optional(any) => any`. Notice that the syntactic representation of `map()` is `%{optional(any) => any}`, not `%{}`. The notation `%{}` specifies the singleton type for the empty map. ### User-defined types The `@type`, `@typep`, and `@opaque` module attributes can be used to define new types: ``` @type type_name :: type @typep type_name :: type @opaque type_name :: type ``` A type defined with `@typep` is private. An opaque type, defined with `@opaque`, is a type where the internal structure of the type will not be visible, but the type is still public. Types can be parameterized by defining variables as parameters; these variables can then be used to define the type. ``` @type dict(key, value) :: [{key, value}] ``` Defining a specification ------------------------- A specification for a function can be defined as follows: ``` @spec function_name(type1, type2) :: return_type ``` Guards can be used to restrict type variables given as arguments to the function. ``` @spec function(arg) :: [arg] when arg: atom ``` If you want to specify more than one variable, you separate them by a comma. ``` @spec function(arg1, arg2) :: {arg1, arg2} when arg1: atom, arg2: integer ``` Type variables with no restriction can also be defined using `var`. ``` @spec function(arg) :: [arg] when arg: var ``` You can also name your arguments in a typespec using `arg_name :: arg_type` syntax.
This is particularly useful in documentation as a way to differentiate multiple arguments of the same type (or multiple elements of the same type in a type definition): ``` @spec days_since_epoch(year :: integer, month :: integer, day :: integer) :: integer @type color :: {red :: integer, green :: integer, blue :: integer} ``` Specifications can be overloaded just like ordinary functions. ``` @spec function(integer) :: atom @spec function(atom) :: integer ``` Behaviours ----------- Behaviours in Elixir (and Erlang) are a way to separate and abstract the generic part of a component (which becomes the *behaviour module*) from the specific part (which becomes the *callback module*). A behaviour module defines a set of functions and macros (referred to as *callbacks*) that callback modules implementing that behaviour must export. This "interface" identifies the specific part of the component. For example, the [`GenServer`](genserver) behaviour and functions abstract away all the message-passing (sending and receiving) and error reporting that a "server" process will likely want to implement from the specific parts such as the actions that this server process has to perform. To define a behaviour module, it's enough to define one or more callbacks in that module. To define callbacks, the `@callback` and `@macrocallback` module attributes can be used (for function callbacks and macro callbacks respectively). ``` defmodule MyBehaviour do @callback my_fun(arg :: any) :: any @macrocallback my_macro(arg :: any) :: Macro.t end ``` As seen in the example above, defining a callback is a matter of defining a specification for that callback, made of: * the callback name (`my_fun` or `my_macro` in the example) * the arguments that the callback must accept (`arg :: any` in the example) * the *expected* type of the callback return value ### Optional callbacks Optional callbacks are callbacks that callback modules may implement if they want to, but are not required to. Usually, behaviour modules know if they should call those callbacks based on configuration, or they check if the callbacks are defined with [`function_exported?/3`](kernel#function_exported?/3) or [`macro_exported?/3`](kernel#macro_exported?/3). Optional callbacks can be defined through the `@optional_callbacks` module attribute, which has to be a keyword list with function or macro name as key and arity as value. For example: ``` defmodule MyBehaviour do @callback vital_fun() :: any @callback non_vital_fun() :: any @macrocallback non_vital_macro(arg :: any) :: Macro.t @optional_callbacks non_vital_fun: 0, non_vital_macro: 1 end ``` One example of optional callback in Elixir's standard library is [`GenServer.format_status/2`](genserver#c:format_status/2). ### Implementing behaviours To specify that a module implements a given behaviour, the `@behaviour` attribute must be used: ``` defmodule MyBehaviour do @callback my_fun(arg :: any) :: any end defmodule MyCallbackModule do @behaviour MyBehaviour def my_fun(arg), do: arg end ``` If a callback module that implements a given behaviour doesn't export all the functions and macros defined by that behaviour, the user will be notified through warnings during the compilation process (no errors will happen). Elixir's standard library contains a few frequently used behaviours such as [`GenServer`](genserver), [`Supervisor`](supervisor), and [`Application`](application). The `string()` type -------------------- Elixir discourages the use of the `string()` type. 
The `string()` type refers to Erlang strings, which are known as "charlists" in Elixir. It does not refer to Elixir strings, which are UTF-8 encoded binaries. To avoid confusion, if you attempt to use the type `string()`, Elixir will emit a warning. You should use `charlist()`, `nonempty_charlist()`, `binary()` or `String.t()` accordingly, or any of the several literal representations for these types. Note that `String.t()` and `binary()` are equivalent to analysis tools. However, for those reading the documentation, `String.t()` implies it is a UTF-8 encoded binary.
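As a hedged sketch of the recommendation above (the module and function names are hypothetical), specs over charlists and UTF-8 binaries would be written as:

```
defmodule Shout do
  # Erlang-style string (a charlist) in, charlist out.
  @spec upcase_charlist(charlist()) :: charlist()
  def upcase_charlist(chars) when is_list(chars), do: :string.uppercase(chars)

  # Elixir string (a UTF-8 binary) in, binary out.
  @spec upcase_string(String.t()) :: String.t()
  def upcase_string(string) when is_binary(string), do: String.upcase(string)
end
```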
elixir Base Base ===== This module provides data encoding and decoding functions according to [RFC 4648](https://tools.ietf.org/html/rfc4648). This document defines the commonly used base 16, base 32, and base 64 encoding schemes. Base 16 alphabet ----------------- ``` | Value | Encoding | Value | Encoding | Value | Encoding | Value | Encoding | |------:|---------:|------:|---------:|------:|---------:|------:|---------:| | 0| 0| 4| 4| 8| 8| 12| C| | 1| 1| 5| 5| 9| 9| 13| D| | 2| 2| 6| 6| 10| A| 14| E| | 3| 3| 7| 7| 11| B| 15| F| ``` Base 32 alphabet ----------------- ``` | Value | Encoding | Value | Encoding | Value | Encoding | Value | Encoding | |------:|---------:|------:|---------:|------:|---------:|------:|---------:| | 0| A| 9| J| 18| S| 27| 3| | 1| B| 10| K| 19| T| 28| 4| | 2| C| 11| L| 20| U| 29| 5| | 3| D| 12| M| 21| V| 30| 6| | 4| E| 13| N| 22| W| 31| 7| | 5| F| 14| O| 23| X| | | | 6| G| 15| P| 24| Y| (pad)| =| | 7| H| 16| Q| 25| Z| | | | 8| I| 17| R| 26| 2| | | ``` Base 32 (extended hex) alphabet -------------------------------- ``` | Value | Encoding | Value | Encoding | Value | Encoding | Value | Encoding | |------:|---------:|------:|---------:|------:|---------:|------:|---------:| | 0| 0| 9| 9| 18| I| 27| R| | 1| 1| 10| A| 19| J| 28| S| | 2| 2| 11| B| 20| K| 29| T| | 3| 3| 12| C| 21| L| 30| U| | 4| 4| 13| D| 22| M| 31| V| | 5| 5| 14| E| 23| N| | | | 6| 6| 15| F| 24| O| (pad)| =| | 7| 7| 16| G| 25| P| | | | 8| 8| 17| H| 26| Q| | | ``` Base 64 alphabet ----------------- ``` | Value | Encoding | Value | Encoding | Value | Encoding | Value | Encoding | |------:|---------:|------:|---------:|------:|---------:|------:|---------:| | 0| A| 17| R| 34| i| 51| z| | 1| B| 18| S| 35| j| 52| 0| | 2| C| 19| T| 36| k| 53| 1| | 3| D| 20| U| 37| l| 54| 2| | 4| E| 21| V| 38| m| 55| 3| | 5| F| 22| W| 39| n| 56| 4| | 6| G| 23| X| 40| o| 57| 5| | 7| H| 24| Y| 41| p| 58| 6| | 8| I| 25| Z| 42| q| 59| 7| | 9| J| 26| a| 43| r| 60| 8| | 10| K| 27| b| 44| s| 61| 9| | 11| L| 28| c| 45| t| 62| +| | 12| M| 29| d| 46| u| 63| /| | 13| N| 30| e| 47| v| | | | 14| O| 31| f| 48| w| (pad)| =| | 15| P| 32| g| 49| x| | | | 16| Q| 33| h| 50| y| | | ``` Base 64 (URL and filename safe) alphabet ----------------------------------------- ``` | Value | Encoding | Value | Encoding | Value | Encoding | Value | Encoding | |------:|---------:|------:|---------:|------:|---------:|------:|---------:| | 0| A| 17| R| 34| i| 51| z| | 1| B| 18| S| 35| j| 52| 0| | 2| C| 19| T| 36| k| 53| 1| | 3| D| 20| U| 37| l| 54| 2| | 4| E| 21| V| 38| m| 55| 3| | 5| F| 22| W| 39| n| 56| 4| | 6| G| 23| X| 40| o| 57| 5| | 7| H| 24| Y| 41| p| 58| 6| | 8| I| 25| Z| 42| q| 59| 7| | 9| J| 26| a| 43| r| 60| 8| | 10| K| 27| b| 44| s| 61| 9| | 11| L| 28| c| 45| t| 62| -| | 12| M| 29| d| 46| u| 63| _| | 13| N| 30| e| 47| v| | | | 14| O| 31| f| 48| w| (pad)| =| | 15| P| 32| g| 49| x| | | | 16| Q| 33| h| 50| y| | | ``` Summary ======== Functions ---------- [decode16(string, opts \\ [])](#decode16/2) Decodes a base 16 encoded string into a binary string. [decode16!(string, opts \\ [])](#decode16!/2) Decodes a base 16 encoded string into a binary string. [decode32(string, opts \\ [])](#decode32/2) Decodes a base 32 encoded string into a binary string. [decode32!(string, opts \\ [])](#decode32!/2) Decodes a base 32 encoded string into a binary string. [decode64(string, opts \\ [])](#decode64/2) Decodes a base 64 encoded string into a binary string. [decode64!(string, opts \\ [])](#decode64!/2) Decodes a base 64 encoded string into a binary string. 
[encode16(data, opts \\ [])](#encode16/2) Encodes a binary string into a base 16 encoded string. [encode32(data, opts \\ [])](#encode32/2) Encodes a binary string into a base 32 encoded string. [encode64(data, opts \\ [])](#encode64/2) Encodes a binary string into a base 64 encoded string. [hex\_decode32(string, opts \\ [])](#hex_decode32/2) Decodes a base 32 encoded string with extended hexadecimal alphabet into a binary string. [hex\_decode32!(string, opts \\ [])](#hex_decode32!/2) Decodes a base 32 encoded string with extended hexadecimal alphabet into a binary string. [hex\_encode32(data, opts \\ [])](#hex_encode32/2) Encodes a binary string into a base 32 encoded string with an extended hexadecimal alphabet. [url\_decode64(string, opts \\ [])](#url_decode64/2) Decodes a base 64 encoded string with URL and filename safe alphabet into a binary string. [url\_decode64!(string, opts \\ [])](#url_decode64!/2) Decodes a base 64 encoded string with URL and filename safe alphabet into a binary string. [url\_encode64(data, opts \\ [])](#url_encode64/2) Encodes a binary string into a base 64 encoded string with URL and filename safe alphabet. Functions ========== ### decode16(string, opts \\ []) #### Specs ``` decode16(binary(), keyword()) :: {:ok, binary()} | :error ``` Decodes a base 16 encoded string into a binary string. #### Options The accepted options are: * `:case` - specifies the character case to accept when decoding The values for `:case` can be: * `:upper` - only allows upper case characters (default) * `:lower` - only allows lower case characters * `:mixed` - allows mixed case characters #### Examples ``` iex> Base.decode16("666F6F626172") {:ok, "foobar"} iex> Base.decode16("666f6f626172", case: :lower) {:ok, "foobar"} iex> Base.decode16("666f6F626172", case: :mixed) {:ok, "foobar"} ``` ### decode16!(string, opts \\ []) #### Specs ``` decode16!(binary(), keyword()) :: binary() ``` Decodes a base 16 encoded string into a binary string. #### Options The accepted options are: * `:case` - specifies the character case to accept when decoding The values for `:case` can be: * `:upper` - only allows upper case characters (default) * `:lower` - only allows lower case characters * `:mixed` - allows mixed case characters An [`ArgumentError`](argumenterror) exception is raised if the padding is incorrect or a non-alphabet character is present in the string. #### Examples ``` iex> Base.decode16!("666F6F626172") "foobar" iex> Base.decode16!("666f6f626172", case: :lower) "foobar" iex> Base.decode16!("666f6F626172", case: :mixed) "foobar" ``` ### decode32(string, opts \\ []) #### Specs ``` decode32(binary(), keyword()) :: {:ok, binary()} | :error ``` Decodes a base 32 encoded string into a binary string. 
#### Options The accepted options are: * `:case` - specifies the character case to accept when decoding * `:padding` - specifies whether to require padding The values for `:case` can be: * `:upper` - only allows upper case characters (default) * `:lower` - only allows lower case characters * `:mixed` - allows mixed case characters The values for `:padding` can be: * `true` - requires the input string to be padded to the nearest multiple of 8 (default) * `false` - ignores padding from the input string #### Examples ``` iex> Base.decode32("MZXW6YTBOI======") {:ok, "foobar"} iex> Base.decode32("mzxw6ytboi======", case: :lower) {:ok, "foobar"} iex> Base.decode32("mzXW6ytBOi======", case: :mixed) {:ok, "foobar"} iex> Base.decode32("MZXW6YTBOI", padding: false) {:ok, "foobar"} ``` ### decode32!(string, opts \\ []) #### Specs ``` decode32!(binary(), keyword()) :: binary() ``` Decodes a base 32 encoded string into a binary string. An [`ArgumentError`](argumenterror) exception is raised if the padding is incorrect or a non-alphabet character is present in the string. #### Options The accepted options are: * `:case` - specifies the character case to accept when decoding * `:padding` - specifies whether to require padding The values for `:case` can be: * `:upper` - only allows upper case characters (default) * `:lower` - only allows lower case characters * `:mixed` - allows mixed case characters The values for `:padding` can be: * `true` - requires the input string to be padded to the nearest multiple of 8 (default) * `false` - ignores padding from the input string #### Examples ``` iex> Base.decode32!("MZXW6YTBOI======") "foobar" iex> Base.decode32!("mzxw6ytboi======", case: :lower) "foobar" iex> Base.decode32!("mzXW6ytBOi======", case: :mixed) "foobar" iex> Base.decode32!("MZXW6YTBOI", padding: false) "foobar" ``` ### decode64(string, opts \\ []) #### Specs ``` decode64(binary(), keyword()) :: {:ok, binary()} | :error ``` Decodes a base 64 encoded string into a binary string. Accepts `ignore: :whitespace` option which will ignore all the whitespace characters in the input string. Accepts `padding: false` option which will ignore padding from the input string. #### Examples ``` iex> Base.decode64("Zm9vYmFy") {:ok, "foobar"} iex> Base.decode64("Zm9vYmFy\n", ignore: :whitespace) {:ok, "foobar"} iex> Base.decode64("Zm9vYg==") {:ok, "foob"} iex> Base.decode64("Zm9vYg", padding: false) {:ok, "foob"} ``` ### decode64!(string, opts \\ []) #### Specs ``` decode64!(binary(), keyword()) :: binary() ``` Decodes a base 64 encoded string into a binary string. Accepts `ignore: :whitespace` option which will ignore all the whitespace characters in the input string. Accepts `padding: false` option which will ignore padding from the input string. An [`ArgumentError`](argumenterror) exception is raised if the padding is incorrect or a non-alphabet character is present in the string. #### Examples ``` iex> Base.decode64!("Zm9vYmFy") "foobar" iex> Base.decode64!("Zm9vYmFy\n", ignore: :whitespace) "foobar" iex> Base.decode64!("Zm9vYg==") "foob" iex> Base.decode64!("Zm9vYg", padding: false) "foob" ``` ### encode16(data, opts \\ []) #### Specs ``` encode16(binary(), keyword()) :: binary() ``` Encodes a binary string into a base 16 encoded string. 
#### Options The accepted options are: * `:case` - specifies the character case to use when encoding The values for `:case` can be: * `:upper` - uses upper case characters (default) * `:lower` - uses lower case characters #### Examples ``` iex> Base.encode16("foobar") "666F6F626172" iex> Base.encode16("foobar", case: :lower) "666f6f626172" ``` ### encode32(data, opts \\ []) #### Specs ``` encode32(binary(), keyword()) :: binary() ``` Encodes a binary string into a base 32 encoded string. #### Options The accepted options are: * `:case` - specifies the character case to use when encoding * `:padding` - specifies whether to apply padding The values for `:case` can be: * `:upper` - uses upper case characters (default) * `:lower` - uses lower case characters The values for `:padding` can be: * `true` - pad the output string to the nearest multiple of 8 (default) * `false` - omit padding from the output string #### Examples ``` iex> Base.encode32("foobar") "MZXW6YTBOI======" iex> Base.encode32("foobar", case: :lower) "mzxw6ytboi======" iex> Base.encode32("foobar", padding: false) "MZXW6YTBOI" ``` ### encode64(data, opts \\ []) #### Specs ``` encode64(binary(), keyword()) :: binary() ``` Encodes a binary string into a base 64 encoded string. Accepts `padding: false` option which will omit padding from the output string. #### Examples ``` iex> Base.encode64("foobar") "Zm9vYmFy" iex> Base.encode64("foob") "Zm9vYg==" iex> Base.encode64("foob", padding: false) "Zm9vYg" ``` ### hex\_decode32(string, opts \\ []) #### Specs ``` hex_decode32(binary(), keyword()) :: {:ok, binary()} | :error ``` Decodes a base 32 encoded string with extended hexadecimal alphabet into a binary string. #### Options The accepted options are: * `:case` - specifies the character case to accept when decoding * `:padding` - specifies whether to require padding The values for `:case` can be: * `:upper` - only allows upper case characters (default) * `:lower` - only allows lower case characters * `:mixed` - allows mixed case characters The values for `:padding` can be: * `true` - requires the input string to be padded to the nearest multiple of 8 (default) * `false` - ignores padding from the input string #### Examples ``` iex> Base.hex_decode32("CPNMUOJ1E8======") {:ok, "foobar"} iex> Base.hex_decode32("cpnmuoj1e8======", case: :lower) {:ok, "foobar"} iex> Base.hex_decode32("cpnMuOJ1E8======", case: :mixed) {:ok, "foobar"} iex> Base.hex_decode32("CPNMUOJ1E8", padding: false) {:ok, "foobar"} ``` ### hex\_decode32!(string, opts \\ []) #### Specs ``` hex_decode32!(binary(), keyword()) :: binary() ``` Decodes a base 32 encoded string with extended hexadecimal alphabet into a binary string. An [`ArgumentError`](argumenterror) exception is raised if the padding is incorrect or a non-alphabet character is present in the string. 
#### Options The accepted options are: * `:case` - specifies the character case to accept when decoding * `:padding` - specifies whether to require padding The values for `:case` can be: * `:upper` - only allows upper case characters (default) * `:lower` - only allows lower case characters * `:mixed` - allows mixed case characters The values for `:padding` can be: * `true` - requires the input string to be padded to the nearest multiple of 8 (default) * `false` - ignores padding from the input string #### Examples ``` iex> Base.hex_decode32!("CPNMUOJ1E8======") "foobar" iex> Base.hex_decode32!("cpnmuoj1e8======", case: :lower) "foobar" iex> Base.hex_decode32!("cpnMuOJ1E8======", case: :mixed) "foobar" iex> Base.hex_decode32!("CPNMUOJ1E8", padding: false) "foobar" ``` ### hex\_encode32(data, opts \\ []) #### Specs ``` hex_encode32(binary(), keyword()) :: binary() ``` Encodes a binary string into a base 32 encoded string with an extended hexadecimal alphabet. #### Options The accepted options are: * `:case` - specifies the character case to use when encoding * `:padding` - specifies whether to apply padding The values for `:case` can be: * `:upper` - uses upper case characters (default) * `:lower` - uses lower case characters The values for `:padding` can be: * `true` - pad the output string to the nearest multiple of 8 (default) * `false` - omit padding from the output string #### Examples ``` iex> Base.hex_encode32("foobar") "CPNMUOJ1E8======" iex> Base.hex_encode32("foobar", case: :lower) "cpnmuoj1e8======" iex> Base.hex_encode32("foobar", padding: false) "CPNMUOJ1E8" ``` ### url\_decode64(string, opts \\ []) #### Specs ``` url_decode64(binary(), keyword()) :: {:ok, binary()} | :error ``` Decodes a base 64 encoded string with URL and filename safe alphabet into a binary string. Accepts `ignore: :whitespace` option which will ignore all the whitespace characters in the input string. Accepts `padding: false` option which will ignore padding from the input string. #### Examples ``` iex> Base.url_decode64("_3_-_A==") {:ok, <<255, 127, 254, 252>>} iex> Base.url_decode64("_3_-_A==\n", ignore: :whitespace) {:ok, <<255, 127, 254, 252>>} iex> Base.url_decode64("_3_-_A", padding: false) {:ok, <<255, 127, 254, 252>>} ``` ### url\_decode64!(string, opts \\ []) #### Specs ``` url_decode64!(binary(), keyword()) :: binary() ``` Decodes a base 64 encoded string with URL and filename safe alphabet into a binary string. Accepts `ignore: :whitespace` option which will ignore all the whitespace characters in the input string. Accepts `padding: false` option which will ignore padding from the input string. An [`ArgumentError`](argumenterror) exception is raised if the padding is incorrect or a non-alphabet character is present in the string. #### Examples ``` iex> Base.url_decode64!("_3_-_A==") <<255, 127, 254, 252>> iex> Base.url_decode64!("_3_-_A==\n", ignore: :whitespace) <<255, 127, 254, 252>> iex> Base.url_decode64!("_3_-_A", padding: false) <<255, 127, 254, 252>> ``` ### url\_encode64(data, opts \\ []) #### Specs ``` url_encode64(binary(), keyword()) :: binary() ``` Encodes a binary string into a base 64 encoded string with URL and filename safe alphabet. Accepts `padding: false` option which will omit padding from the output string. #### Examples ``` iex> Base.url_encode64(<<255, 127, 254, 252>>) "_3_-_A==" iex> Base.url_encode64(<<255, 127, 254, 252>>, padding: false) "_3_-_A" ``` elixir Tuple Tuple ====== Functions for working with tuples. 
Please note the following functions for tuples are found in [`Kernel`](kernel): * [`elem/2`](kernel#elem/2) - accesses a tuple by index * [`put_elem/3`](kernel#put_elem/3) - inserts a value into a tuple by index * [`tuple_size/1`](kernel#tuple_size/1) - gets the number of elements in a tuple Tuples are intended as fixed-size containers for multiple elements. To manipulate a collection of elements, use a list instead. [`Enum`](enum) functions do not work on tuples. Tuples are denoted with curly braces: ``` iex> {} {} iex> {1, :two, "three"} {1, :two, "three"} ``` A tuple may contain elements of different types, which are stored contiguously in memory. Accessing any element takes constant time, but modifying a tuple, which produces a shallow copy, takes linear time. Tuples are good for reading data while lists are better for traversals. Tuples are typically used either when a function has multiple return values or for error handling. [`File.read/1`](file#read/1) returns `{:ok, contents}` if reading the given file is successful, or else `{:error, reason}` such as when the file does not exist. The functions in this module that add and remove elements from tuples are rarely used in practice, as they typically imply tuples are being used as collections. To append to a tuple, it is preferable to extract the elements from the old tuple with pattern matching, and then create a new tuple: ``` tuple = {:ok, :example} # Avoid result = Tuple.insert_at(tuple, 2, %{}) # Prefer {:ok, atom} = tuple result = {:ok, atom, %{}} ``` Summary ======== Functions ---------- [append(tuple, value)](#append/2) Inserts an element at the end of a tuple. [delete\_at(tuple, index)](#delete_at/2) Removes an element from a tuple. [duplicate(data, size)](#duplicate/2) Creates a new tuple. [insert\_at(tuple, index, value)](#insert_at/3) Inserts an element into a tuple. [to\_list(tuple)](#to_list/1) Converts a tuple to a list. Functions ========== ### append(tuple, value) #### Specs ``` append(tuple(), term()) :: tuple() ``` Inserts an element at the end of a tuple. Returns a new tuple with the element appended at the end, and contains the elements in `tuple` followed by `value` as the last element. Inlined by the compiler. #### Examples ``` iex> tuple = {:foo, :bar} iex> Tuple.append(tuple, :baz) {:foo, :bar, :baz} ``` ### delete\_at(tuple, index) #### Specs ``` delete_at(tuple(), non_neg_integer()) :: tuple() ``` Removes an element from a tuple. Deletes the element at the given `index` from `tuple`. Raises an [`ArgumentError`](argumenterror) if `index` is negative or greater than or equal to the length of `tuple`. Index is zero-based. Inlined by the compiler. #### Examples ``` iex> tuple = {:foo, :bar, :baz} iex> Tuple.delete_at(tuple, 0) {:bar, :baz} ``` ### duplicate(data, size) #### Specs ``` duplicate(term(), non_neg_integer()) :: tuple() ``` Creates a new tuple. Creates a tuple of `size` containing the given `data` at every position. Inlined by the compiler. #### Examples ``` iex> Tuple.duplicate(:hello, 3) {:hello, :hello, :hello} ``` ### insert\_at(tuple, index, value) #### Specs ``` insert_at(tuple(), non_neg_integer(), term()) :: tuple() ``` Inserts an element into a tuple. Inserts `value` into `tuple` at the given `index`. Raises an [`ArgumentError`](argumenterror) if `index` is negative or greater than the length of `tuple`. Index is zero-based. Inlined by the compiler. 
#### Examples ``` iex> tuple = {:bar, :baz} iex> Tuple.insert_at(tuple, 0, :foo) {:foo, :bar, :baz} iex> Tuple.insert_at(tuple, 2, :bong) {:bar, :baz, :bong} ``` ### to\_list(tuple) #### Specs ``` to_list(tuple()) :: list() ``` Converts a tuple to a list. Returns a new list with all the tuple elements. Inlined by the compiler. #### Examples ``` iex> tuple = {:foo, :bar, :baz} iex> Tuple.to_list(tuple) [:foo, :bar, :baz] ```
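Since [`Enum`](enum) functions do not work on tuples, a common pattern (shown here as an illustration, not part of the original docs) is to convert to a list, transform, and convert back with `List.to_tuple/1`:

```
iex> {1, 2, 3} |> Tuple.to_list() |> Enum.map(&(&1 * 10)) |> List.to_tuple()
{10, 20, 30}
```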
elixir File.Stat File.Stat ========== A struct that holds file information. In Erlang, this struct is represented by a `:file_info` record. Therefore this module also provides functions for converting between the Erlang record and the Elixir struct. Its fields are: * `size` - size of file in bytes. * `type` - `:device | :directory | :regular | :other | :symlink`; the type of the file. * `access` - `:read | :write | :read_write | :none`; the current system access to the file. * `atime` - the last time the file was read. * `mtime` - the last time the file was written. * `ctime` - the interpretation of this time field depends on the operating system. On Unix, it is the last time the file or the inode was changed. In Windows, it is the time of creation. * `mode` - the file permissions. * `links` - the number of links to this file. This is always 1 for file systems which have no concept of links. * `major_device` - identifies the file system where the file is located. In Windows, the number indicates a drive as follows: 0 means A:, 1 means B:, and so on. * `minor_device` - only valid for character devices on Unix. In all other cases, this field is zero. * `inode` - gives the inode number. On non-Unix file systems, this field will be zero. * `uid` - indicates the owner of the file. Will be zero for non-Unix file systems. * `gid` - indicates the group that owns the file. Will be zero for non-Unix file systems. The time type returned in `atime`, `mtime`, and `ctime` depends on the time type set in the options: `{:time, type}`, where `type` can be `:local`, `:universal`, or `:posix`. The default is `:universal`. Summary ======== Types ------ [t()](#t:t/0) Functions ---------- [from\_record(file\_info)](#from_record/1) Converts a `:file_info` record into a [`File.Stat`](#content). [to\_record(stat)](#to_record/1) Converts a [`File.Stat`](#content) struct to a `:file_info` record. Types ====== ### t() #### Specs ``` t() :: %File.Stat{ access: :read | :write | :read_write | :none, atime: :calendar.datetime() | integer(), ctime: :calendar.datetime() | integer(), gid: non_neg_integer(), inode: non_neg_integer(), links: non_neg_integer(), major_device: non_neg_integer(), minor_device: non_neg_integer(), mode: non_neg_integer(), mtime: :calendar.datetime() | integer(), size: non_neg_integer(), type: :device | :directory | :regular | :other | :symlink, uid: non_neg_integer() } ``` Functions ========== ### from\_record(file\_info) #### Specs ``` from_record(:file.file_info()) :: t() ``` Converts a `:file_info` record into a [`File.Stat`](#content). ### to\_record(stat) #### Specs ``` to_record(t()) :: :file.file_info() ``` Converts a [`File.Stat`](#content) struct to a `:file_info` record. elixir Inspect protocol Inspect protocol ================= The [`Inspect`](#content) protocol converts an Elixir data structure into an algebra document. This documentation refers to implementing the [`Inspect`](#content) protocol for your own data structures. To learn more about using inspect, see [`Kernel.inspect/2`](kernel#inspect/2) and [`IO.inspect/2`](io#inspect/2). The [`inspect/2`](#inspect/2) function receives the entity to be inspected followed by the inspecting options, represented by the struct [`Inspect.Opts`](inspect.opts). Building of the algebra document is done with [`Inspect.Algebra`](inspect.algebra). Examples --------- Many times, inspecting a structure can be implemented in terms of existing entities.
For example, here is [`MapSet`](mapset)'s [`inspect/2`](#inspect/2) implementation: ``` defimpl Inspect, for: MapSet do import Inspect.Algebra def inspect(dict, opts) do concat(["#MapSet<", to_doc(MapSet.to_list(dict), opts), ">"]) end end ``` The [`concat/1`](inspect.algebra#concat/1) function comes from [`Inspect.Algebra`](inspect.algebra) and it concatenates algebra documents together. In the example above, it is concatenating the string `"#MapSet<"` (all strings are valid algebra documents that keep their formatting when pretty printed), the document returned by [`Inspect.Algebra.to_doc/2`](inspect.algebra#to_doc/2) and the other string `">"`. Since regular strings are valid entities in an algebra document, an implementation of the [`Inspect`](#content) protocol may simply return a string, although that will deprive it of any pretty-printing. Error handling --------------- In case there is an error while your structure is being inspected, Elixir will raise an [`ArgumentError`](argumenterror) and will automatically fall back to a raw representation for printing the structure. You can, however, access the underlying error by invoking the [`Inspect`](#content) implementation directly. For example, to test [`Inspect.MapSet`](https://hexdocs.pm/elixir/Inspect.MapSet.html) above, you can invoke it as: ``` Inspect.MapSet.inspect(MapSet.new(), %Inspect.Opts{}) ``` Deriving --------- The [`Inspect`](#content) protocol can be derived to hide certain fields from structs, so they don't show up in logs, inspect output, and similar places. This is especially useful for fields containing private information. The options `:only` and `:except` can be used with `@derive` to specify which fields should and should not appear in the algebra document: ``` defmodule User do @derive {Inspect, only: [:id, :name]} defstruct [:id, :name, :address] end inspect(%User{id: 1, name: "Homer", address: "742 Evergreen Terrace"}) #=> #User<id: 1, name: "Homer", ...> ``` Summary ======== Types ------ [t()](#t:t/0) Functions ---------- [inspect(term, opts)](#inspect/2) Converts `term` into an algebra document. Types ====== ### t() #### Specs ``` t() :: term() ``` Functions ========== ### inspect(term, opts) #### Specs ``` inspect(t(), Inspect.Opts.t()) :: Inspect.Algebra.t() ``` Converts `term` into an algebra document. This function shouldn't be invoked directly, unless you are implementing a custom `inspect_fun` to be given to [`Inspect.Opts`](inspect.opts). Everywhere else, [`Inspect.Algebra.to_doc/2`](inspect.algebra#to_doc/2) should be preferred as it handles structs and exceptions. elixir Mix.Shell.Process Mix.Shell.Process ================== Mix shell that uses the current process mailbox for communication. This module provides a Mix shell implementation that uses the current process mailbox for communication instead of IO. As an example, when `Mix.shell().info("hello")` is called, the following message will be sent to the calling process: ``` {:mix_shell, :info, ["hello"]} ``` This is mainly useful in tests, allowing us to assert whether given messages were received instead of performing checks on some captured IO. Since we need to guarantee a clean slate between tests, there is also a [`flush/1`](#flush/1) function responsible for flushing all `:mix_shell` related messages from the process inbox.
Examples --------- ``` Mix.shell().info("hello") receive do {:mix_shell, :info, [msg]} -> msg end #=> "hello" send(self(), {:mix_shell_input, :prompt, "Pretty cool"}) Mix.shell().prompt("How cool was that?!") #=> "Pretty cool" ``` Summary ======== Functions ---------- [cmd(command, opts \\ [])](#cmd/2) Executes the given command and forwards its messages to the current process. [error(message)](#error/1) Forwards the error to the current process. [flush(callback \\ fn x -> x end)](#flush/1) Flushes all `:mix_shell` and `:mix_shell_input` messages from the current process. [info(message)](#info/1) Forwards the message to the current process. [print\_app()](#print_app/0) Prints the current application if it was not printed yet. [prompt(message)](#prompt/1) Forwards the message to the current process. [yes?(message)](#yes?/1) Forwards the message to the current process. Functions ========== ### cmd(command, opts \\ []) Executes the given command and forwards its messages to the current process. ### error(message) Forwards the error to the current process. ### flush(callback \\ fn x -> x end) Flushes all `:mix_shell` and `:mix_shell_input` messages from the current process. If a callback is given, it is invoked for each received message. #### Examples ``` flush(&IO.inspect/1) ``` ### info(message) Forwards the message to the current process. ### print\_app() Prints the current application if it was not printed yet. ### prompt(message) Forwards the message to the current process. It also checks the inbox for an input message matching: ``` {:mix_shell_input, :prompt, value} ``` If one does not exist, it will abort since no shell process input was given. `value` must be a string. #### Examples The following will answer with `"Meg"` to the prompt `"What's your name?"`: ``` # The response is sent before calling prompt/1 so that prompt/1 can read it send(self(), {:mix_shell_input, :prompt, "Meg"}) Mix.shell().prompt("What's your name?") ``` ### yes?(message) Forwards the message to the current process. It also checks the inbox for an input message matching: ``` {:mix_shell_input, :yes?, value} ``` If one does not exist, it will abort since no shell process input was given. `value` must be `true` or `false`. #### Example ``` # Send the response to self() first so that yes?/1 will be able to read it send(self(), {:mix_shell_input, :yes?, true}) Mix.shell().yes?("Are you sure you want to continue?") ``` elixir mix cmd mix cmd ======== Executes the given command. Useful in umbrella applications to execute a command on each child app: ``` mix cmd echo pwd ``` You can limit which apps the cmd runs in by passing the app names before the cmd using `--app`: ``` mix cmd --app app1 --app app2 echo pwd ``` Aborts when a command exits with a non-zero status. This task is automatically reenabled, so it can be called multiple times with different arguments. Zombie operating system processes ---------------------------------- Beware that the Erlang VM does not terminate child processes when it shuts down. Therefore, if you use [`mix cmd`](#content) to start long-running processes and then shut down the VM, it is likely that those child processes won't be terminated with the VM. A solution is to make sure the child processes listen to the standard input and terminate when standard input is closed. We discuss this topic at length in the "Zombie operating system processes" section of the [`Port`](https://hexdocs.pm/elixir/Port.html) module documentation.
elixir Config.Reader Config.Reader ============== API for reading config files defined with [`Config`](config). As a provider -------------- [`Config.Reader`](#content) can also be used as a [`Config.Provider`](config.provider). When used as a provider, it expects a single argument: the configuration path (as outlined in [`Config.Provider.config_path/0`](config.provider#t:config_path/0)) of the configuration to be read and loaded during system boot. Summary ======== Functions ---------- [merge(config1, config2)](#merge/2) Merges two configurations. [read!(file, imported\_paths \\ [])](#read!/2) Reads the configuration file. [read\_imports!(file, imported\_paths \\ [])](#read_imports!/2) Reads the given configuration file alongside its imports. Functions ========== ### merge(config1, config2) #### Specs ``` merge(keyword(), keyword()) :: keyword() ``` Merges two configurations. The configurations are merged together with the values in the second one having higher preference than the first in case of conflicts. In case both values are set to keyword lists, it deep merges them. #### Examples ``` iex> Config.Reader.merge([app: [k: :v1]], [app: [k: :v2]]) [app: [k: :v2]] iex> Config.Reader.merge([app: [k: [v1: 1, v2: 2]]], [app: [k: [v2: :a, v3: :b]]]) [app: [k: [v1: 1, v2: :a, v3: :b]]] iex> Config.Reader.merge([app1: []], [app2: []]) [app1: [], app2: []] ``` ### read!(file, imported\_paths \\ []) #### Specs ``` read!(Path.t(), [Path.t()]) :: keyword() ``` Reads the configuration file. The same as [`read_imports!/2`](#read_imports!/2) but only returns the configuration in the given file, without returning the imported paths. It exists for convenience. For example, you could invoke it inside your `mix.exs` to read some external data you decided to move to a configuration file: ``` releases: Config.Reader.read!("rel/releases.exs") ``` ### read\_imports!(file, imported\_paths \\ []) #### Specs ``` read_imports!(Path.t(), [Path.t()]) :: {keyword(), [Path.t()]} ``` Reads the given configuration file alongside its imports. It accepts a list of `imported_paths`; attempting to import any of these paths again will raise an error (this avoids recursive imports). It returns a tuple with the configuration and the imported paths. elixir Module behaviour Module behaviour ================= Provides functions to deal with modules during compilation time. It allows a developer to dynamically add, delete and register attributes, attach documentation and so forth. After a module is compiled, using many of the functions in this module will raise errors, since it is out of their scope to inspect runtime data. Most of the runtime data can be inspected via the [`__info__/1`](module#c:__info__/1) function attached to each compiled module. Module attributes ------------------ Each module can be decorated with one or more attributes. The following ones are currently defined by Elixir: ### `@after_compile` A hook that will be invoked right after the current module is compiled. Accepts a module or a `{module, function_name}` tuple. See the "Compile callbacks" section below. ### `@before_compile` A hook that will be invoked before the module is compiled. Accepts a module or a `{module, function_or_macro_name}` tuple. See the "Compile callbacks" section below. ### `@behaviour` Note the British spelling! Behaviours can be referenced by modules to ensure they implement the specific function signatures defined by `@callback`.
For example, you could specify a `URI.Parser` behaviour as follows: ``` defmodule URI.Parser do @doc "Defines a default port" @callback default_port() :: integer @doc "Parses the given URL" @callback parse(uri_info :: URI.t()) :: URI.t() end ``` And then a module may use it as: ``` defmodule URI.HTTP do @behaviour URI.Parser def default_port(), do: 80 def parse(info), do: info end ``` If the behaviour changes or `URI.HTTP` does not implement one of the callbacks, a warning will be raised. ### `@impl` To aid in the correct implementation of behaviours, you may optionally declare `@impl` for implemented callbacks of a behaviour. This makes callbacks explicit and can help you to catch errors in your code. The compiler will warn in these cases: * if you mark a function with `@impl` when that function is not a callback. * if you don't mark a function with `@impl` when other functions are marked with `@impl`. If you mark one function with `@impl`, you must mark all other callbacks for that behaviour as `@impl`. `@impl` works on a per-context basis. If you generate a function through a macro and mark it with `@impl`, that won't affect the module in which that function is generated. `@impl` also helps with maintainability by making it clear to other developers that the function is implementing a callback. Using `@impl`, the example above can be rewritten as: ``` defmodule URI.HTTP do @behaviour URI.Parser @impl true def default_port(), do: 80 @impl true def parse(info), do: info end ``` You may pass either `false`, `true`, or a specific behaviour to `@impl`. ``` defmodule Foo do @behaviour Bar @behaviour Baz # Will warn if neither Bar nor Baz specify a callback named bar/0. @impl true def bar(), do: :ok # Will warn if Baz does not specify a callback named baz/0. @impl Baz def baz(), do: :ok end ``` The code is now more readable, as it is now clear which functions are part of your API and which ones are callback implementations. To reinforce this idea, `@impl true` automatically marks the function as `@doc false`, disabling documentation unless `@doc` is explicitly set. ### `@compile` Defines options for module compilation. This is used to configure both Elixir and Erlang compilers, as well as any other compilation pass added by external tools. For example: ``` defmodule MyModule do @compile {:inline, my_fun: 1} def my_fun(arg) do to_string(arg) end end ``` Multiple uses of `@compile` will accumulate instead of overriding previous ones. See the "Compile options" section below. ### `@deprecated` Provides the deprecation reason for a function. For example: ``` defmodule Keyword do @deprecated "Use Kernel.length/1 instead" def size(keyword) do length(keyword) end end ``` The Mix compiler automatically looks for calls to deprecated modules and emits warnings during compilation; these are computed via `mix xref warnings`. Use of the `@deprecated` attribute is also reflected in the documentation of the given function or macro. You can choose between the `@deprecated` attribute and the documentation metadata to provide hard-deprecations (with warnings) and soft-deprecations (without warnings): This is a soft-deprecation as it simply annotates the documentation as deprecated: ``` @doc deprecated: "Use Kernel.length/1 instead" def size(keyword) ``` This is a hard-deprecation as it emits warnings and annotates the documentation as deprecated: ``` @deprecated "Use Kernel.length/1 instead" def size(keyword) ``` Currently `@deprecated` only supports functions and macros.
However, you can use the `:deprecated` key in the annotation metadata to annotate the docs of modules, types and callbacks too. We recommend using this feature with care, especially for library authors. Deprecating code always pushes the burden towards library users. We also recommend for deprecated functionality to be maintained for long periods of time, even after deprecation, giving developers plenty of time to update (except for cases where keeping the deprecated API is undesired, such as in the presence of security issues). ### `@doc` and `@typedoc` Provides documentation for the entity that follows the attribute. `@doc` is to be used with a function, macro, callback, or macrocallback, while `@typedoc` with a type (public or opaque). Accepts a string (often a heredoc) or `false` where `@doc false` will make the entity invisible to documentation extraction tools like [`ExDoc`](https://hexdocs.pm/ex_doc/). For example: ``` defmodule MyModule do @typedoc "This type" @typedoc since: "1.1.0" @type t :: term @doc "Hello world" @doc since: "1.1.0" def hello do "world" end @doc """ Sums `a` to `b`. """ def sum(a, b) do a + b end end ``` As can be seen in the example above, `@doc` and `@typedoc` also accept a keyword list that serves as a way to provide arbitrary metadata about the entity. Tools like [`ExDoc`](https://hexdocs.pm/ex_doc/) and [`IEx`](https://hexdocs.pm/iex/IEx.html) may use this information to display annotations. A common use case is `since` that may be used to annotate in which version the function was introduced. As illustrated in the example, it is possible to use these attributes more than once before an entity. However, the compiler will warn if used twice with binaries as that replaces the documentation text from the preceding use. Multiple uses with keyword lists will merge the lists into one. Note that since the compiler also defines some additional metadata, there are a few reserved keys that will be ignored, with a warning emitted if they are used. Currently these are: `:opaque` and `:defaults`. Once this module is compiled, this information becomes available via the [`Code.fetch_docs/1`](code#fetch_docs/1) function. ### `@dialyzer` Defines warnings to request or suppress when using a version of `:dialyzer` that supports module attributes. Accepts an atom, a tuple, or a list of atoms and tuples. For example: ``` defmodule MyModule do @dialyzer {:nowarn_function, my_fun: 1} def my_fun(arg) do M.not_a_function(arg) end end ``` For the list of supported warnings, see [`:dialyzer` module](http://www.erlang.org/doc/man/dialyzer.html). Multiple uses of `@dialyzer` will accumulate instead of overriding previous ones. ### `@external_resource` Specifies an external resource for the current module. Sometimes a module embeds information from an external file. This attribute allows the module to annotate which external resources have been used. Tools like Mix may use this information to ensure the module is recompiled in case any of the external resources change. ### `@file` Changes the filename used in stacktraces for the function or macro that follows the attribute, such as: ``` defmodule MyModule do @doc "Hello world" @file "hello.ex" def hello do "world" end end ``` ### `@moduledoc` Provides documentation for the current module. ``` defmodule MyModule do @moduledoc """ A very useful module. """ @moduledoc authors: ["Alice", "Bob"] end ``` Accepts a string (often a heredoc) or `false` where `@moduledoc false` will make the module invisible to documentation extraction tools like [`ExDoc`](https://hexdocs.pm/ex_doc/). Similarly to `@doc`, it also accepts a keyword list to provide metadata about the module. For more details, see the documentation of `@doc` above. Once this module is compiled, this information becomes available via the [`Code.fetch_docs/1`](code#fetch_docs/1) function. ### `@on_definition` A hook that will be invoked when each function or macro in the current module is defined. Useful when annotating functions. Accepts a module or a `{module, function_name}` tuple. See the "Compile callbacks" section below. ### `@on_load` A hook that will be invoked whenever the module is loaded. Accepts the function name (as an atom) of a function in the current module or a `{function_name, 0}` tuple where `function_name` is the name of a function in the current module. The function must be public and have an arity of 0 (no arguments). If the function does not return `:ok`, the loading of the module will be aborted. For example: ``` defmodule MyModule do @on_load :load_check def load_check do if some_condition() do :ok else :abort end end def some_condition do false end end ``` Modules compiled with HiPE will not call this hook. ### `@vsn` Specifies the module version. Accepts any valid Elixir value, for example: ``` defmodule MyModule do @vsn "1.0" end ``` ### Typespec attributes The following attributes are part of typespecs and are also built-in in Elixir: * `@type` - defines a type to be used in `@spec` * `@typep` - defines a private type to be used in `@spec` * `@opaque` - defines an opaque type to be used in `@spec` * `@spec` - provides a specification for a function * `@callback` - provides a specification for a behaviour callback * `@macrocallback` - provides a specification for a macro behaviour callback * `@optional_callbacks` - specifies which behaviour callbacks and macro behaviour callbacks are optional * `@impl` - declares an implementation of a callback function or macro ### Custom attributes In addition to the built-in attributes outlined above, custom attributes may also be added. Custom attributes are expressed using the [`@/1`](kernel#@/1) operator followed by a valid variable name. The value given to the custom attribute must be a valid Elixir value: ``` defmodule MyModule do @custom_attr [some: "stuff"] end ``` For more advanced options available when defining custom attributes, see [`register_attribute/3`](#register_attribute/3). Compile callbacks ------------------ There are three callbacks that are invoked when functions are defined, as well as before and immediately after the module bytecode is generated. ### `@after_compile` A hook that will be invoked right after the current module is compiled. Accepts a module or a `{module, function_name}` tuple. The function must take two arguments: the module environment and its bytecode. When just a module is provided, the function is assumed to be `__after_compile__/2`. Callbacks registered first will run last. #### Example ``` defmodule MyModule do @after_compile __MODULE__ def __after_compile__(env, _bytecode) do IO.inspect(env) end end ``` ### `@before_compile` A hook that will be invoked before the module is compiled. Accepts a module or a `{module, function_or_macro_name}` tuple. The function/macro must take one argument: the module environment.
If it's a macro, its returned value will be injected at the end of the module definition before the compilation starts. When just a module is provided, the function/macro is assumed to be `__before_compile__/1`. Callbacks registered first will run last. Any overridable definition will be made concrete before the first callback runs. A definition may be made overridable again in another before compile callback and it will be made concrete one last time after all callbacks run. *Note*: unlike `@after_compile`, the callback function/macro must be placed in a separate module (because when the callback is invoked, the current module does not yet exist). #### Example ``` defmodule A do defmacro __before_compile__(_env) do quote do def hello, do: "world" end end end defmodule B do @before_compile A end B.hello() #=> "world" ``` ### `@on_definition` A hook that will be invoked when each function or macro in the current module is defined. Useful when annotating functions. Accepts a module or a `{module, function_name}` tuple. The function must take 6 arguments: * the module environment * the kind of the function/macro: `:def`, `:defp`, `:defmacro`, or `:defmacrop` * the function/macro name * the list of quoted arguments * the list of quoted guards * the quoted function body Note the hook receives the quoted arguments and it is invoked before the function is stored in the module. So [`Module.defines?/2`](module#defines?/2) will return `false` for the first clause of every function. If the function/macro being defined has multiple clauses, the hook will be called for each clause. Unlike other hooks, `@on_definition` will only invoke functions and never macros. This is to prevent `@on_definition` callbacks from redefining functions that have just been defined in favor of more explicit approaches. When just a module is provided, the function is assumed to be `__on_definition__/6`. #### Example ``` defmodule Hooks do def on_def(_env, kind, name, args, guards, body) do IO.puts("Defining #{kind} named #{name} with args:") IO.inspect(args) IO.puts("and guards") IO.inspect(guards) IO.puts("and body") IO.puts(Macro.to_string(body)) end end defmodule MyModule do @on_definition {Hooks, :on_def} def hello(arg) when is_binary(arg) or is_list(arg) do "Hello" <> to_string(arg) end def hello(_) do :ok end end ``` Compile options ---------------- The `@compile` attribute accepts different options that are used by both Elixir and Erlang compilers. Some of the common use cases are documented below: * `@compile :debug_info` - includes `:debug_info` regardless of the corresponding setting in [`Code.compiler_options/1`](code#compiler_options/1) * `@compile {:debug_info, false}` - disables `:debug_info` regardless of the corresponding setting in [`Code.compiler_options/1`](code#compiler_options/1) * `@compile {:inline, some_fun: 2, other_fun: 3}` - inlines the given name/arity pairs. Inlining is applied locally, calls from another module are not affected by this option * `@compile {:autoload, false}` - disables automatic loading of modules after compilation. Instead, the module will be loaded only after a call is dispatched to it You can find a handful of additional options used by the Erlang compiler in the documentation for the [`:compile` module](http://www.erlang.org/doc/man/compile.html). Summary ======== Functions ---------- [concat(list)](#concat/1) Concatenates a list of aliases and returns a new alias. [concat(left, right)](#concat/2) Concatenates two aliases and returns a new alias.
[create(module, quoted, opts)](#create/3) Creates a module with the given name and defined by the given quoted expressions. [defines?(module, tuple)](#defines?/2) Checks if the module defines the given function or macro. [defines?(module, tuple, def\_kind)](#defines?/3) Checks if the module defines a function or macro of the given `kind`. [defines\_type?(module, definition)](#defines_type?/2) Checks if the current module defines the given type (private, opaque or not). [definitions\_in(module)](#definitions_in/1) Returns all functions and macros defined in `module`. [definitions\_in(module, def\_kind)](#definitions_in/2) Returns all functions defined in `module`, according to its kind. [delete\_attribute(module, key)](#delete_attribute/2) Deletes the module attribute that matches the given key. [eval\_quoted(module\_or\_env, quoted, binding \\ [], opts \\ [])](#eval_quoted/4) Evaluates the quoted contents in the given module's context. [get\_attribute(module, key, default \\ nil)](#get_attribute/3) Gets the given attribute from a module. [make\_overridable(module, tuples)](#make_overridable/2) Makes the given functions in `module` overridable. [open?(module)](#open?/1) Checks if a module is open. [overridable?(module, tuple)](#overridable?/2) Returns `true` if `tuple` in `module` is marked as overridable. [put\_attribute(module, key, value)](#put_attribute/3) Puts a module attribute with `key` and `value` in the given `module`. [register\_attribute(module, attribute, options)](#register_attribute/3) Registers an attribute. [safe\_concat(list)](#safe_concat/1) Concatenates a list of aliases and returns a new alias only if the alias was already referenced. [safe\_concat(left, right)](#safe_concat/2) Concatenates two aliases and returns a new alias only if the alias was already referenced. [spec\_to\_callback(module, definition)](#spec_to_callback/2) Copies the given spec as a callback. [split(module)](#split/1) Splits the given module name into binary parts. Callbacks ---------- [\_\_info\_\_(atom)](#c:__info__/1) Provides runtime information about functions, macros, and other information defined by the module. Functions ========== ### concat(list) #### Specs ``` concat([binary() | atom()]) :: atom() ``` Concatenates a list of aliases and returns a new alias. #### Examples ``` iex> Module.concat([Foo, Bar]) Foo.Bar iex> Module.concat([Foo, "Bar"]) Foo.Bar ``` ### concat(left, right) #### Specs ``` concat(binary() | atom(), binary() | atom()) :: atom() ``` Concatenates two aliases and returns a new alias. #### Examples ``` iex> Module.concat(Foo, Bar) Foo.Bar iex> Module.concat(Foo, "Bar") Foo.Bar ``` ### create(module, quoted, opts) #### Specs ``` create(module(), Macro.t(), Macro.Env.t() | keyword()) :: {:module, module(), binary(), term()} ``` Creates a module with the given name and defined by the given quoted expressions. The line where the module is defined and its file **must** be passed as options. It returns a tuple of shape `{:module, module, binary, term}` where `module` is the module name, `binary` is the module byte code and `term` is the result of the last expression in `quoted`. Similar to [`Kernel.defmodule/2`](kernel#defmodule/2), the binary will only be written to disk as a `.beam` file if [`Module.create/3`](module#create/3) is invoked in a file that is currently being compiled. 
#### Examples ``` contents = quote do def world, do: true end Module.create(Hello, contents, Macro.Env.location(__ENV__)) Hello.world() #=> true ``` #### Differences from `defmodule` [`Module.create/3`](module#create/3) works similarly to [`Kernel.defmodule/2`](kernel#defmodule/2) and returns the same results. While one could also use `defmodule` to define modules dynamically, this function is preferred when the module body is given by a quoted expression. Another important distinction is that [`Module.create/3`](module#create/3) allows you to control the environment variables used when defining the module, while [`Kernel.defmodule/2`](kernel#defmodule/2) automatically uses the environment in which it is invoked. ### defines?(module, tuple) #### Specs ``` defines?(module(), definition()) :: boolean() ``` Checks if the module defines the given function or macro. Use [`defines?/3`](#defines?/3) to check for a specific kind. This function can only be used on modules that have not yet been compiled. Use [`Kernel.function_exported?/3`](kernel#function_exported?/3) and [`Kernel.macro_exported?/3`](kernel#macro_exported?/3) to check for public functions and macros respectively in compiled modules. Note that `defines?` returns `false` for functions and macros that have been defined but then marked as overridable and no other implementation has been provided. You can check the overridable status by calling [`overridable?/2`](#overridable?/2). #### Examples ``` defmodule Example do Module.defines?(__MODULE__, {:version, 0}) #=> false def version, do: 1 Module.defines?(__MODULE__, {:version, 0}) #=> true end ``` ### defines?(module, tuple, def\_kind) #### Specs ``` defines?(module(), definition(), def_kind()) :: boolean() ``` Checks if the module defines a function or macro of the given `kind`. `kind` can be any of `:def`, `:defp`, `:defmacro`, or `:defmacrop`. This function can only be used on modules that have not yet been compiled. Use [`Kernel.function_exported?/3`](kernel#function_exported?/3) and [`Kernel.macro_exported?/3`](kernel#macro_exported?/3) to check for public functions and macros respectively in compiled modules. #### Examples ``` defmodule Example do Module.defines?(__MODULE__, {:version, 0}, :def) #=> false def version, do: 1 Module.defines?(__MODULE__, {:version, 0}, :def) #=> true end ``` ### defines\_type?(module, definition) #### Specs ``` defines_type?(module(), definition()) :: boolean() ``` Checks if the current module defines the given type (private, opaque or not). This function is only available for modules being compiled. ### definitions\_in(module) #### Specs ``` definitions_in(module()) :: [definition()] ``` Returns all functions and macros defined in `module`. It returns a list with all defined functions and macros, public and private, in the shape of `[{name, arity}, ...]`. This function can only be used on modules that have not yet been compiled. Use the [`Module.__info__/1`](module#c:__info__/1) callback to get the public functions and macros in compiled modules. #### Examples ``` defmodule Example do def version, do: 1 defmacrop test(arg), do: arg Module.definitions_in(__MODULE__) #=> [{:version, 0}, {:test, 1}] end ``` ### definitions\_in(module, def\_kind) #### Specs ``` definitions_in(module(), def_kind()) :: [definition()] ``` Returns all functions defined in `module`, according to the given kind. This function can only be used on modules that have not yet been compiled.
Use the [`Module.__info__/1`](module#c:__info__/1) callback to get the public functions and macros in compiled modules. #### Examples ``` defmodule Example do def version, do: 1 Module.definitions_in(__MODULE__, :def) #=> [{:version, 0}] Module.definitions_in(__MODULE__, :defp) #=> [] end ``` ### delete\_attribute(module, key) #### Specs ``` delete_attribute(module(), atom()) :: term() ``` Deletes the module attribute that matches the given key. It returns the deleted attribute value (or `nil` if nothing was set). #### Examples ``` defmodule MyModule do Module.put_attribute(__MODULE__, :custom_threshold_for_lib, 10) Module.delete_attribute(__MODULE__, :custom_threshold_for_lib) end ``` ### eval\_quoted(module\_or\_env, quoted, binding \\ [], opts \\ []) #### Specs ``` eval_quoted( module() | Macro.Env.t(), Macro.t(), list(), keyword() | Macro.Env.t() ) :: term() ``` Evaluates the quoted contents in the given module's context. A list of environment options can also be given as an argument. See [`Code.eval_string/3`](code#eval_string/3) for more information. Raises an error if the module was already compiled. #### Examples ``` defmodule Foo do contents = quote do def sum(a, b), do: a + b end Module.eval_quoted(__MODULE__, contents) end Foo.sum(1, 2) #=> 3 ``` For convenience, you can pass any [`Macro.Env`](macro.env) struct, such as [`__ENV__/0`](kernel.specialforms#__ENV__/0), as the first argument or as options. Both the module and all options will be automatically extracted from the environment: ``` defmodule Foo do contents = quote do def sum(a, b), do: a + b end Module.eval_quoted(__ENV__, contents) end Foo.sum(1, 2) #=> 3 ``` Note that if you pass a [`Macro.Env`](macro.env) struct as the first argument while also passing `opts`, they will be merged with `opts` having precedence. ### get\_attribute(module, key, default \\ nil) #### Specs ``` get_attribute(module(), atom(), term()) :: term() ``` Gets the given attribute from a module. If the attribute was marked with `accumulate` with [`Module.register_attribute/3`](module#register_attribute/3), a list is always returned. `nil` is returned if the attribute has not been marked with `accumulate` and has not been set to any value. The `@` macro compiles to a call to this function. For example, the following code: ``` @foo ``` Expands to something akin to: ``` Module.get_attribute(__MODULE__, :foo) ``` This function can only be used on modules that have not yet been compiled. Use the [`Module.__info__/1`](module#c:__info__/1) callback to get all persisted attributes, or [`Code.fetch_docs/1`](code#fetch_docs/1) to retrieve all documentation related attributes in compiled modules. #### Examples ``` defmodule Foo do Module.put_attribute(__MODULE__, :value, 1) Module.get_attribute(__MODULE__, :value) #=> 1 Module.get_attribute(__MODULE__, :value, :default) #=> 1 Module.get_attribute(__MODULE__, :not_found, :default) #=> :default Module.register_attribute(__MODULE__, :value, accumulate: true) Module.put_attribute(__MODULE__, :value, 1) Module.get_attribute(__MODULE__, :value) #=> [1] end ``` ### make\_overridable(module, tuples) #### Specs ``` make_overridable(module(), [definition()]) :: :ok ``` ``` make_overridable(module(), module()) :: :ok ``` Makes the given functions in `module` overridable. An overridable function is lazily defined, allowing a developer to customize it. See [`Kernel.defoverridable/1`](kernel#defoverridable/1) for more information and documentation.
Once a function or a macro is marked as overridable, it will no longer be listed under [`definitions_in/1`](#definitions_in/1) or return `true` when given to [`defines?/2`](#defines?/2) until another implementation is given. ### open?(module) #### Specs ``` open?(module()) :: boolean() ``` Checks if a module is open. A module is "open" if it is currently being defined and its attributes and functions can be modified. ### overridable?(module, tuple) #### Specs ``` overridable?(module(), definition()) :: boolean() ``` Returns `true` if `tuple` in `module` is marked as overridable. ### put\_attribute(module, key, value) #### Specs ``` put_attribute(module(), atom(), term()) :: :ok ``` Puts a module attribute with `key` and `value` in the given `module`. #### Examples ``` defmodule MyModule do Module.put_attribute(__MODULE__, :custom_threshold_for_lib, 10) end ``` ### register\_attribute(module, attribute, options) #### Specs ``` register_attribute(module(), atom(), accumulate: boolean(), persist: boolean()) :: :ok ``` Registers an attribute. By registering an attribute, a developer is able to customize how Elixir will store and accumulate the attribute values. #### Options When registering an attribute, two options can be given: * `:accumulate` - several calls to the same attribute will accumulate instead of overriding the previous one. New attributes are always added to the top of the accumulated list. * `:persist` - the attribute will be persisted in the Erlang Abstract Format. Useful when interfacing with Erlang libraries. By default, both options are `false`. #### Examples ``` defmodule MyModule do Module.register_attribute(__MODULE__, :custom_threshold_for_lib, accumulate: true) @custom_threshold_for_lib 10 @custom_threshold_for_lib 20 @custom_threshold_for_lib #=> [20, 10] end ``` ### safe\_concat(list) #### Specs ``` safe_concat([binary() | atom()]) :: atom() ``` Concatenates a list of aliases and returns a new alias only if the alias was already referenced. If the alias was not referenced yet, fails with [`ArgumentError`](argumenterror). It handles charlists, binaries and atoms. #### Examples ``` iex> Module.safe_concat([Module, Unknown]) ** (ArgumentError) argument error iex> Module.safe_concat([List, Chars]) List.Chars ``` ### safe\_concat(left, right) #### Specs ``` safe_concat(binary() | atom(), binary() | atom()) :: atom() ``` Concatenates two aliases and returns a new alias only if the alias was already referenced. If the alias was not referenced yet, fails with [`ArgumentError`](argumenterror). It handles charlists, binaries and atoms. #### Examples ``` iex> Module.safe_concat(Module, Unknown) ** (ArgumentError) argument error iex> Module.safe_concat(List, Chars) List.Chars ``` ### spec\_to\_callback(module, definition) #### Specs ``` spec_to_callback(module(), definition()) :: boolean() ``` Copies the given spec as a callback. Returns `true` if there is such a spec and it was copied as a callback. If the function associated with the spec has documentation defined prior to invoking this function, the docs are copied too. ### split(module) #### Specs ``` split(module() | String.t()) :: [String.t(), ...] ``` Splits the given module name into binary parts. `module` has to be an Elixir module, as [`split/1`](#split/1) won't work with Erlang-style modules (for example, `split(:lists)` raises an error). [`split/1`](#split/1) also supports splitting the string representation of Elixir modules (that is, the result of calling [`Atom.to_string/1`](atom#to_string/1) with the module name).
#### Examples ``` iex> Module.split(Very.Long.Module.Name.And.Even.Longer) ["Very", "Long", "Module", "Name", "And", "Even", "Longer"] iex> Module.split("Elixir.String.Chars") ["String", "Chars"] ``` Callbacks ========== ### \_\_info\_\_(atom) #### Specs ``` __info__(:attributes) :: keyword() ``` ``` __info__(:compile) :: [term()] ``` ``` __info__(:functions) :: keyword() ``` ``` __info__(:macros) :: keyword() ``` ``` __info__(:md5) :: binary() ``` ``` __info__(:module) :: module() ``` Provides runtime information about functions, macros, and other information defined by the module. Each module gets an `__info__/1` function when it's compiled. The function takes one of the following items: * `:attributes` - a keyword list with all persisted attributes * `:compile` - a list with compiler metadata * `:functions` - a keyword list of public functions and their arities * `:macros` - a keyword list of public macros and their arities * `:md5` - the MD5 of the module * `:module` - the module atom name
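As an illustrative sketch (the exact results depend on your Elixir version), these items can be queried directly on any compiled module:

```
iex> List.__info__(:module)
List
iex> {:flatten, 1} in List.__info__(:functions)
true
```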
elixir Mix.Shell.IO Mix.Shell.IO ============= This is Mix's default shell. It simply prints messages to stdout and stderr. Summary ======== Functions ---------- [cmd(command, opts \\ [])](#cmd/2) Executes the given command and prints its output to stdout as it comes. [error(message)](#error/1) Prints the given ANSI error to the shell followed by a newline. [info(message)](#info/1) Prints the given ANSI message to the shell followed by a newline. [print\_app()](#print_app/0) Prints the current application to the shell if it was not printed yet. [prompt(message)](#prompt/1) Prints a message and prompts the user for input. [yes?(message)](#yes?/1) Prints a message and asks the user if they want to proceed. Functions ========== ### cmd(command, opts \\ []) Executes the given command and prints its output to stdout as it comes. ### error(message) Prints the given ANSI error to the shell followed by a newline. ### info(message) Prints the given ANSI message to the shell followed by a newline. ### print\_app() Prints the current application to the shell if it was not printed yet. ### prompt(message) Prints a message and prompts the user for input. Input will be consumed until Enter is pressed. ### yes?(message) Prints a message and asks the user if they want to proceed. The user must press Enter or type one of "y", "yes", "Y", "YES" or "Yes". elixir List List ===== Functions that work on (linked) lists. Many of the functions provided for lists, which implement the [`Enumerable`](enumerable) protocol, are found in the [`Enum`](enum) module. Additionally, the following functions and operators for lists are found in [`Kernel`](kernel): * [`++/2`](kernel#++/2) * [`--/2`](kernel#--/2) * [`hd/1`](kernel#hd/1) * [`tl/1`](kernel#tl/1) * [`in/2`](kernel#in/2) * [`length/1`](kernel#length/1) Lists in Elixir are specified between square brackets: ``` iex> [1, "two", 3, :four] [1, "two", 3, :four] ``` Two lists can be concatenated and subtracted using the [`Kernel.++/2`](kernel#++/2) and [`Kernel.--/2`](kernel#--/2) operators: ``` iex> [1, 2, 3] ++ [4, 5, 6] [1, 2, 3, 4, 5, 6] iex> [1, true, 2, false, 3, true] -- [true, false] [1, 2, 3, true] ``` Lists in Elixir are effectively linked lists, which means they are internally represented in pairs containing the head and the tail of a list: ``` iex> [head | tail] = [1, 2, 3] iex> head 1 iex> tail [2, 3] ``` Similarly, we could write the list `[1, 2, 3]` using only such pairs (called cons cells): ``` iex> [1 | [2 | [3 | []]]] [1, 2, 3] ``` Some lists, called improper lists, do not have an empty list as the second element in the last cons cell: ``` iex> [1 | [2 | [3 | 4]]] [1, 2, 3 | 4] ``` Although improper lists are generally avoided, they are used in some special circumstances like iodata and chardata entities (see the [`IO`](io) module). Due to their cons cell based representation, prepending an element to a list is always fast (constant time), while appending becomes slower as the list grows in size (linear time): ``` iex> list = [1, 2, 3] iex> [0 | list] # fast [0, 1, 2, 3] iex> list ++ [4] # slow [1, 2, 3, 4] ``` Most of the functions in this module work in linear time. This means that the time it takes to perform an operation grows at the same rate as the length of the list. For example, [`length/1`](kernel#length/1) and [`last/1`](#last/1) will run in linear time because they need to iterate through every element of the list, but [`first/1`](#first/1) will run in constant time because it only needs the first element.
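One practical consequence (a sketch of the common idiom, not a function in this module) is that lists are usually built by prepending in a loop and reversing once at the end, which keeps the whole build linear:

```
# Prepending is O(1) per step; a single Enum.reverse/1 at the end
# avoids the quadratic cost of repeated appends.
iex> Enum.reduce(1..5, [], fn x, acc -> [x * x | acc] end) |> Enum.reverse()
[1, 4, 9, 16, 25]
```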
Charlists ---------- If a list is made of non-negative integers, where each integer represents a Unicode code point, the list can also be called a charlist. These integers must: * be within the range `0..0x10FFFF` (`0..1_114_111`); * and be out of the range `0xD800..0xDFFF` (`55_296..57_343`), which is reserved in Unicode for UTF-16 surrogate pairs. Elixir uses single quotes to define charlists: ``` iex> 'héllo' [104, 233, 108, 108, 111] ``` In particular, charlists will be printed back by default in single quotes if they contain only printable ASCII characters: ``` iex> 'abc' 'abc' ``` The rationale behind this behaviour is to better support Erlang libraries which may return text as charlists instead of Elixir strings. One example of such a function is [`Application.loaded_applications/0`](application#loaded_applications/0): ``` Application.loaded_applications() #=> [ #=> {:stdlib, 'ERTS CXC 138 10', '2.6'}, #=> {:compiler, 'ERTS CXC 138 10', '6.0.1'}, #=> {:elixir, 'elixir', '1.0.0'}, #=> {:kernel, 'ERTS CXC 138 10', '4.1'}, #=> {:logger, 'logger', '1.0.0'} #=> ] ``` You can check whether a list is made only of printable ASCII characters with [`ascii_printable?/2`](#ascii_printable?/2). Improper lists are never deemed charlists. Summary ======== Functions ---------- [ascii\_printable?(list, limit \\ :infinity)](#ascii_printable?/2) Checks if `list` is a charlist made only of printable ASCII characters. [delete(list, element)](#delete/2) Deletes the given `element` from the `list`. Returns a new list without the element. [delete\_at(list, index)](#delete_at/2) Produces a new list by removing the value at the specified `index`. [duplicate(elem, n)](#duplicate/2) Duplicates the given element `n` times in a list. [first(list)](#first/1) Returns the first element in `list` or `nil` if `list` is empty. [flatten(list)](#flatten/1) Flattens the given `list` of nested lists. [flatten(list, tail)](#flatten/2) Flattens the given `list` of nested lists. The list `tail` will be added at the end of the flattened list. [foldl(list, acc, fun)](#foldl/3) Folds (reduces) the given list from the left with a function. Requires an accumulator. [foldr(list, acc, fun)](#foldr/3) Folds (reduces) the given list from the right with a function. Requires an accumulator. [improper?(list)](#improper?/1) Returns `true` if `list` is an improper list. Otherwise returns `false`. [insert\_at(list, index, value)](#insert_at/3) Returns a list with `value` inserted at the specified `index`. [keydelete(list, key, position)](#keydelete/3) Receives a `list` of tuples and deletes the first tuple where the element at `position` matches the given `key`. Returns the new list. [keyfind(list, key, position, default \\ nil)](#keyfind/4) Receives a list of tuples and returns the first tuple where the element at `position` in the tuple matches the given `key`. [keymember?(list, key, position)](#keymember?/3) Receives a list of tuples and returns `true` if there is a tuple where the element at `position` in the tuple matches the given `key`. [keyreplace(list, key, position, new\_tuple)](#keyreplace/4) Receives a list of tuples and, if the element identified by `key` at `position` exists, replaces it with `new_tuple`. [keysort(list, position)](#keysort/2) Receives a list of tuples and sorts the elements at `position` of the tuples. The sort is stable. [keystore(list, key, position, new\_tuple)](#keystore/4) Receives a `list` of tuples and replaces the element identified by `key` at `position` with `new_tuple`.
[keytake(list, key, position)](#keytake/3) Receives a `list` of tuples and returns the first tuple where the element at `position` in the tuple matches the given `key`, as well as the `list` without the found tuple. [last(list)](#last/1) Returns the last element in `list` or `nil` if `list` is empty. [myers\_difference(list1, list2)](#myers_difference/2) Returns a keyword list that represents an *edit script*. [myers\_difference(list1, list2, diff\_script)](#myers_difference/3) Returns a keyword list that represents an *edit script* with nested diffs. [pop\_at(list, index, default \\ nil)](#pop_at/3) Returns and removes the value at the specified `index` in the `list`. [replace\_at(list, index, value)](#replace_at/3) Returns a list with a replaced value at the specified `index`. [starts\_with?(list, prefix)](#starts_with?/2) Returns `true` if `list` starts with the given `prefix` list; otherwise returns `false`. [to\_atom(charlist)](#to_atom/1) Converts a charlist to an atom. [to\_charlist(list)](#to_charlist/1) Converts a list of integers representing code points, lists or strings into a charlist. [to\_existing\_atom(charlist)](#to_existing_atom/1) Converts a charlist to an existing atom. Raises an [`ArgumentError`](argumenterror) if the atom does not exist. [to\_float(charlist)](#to_float/1) Returns the float whose text representation is `charlist`. [to\_integer(charlist)](#to_integer/1) Returns an integer whose text representation is `charlist`. [to\_integer(charlist, base)](#to_integer/2) Returns an integer whose text representation is `charlist` in base `base`. [to\_string(list)](#to_string/1) Converts a list of integers representing code points, lists or strings into a string. [to\_tuple(list)](#to_tuple/1) Converts a list to a tuple. [update\_at(list, index, fun)](#update_at/3) Returns a list with an updated value at the specified `index`. [wrap(term)](#wrap/1) Wraps `term` in a list if it is not a list. [zip(list\_of\_lists)](#zip/1) Zips corresponding elements from each list in `list_of_lists`. Functions ========== ### ascii\_printable?(list, limit \\ :infinity) #### Specs ``` ascii_printable?(list(), 0) :: true ``` ``` ascii_printable?([], limit) :: true when limit: :infinity | pos_integer() ``` ``` ascii_printable?([...], limit) :: boolean() when limit: :infinity | pos_integer() ``` Checks if `list` is a charlist made only of printable ASCII characters. Takes an optional `limit` as a second argument. [`ascii_printable?/2`](#ascii_printable?/2) only checks the printability of the list up to the `limit`. A printable charlist in Elixir contains only the printable characters in the standard seven-bit ASCII character encoding, which are characters ranging from 32 to 126 in decimal notation, plus the following control characters: * `?\a` - Bell * `?\b` - Backspace * `?\t` - Horizontal tab * `?\n` - Line feed * `?\v` - Vertical tab * `?\f` - Form feed * `?\r` - Carriage return * `?\e` - Escape For more information, read the [Character groups](https://en.wikipedia.org/wiki/ASCII#Character_groups) section in the Wikipedia article of the [ASCII](https://en.wikipedia.org/wiki/ASCII) standard.
#### Examples ``` iex> List.ascii_printable?('abc') true iex> List.ascii_printable?('abc' ++ [0]) false iex> List.ascii_printable?('abc' ++ [0], 2) true ``` Improper lists are not printable, even if made only of ASCII characters: ``` iex> List.ascii_printable?('abc' ++ ?d) false ``` ### delete(list, element) #### Specs ``` delete([], any()) :: [] ``` ``` delete([...], any()) :: list() ``` Deletes the given `element` from the `list`. Returns a new list without the element. If the `element` occurs more than once in the `list`, just the first occurrence is removed. #### Examples ``` iex> List.delete([:a, :b, :c], :a) [:b, :c] iex> List.delete([:a, :b, :c], :d) [:a, :b, :c] iex> List.delete([:a, :b, :b, :c], :b) [:a, :b, :c] iex> List.delete([], :b) [] ``` ### delete\_at(list, index) #### Specs ``` delete_at(list(), integer()) :: list() ``` Produces a new list by removing the value at the specified `index`. Negative indices indicate an offset from the end of the `list`. If `index` is out of bounds, the original `list` is returned. #### Examples ``` iex> List.delete_at([1, 2, 3], 0) [2, 3] iex> List.delete_at([1, 2, 3], 10) [1, 2, 3] iex> List.delete_at([1, 2, 3], -1) [1, 2] ``` ### duplicate(elem, n) #### Specs ``` duplicate(any(), 0) :: [] ``` ``` duplicate(elem, pos_integer()) :: [elem, ...] when elem: var ``` Duplicates the given element `n` times in a list. `n` is an integer greater than or equal to `0`. If `n` is `0`, an empty list is returned. #### Examples ``` iex> List.duplicate("hello", 0) [] iex> List.duplicate("hi", 1) ["hi"] iex> List.duplicate("bye", 2) ["bye", "bye"] iex> List.duplicate([1, 2], 3) [[1, 2], [1, 2], [1, 2]] ``` ### first(list) #### Specs ``` first([]) :: nil ``` ``` first([elem, ...]) :: elem when elem: var ``` Returns the first element in `list` or `nil` if `list` is empty. #### Examples ``` iex> List.first([]) nil iex> List.first([1]) 1 iex> List.first([1, 2, 3]) 1 ``` ### flatten(list) #### Specs ``` flatten(deep_list) :: list() when deep_list: [any() | deep_list] ``` Flattens the given `list` of nested lists. Empty list elements are discarded. #### Examples ``` iex> List.flatten([1, [[2], 3]]) [1, 2, 3] iex> List.flatten([[], [[], []]]) [] ``` ### flatten(list, tail) #### Specs ``` flatten(deep_list, [elem]) :: [elem] when deep_list: [elem | deep_list], elem: var ``` Flattens the given `list` of nested lists. The list `tail` will be added at the end of the flattened list. Empty list elements from `list` are discarded, but not the ones from `tail`. #### Examples ``` iex> List.flatten([1, [[2], 3]], [4, 5]) [1, 2, 3, 4, 5] iex> List.flatten([1, [], 2], [3, [], 4]) [1, 2, 3, [], 4] ``` ### foldl(list, acc, fun) #### Specs ``` foldl([elem], acc, (elem, acc -> acc)) :: acc when elem: var, acc: var ``` Folds (reduces) the given list from the left with a function. Requires an accumulator. #### Examples ``` iex> List.foldl([5, 5], 10, fn x, acc -> x + acc end) 20 iex> List.foldl([1, 2, 3, 4], 0, fn x, acc -> x - acc end) 2 ``` ### foldr(list, acc, fun) #### Specs ``` foldr([elem], acc, (elem, acc -> acc)) :: acc when elem: var, acc: var ``` Folds (reduces) the given list from the right with a function. Requires an accumulator. #### Examples ``` iex> List.foldr([1, 2, 3, 4], 0, fn x, acc -> x - acc end) -2 ``` ### improper?(list) #### Specs ``` improper?(maybe_improper_list()) :: boolean() ``` Returns `true` if `list` is an improper list. Otherwise returns `false`. 
#### Examples ``` iex> List.improper?([1, 2 | 3]) true iex> List.improper?([1, 2, 3]) false ``` ### insert\_at(list, index, value) #### Specs ``` insert_at(list(), integer(), any()) :: list() ``` Returns a list with `value` inserted at the specified `index`. Note that `index` is capped at the list length. Negative indices indicate an offset from the end of the `list`. #### Examples ``` iex> List.insert_at([1, 2, 3, 4], 2, 0) [1, 2, 0, 3, 4] iex> List.insert_at([1, 2, 3], 10, 0) [1, 2, 3, 0] iex> List.insert_at([1, 2, 3], -1, 0) [1, 2, 3, 0] iex> List.insert_at([1, 2, 3], -10, 0) [0, 1, 2, 3] ``` ### keydelete(list, key, position) #### Specs ``` keydelete([tuple()], any(), non_neg_integer()) :: [tuple()] ``` Receives a `list` of tuples and deletes the first tuple where the element at `position` matches the given `key`. Returns the new list. #### Examples ``` iex> List.keydelete([a: 1, b: 2], :a, 0) [b: 2] iex> List.keydelete([a: 1, b: 2], 2, 1) [a: 1] iex> List.keydelete([a: 1, b: 2], :c, 0) [a: 1, b: 2] ``` ### keyfind(list, key, position, default \\ nil) #### Specs ``` keyfind([tuple()], any(), non_neg_integer(), any()) :: any() ``` Receives a list of tuples and returns the first tuple where the element at `position` in the tuple matches the given `key`. If no matching tuple is found, `default` is returned. #### Examples ``` iex> List.keyfind([a: 1, b: 2], :a, 0) {:a, 1} iex> List.keyfind([a: 1, b: 2], 2, 1) {:b, 2} iex> List.keyfind([a: 1, b: 2], :c, 0) nil ``` ### keymember?(list, key, position) #### Specs ``` keymember?([tuple()], any(), non_neg_integer()) :: boolean() ``` Receives a list of tuples and returns `true` if there is a tuple where the element at `position` in the tuple matches the given `key`. #### Examples ``` iex> List.keymember?([a: 1, b: 2], :a, 0) true iex> List.keymember?([a: 1, b: 2], 2, 1) true iex> List.keymember?([a: 1, b: 2], :c, 0) false ``` ### keyreplace(list, key, position, new\_tuple) #### Specs ``` keyreplace([tuple()], any(), non_neg_integer(), tuple()) :: [tuple()] ``` Receives a list of tuples and, if the element identified by `key` at `position` exists, replaces it with `new_tuple`. #### Examples ``` iex> List.keyreplace([a: 1, b: 2], :a, 0, {:a, 3}) [a: 3, b: 2] iex> List.keyreplace([a: 1, b: 2], :a, 1, {:a, 3}) [a: 1, b: 2] ``` ### keysort(list, position) #### Specs ``` keysort([tuple()], non_neg_integer()) :: [tuple()] ``` Receives a list of tuples and sorts the elements at `position` of the tuples. The sort is stable. #### Examples ``` iex> List.keysort([a: 5, b: 1, c: 3], 1) [b: 1, c: 3, a: 5] iex> List.keysort([a: 5, c: 1, b: 3], 0) [a: 5, b: 3, c: 1] ``` ### keystore(list, key, position, new\_tuple) #### Specs ``` keystore([tuple()], any(), non_neg_integer(), tuple()) :: [tuple(), ...] ``` Receives a `list` of tuples and replaces the element identified by `key` at `position` with `new_tuple`. If the element does not exist, it is added to the end of the `list`. #### Examples ``` iex> List.keystore([a: 1, b: 2], :a, 0, {:a, 3}) [a: 3, b: 2] iex> List.keystore([a: 1, b: 2], :c, 0, {:c, 3}) [a: 1, b: 2, c: 3] ``` ### keytake(list, key, position) #### Specs ``` keytake([tuple()], any(), non_neg_integer()) :: {tuple(), [tuple()]} | nil ``` Receives a `list` of tuples and returns the first tuple where the element at `position` in the tuple matches the given `key`, as well as the `list` without the found tuple. If such a tuple is not found, `nil` will be returned.
#### Examples ``` iex> List.keytake([a: 1, b: 2], :a, 0) {{:a, 1}, [b: 2]} iex> List.keytake([a: 1, b: 2], 2, 1) {{:b, 2}, [a: 1]} iex> List.keytake([a: 1, b: 2], :c, 0) nil ``` ### last(list) #### Specs ``` last([]) :: nil ``` ``` last([elem, ...]) :: elem when elem: var ``` Returns the last element in `list` or `nil` if `list` is empty. #### Examples ``` iex> List.last([]) nil iex> List.last([1]) 1 iex> List.last([1, 2, 3]) 3 ``` ### myers\_difference(list1, list2) #### Specs ``` myers_difference(list(), list()) :: [{:eq | :ins | :del, list()}] ``` Returns a keyword list that represents an *edit script*. The algorithm is outlined in the "An O(ND) Difference Algorithm and Its Variations" paper by E. Myers. An *edit script* is a keyword list. Each key describes the "editing action" to take in order to bring `list1` closer to being equal to `list2`; a key can be `:eq`, `:ins`, or `:del`. Each value is a sublist of either `list1` or `list2` that should be inserted (if the corresponding key is `:ins`), deleted (if the corresponding key is `:del`), or left alone (if the corresponding key is `:eq`) in `list1` in order to be closer to `list2`. See [`myers_difference/3`](#myers_difference/3) if you want to handle nesting in the diff scripts. #### Examples ``` iex> List.myers_difference([1, 4, 2, 3], [1, 2, 3, 4]) [eq: [1], del: [4], eq: [2, 3], ins: [4]] ``` ### myers\_difference(list1, list2, diff\_script) #### Specs ``` myers_difference(list(), list(), (term(), term() -> script | nil)) :: script when script: [{:eq | :ins | :del | :diff, list()}] ``` Returns a keyword list that represents an *edit script* with nested diffs. This is an extension of [`myers_difference/2`](#myers_difference/2) where a `diff_script` function can be given to compute nested differences. The function may return a list with the inner edit script or `nil` in case there is no such script. The returned inner edit script will be under the `:diff` key. #### Examples ``` iex> List.myers_difference(["a", "db", "c"], ["a", "bc"], &String.myers_difference/2) [eq: ["a"], diff: [del: "d", eq: "b", ins: "c"], del: ["c"]] ``` ### pop\_at(list, index, default \\ nil) #### Specs ``` pop_at(list(), integer(), any()) :: {any(), list()} ``` Returns and removes the value at the specified `index` in the `list`. Negative indices indicate an offset from the end of the `list`. If `index` is out of bounds, the original `list` is returned. #### Examples ``` iex> List.pop_at([1, 2, 3], 0) {1, [2, 3]} iex> List.pop_at([1, 2, 3], 5) {nil, [1, 2, 3]} iex> List.pop_at([1, 2, 3], 5, 10) {10, [1, 2, 3]} iex> List.pop_at([1, 2, 3], -1) {3, [1, 2]} ``` ### replace\_at(list, index, value) #### Specs ``` replace_at(list(), integer(), any()) :: list() ``` Returns a list with a replaced value at the specified `index`. Negative indices indicate an offset from the end of the `list`. If `index` is out of bounds, the original `list` is returned. #### Examples ``` iex> List.replace_at([1, 2, 3], 0, 0) [0, 2, 3] iex> List.replace_at([1, 2, 3], 10, 0) [1, 2, 3] iex> List.replace_at([1, 2, 3], -1, 0) [1, 2, 0] iex> List.replace_at([1, 2, 3], -10, 0) [1, 2, 3] ``` ### starts\_with?(list, prefix) #### Specs ``` starts_with?([...], [...]) :: boolean() ``` ``` starts_with?(list(), []) :: true ``` ``` starts_with?([], [...]) :: false ``` Returns `true` if `list` starts with the given `prefix` list; otherwise returns `false`. If `prefix` is an empty list, it returns `true`.
#### Examples ``` iex> List.starts_with?([1, 2, 3], [1, 2]) true iex> List.starts_with?([1, 2], [1, 2, 3]) false iex> List.starts_with?([:alpha], []) true iex> List.starts_with?([], [:alpha]) false ``` ### to\_atom(charlist) #### Specs ``` to_atom(charlist()) :: atom() ``` Converts a charlist to an atom. Elixir supports conversions from charlists which contain any Unicode code point. Inlined by the compiler. #### Examples ``` iex> List.to_atom('Elixir') :Elixir iex> List.to_atom('🌢 Elixir') :"🌢 Elixir" ``` ### to\_charlist(list) #### Specs ``` to_charlist(:unicode.charlist()) :: charlist() ``` Converts a list of integers representing code points, lists or strings into a charlist. Notice that this function expects a list of integers representing Unicode code points. If you have a list of bytes, you must instead use the [`:binary` module](http://www.erlang.org/doc/man/binary.html). #### Examples ``` iex> List.to_charlist([0x00E6, 0x00DF]) 'æß' iex> List.to_charlist([0x0061, "bc"]) 'abc' iex> List.to_charlist([0x0064, "ee", ['p']]) 'deep' ``` ### to\_existing\_atom(charlist) #### Specs ``` to_existing_atom(charlist()) :: atom() ``` Converts a charlist to an existing atom. Raises an [`ArgumentError`](argumenterror) if the atom does not exist. Elixir supports conversions from charlists which contain any Unicode code point. Inlined by the compiler. #### Examples ``` iex> _ = :my_atom iex> List.to_existing_atom('my_atom') :my_atom iex> _ = :"🌢 Elixir" iex> List.to_existing_atom('🌢 Elixir') :"🌢 Elixir" iex> List.to_existing_atom('this_atom_will_never_exist') ** (ArgumentError) argument error ``` ### to\_float(charlist) #### Specs ``` to_float(charlist()) :: float() ``` Returns the float whose text representation is `charlist`. Inlined by the compiler. #### Examples ``` iex> List.to_float('2.2017764e+0') 2.2017764 ``` ### to\_integer(charlist) #### Specs ``` to_integer(charlist()) :: integer() ``` Returns an integer whose text representation is `charlist`. Inlined by the compiler. #### Examples ``` iex> List.to_integer('123') 123 ``` ### to\_integer(charlist, base) #### Specs ``` to_integer(charlist(), 2..36) :: integer() ``` Returns an integer whose text representation is `charlist` in base `base`. Inlined by the compiler. #### Examples ``` iex> List.to_integer('3FF', 16) 1023 ``` ### to\_string(list) #### Specs ``` to_string(:unicode.charlist()) :: String.t() ``` Converts a list of integers representing code points, lists or strings into a string. To be converted to a string, a list must either be empty or only contain the following elements: * strings * integers representing Unicode code points * a list containing one of these three elements Notice that this function expects a list of integers representing Unicode code points. If you have a list of bytes, you must instead use the [`:binary` module](http://www.erlang.org/doc/man/binary.html). #### Examples ``` iex> List.to_string([0x00E6, 0x00DF]) "æß" iex> List.to_string([0x0061, "bc"]) "abc" iex> List.to_string([0x0064, "ee", ['p']]) "deep" iex> List.to_string([]) "" ``` ### to\_tuple(list) #### Specs ``` to_tuple(list()) :: tuple() ``` Converts a list to a tuple. Inlined by the compiler. #### Examples ``` iex> List.to_tuple([:share, [:elixir, 163]]) {:share, [:elixir, 163]} ``` ### update\_at(list, index, fun) #### Specs ``` update_at([elem], integer(), (elem -> any())) :: list() when elem: var ``` Returns a list with an updated value at the specified `index`. Negative indices indicate an offset from the end of the `list`.
If `index` is out of bounds, the original `list` is returned. #### Examples ``` iex> List.update_at([1, 2, 3], 0, &(&1 + 10)) [11, 2, 3] iex> List.update_at([1, 2, 3], 10, &(&1 + 10)) [1, 2, 3] iex> List.update_at([1, 2, 3], -1, &(&1 + 10)) [1, 2, 13] iex> List.update_at([1, 2, 3], -10, &(&1 + 10)) [1, 2, 3] ``` ### wrap(term) #### Specs ``` wrap(term()) :: maybe_improper_list() ``` Wraps `term` in a list if it is not already a list. If `term` is already a list, it returns the list. If `term` is `nil`, it returns an empty list. #### Examples ``` iex> List.wrap("hello") ["hello"] iex> List.wrap([1, 2, 3]) [1, 2, 3] iex> List.wrap(nil) [] ``` ### zip(list\_of\_lists) #### Specs ``` zip([list()]) :: [tuple()] ``` Zips corresponding elements from each list in `list_of_lists`. The zipping finishes as soon as any list terminates. #### Examples ``` iex> List.zip([[1, 2], [3, 4], [5, 6]]) [{1, 3, 5}, {2, 4, 6}] iex> List.zip([[1, 2], [3], [5, 6]]) [{1, 3, 5}] ```
elixir Debugging Getting Started Debugging ========= There are a number of ways to debug code in Elixir. In this chapter we will cover some of the more common ways of doing so. IO.inspect/2 ------------ What makes `IO.inspect(item, opts \\ [])` really useful in debugging is that it returns the `item` argument passed to it without affecting the behavior of the original code. Let’s see an example. ``` (1..10) |> IO.inspect |> Enum.map(fn x -> x * 2 end) |> IO.inspect |> Enum.sum |> IO.inspect ``` Prints: ``` 1..10 [2, 4, 6, 8, 10, 12, 14, 16, 18, 20] 110 ``` As you can see `IO.inspect/2` makes it possible to “spy” on values almost anywhere in your code without altering the result, making it very helpful inside of a pipeline like in the above case. `IO.inspect/2` also provides the ability to decorate the output with a `label` option. The label will be printed before the inspected `item`: ``` [1, 2, 3] |> IO.inspect(label: "before") |> Enum.map(&(&1 * 2)) |> IO.inspect(label: "after") |> Enum.sum ``` Prints: ``` before: [1, 2, 3] after: [2, 4, 6] ``` It is also very common to use `IO.inspect/2` with [`binding()`](https://hexdocs.pm/elixir/Kernel.html#binding/0), which returns all variable names and their values: ``` def some_fun(a, b, c) do IO.inspect binding() ... end ``` When `some_fun/3` is invoked with `:foo`, `"bar"`, `:baz` it prints: ``` [a: :foo, b: "bar", c: :baz] ``` Please see [IO.inspect/2](https://hexdocs.pm/elixir/IO.html#inspect/2) to read more about other ways in which one could use this function. Also, in order to find a full list of other formatting options that one can use alongside `IO.inspect/2`, see [Inspect.Opts](https://hexdocs.pm/elixir/Inspect.Opts.html). `IEx.pry/0` and `IEx.break!/2` ------------------------------- While `IO.inspect/2` is static, Elixir’s interactive shell provides more dynamic ways to interact with debugged code. The first one is with [`IEx.pry/0`](https://hexdocs.pm/iex/IEx.html#pry/0) which we can use instead of `IO.inspect binding()`: ``` def some_fun(a, b, c) do require IEx; IEx.pry ... end ``` Once the code above is executed inside an `iex` session, IEx will ask if we want to pry into the current code. If accepted, we will be able to access all variables, as well as imports and aliases from the code, directly from IEx. While pry is running, the code execution stops until `continue` is called. Remember you can always run `iex` in the context of a project with `iex -S mix TASK`. Unfortunately, similar to `IO.inspect/2`, `IEx.pry/0` also requires us to change the code we intend to debug. Luckily IEx also provides a [`break!/2`](https://hexdocs.pm/iex/IEx.html#break!/2) function which allows you to set and manage breakpoints on any Elixir code without modifying its source: [See the example in asciinema](https://asciinema.org/a/0h3po0AmTcBAorc5GBNU97nrs) Similar to `IEx.pry/0`, once a breakpoint is reached code execution stops until `continue` is invoked. However, note that `break!/2` does not have access to aliases and imports from the debugged code as it works on the compiled artifact rather than on the source. Debugger -------- For those who enjoy breakpoints but are rather interested in a visual debugger, Erlang/OTP ships with a graphical debugger conveniently named `:debugger`.
Let’s define a module in a file named `example.ex`: ``` defmodule Example do def double_sum(x, y) do hard_work(x, y) end defp hard_work(x, y) do x = 2 * x y = 2 * y x + y end end ``` Now let’s start an IEx session to compile the file and start the debugger: ``` $ iex iex(1)> c "example.ex" [Example] iex(2)> :debugger.start() {:ok, #PID<0.87.0>} iex(3)> :int.ni(Example) {:module, Example} iex(4)> :int.break(Example, 3) :ok iex(5)> Example.double_sum(1,2) ``` > If the `debugger` does not start, here is what may have happened: some package managers default to installing a minimized Erlang without WX bindings for GUI support. In some package managers, you may be able to replace the headless Erlang with a more complete package (look for packages named `erlang` vs `erlang-nox` on Debian/Ubuntu/Arch). In other package managers, you may need to install a separate `erlang-wx` (or similarly named) package. > > When you start the debugger, a Graphical User Interface will open on your machine. We call `:int.ni(Example)` to prepare our module for debugging and then add a breakpoint to line 3 with `:int.break(Example, 3)`. After we call our function, we can see our process with break status in the debugger: ![Debugger GUI GIF](https://elixir-lang.org/images/contents/debugger-elixir.gif) Observer -------- For debugging complex systems, jumping into the code is not enough. It is necessary to have an understanding of the whole virtual machine, processes, and applications, as well as the ability to set up tracing mechanisms. Luckily this can be achieved in Erlang with `:observer`. In your application: ``` $ iex iex(1)> :observer.start() ``` > Similar to the `debugger` note above, your package manager may require a separate installation in order to run Observer. > > The above will open another Graphical User Interface that provides many panes to fully understand and navigate the runtime and your project. We explore the Observer in the context of an actual project [in the Dynamic Supervisor chapter of the Mix & OTP guide](mix-otp/dynamic-supervisor). This is one of the debugging techniques [the Phoenix framework used to achieve 2 million connections on a single machine](https://phoenixframework.org/blog/the-road-to-2-million-websocket-connections). Finally, remember you can also get a mini-overview of the runtime info by calling `runtime_info/0` directly in IEx. Other tools and community ------------------------- We have just scratched the surface of what the Erlang VM has to offer, for example: * Alongside the observer application, Erlang also includes a `:crashdump_viewer` to view crash dumps * Integration with OS level tracers, such as [Linux Trace Toolkit,](http://erlang.org/doc/apps/runtime_tools/LTTng.html) [DTRACE,](http://erlang.org/doc/apps/runtime_tools/DTRACE.html) and [SystemTap](http://erlang.org/doc/apps/runtime_tools/SYSTEMTAP.html) * [Microstate accounting](http://erlang.org/doc/man/msacc.html) measures how much time the runtime spends in several low-level tasks in a short time interval * Mix ships with many tasks under the `profile` namespace, such as `cprof` and `fprof` * And more The community has also created its own tools, often to aid in production, other times in development: * [wObserver](https://github.com/shinyscorpion/wObserver) observes production nodes through a web interface. * [visualixir](https://github.com/koudelka/visualixir) is a development-time process message visualizer. * [erlyberly](https://github.com/andytill/erlyberly) is a GUI for tracing during development.
There are probably many more to come too! elixir mix deps mix deps ========= Lists all dependencies and their status. Dependencies must be specified in the `mix.exs` file in one of the following formats: ``` {app, requirement} {app, opts} {app, requirement, opts} ``` Where: * app is an atom * requirement is a [`Version`](https://hexdocs.pm/elixir/Version.html) requirement or a regular expression * opts is a keyword list of options For example: ``` {:plug, ">= 0.4.0"} {:gettext, git: "https://github.com/elixir-lang/gettext.git", tag: "0.1"} {:local_dependency, path: "path/to/local_dependency"} ``` By default, dependencies are fetched using the [Hex package manager](https://hex.pm/): ``` {:plug, ">= 0.4.0"} ``` By specifying such dependencies, Mix will automatically install Hex (if it wasn't previously installed) and download a package suitable to your project. Note that Hex expects the dependency requirement to always be given and it will warn otherwise. Mix also supports Git and path dependencies: ``` {:foobar, git: "https://github.com/elixir-lang/foobar.git", tag: "0.1"} {:foobar, path: "path/to/foobar"} ``` And also in-umbrella dependencies: ``` {:my_app, in_umbrella: true} ``` Path and in-umbrella dependencies are automatically recompiled by the parent project whenever they change, while fetchable dependencies, like the ones using `:git`, are recompiled only when fetched/updated. The dependencies' versions are expected to be formatted according to Semantic Versioning and the requirements must be specified as defined in the [`Version`](https://hexdocs.pm/elixir/Version.html) module. Options -------- Below we provide a more detailed look into the available options. ### Dependency definition options * `:app` - when set to `false`, does not read the app file for this dependency. By default, the app file is read * `:env` - the environment (as an atom) to run the dependency on; defaults to `:prod` * `:compile` - a command (string) to compile the dependency; defaults to a `mix`, `rebar` or `make` command * `:optional` - marks the dependency as optional. In such cases, the current project will always include the optional dependency but any other project that depends on the current project won't be forced to use the optional dependency. However, if the other project includes the optional dependency on its own, the requirements and options specified here will also be applied. * `:only` - the dependency is made available only in the given environments, useful when declaring dev- or test-only dependencies; by default the dependency will be available in all environments. The value of this option can either be a single environment (like `:dev`) or a list of environments (like `[:dev, :test]`) * `:targets` - the dependency is made available only for the given targets. By default the dependency will be available for all targets. The value of this option can either be a single target (like `:host`) or a list of targets (like `[:host, :rpi3]`) * `:override` - if set to `true` the dependency will override any other definitions of itself by other dependencies * `:manager` - Mix can also compile Rebar, Rebar3 and makefile projects and can fetch sub dependencies of Rebar and Rebar3 projects. Mix will try to infer the type of project but it can be overridden with this option by setting it to `:mix`, `:rebar3`, `:rebar` or `:make`. In case there are conflicting definitions, the first manager in the list above will be picked up.
For example, if a dependency is found with `:rebar3` and `:rebar` managers in different parts of the dependency tree, `:rebar3` will be automatically picked. You can find the manager by running [`mix deps`](#content) and override it by setting the `:override` option in a top-level project. * `:runtime` - whether the dependency is part of runtime applications. If the `:applications` key is not provided in `def application` in your `mix.exs` file, Mix will automatically include all dependencies as a runtime application, except if `runtime: false` is given. Defaults to true. * `:system_env` - an enumerable of key-value tuples of binaries to be set as environment variables when loading or compiling the dependency ### Git options (`:git`) * `:git` - the Git repository URI * `:github` - a shortcut for specifying Git repos from GitHub, uses `:git` * `:ref` - the reference to checkout (may be a branch, a commit SHA or a tag) * `:branch` - the Git branch to checkout * `:tag` - the Git tag to checkout * `:submodules` - when `true`, initialize submodules for the repo * `:sparse` - checkout a single directory inside the Git repository and use it as your Mix dependency. Search "sparse git checkouts" for more information. If your Git repository requires authentication, such as basic username:password HTTP authentication via URLs, it can be achieved via Git configuration, keeping the access rules outside of source control. ``` git config --global url."https://YOUR_USER:YOUR_PASS@example.com/".insteadOf "https://example.com/" ``` For more information, see the `git config` documentation: <https://git-scm.com/docs/git-config#git-config-urlltbasegtinsteadOf> ### Path options (`:path`) * `:path` - the path for the dependency * `:in_umbrella` - when `true`, sets a path dependency pointing to "../#{app}", sharing the same environment as the current application ### Hex options (`:hex`) See the [Hex usage documentation](https://hex.pm/docs/usage) for Hex options. Deps task ---------- The [`mix deps`](#content) task lists all dependencies in the following format: ``` APP VERSION (SCM) (MANAGER) [locked at REF] STATUS ``` It supports the following options: * `--all` - lists all dependencies, regardless of specified environment elixir Protocol Protocol ========= Reference and functions for working with protocols. A protocol specifies an API that should be defined by its implementations. A protocol is defined with [`Kernel.defprotocol/2`](kernel#defprotocol/2) and its implementations with [`Kernel.defimpl/2`](kernel#defimpl/2). Examples --------- In Elixir, we have two verbs for checking how many items there are in a data structure: `length` and `size`. `length` means the information must be computed. For example, `length(list)` needs to traverse the whole list to calculate its length. On the other hand, `tuple_size(tuple)` and `byte_size(binary)` do not depend on the tuple and binary size as the size information is precomputed in the data structure. Although Elixir includes specific functions such as `tuple_size`, `byte_size` and `map_size`, sometimes we want to be able to retrieve the size of a data structure regardless of its type. In Elixir we can write polymorphic code, i.e. code that works with different shapes/types, by using protocols. A size protocol could be implemented as follows: ``` defprotocol Size do @doc "Calculates the size (and not the length!)
of a data structure" def size(data) end ``` The protocol can now be implemented for every data structure that can provide a compliant implementation: ``` defimpl Size, for: BitString do def size(binary), do: byte_size(binary) end defimpl Size, for: Map do def size(map), do: map_size(map) end defimpl Size, for: Tuple do def size(tuple), do: tuple_size(tuple) end ``` Notice we didn't implement it for lists as we don't have the `size` information on lists; rather, its value needs to be computed with `length`. It is possible to implement protocols for all Elixir types: * Structs (see below) * [`Tuple`](tuple) * [`Atom`](atom) * [`List`](list) * `BitString` * [`Integer`](integer) * [`Float`](float) * [`Function`](function) * `PID` * [`Map`](map) * [`Port`](port) * `Reference` * `Any` (see below) Protocols and Structs ---------------------- The real benefit of protocols comes when mixed with structs. For instance, Elixir ships with many data types implemented as structs, like [`MapSet`](mapset). We can implement the `Size` protocol for those types as well: ``` defimpl Size, for: MapSet do def size(map_set), do: MapSet.size(map_set) end ``` When implementing a protocol for a struct, the `:for` option can be omitted if the `defimpl` call is inside the module that defines the struct: ``` defmodule User do defstruct [:email, :name] defimpl Size do # two fields def size(%User{}), do: 2 end end ``` If a protocol implementation is not found for a given type, invoking the protocol will raise unless it is configured to fall back to `Any`. Conveniences for building implementations on top of existing ones are also available; look at [`defstruct/1`](kernel#defstruct/1) for more information about deriving protocols. Fallback to `Any` ------------------ In some cases, it may be convenient to provide a default implementation for all types. This can be achieved by setting the `@fallback_to_any` attribute to `true` in the protocol definition: ``` defprotocol Size do @fallback_to_any true def size(data) end ``` The `Size` protocol can now be implemented for `Any`: ``` defimpl Size, for: Any do def size(_), do: 0 end ``` The implementation above is arguably not a reasonable one, though. For example, it makes no sense to say a PID or an integer has a size of `0`. That's one of the reasons why `@fallback_to_any` is an opt-in behaviour. For the majority of protocols, raising an error when a protocol is not implemented is the proper behaviour. Multiple implementations ------------------------- Protocols can also be implemented for multiple types at once: ``` defprotocol Reversible do def reverse(term) end defimpl Reversible, for: [Map, List] do def reverse(term), do: Enum.reverse(term) end ``` Inside [`defimpl/2`](kernel#defimpl/2), you can use `@protocol` to access the protocol being implemented and `@for` to access the module it is being defined for. Types ------ Defining a protocol automatically defines a type named `t`, which can be used as follows: ``` @spec print_size(Size.t()) :: :ok def print_size(data) do result = case Size.size(data) do 0 -> "data has no items" 1 -> "data has one item" n -> "data has #{n} items" end IO.puts(result) end ``` The `@spec` above expresses that all types allowed to implement the given protocol are valid argument types for the given function. Reflection ----------- Any protocol module contains three extra functions: * `__protocol__/1` - returns the protocol information.
The function takes one of the following atoms: + `:consolidated?` - returns whether the protocol is consolidated + `:functions` - returns a keyword list of protocol functions and their arities + `:impls` - if consolidated, returns `{:consolidated, modules}` with the list of modules implementing the protocol, otherwise `:not_consolidated` + `:module` - the protocol module atom name * `impl_for/1` - receives a structure and returns the module that implements the protocol for the structure, `nil` otherwise * `impl_for!/1` - same as above but raises an error if an implementation is not found For example, for the [`Enumerable`](enumerable) protocol we have: ``` iex> Enumerable.__protocol__(:functions) [count: 1, member?: 2, reduce: 3, slice: 1] iex> Enumerable.impl_for([]) Enumerable.List iex> Enumerable.impl_for(42) nil ``` Consolidation -------------- In order to cope with code loading in development, protocols in Elixir provide a slow implementation of protocol dispatching specific to development. In order to speed up dispatching in production environments, where all implementations are known up-front, Elixir provides a feature called protocol consolidation. Consolidation directly links protocols to their implementations in a way that invoking a function from a consolidated protocol is equivalent to invoking two remote functions. Protocol consolidation is applied by default to all Mix projects during compilation. This may be an issue during tests. For instance, if you want to implement a protocol during tests, the implementation will have no effect, as the protocol has already been consolidated. One possible solution is to include compilation directories that are specific to your test environment in your mix.exs: ``` def project do ... elixirc_paths: elixirc_paths(Mix.env()) ... end defp elixirc_paths(:test), do: ["lib", "test/support"] defp elixirc_paths(_), do: ["lib"] ``` And then you can define the implementations specific to the test environment inside `test/support/some_file.ex`. Another approach is to disable protocol consolidation during tests in your mix.exs: ``` def project do ... consolidate_protocols: Mix.env() != :test ... end ``` However, doing so is not recommended as it may affect your test suite performance. Finally, note that all protocols are compiled with `debug_info` set to `true`, regardless of the option set by the `elixirc` compiler. The debug info is used for consolidation and it may be removed after consolidation. Summary ======== Functions ---------- [assert\_impl!(protocol, base)](#assert_impl!/2) Checks if the given module is loaded and is an implementation of the given protocol. [assert\_protocol!(module)](#assert_protocol!/1) Checks if the given module is loaded and is a protocol. [consolidate(protocol, types)](#consolidate/2) Receives a protocol and a list of implementations and consolidates the given protocol. [consolidated?(protocol)](#consolidated?/1) Returns `true` if the protocol was consolidated. [derive(protocol, module, options \\ [])](#derive/3) Derives the `protocol` for `module` with the given options. [extract\_impls(protocol, paths)](#extract_impls/2) Extracts all types implemented for the given protocol from the given paths. [extract\_protocols(paths)](#extract_protocols/1) Extracts all protocols from the given paths. Functions ========== ### assert\_impl!(protocol, base) #### Specs ``` assert_impl!(module(), module()) :: :ok ``` Checks if the given module is loaded and is an implementation of the given protocol.
Returns `:ok` if so, otherwise raises [`ArgumentError`](argumenterror). ### assert\_protocol!(module) #### Specs ``` assert_protocol!(module()) :: :ok ``` Checks if the given module is loaded and is a protocol. Returns `:ok` if so, otherwise raises [`ArgumentError`](argumenterror). ### consolidate(protocol, types) #### Specs ``` consolidate(module(), [module()]) :: {:ok, binary()} | {:error, :not_a_protocol} | {:error, :no_beam_info} ``` Receives a protocol and a list of implementations and consolidates the given protocol. Consolidation happens by changing the protocol `impl_for` in the abstract format to have fast lookup rules. Usually the list of implementations to use during consolidation is retrieved with the help of [`extract_impls/2`](#extract_impls/2). It returns the updated version of the protocol bytecode. If the first element of the tuple is `:ok`, it means the protocol was consolidated. A given bytecode or protocol implementation can be checked to be consolidated or not by analyzing the protocol attribute: ``` Protocol.consolidated?(Enumerable) ``` This function does not load the protocol at any point nor does it load the new bytecode for the compiled module. However, each implementation must be available and it will be loaded. ### consolidated?(protocol) #### Specs ``` consolidated?(module()) :: boolean() ``` Returns `true` if the protocol was consolidated. ### derive(protocol, module, options \\ []) Derives the `protocol` for `module` with the given options. If your implementation passes options or if you are generating custom code based on the struct, you will also need to implement a macro defined as `__deriving__(module, struct, options)` to get the options that were passed. #### Examples ``` defprotocol Derivable do def ok(arg) end defimpl Derivable, for: Any do defmacro __deriving__(module, struct, options) do quote do defimpl Derivable, for: unquote(module) do def ok(arg) do {:ok, arg, unquote(Macro.escape(struct)), unquote(options)} end end end end def ok(arg) do {:ok, arg} end end defmodule ImplStruct do @derive [Derivable] defstruct a: 0, b: 0 end Derivable.ok(%ImplStruct{}) {:ok, %ImplStruct{a: 0, b: 0}, %ImplStruct{a: 0, b: 0}, []} ``` Explicit derivations can now be called via `__deriving__`: ``` # Explicitly derived via `__deriving__` Derivable.ok(%ImplStruct{a: 1, b: 1}) # Explicitly derived by API via `__deriving__` require Protocol Protocol.derive(Derivable, ImplStruct, :oops) Derivable.ok(%ImplStruct{a: 1, b: 1}) ``` ### extract\_impls(protocol, paths) #### Specs ``` extract_impls(module(), [charlist() | String.t()]) :: [atom()] ``` Extracts all types implemented for the given protocol from the given paths. The paths can be either a charlist or a string. Internally they are worked on as charlists, so passing them as lists avoids extra conversion. Does not load any of the implementations. #### Examples ``` # Get Elixir's ebin directory path and retrieve all protocols iex> path = :code.lib_dir(:elixir, :ebin) iex> mods = Protocol.extract_impls(Enumerable, [path]) iex> List in mods true ``` ### extract\_protocols(paths) #### Specs ``` extract_protocols([charlist() | String.t()]) :: [atom()] ``` Extracts all protocols from the given paths. The paths can be either a charlist or a string. Internally they are worked on as charlists, so passing them as lists avoids extra conversion. Does not load any of the protocols.
#### Examples ``` # Get Elixir's ebin directory path and retrieve all protocols iex> path = :code.lib_dir(:elixir, :ebin) iex> mods = Protocol.extract_protocols([path]) iex> Enumerable in mods true ```
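Putting the two helpers above together, a manual consolidation pass could look like the sketch below; the protocol and path are illustrative, and Mix normally performs this step for you during compilation:

```
# Collect the implementations visible on Elixir's ebin path,
# then consolidate the Enumerable protocol with them.
path = :code.lib_dir(:elixir, :ebin)
impls = Protocol.extract_impls(Enumerable, [path])
{:ok, bytecode} = Protocol.consolidate(Enumerable, impls)
is_binary(bytecode)
#=> true
```

The resulting `bytecode` is what a build tool would write out in place of the unconsolidated protocol module.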
elixir File.Stream File.Stream ============ Defines a [`File.Stream`](#content) struct returned by [`File.stream!/3`](file#stream!/3). The following fields are public: * `path` - the file path * `modes` - the file modes * `raw` - a boolean indicating if bin functions should be used * `line_or_bytes` - if reading should read lines or a given number of bytes Summary ======== Types ------ [t()](#t:t/0) Types ====== ### t() #### Specs ``` t() :: %File.Stream{ line_or_bytes: term(), modes: term(), path: term(), raw: term() } ``` elixir Binaries, strings, and charlists Getting Started Binaries, strings, and charlists ================================ In “Basic types”, we learned a little bit about strings and we used the `is_binary/1` function for checks: ``` iex> string = "hello" "hello" iex> is_binary(string) true ``` In this chapter, we will gain clarity on what exactly binaries are, how they relate to strings, and what single-quoted values, `'like this'`, mean in Elixir. Although strings are one of the most common data types in computer languages, they are subtly complex and are often misunderstood. To understand strings in Elixir, we have to educate ourselves about [Unicode](https://en.wikipedia.org/wiki/Unicode) and character encodings, specifically the [UTF-8](https://en.wikipedia.org/wiki/UTF-8) encoding. Unicode and Code Points ----------------------- In order to facilitate meaningful communication between computers across multiple languages, a standard is required so that the ones and zeros on one machine mean the same thing when they are transmitted to another. The [Unicode Standard](https://unicode.org/standard/standard.html) acts as an official registry of virtually all the characters we know: this includes characters from classical and historical texts, emoji, and formatting and control characters as well. Unicode organizes all of the characters in its repertoire into code charts, and each character is given a unique numerical index. This numerical index is known as a [Code Point](https://en.wikipedia.org/wiki/Code_point). In Elixir you can use a `?` in front of a character literal to reveal its code point: ``` iex> ?a 97 iex> ?ł 322 ``` Note that most Unicode code charts will refer to a code point by its hexadecimal representation, e.g. `97` translates to `0061` in hex, and we can represent any Unicode character in an Elixir string by using the `\u` notation and the hex representation of its code point number: ``` iex> "\u0061" === "a" true iex> 0x0061 = 97 = ?a 97 ``` The hex representation will also help you look up information about a code point, e.g. <https://codepoints.net/U+0061> has a data sheet all about the lower case `a`, a.k.a. code point 97. UTF-8 and Encodings ------------------- Now that we understand what the Unicode standard is and what code points are, we can finally talk about encodings. Whereas the code point is **what** we store, an encoding deals with **how** we store it: encoding is an implementation. In other words, we need a mechanism to convert the code point numbers into bytes so they can be stored in memory, written to disk, etc. Elixir uses UTF-8 to encode its strings, which means that code points are encoded as a series of 8-bit bytes. UTF-8 is a **variable width** character encoding that uses one to four bytes to store each code point; it is capable of encoding all valid Unicode code points. Besides defining characters, UTF-8 also provides a notion of graphemes. Graphemes may consist of multiple characters that are often perceived as one. 
For example, `é` can be represented in Unicode as a single character. It can also be represented as the combination of the character `e` and the acute accent character `´` into a single grapheme. In other words, what we would expect to be a single character, such as `é` or `ł`, can in practice be multiple characters, each represented by potentially multiple bytes. Consider the following: ``` iex> string = "hełło" "hełło" iex> String.length(string) 5 iex> byte_size(string) 7 ``` `String.length/1` counts graphemes, but `byte_size/1` reveals the number of underlying raw bytes needed to store the string when using UTF-8 encoding. UTF-8 requires one byte to represent the characters `h`, `e`, and `o`, but two bytes to represent `ł`. > Note: if you are running on Windows, there is a chance your terminal does not use UTF-8 by default. You can change the encoding of your current session by running `chcp 65001` before entering `iex` (`iex.bat`). > > A common trick in Elixir when you want to see the inner binary representation of a string is to concatenate the null byte `<<0>>` to it: ``` iex> "hełło" <> <<0>> <<104, 101, 197, 130, 197, 130, 111, 0>> ``` Alternatively, you can view a string’s binary representation by using [IO.inspect/2](https://hexdocs.pm/elixir/IO.html#inspect/2): ``` iex> IO.inspect("hełło", binaries: :as_binaries) <<104, 101, 197, 130, 197, 130, 111>> ``` We are getting a little bit ahead of ourselves. Let’s talk about bitstrings to learn about what exactly the `<<>>` constructor means. Bitstrings ---------- Although we have covered code points and UTF-8 encoding, we still need to go a bit deeper into how exactly we store the encoded bytes, and this is where we introduce the **bitstring**. A bitstring is a fundamental data type in Elixir, denoted with the `<<>>` syntax. **A bitstring is a contiguous sequence of bits in memory.** A complete reference about the binary / bitstring constructor `<<>>` can be found [in the Elixir documentation](https://hexdocs.pm/elixir/Kernel.SpecialForms.html#%3C%3C%3E%3E/1). By default, 8 bits (i.e. 1 byte) is used to store each number in a bitstring, but you can manually specify the number of bits via a `::n` modifier to denote the size in `n` bits, or you can use the more verbose declaration `::size(n)`: ``` iex> <<42>> === <<42::8>> true iex> <<3::4>> <<3::size(4)>> ``` For example, the decimal number `3` when represented with 4 bits in base 2 would be `0011`, which is equivalent to the values `0`, `0`, `1`, `1`, each stored using 1 bit: ``` iex> <<0::1, 0::1, 1::1, 1::1>> == <<3::4>> true ``` Any value that exceeds what can be stored by the number of bits provisioned is truncated: ``` iex> <<1>> === <<257>> true ``` Here, 257 in base 2 would be represented as `100000001`, but since we have reserved only 8 bits for its representation (by default), the left-most bit is ignored and the value becomes truncated to `00000001`, or simply `1` in decimal. Binaries -------- **A binary is a bitstring where the number of bits is divisible by 8.** That means that every binary is a bitstring, but not every bitstring is a binary. We can use the `is_bitstring/1` and `is_binary/1` functions to demonstrate this. 
``` iex> is_bitstring(<<3::4>>) true iex> is_binary(<<3::4>>) false iex> is_bitstring(<<0, 255, 42>>) true iex> is_binary(<<0, 255, 42>>) true iex> is_binary(<<42::16>>) true ``` We can pattern match on binaries / bitstrings: ``` iex> <<0, 1, x>> = <<0, 1, 2>> <<0, 1, 2>> iex> x 2 iex> <<0, 1, x>> = <<0, 1, 2, 3>> ** (MatchError) no match of right hand side value: <<0, 1, 2, 3>> ``` Note that unless you explicitly use `::` modifiers, each entry in the binary pattern is expected to match a single byte (exactly 8 bits). If we want to match on a binary of unknown size, we can use the `binary` modifier at the end of the pattern: ``` iex> <<0, 1, x :: binary>> = <<0, 1, 2, 3>> <<0, 1, 2, 3>> iex> x <<2, 3>> ``` There are a couple of other modifiers that can be useful when doing pattern matches on binaries. The `binary-size(n)` modifier will match `n` bytes in a binary: ``` iex> <<head::binary-size(2), rest::binary>> = <<0, 1, 2, 3>> <<0, 1, 2, 3>> iex> head <<0, 1>> iex> rest <<2, 3>> ``` **A string is a UTF-8 encoded binary**, where the code point for each character is encoded using 1 to 4 bytes. Thus every string is a binary, but due to the UTF-8 standard encoding rules, not every binary is a valid string. ``` iex> is_binary("hello") true iex> is_binary(<<239, 191, 19>>) true iex> String.valid?(<<239, 191, 19>>) false ``` The string concatenation operator `<>` is actually a binary concatenation operator: ``` iex> "a" <> "ha" "aha" iex> <<0, 1>> <> <<2, 3>> <<0, 1, 2, 3>> ``` Given that strings are binaries, we can also pattern match on strings: ``` iex> <<head, rest::binary>> = "banana" "banana" iex> head == ?b true iex> rest "anana" ``` However, remember that binary pattern matching works on *bytes*, so matching on a string like “über” with multibyte characters won’t match on the *character*; it will match on the *first byte of that character*: ``` iex> "ü" <> <<0>> <<195, 188, 0>> iex> <<x, rest::binary>> = "über" "über" iex> x == ?ü false iex> rest <<188, 98, 101, 114>> ``` Above, `x` matched on only the first byte of the multibyte `ü` character. Therefore, when pattern matching on strings, it is important to use the `utf8` modifier: ``` iex> <<x::utf8, rest::binary>> = "über" "über" iex> x == ?ü true iex> rest "ber" ``` You will see that Elixir has excellent support for working with strings. It also supports many of the Unicode operations. In fact, Elixir passes all the tests showcased in the article [“The string type is broken”](http://mortoray.com/2013/11/27/the-string-type-is-broken/). Charlists --------- Our tour of bitstrings, binaries, and strings is nearly complete, but we have one more data type to explain: the charlist. **A charlist is a list of integers where all the integers are valid code points.** In practice, you will not come across them often, except perhaps when interfacing with Erlang, in particular when using older libraries that do not accept binaries as arguments. Whereas strings (i.e. binaries) are created using double-quotes, charlists are created with single-quoted literals: ``` iex> 'hełło' [104, 101, 322, 322, 111] iex> is_list 'hełło' true iex> 'hello' 'hello' iex> List.first('hello') 104 ``` You can see that instead of containing bytes, a charlist contains integer code points.
By default, IEx will only output code points if any of the integers falls outside the ASCII range of 0 to 127: ``` iex> 'hello' 'hello' iex> 'hełło' [104, 101, 322, 322, 111] ``` If you wish to inspect the code points in a single-quoted literal, you can force this by passing the `charlists` option to `IO.inspect/2`: ``` iex> IO.inspect('hello', charlists: :as_lists) [104, 101, 108, 108, 111] 'hello' ``` Interpreting integers as codepoints may lead to some surprising behavior. For example, if you are storing a list of integers that happen to range between 0 and 127, by default IEx will interpret this as a charlist and it will display the corresponding ASCII characters. ``` iex> heartbeats_per_minute = [99, 97, 116] 'cat' ``` You can convert a charlist to a string and back by using the `to_string/1` and `to_charlist/1` functions: ``` iex> to_charlist "hełło" [104, 101, 322, 322, 111] iex> to_string 'hełło' "hełło" iex> to_string :hello "hello" iex> to_string 1 "1" ``` Note that those functions are polymorphic - not only do they convert charlists to strings, they also operate on integers, atoms, and so on. String (binary) concatenation uses the `<>` operator but charlists, being lists, use the list concatenation operator `++`: ``` iex> 'this ' <> 'fails' ** (ArgumentError) expected binary argument in <> operator but got: 'this ' (elixir) lib/kernel.ex:1821: Kernel.wrap_concatenation/3 (elixir) lib/kernel.ex:1808: Kernel.extract_concatenations/2 (elixir) expanding macro: Kernel.<>/2 iex:1: (file) iex> 'this ' ++ 'works' 'this works' iex> "he" ++ "llo" ** (ArgumentError) argument error :erlang.++("he", "llo") iex> "he" <> "llo" "hello" ``` With binaries, strings, and charlists out of the way, it is time to talk about key-value data structures. elixir IO IO === Functions handling input/output (IO). Many functions in this module expect an IO device as an argument. An IO device must be a PID or an atom representing a process. For convenience, Elixir provides `:stdio` and `:stderr` as shortcuts to Erlang's `:standard_io` and `:standard_error`. The majority of the functions expect chardata. In case another type is given, functions will convert those types to string via the [`String.Chars`](string.chars) protocol (as shown in typespecs). For more information on chardata, see the "IO data" section below. IO devices ----------- An IO device may be an atom or a PID. In case it is an atom, the atom must be the name of a registered process. In addition, Elixir provides two shortcuts: * `:stdio` - a shortcut for `:standard_io`, which maps to the current [`Process.group_leader/0`](process#group_leader/0) in Erlang * `:stderr` - a shortcut for the named process `:standard_error` provided in Erlang IO devices maintain their position, which means subsequent calls to any reading or writing functions will start from the place where the device was last accessed. The position of files can be changed using the [`:file.position/2`](http://www.erlang.org/doc/man/file.html#position-2) function. IO data -------- IO data is a data type that can be used as a more efficient alternative to binaries in certain situations. A term of type **IO data** is a binary or a list containing bytes (integers in `0..255`) or nested IO data. The type is recursive. Let's see an example of one of the possible IO data representing the binary `"hello"`: ``` [?h, "el", ["l", [?o]]] ``` The built-in [`iodata/0`](typespecs#built-in-types) type is defined in terms of [`iolist/0`](typespecs#built-in-types). 
An IO list is the same as IO data but it doesn't allow for a binary at the top level (but binaries are still allowed in the list itself). ### Use cases for IO data IO data exists because often you need to do many append operations on smaller chunks of binaries in order to create a bigger binary. However, in Erlang and Elixir concatenating binaries will copy the concatenated binaries into a new binary. ``` def email(username, domain) do username <> "@" <> domain end ``` In this function, creating the email address will copy the `username` and `domain` binaries. Now imagine you want to use the resulting email inside another binary: ``` def welcome_message(name, username, domain) do "Welcome #{name}, your email is: #{email(username, domain)}" end IO.puts(welcome_message("Meg", "meg", "example.com")) #=> "Welcome Meg, your email is: meg@example.com" ``` Every time you concatenate binaries or use interpolation (`#{}`) you are making copies of those binaries. However, in many cases you don't need the complete binary while you create it, but only at the end to print it out or send it somewhere. In such cases, you can construct the binary by creating IO data: ``` def email(username, domain) do [username, ?@, domain] end def welcome_message(name, username, domain) do ["Welcome ", name, ", your email is: ", email(username, domain)] end IO.puts(welcome_message("Meg", "meg", "example.com")) #=> "Welcome Meg, your email is: meg@example.com" ``` Building IO data is cheaper than concatenating binaries. Concatenating multiple pieces of IO data just means putting them together inside a list since IO data can be arbitrarily nested, and that's a cheap and efficient operation. Most of the IO-based APIs, such as `:gen_tcp`, [`IO`](#content), etc, receive IO data and write it to the socket directly without converting it to binary. One drawback of IO data is that you can't do things like pattern match on the first part of a piece of IO data like you can with a binary, because you usually don't know the shape of the IO data. In those cases, you may need to convert it to a binary by calling [`iodata_to_binary/1`](#iodata_to_binary/1), which is reasonably efficient since it's implemented natively in C. Other functionality, like computing the length of IO data, can be computed directly on the iodata by calling [`iodata_length/1`](#iodata_length/1). ### Chardata Erlang and Elixir also have the idea of [`chardata/0`](#t:chardata/0). Chardata is very similar to IO data: the only difference is that integers in IO data represent bytes while integers in chardata represent Unicode codepoints. Bytes ([`byte/0`](typespecs#built-in-types)) are integers in the `0..255` range, while Unicode codepoints ([`char/0`](typespecs#built-in-types)) are integers in the range `0..0x10FFFF`. The [`IO`](#content) module provides the [`chardata_to_string/1`](#chardata_to_string/1) function for chardata as the "counter-part" of the [`iodata_to_binary/1`](#iodata_to_binary/1) function for IO data. If you try to use [`iodata_to_binary/1`](#iodata_to_binary/1) on chardata, it will result in an argument error.
For example, let's try to put a codepoint that is not representable with one byte, like `?π`, inside IO data: ``` iex> IO.iodata_to_binary(["The symbol for pi is: ", ?π]) ** (ArgumentError) argument error ``` If we use chardata instead, it will work as expected: ``` iex> IO.chardata_to_string(["The symbol for pi is: ", ?π]) "The symbol for pi is: π" ``` Summary ======== Types ------ [chardata()](#t:chardata/0) [device()](#t:device/0) [nodata()](#t:nodata/0) Functions ---------- [binread(device \\ :stdio, line\_or\_chars)](#binread/2) Reads from the IO `device`. The operation is Unicode unsafe. [binstream(device, line\_or\_bytes)](#binstream/2) Converts the IO `device` into an [`IO.Stream`](io.stream). The operation is Unicode unsafe. [binwrite(device \\ :stdio, iodata)](#binwrite/2) Writes `iodata` to the given `device`. [chardata\_to\_string(string)](#chardata_to_string/1) Converts chardata into a string. [getn(prompt, count \\ 1)](#getn/2) Gets a number of bytes from IO device `:stdio`. [getn(device, prompt, count)](#getn/3) Gets a number of bytes from the IO `device`. [gets(device \\ :stdio, prompt)](#gets/2) Reads a line from the IO `device`. [inspect(item, opts \\ [])](#inspect/2) Inspects and writes the given `item` to the device. [inspect(device, item, opts)](#inspect/3) Inspects `item` according to the given options using the IO `device`. [iodata\_length(iodata)](#iodata_length/1) Returns the size of an IO data. [iodata\_to\_binary(iodata)](#iodata_to_binary/1) Converts IO data into a binary. [puts(device \\ :stdio, item)](#puts/2) Writes `item` to the given `device`, similar to [`write/2`](#write/2), but adds a newline at the end. [read(device \\ :stdio, line\_or\_chars)](#read/2) Reads from the IO `device`. [stream(device, line\_or\_codepoints)](#stream/2) Converts the IO `device` into an [`IO.Stream`](io.stream). [warn(message)](#warn/1) Writes a `message` to stderr, along with the current stacktrace. [warn(message, stacktrace)](#warn/2) Writes a `message` to stderr, along with the given `stacktrace`. [write(device \\ :stdio, chardata)](#write/2) Writes `chardata` to the given `device`. Types ====== ### chardata() #### Specs ``` chardata() :: String.t() | maybe_improper_list(char() | chardata(), String.t() | []) ``` ### device() #### Specs ``` device() :: atom() | pid() ``` ### nodata() #### Specs ``` nodata() :: {:error, term()} | :eof ``` Functions ========== ### binread(device \\ :stdio, line\_or\_chars) #### Specs ``` binread(device(), :all | :line | non_neg_integer()) :: iodata() | nodata() ``` Reads from the IO `device`. The operation is Unicode unsafe. The `device` is iterated by the given number of bytes or line by line if `:line` is given. Alternatively, if `:all` is given, then the whole `device` is returned. It returns: * `data` - the output bytes * `:eof` - end of file was encountered * `{:error, reason}` - other (rare) error condition; for instance, `{:error, :estale}` if reading from an NFS volume If `:all` is given, `:eof` is never returned, but an empty string in case the device has reached EOF. Note: do not use this function on IO devices in Unicode mode as it will return the wrong result. ### binstream(device, line\_or\_bytes) #### Specs ``` binstream(device(), :line | pos_integer()) :: Enumerable.t() ``` Converts the IO `device` into an [`IO.Stream`](io.stream). The operation is Unicode unsafe. An [`IO.Stream`](io.stream) implements both [`Enumerable`](enumerable) and [`Collectable`](collectable), allowing it to be used for both read and write.
The `device` is iterated by the given number of bytes or line by line if `:line` is given. This reads from the IO device as a raw binary. Note that an IO stream has side effects and every time you go over the stream you may get different results. Finally, do not use this function on IO devices in Unicode mode as it will return the wrong result. ### binwrite(device \\ :stdio, iodata) #### Specs ``` binwrite(device(), iodata()) :: :ok | {:error, term()} ``` Writes `iodata` to the given `device`. This operation is meant to be used with "raw" devices that are started without an encoding. The given `iodata` is written as is to the device, without conversion. For more information on IO data, see the "IO data" section in the module documentation. Use [`write/2`](#write/2) for devices with encoding. Important: do **not** use this function on IO devices in Unicode mode as it will write the wrong data. In particular, the standard IO device is set to Unicode by default, so writing to stdio with this function will likely result in the wrong data being sent down the wire. ### chardata\_to\_string(string) #### Specs ``` chardata_to_string(chardata()) :: String.t() ``` Converts chardata into a string. For more information about chardata, see the ["Chardata"](#module-chardata) section in the module documentation. In case the conversion fails, it raises a [`UnicodeConversionError`](unicodeconversionerror). If a string is given, it returns the string itself. #### Examples ``` iex> IO.chardata_to_string([0x00E6, 0x00DF]) "æß" iex> IO.chardata_to_string([0x0061, "bc"]) "abc" iex> IO.chardata_to_string("string") "string" ``` ### getn(prompt, count \\ 1) #### Specs ``` getn(chardata() | String.Chars.t(), pos_integer()) :: chardata() | nodata() ``` ``` getn(device(), chardata() | String.Chars.t()) :: chardata() | nodata() ``` Gets a number of bytes from IO device `:stdio`. If `:stdio` is a Unicode device, `count` implies the number of Unicode code points to be retrieved. Otherwise, `count` is the number of raw bytes to be retrieved. See [`IO.getn/3`](io#getn/3) for a description of return values. ### getn(device, prompt, count) #### Specs ``` getn(device(), chardata() | String.Chars.t(), pos_integer()) :: chardata() | nodata() ``` Gets a number of bytes from the IO `device`. If the IO `device` is a Unicode device, `count` implies the number of Unicode code points to be retrieved. Otherwise, `count` is the number of raw bytes to be retrieved. It returns: * `data` - the input characters * `:eof` - end of file was encountered * `{:error, reason}` - other (rare) error condition; for instance, `{:error, :estale}` if reading from an NFS volume ### gets(device \\ :stdio, prompt) #### Specs ``` gets(device(), chardata() | String.Chars.t()) :: chardata() | nodata() ``` Reads a line from the IO `device`. It returns: * `data` - the characters in the line terminated by a line-feed (LF) or end of file (EOF) * `:eof` - end of file was encountered * `{:error, reason}` - other (rare) error condition; for instance, `{:error, :estale}` if reading from an NFS volume #### Examples To display "What is your name?" as a prompt and await user input: ``` IO.gets("What is your name?\n") ``` ### inspect(item, opts \\ []) #### Specs ``` inspect(item, keyword()) :: item when item: var ``` Inspects and writes the given `item` to the device. It's important to note that it returns the given `item` unchanged.
This makes it possible to "spy" on values by inserting an [`IO.inspect/2`](io#inspect/2) call almost anywhere in your code, for example, in the middle of a pipeline. It enables pretty printing by default with a width of 80 characters. The width can be changed by explicitly passing the `:width` option. The output can be decorated with a label by providing the `:label` option to easily distinguish it from other [`IO.inspect/2`](io#inspect/2) calls. The label will be printed before the inspected `item`. See [`Inspect.Opts`](inspect.opts) for a full list of remaining formatting options. #### Examples ``` IO.inspect(<<0, 1, 2>>, width: 40) ``` Prints: ``` <<0, 1, 2>> ``` We can use the `:label` option to decorate the output: ``` IO.inspect(1..100, label: "a wonderful range") ``` Prints: ``` a wonderful range: 1..100 ``` The `:label` option is especially useful with pipelines: ``` [1, 2, 3] |> IO.inspect(label: "before") |> Enum.map(&(&1 * 2)) |> IO.inspect(label: "after") |> Enum.sum() ``` Prints: ``` before: [1, 2, 3] after: [2, 4, 6] ``` ### inspect(device, item, opts) #### Specs ``` inspect(device(), item, keyword()) :: item when item: var ``` Inspects `item` according to the given options using the IO `device`. See [`inspect/2`](#inspect/2) for a full list of options. ### iodata\_length(iodata) #### Specs ``` iodata_length(iodata()) :: non_neg_integer() ``` Returns the size of an IO data. For more information about IO data, see the ["IO data"](#module-io-data) section in the module documentation. Inlined by the compiler. #### Examples ``` iex> IO.iodata_length([1, 2 | <<3, 4>>]) 4 ``` ### iodata\_to\_binary(iodata) #### Specs ``` iodata_to_binary(iodata()) :: binary() ``` Converts IO data into a binary. The operation is Unicode unsafe. Notice that this function treats integers in the given IO data as raw bytes and does not perform any kind of encoding conversion. If you want to convert from a charlist to a UTF-8-encoded string, use [`chardata_to_string/1`](#chardata_to_string/1) instead. For more information about IO data and chardata, see the ["IO data"](#module-io-data) section in the module documentation. If this function receives a binary, the same binary is returned. Inlined by the compiler. #### Examples ``` iex> bin1 = <<1, 2, 3>> iex> bin2 = <<4, 5>> iex> bin3 = <<6>> iex> IO.iodata_to_binary([bin1, 1, [2, 3, bin2], 4 | bin3]) <<1, 2, 3, 1, 2, 3, 4, 5, 4, 6>> iex> bin = <<1, 2, 3>> iex> IO.iodata_to_binary(bin) <<1, 2, 3>> ``` ### puts(device \\ :stdio, item) #### Specs ``` puts(device(), chardata() | String.Chars.t()) :: :ok ``` Writes `item` to the given `device`, similar to [`write/2`](#write/2), but adds a newline at the end. By default, the `device` is the standard output. It returns `:ok` if it succeeds. #### Examples ``` IO.puts("Hello World!") #=> Hello World! IO.puts(:stderr, "error") #=> error ``` ### read(device \\ :stdio, line\_or\_chars) #### Specs ``` read(device(), :all | :line | non_neg_integer()) :: chardata() | nodata() ``` Reads from the IO `device`. The `device` is iterated by the given number of characters or line by line if `:line` is given. Alternatively, if `:all` is given, then the whole `device` is returned. It returns: * `data` - the output characters * `:eof` - end of file was encountered * `{:error, reason}` - other (rare) error condition; for instance, `{:error, :estale}` if reading from an NFS volume If `:all` is given, `:eof` is never returned, but an empty string in case the device has reached EOF.
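#### Examples A rough sketch of the iteration modes, reading from an in-memory [`StringIO`](stringio) device rather than `:stdio` (the contents here are arbitrary):

```
# StringIO gives us an in-memory IO device to read from.
{:ok, device} = StringIO.open("hello\nworld\n")
IO.read(device, :line)
#=> "hello\n"
IO.read(device, 2)
#=> "wo"
IO.read(device, :all)
#=> "rld\n"
```

Note how the device keeps its position between calls, as described in the "IO devices" section above.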
### stream(device, line\_or\_codepoints) #### Specs ``` stream(device(), :line | pos_integer()) :: Enumerable.t() ``` Converts the IO `device` into an [`IO.Stream`](io.stream). An [`IO.Stream`](io.stream) implements both [`Enumerable`](enumerable) and [`Collectable`](collectable), allowing it to be used for both read and write. The `device` is iterated by the given number of characters or line by line if `:line` is given. This reads from the IO as UTF-8. Check out [`IO.binstream/2`](io#binstream/2) to handle the IO as a raw binary. Note that an IO stream has side effects and every time you go over the stream you may get different results. #### Examples Here is an example of how to mimic an echo server from the command line: ``` Enum.each(IO.stream(:stdio, :line), &IO.write(&1)) ``` ### warn(message) #### Specs ``` warn(chardata() | String.Chars.t()) :: :ok ``` Writes a `message` to stderr, along with the current stacktrace. It returns `:ok` if it succeeds. #### Examples ``` IO.warn("variable bar is unused") #=> warning: variable bar is unused #=> (iex) evaluator.ex:108: IEx.Evaluator.eval/4 ``` ### warn(message, stacktrace) #### Specs ``` warn(chardata() | String.Chars.t(), Exception.stacktrace()) :: :ok ``` Writes a `message` to stderr, along with the given `stacktrace`. This function also notifies the compiler that a warning was printed (in case `--warnings-as-errors` was enabled). It returns `:ok` if it succeeds. An empty list can be passed to avoid stacktrace printing. #### Examples ``` stacktrace = [{MyApp, :main, 1, [file: 'my_app.ex', line: 4]}] IO.warn("variable bar is unused", stacktrace) #=> warning: variable bar is unused #=> my_app.ex:4: MyApp.main/1 ``` ### write(device \\ :stdio, chardata) #### Specs ``` write(device(), chardata() | String.Chars.t()) :: :ok ``` Writes `chardata` to the given `device`. By default, the `device` is the standard output. #### Examples ``` IO.write("sample") #=> sample IO.write(:stderr, "error") #=> error ```
elixir Mix.Task behaviour Mix.Task behaviour =================== A simple module that provides conveniences for creating, loading, and manipulating tasks. A Mix task can be defined by simply using [`Mix.Task`](#content) in a module starting with `Mix.Tasks.` and defining the [`run/1`](#run/1) function: ``` defmodule Mix.Tasks.Echo do use Mix.Task @impl Mix.Task def run(args) do Mix.shell().info(Enum.join(args, " ")) end end ``` The [`run/1`](#run/1) function will receive a list of all arguments passed to the command line. Attributes ----------- There are a few attributes available in Mix tasks to configure them in Mix: * `@shortdoc` - makes the task public with a short description that appears on [`mix help`](mix.tasks.help) * `@recursive` - runs the task recursively in umbrella projects * `@preferred_cli_env` - recommends the environment in which to run the task. It is used in the absence of a Mix project recommendation or an explicit `MIX_ENV`, and it only works for tasks in the current project. `@preferred_cli_env` is not loaded from dependencies as we need to know the environment before dependencies are loaded. Documentation -------------- Users can read the documentation for public Mix tasks by running `mix help my_task`. The documentation that will be shown is the `@moduledoc` of the task's module. Summary ======== Types ------ [task\_module()](#t:task_module/0) [task\_name()](#t:task_name/0) Functions ---------- [alias?(task)](#alias?/1) Checks if an alias called `task` exists. [all\_modules()](#all_modules/0) Returns all loaded task modules. [clear()](#clear/0) Clears all invoked tasks, allowing them to be reinvoked. [get(task)](#get/1) Receives a task name and returns the task module if found. [get!(task)](#get!/1) Receives a task name and retrieves the task module. [load\_all()](#load_all/0) Loads all tasks in all code paths. [load\_tasks(dirs)](#load_tasks/1) Loads all tasks in the given `paths`. [moduledoc(module)](#moduledoc/1) Gets the moduledoc for the given task `module`. [preferred\_cli\_env(task)](#preferred_cli_env/1) Gets the preferred CLI environment for the task. [recursing?()](#recursing?/0) Indicates if the current task is recursing. [recursive(module)](#recursive/1) Checks if the task should be run recursively for all sub-apps in umbrella projects. [reenable(task)](#reenable/1) Reenables a given task so it can be executed again down the stack. [rerun(task, args \\ [])](#rerun/2) Reruns `task` with the given arguments. [run(task, args \\ [])](#run/2) Runs a `task` with the given `args`. [shortdoc(module)](#shortdoc/1) Gets the shortdoc for the given task `module`. [task?(module)](#task?/1) Returns `true` if the given module is a task. [task\_name(module)](#task_name/1) Returns the task name for the given `module`. Callbacks ---------- [run(command\_line\_args)](#c:run/1) A task needs to implement `run`, which receives a list of command line args. Types ====== ### task\_module() #### Specs ``` task_module() :: atom() ``` ### task\_name() #### Specs ``` task_name() :: String.t() | atom() ``` Functions ========== ### alias?(task) #### Specs ``` alias?(task_name()) :: boolean() ``` Checks if an alias called `task` exists. For more information about task aliasing, take a look at the "Aliasing" section in the docs for [`Mix`](mix). ### all\_modules() #### Specs ``` all_modules() :: [task_module()] ``` Returns all loaded task modules. Modules that are not yet loaded won't show up. Check [`load_all/0`](#load_all/0) if you want to preload all tasks.
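For instance, a small sketch that combines the loading functions on this page to list every available task name (the output is abbreviated and illustrative):

```
# Ensure all tasks are loaded, then list their names.
Mix.Task.load_all()

Mix.Task.all_modules()
|> Enum.map(&Mix.Task.task_name/1)
|> Enum.sort()
#=> ["app.start", "clean", "compile", "deps.get", ...]
```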
### clear() #### Specs ``` clear() :: :ok ``` Clears all invoked tasks, allowing them to be reinvoked. This operation is not recursive. ### get(task) #### Specs ``` get(task_name()) :: task_module() | nil ``` Receives a task name and returns the task module if found. Otherwise returns `nil` if the module exists but is not a task, or if it cannot be found at all. ### get!(task) #### Specs ``` get!(task_name()) :: task_module() ``` Receives a task name and retrieves the task module. #### Exceptions * [`Mix.NoTaskError`](mix.notaskerror) - raised if the task could not be found * [`Mix.InvalidTaskError`](mix.invalidtaskerror) - raised if the task is not a valid [`Mix.Task`](#content) ### load\_all() #### Specs ``` load_all() :: [task_module()] ``` Loads all tasks in all code paths. ### load\_tasks(dirs) #### Specs ``` load_tasks([List.Chars.t()]) :: [task_module()] ``` Loads all tasks in the given `paths`. ### moduledoc(module) #### Specs ``` moduledoc(task_module()) :: String.t() | nil | false ``` Gets the moduledoc for the given task `module`. Returns the moduledoc or `nil`. ### preferred\_cli\_env(task) #### Specs ``` preferred_cli_env(task_name()) :: atom() | nil ``` Gets the preferred CLI environment for the task. Returns the environment (for example, `:test` or `:prod`), or `nil`. ### recursing?() #### Specs ``` recursing?() :: boolean() ``` Indicates if the current task is recursing. This returns `true` if a task is marked as recursive and it is being executed inside an umbrella project. ### recursive(module) #### Specs ``` recursive(task_module()) :: boolean() ``` Checks if the task should be run recursively for all sub-apps in umbrella projects. Returns `true` or `false`. ### reenable(task) #### Specs ``` reenable(task_name()) :: :ok ``` Reenables a given task so it can be executed again down the stack. Both the alias stack and the regular stack are reenabled when this function is called. If an umbrella project reenables a task, it is reenabled for all child projects. ### rerun(task, args \\ []) #### Specs ``` rerun(task_name(), [any()]) :: any() ``` Reruns `task` with the given arguments. This function reruns the given task; to do that, it first re-enables the task and then runs it as normal. ### run(task, args \\ []) #### Specs ``` run(task_name(), [any()]) :: any() ``` Runs a `task` with the given `args`. If the task was not yet invoked, it runs the task and returns the result. If there is an alias with the same name, the alias will be invoked instead of the original task. If the task or alias were already invoked, it does not run them again and simply aborts with `:noop`. It may raise an exception if an alias or a task can't be found or the task is invalid. Check [`get!/1`](#get!/1) for more information. ### shortdoc(module) #### Specs ``` shortdoc(task_module()) :: String.t() | nil ``` Gets the shortdoc for the given task `module`. Returns the shortdoc or `nil`. ### task?(module) #### Specs ``` task?(task_module()) :: boolean() ``` Returns `true` if the given module is a task. ### task\_name(module) #### Specs ``` task_name(task_module()) :: task_name() ``` Returns the task name for the given `module`. Callbacks ========== ### run(command\_line\_args) #### Specs ``` run(command_line_args :: [binary()]) :: any() ``` A task needs to implement `run`, which receives a list of command line args. elixir Supervisor.Spec Supervisor.Spec ================ This module is deprecated. Use the new child specifications outlined in the Supervisor module instead. Outdated functions for building child specifications. 
The functions in this module are deprecated and they do not work with the module-based child specs introduced in Elixir v1.5. Please see the [`Supervisor`](supervisor) documentation instead. Convenience functions for defining supervisor specifications. Example -------- By using the functions in this module, one can specify the children to be used under a supervisor, started with [`Supervisor.start_link/2`](supervisor#start_link/2): ``` import Supervisor.Spec children = [ worker(MyWorker, [arg1, arg2, arg3]), supervisor(MySupervisor, [arg1]) ] Supervisor.start_link(children, strategy: :one_for_one) ``` Sometimes, it may be handy to define supervisors backed by a module: ``` defmodule MySupervisor do use Supervisor def start_link(arg) do Supervisor.start_link(__MODULE__, arg) end def init(arg) do children = [ worker(MyWorker, [arg], restart: :temporary) ] supervise(children, strategy: :simple_one_for_one) end end ``` Notice in this case we don't have to explicitly import [`Supervisor.Spec`](#content) as `use Supervisor` automatically does so. Defining a module-based supervisor can be useful, for example, to perform initialization tasks in the `c:init/1` callback. Supervisor and worker options ------------------------------ In the example above, we defined specs for workers and supervisors. These specs (both for workers as well as supervisors) accept the following options: * `:id` - a name used to identify the child specification internally by the supervisor; defaults to the given module name for the child worker/supervisor * `:function` - the function to invoke on the child to start it * `:restart` - an atom that defines when a terminated child process should be restarted (see the "Restart values" section below) * `:shutdown` - an atom that defines how a child process should be terminated (see the "Shutdown values" section below) * `:modules` - it should be a list with one element `[module]`, where module is the name of the callback module only if the child process is a [`Supervisor`](supervisor) or [`GenServer`](genserver); if the child process is a [`GenEvent`](genevent), `:modules` should be `:dynamic` ### Restart values (:restart) The following restart values are supported in the `:restart` option: * `:permanent` - the child process is always restarted * `:temporary` - the child process is never restarted (not even when the supervisor's strategy is `:rest_for_one` or `:one_for_all`) * `:transient` - the child process is restarted only if it terminates abnormally, i.e., with an exit reason other than `:normal`, `:shutdown` or `{:shutdown, term}` Notice that a supervisor that has reached its maximum restart intensity will exit with the `:shutdown` reason. In this case, the supervisor will only be restarted if its child specification was defined with the `:restart` option set to `:permanent` (the default). ### Shutdown values (`:shutdown`) The following shutdown values are supported in the `:shutdown` option: * `:brutal_kill` - the child process is unconditionally terminated using `Process.exit(child, :kill)` * `:infinity` - if the child process is a supervisor, this is a mechanism to give the subtree enough time to shut down; it can also be used with workers with care * a non-negative integer - the amount of time in milliseconds that the supervisor tells the child process to terminate by calling `Process.exit(child, :shutdown)` and then waits for an exit signal back. 
If no exit signal is received within the specified time, the child process is unconditionally terminated using `Process.exit(child, :kill)`. Summary ======== Types ------ [child\_id()](#t:child_id/0) Supported ID values [modules()](#t:modules/0) Supported module values [restart()](#t:restart/0) Supported restart values [shutdown()](#t:shutdown/0) Supported shutdown values [spec()](#t:spec/0) The supervisor specification [strategy()](#t:strategy/0) Supported strategies [worker()](#t:worker/0) Supported worker values Functions ---------- [supervise(children, options)](#supervise/2) Receives a list of `children` (workers or supervisors) to supervise and a set of `options`. [supervisor(module, args, options \\ [])](#supervisor/3) Defines the given `module` as a supervisor which will be started with the given arguments. [worker(module, args, options \\ [])](#worker/3) Defines the given `module` as a worker which will be started with the given arguments. Types ====== ### child\_id() #### Specs ``` child_id() :: term() ``` Supported ID values ### modules() #### Specs ``` modules() :: :dynamic | [module()] ``` Supported module values ### restart() #### Specs ``` restart() :: :permanent | :transient | :temporary ``` Supported restart values ### shutdown() #### Specs ``` shutdown() :: timeout() | :brutal_kill ``` Supported shutdown values ### spec() #### Specs ``` spec() :: {child_id(), start_fun :: {module(), atom(), [term()]}, restart(), shutdown(), worker(), modules()} ``` The supervisor specification ### strategy() #### Specs ``` strategy() :: :simple_one_for_one | :one_for_one | :one_for_all | :rest_for_one ``` Supported strategies ### worker() #### Specs ``` worker() :: :worker | :supervisor ``` Supported worker values Functions ========== ### supervise(children, options) #### Specs ``` supervise([spec()], strategy: strategy(), max_restarts: non_neg_integer(), max_seconds: pos_integer() ) :: {:ok, tuple()} ``` Receives a list of `children` (workers or supervisors) to supervise and a set of `options`. Returns a tuple containing the supervisor specification. This tuple can be used as the return value of the `c:init/1` callback when implementing a module-based supervisor. #### Examples ``` supervise(children, strategy: :one_for_one) ``` #### Options * `:strategy` - the restart strategy option. It can be either `:one_for_one`, `:rest_for_one`, `:one_for_all`, or `:simple_one_for_one`. You can learn more about strategies in the [`Supervisor`](supervisor) module docs. * `:max_restarts` - the maximum number of restarts allowed in a time frame. Defaults to `3`. * `:max_seconds` - the time frame in which `:max_restarts` applies. Defaults to `5`. The `:strategy` option is required and by default a maximum of 3 restarts is allowed within 5 seconds. Check the [`Supervisor`](supervisor) module for a detailed description of the available strategies. ### supervisor(module, args, options \\ []) #### Specs ``` supervisor(module(), [term()], restart: restart(), shutdown: shutdown(), id: term(), function: atom(), modules: modules() ) :: spec() ``` Defines the given `module` as a supervisor which will be started with the given arguments. ``` supervisor(module, [], restart: :permanent) ``` By default, the function `start_link` is invoked on the given module. 
Overall, the default values for the options are: ``` [ id: module, function: :start_link, restart: :permanent, shutdown: :infinity, modules: [module] ] ``` See the "Supervisor and worker options" section in the [`Supervisor.Spec`](#content) module for more information on the available options. ### worker(module, args, options \\ []) #### Specs ``` worker(module(), [term()], restart: restart(), shutdown: shutdown(), id: term(), function: atom(), modules: modules() ) :: spec() ``` Defines the given `module` as a worker which will be started with the given arguments. ``` worker(ExUnit.Runner, [], restart: :permanent) ``` By default, the function `start_link` is invoked on the given module. Overall, the default values for the options are: ``` [ id: module, function: :start_link, restart: :permanent, shutdown: 5000, modules: [module] ] ``` See the "Supervisor and worker options" section in the [`Supervisor.Spec`](#content) module for more information on the available options. elixir Port Port ===== Functions for interacting with the external world through ports. Ports provide a mechanism to start operating system processes external to the Erlang VM and communicate with them via message passing. Example -------- ``` iex> port = Port.open({:spawn, "cat"}, [:binary]) iex> send(port, {self(), {:command, "hello"}}) iex> send(port, {self(), {:command, "world"}}) iex> flush() {#Port<0.1444>, {:data, "hello"}} {#Port<0.1444>, {:data, "world"}} iex> send(port, {self(), :close}) :ok iex> flush() {#Port<0.1464>, :closed} :ok ``` In the example above, we have created a new port that executes the program `cat`. `cat` is a program available on UNIX systems that receives data from multiple inputs and concatenates them in the output. After the port was created, we sent it two commands in the form of messages using [`Kernel.send/2`](kernel#send/2). The first command has the binary payload of "hello" and the second has "world". After sending those two messages, we invoked the IEx helper `flush()`, which printed all messages received from the port; in this case, we got "hello" and "world" back. Notice the messages are in binary because we passed the `:binary` option when opening the port in [`Port.open/2`](port#open/2). Without such an option, it would have yielded a list of bytes. Once everything was done, we closed the port. Elixir provides many conveniences for working with ports, which also come with some drawbacks. We will explore those below. Message and function APIs -------------------------- There are two APIs for working with ports. It can be either asynchronous via message passing, as in the example above, or by calling the functions on this module. The messages supported by ports and their counterpart function APIs are listed below: * `{pid, {:command, binary}}` - sends the given data to the port. See [`command/3`](#command/3). * `{pid, :close}` - closes the port. Unless the port is already closed, the port will reply with `{port, :closed}` message once it has flushed its buffers and effectively closed. See [`close/1`](#close/1). * `{pid, {:connect, new_pid}}` - sets the `new_pid` as the new owner of the port. Once a port is opened, the port is linked and connected to the caller process and communication to the port only happens through the connected process. This message makes `new_pid` the new connected process. Unless the port is dead, the port will reply to the old owner with `{port, :connected}`. See [`connect/2`](#connect/2). 
In turn, the port will send the connected process the following messages: * `{port, {:data, data}}` - data sent by the port * `{port, :closed}` - reply to the `{pid, :close}` message * `{port, :connected}` - reply to the `{pid, {:connect, new_pid}}` message * `{:EXIT, port, reason}` - exit signals in case the port crashes. If reason is not `:normal`, this message will only be received if the owner process is trapping exits Open mechanisms ---------------- The port can be opened through four main mechanisms. As a short summary, prefer using the `:spawn` and `:spawn_executable` options mentioned below. The other two options, `:spawn_driver` and `:fd`, are for advanced usage within the VM. Also consider using [`System.cmd/3`](system#cmd/3) if all you want is to execute a program and retrieve its return value. ### spawn The `:spawn` tuple receives a binary that is going to be executed as a full invocation. For example, we can use it to invoke "echo hello" directly: ``` iex> port = Port.open({:spawn, "echo hello"}, [:binary]) iex> flush() {#Port<0.1444>, {:data, "hello\n"}} ``` `:spawn` will retrieve the program name from the argument and traverse your operating system `$PATH` environment variable looking for a matching program. Although the above is handy, it means it is impossible to invoke an executable that has whitespace in its name or in any of its arguments. For those reasons, it is most often preferable to use `:spawn_executable`. ### spawn\_executable Spawn executable is a more restricted and explicit version of spawn. It expects full file paths to the executable you want to execute. If they are in your `$PATH`, they can be retrieved by calling [`System.find_executable/1`](system#find_executable/1): ``` iex> path = System.find_executable("echo") iex> port = Port.open({:spawn_executable, path}, [:binary, args: ["hello world"]]) iex> flush() {#Port<0.1380>, {:data, "hello world\n"}} ``` When using `:spawn_executable`, the list of arguments can be passed via the `:args` option as done above. For the full list of options, see the documentation for the Erlang function [`:erlang.open_port/2`](http://www.erlang.org/doc/man/erlang.html#open_port-2). ### fd The `:fd` name option allows developers to access `in` and `out` file descriptors used by the Erlang VM. You would use those only if you are reimplementing core parts of the Runtime System, such as the `:user` and `:shell` processes. Zombie operating system processes ---------------------------------- A port can be closed via the [`close/1`](#close/1) function or by sending a `{pid, :close}` message. However, if the VM crashes, a long-running program started by the port will have its stdin and stdout channels closed but **it won't be automatically terminated**. While most UNIX command line tools will exit once their communication channels are closed, not all command line applications will do so. While we encourage graceful termination by detecting if stdin/stdout has been closed, we do not always have control over how third-party software terminates. In those cases, you can wrap the application in a script that checks for stdin. Here is such a script in Bash: ``` #!/bin/bash "$@" & pid=$! 
while read line ; do : done kill -KILL $pid ``` Now instead of: ``` Port.open( {:spawn_executable, "/path/to/program"}, args: ["a", "b", "c"] ) ``` You may invoke: ``` Port.open( {:spawn_executable, "/path/to/wrapper"}, args: ["/path/to/program", "a", "b", "c"] ) ``` Summary ======== Types ------ [name()](#t:name/0) Functions ---------- [close(port)](#close/1) Closes the `port`. [command(port, data, options \\ [])](#command/3) Sends `data` to the port driver `port`. [connect(port, pid)](#connect/2) Associates the `port` identifier with a `pid`. [demonitor(monitor\_ref, options \\ [])](#demonitor/2) Demonitors the monitor identified by the given `reference`. [info(port)](#info/1) Returns information about the `port` or `nil` if the port is closed. [info(port, spec)](#info/2) Returns information about the `port` or `nil` if the port is closed. [list()](#list/0) Returns a list of all ports in the current node. [monitor(port)](#monitor/1) Starts monitoring the given `port` from the calling process. [open(name, options)](#open/2) Opens a port given a tuple `name` and a list of `options`. Types ====== ### name() #### Specs ``` name() :: {:spawn, charlist() | binary()} | {:spawn_driver, charlist() | binary()} | {:spawn_executable, charlist() | atom()} | {:fd, non_neg_integer(), non_neg_integer()} ``` Functions ========== ### close(port) #### Specs ``` close(port()) :: true ``` Closes the `port`. For more information, see [`:erlang.port_close/1`](http://www.erlang.org/doc/man/erlang.html#port_close-1). Inlined by the compiler. ### command(port, data, options \\ []) #### Specs ``` command(port(), iodata(), [:force | :nosuspend]) :: boolean() ``` Sends `data` to the port driver `port`. For more information, see [`:erlang.port_command/2`](http://www.erlang.org/doc/man/erlang.html#port_command-2). Inlined by the compiler. ### connect(port, pid) #### Specs ``` connect(port(), pid()) :: true ``` Associates the `port` identifier with a `pid`. For more information, see [`:erlang.port_connect/2`](http://www.erlang.org/doc/man/erlang.html#port_connect-2). Inlined by the compiler. ### demonitor(monitor\_ref, options \\ []) #### Specs ``` demonitor(reference(), options :: [:flush | :info]) :: boolean() ``` Demonitors the monitor identified by the given `reference`. If `monitor_ref` is a reference which the calling process obtained by calling [`monitor/1`](#monitor/1), that monitoring is turned off. If the monitoring is already turned off, nothing happens. See [`:erlang.demonitor/2`](http://www.erlang.org/doc/man/erlang.html#demonitor-2) for more information. Inlined by the compiler. ### info(port) #### Specs ``` info(port()) :: keyword() | nil ``` Returns information about the `port` or `nil` if the port is closed. For more information, see [`:erlang.port_info/1`](http://www.erlang.org/doc/man/erlang.html#port_info-1). ### info(port, spec) #### Specs ``` info(port(), atom()) :: {atom(), term()} | nil ``` Returns information about the `port` or `nil` if the port is closed. For more information, see [`:erlang.port_info/2`](http://www.erlang.org/doc/man/erlang.html#port_info-2). ### list() #### Specs ``` list() :: [port()] ``` Returns a list of all ports in the current node. Inlined by the compiler. ### monitor(port) #### Specs ``` monitor(port() | {name, node()} | name) :: reference() when name: atom() ``` Starts monitoring the given `port` from the calling process. 
Once the monitored port process dies, a message is delivered to the monitoring process in the shape of: ``` {:DOWN, ref, :port, object, reason} ``` where: * `ref` is a monitor reference returned by this function; * `object` is either the `port` being monitored (when monitoring by port ID) or `{name, node}` (when monitoring by a port name); * `reason` is the exit reason. See [`:erlang.monitor/2`](http://www.erlang.org/doc/man/erlang.html#monitor-2) for more information. Inlined by the compiler. ### open(name, options) #### Specs ``` open(name(), list()) :: port() ``` Opens a port given a tuple `name` and a list of `options`. The module documentation above contains documentation and examples for the supported `name` values, summarized below: * `{:spawn, command}` - runs an external program. `command` must contain the program name and optionally a list of arguments separated by space. If passing programs or arguments with space in their name, use the next option. * `{:spawn_executable, filename}` - runs the executable given by the absolute file name `filename`. Arguments can be passed via the `:args` option. * `{:spawn_driver, command}` - spawns so-called port drivers. * `{:fd, fd_in, fd_out}` - accesses file descriptors, `fd_in` and `fd_out` opened by the VM. For more information and the list of options, see [`:erlang.open_port/2`](http://www.erlang.org/doc/man/erlang.html#open_port-2). Inlined by the compiler.
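Tying `open/2` and `monitor/1` together, here is a minimal sketch of watching an external program and reacting to its termination (the command and the printed reason are illustrative):

```
# Spawn a short-lived program and monitor the resulting port.
port = Port.open({:spawn, "echo hello"}, [:binary])
ref = Port.monitor(port)

receive do
  {:DOWN, ^ref, :port, ^port, reason} ->
    IO.puts("port exited with reason: #{inspect(reason)}")
end
#=> port exited with reason: :normal
```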
elixir Logger Logger ======= A logger for Elixir applications. It includes many features: * Provides debug, info, warn, and error levels. * Supports multiple backends which are automatically supervised when plugged into [`Logger`](#content). * Formats and truncates messages on the client to avoid clogging [`Logger`](#content) backends. * Alternates between sync and async modes to remain performant when required but also apply backpressure when under stress. * Plugs into Erlang's [`:logger`](http://erlang.org/doc/man/logger.html) (from Erlang/OTP 21) to convert terms to Elixir syntax or wraps Erlang's [`:error_logger`](http://erlang.org/doc/man/error_logger.html) in earlier Erlang/OTP versions to prevent it from overflowing. Logging is useful for tracking when an event of interest happens in your system. For example, it may be helpful to log whenever a user is deleted. ``` def delete_user(user) do Logger.info("Deleting user from the system: #{inspect(user)}") # ... end ``` The [`Logger.info/2`](logger#info/2) macro emits the provided message at the `:info` level. Note the arguments given to [`info/2`](#info/2) will only be evaluated if a message is logged. For instance, if the Logger level is set to `:warn`, `:info` messages are never logged and therefore the arguments given above won't even be executed. There are additional macros for other levels. Logger also allows log commands to be removed altogether via the `:compile_time_purge_matching` option (see below). For dynamically logging messages, see [`bare_log/3`](#bare_log/3). But note that [`bare_log/3`](#bare_log/3) always evaluates its arguments (unless the argument is an anonymous function). Levels ------- The supported levels, ordered by precedence, are: * `:debug` - for debug-related messages * `:info` - for information of any kind * `:warn` - for warnings * `:error` - for errors For example, `:info` takes precedence over `:debug`. If your log level is set to `:info`, `:info`, `:warn`, and `:error` will be printed to the console. If your log level is set to `:warn`, only `:warn` and `:error` will be printed. Configuration -------------- [`Logger`](#content) supports a wide range of configurations. This configuration is split in three categories: * Application configuration - must be set before the `:logger` application is started * Runtime configuration - can be set before the `:logger` application is started, but may be changed during runtime * Erlang configuration - options that handle integration with Erlang's logging facilities ### Application configuration The following configuration must be set via config files (such as `config/config.exs`) before the `:logger` application is started. * `:backends` - the backends to be used. Defaults to `[:console]`. See the "Backends" section for more information. * `:compile_time_application` - sets the `:application` metadata value to the configured value at compilation time. This configuration is usually only useful for build tools to automatically add the application to the metadata for [`Logger.debug/2`](logger#debug/2), [`Logger.info/2`](logger#info/2), etc. style of calls. * `:compile_time_purge_matching` - purges *at compilation time* all calls that match the given conditions. This means that [`Logger`](#content) calls with level lower than this option will be completely removed at compile time, accruing no overhead at runtime. This configuration expects a list of keyword lists. Each keyword list contains a metadata key and the matching value that should be purged. 
A special key named `:level_lower_than` can be used to purge all messages with a lower logger level. Remember that if you want to purge log calls from a dependency, the dependency must be recompiled. For example, to configure the `:backends` and purge all calls that happen at compile time with level lower than `:info` in a `config/config.exs` file: ``` config :logger, backends: [:console], compile_time_purge_matching: [ [level_lower_than: :info] ] ``` If you want to purge all log calls from an application named `:foo` and only keep errors from `Bar.foo/3`, you can set up two different matches: ``` config :logger, compile_time_purge_matching: [ [application: :foo], [module: Bar, function: "foo/3", level_lower_than: :error] ] ``` ### Runtime Configuration All configuration below can be set via config files (such as `config/config.exs`) but also changed dynamically during runtime via [`Logger.configure/1`](logger#configure/1). * `:level` - the logging level. Attempting to log any message with severity less than the configured level will simply cause the message to be ignored. Keep in mind that each backend may have its specific level, too. * `:utc_log` - when `true`, uses UTC in logs. By default it uses local time (i.e., it defaults to `false`). * `:truncate` - the maximum message size to be logged (in bytes). Defaults to 8192 bytes. Note this configuration is approximate. Truncated messages will have `" (truncated)"` at the end. The atom `:infinity` can be passed to disable this behavior. * `:sync_threshold` - if the [`Logger`](#content) manager has more than `:sync_threshold` messages in its queue, [`Logger`](#content) will change to *sync mode*, to apply backpressure to the clients. [`Logger`](#content) will return to *async mode* once the number of messages in the queue is reduced to one below the `sync_threshold`. Defaults to 20 messages. `:sync_threshold` can be set to `0` to force *sync mode*. * `:discard_threshold` - if the [`Logger`](#content) manager has more than `:discard_threshold` messages in its queue, [`Logger`](#content) will change to *discard mode* and messages will be discarded directly in the clients. [`Logger`](#content) will return to *sync mode* once the number of messages in the queue is reduced to one below the `discard_threshold`. Defaults to 500 messages. * `:discard_threshold_periodic_check` - a periodic check that reports whether [`Logger`](#content) is discarding messages. It logs a warn message whenever the system is (or continues to be) in discard mode, and another warn message whenever the system was discarding messages but stopped doing so after the previous check. By default it runs every `30_000` milliseconds. * `:translator_inspect_opts` - when translating OTP reports and errors, the last message and state must be inspected in the error reports. This configuration allows developers to change how much of the data should be inspected, and how. For example, to configure the `:level` and `:truncate` options in a `config/config.exs` file: ``` config :logger, level: :warn, truncate: 4096 ``` ### Error logger configuration The following configuration applies to [`Logger`](#content)'s wrapper around Erlang's logging functionalities. All the configurations below must be set before the `:logger` application starts. * `:handle_otp_reports` - redirects OTP reports to [`Logger`](#content) so they are formatted in Elixir terms. This effectively disables Erlang's standard logger. Defaults to `true`. 
* `:handle_sasl_reports` - redirects supervisor, crash and progress reports to [`Logger`](#content) so they are formatted in Elixir terms. Your application must guarantee `:sasl` is started before `:logger`. This means you may see some initial reports written in Erlang syntax until the Logger application kicks in. Defaults to `false`. From Erlang/OTP 21, `:handle_sasl_reports` only has an effect if `:handle_otp_reports` is `true`. The following configurations apply only for Erlang/OTP 20 and earlier: * `:discard_threshold_for_error_logger` - if `:error_logger` has more than `discard_threshold` messages in its inbox, messages will be dropped until the message queue goes down to `discard_threshold * 0.75` entries. The threshold will be checked once again after 10% of threshold messages are processed, to avoid messages being constantly dropped. For example, if the threshold is 500 (the default) and the inbox has 600 messages, 225 messages will be dropped, bringing the inbox down to 375 (0.75 * threshold) entries, and 50 (0.1 * threshold) messages will be processed before the threshold is checked once again. For example, to configure [`Logger`](#content) to redirect all Erlang messages using a `config/config.exs` file: ``` config :logger, handle_otp_reports: true, handle_sasl_reports: true ``` Furthermore, [`Logger`](#content) allows messages sent by Erlang to be translated into an Elixir format via translators. Translators can be added at any time with the [`add_translator/1`](#add_translator/1) and [`remove_translator/1`](#remove_translator/1) APIs. Check [`Logger.Translator`](logger.translator) for more information. Backends --------- [`Logger`](#content) supports different backends where log messages are written. The available backends by default are: * `:console` - logs messages to the console (enabled by default) Developers may also implement their own backends, an option that is explored in more detail below. The initial backends are loaded via the `:backends` configuration, which must be set before the `:logger` application is started. ### Console backend The console backend logs messages by printing them to the console. It supports the following options: * `:level` - the level to be logged by this backend. Note that messages are filtered by the general `:level` configuration for the `:logger` application first. * `:format` - the format message used to print logs. Defaults to: `"\n$time $metadata[$level] $levelpad$message\n"`. It may also be a `{module, function}` tuple that is invoked with the log level, the message, the current timestamp and the metadata. * `:metadata` - the metadata to be printed by `$metadata`. Defaults to an empty list (no metadata). Setting `:metadata` to `:all` prints all metadata. See the "Metadata" section for more information. * `:colors` - a keyword list of coloring options. * `:device` - the device to log error messages to. Defaults to `:user` but can be changed to something else such as `:standard_error`. * `:max_buffer` - maximum events to buffer while waiting for a confirmation from the IO device (default: 32). Once the buffer is full, the backend will block until a confirmation is received. The supported keys in the `:colors` keyword list are: * `:enabled` - boolean value that allows for switching the coloring on and off. Defaults to: [`IO.ANSI.enabled?/0`](https://hexdocs.pm/elixir/IO.ANSI.html#enabled?/0) * `:debug` - color for debug messages. Defaults to: `:cyan` * `:info` - color for info messages. 
Defaults to: `:normal` * `:warn` - color for warn messages. Defaults to: `:yellow` * `:error` - color for error messages. Defaults to: `:red` See the [`IO.ANSI`](https://hexdocs.pm/elixir/IO.ANSI.html) module for a list of colors and attributes. Here is an example of how to configure the `:console` backend in a `config/config.exs` file: ``` config :logger, :console, format: "\n$time $metadata[$level] $levelpad$message\n", metadata: [:user_id] ``` Metadata --------- In addition to the keys provided by the user via [`Logger.metadata/1`](logger#metadata/1), the following extra keys are available to the `:metadata` list: * `:application` - the current application * `:module` - the current module * `:function` - the current function * `:file` - the current file * `:line` - the current line * `:pid` - the current process identifier * `:crash_reason` - a two-element tuple with the throw/error/exit reason as first argument and the stacktrace as second. A throw will always be `{:nocatch, term}`. An error is always an [`Exception`](https://hexdocs.pm/elixir/Exception.html) struct. All other entries are exits. The console backend ignores this metadata by default but it can be useful to other backends, such as the ones that report errors to third-party services * `:initial_call` - the initial call that started the process * `:registered_name` - the process registered name as an atom Note that all metadata is optional and may not always be available. The `:module`, `:function`, `:line`, and similar metadata are automatically included when using [`Logger`](#content) macros. [`Logger.bare_log/3`](logger#bare_log/3) does not include any metadata beyond the `:pid` by default. Other metadata, such as `:crash_reason`, `:initial_call`, and `:registered_name` are extracted from Erlang/OTP crash reports and available only in those cases. ### Custom formatting The console backend allows you to customize the format of your log messages with the `:format` option. You may set `:format` to either a string or a `{module, function}` tuple if you wish to provide your own format function. Here is an example of how to configure the `:console` backend in a `config/config.exs` file: ``` config :logger, :console, format: {MyConsoleLogger, :format} ``` And here is an example of how you can define `MyConsoleLogger.format/4` from the above configuration: ``` defmodule MyConsoleLogger do def format(level, message, timestamp, metadata) do # Custom formatting logic... end end ``` It is extremely important that **the formatting function does not fail**, as it will bring that particular logger instance down, causing your system to temporarily lose messages. If necessary, wrap the function in a `rescue` and log a default message instead: ``` defmodule MyConsoleLogger do def format(level, message, timestamp, metadata) do # Custom formatting logic... rescue _ -> "could not format: #{inspect({level, message, metadata})}" end end ``` The `{module, function}` will be invoked with four arguments: * the log level: an atom * the message: this is usually chardata, but in some cases it may not be. Since the formatting function should *never* fail, you need to prepare for the message being anything (and do something like the `rescue` in the example above) * the current timestamp: a term of type [`Logger.Formatter.time/0`](logger.formatter#t:time/0) * the metadata: a keyword list You can read more about formatting in [`Logger.Formatter`](logger.formatter). ### Custom backends Any developer can create their own [`Logger`](#content) backend. 
Since [`Logger`](#content) is an event manager powered by `:gen_event`, writing a new backend is a matter of creating an event handler, as described in the [`:gen_event`](http://erlang.org/doc/man/gen_event.html) documentation. From now on, we will be using the term "event handler" to refer to your custom backend, as we head into implementation details. Once the `:logger` application starts, it installs all event handlers listed under the `:backends` configuration into the [`Logger`](#content) event manager. The event manager and all added event handlers are automatically supervised by [`Logger`](#content). Once initialized, the handler should be designed to handle events in the following format: ``` {level, group_leader, {Logger, message, timestamp, metadata}} | :flush ``` where: * `level` is one of `:debug`, `:info`, `:warn`, or `:error`, as previously described * `group_leader` is the group leader of the process which logged the message * `{Logger, message, timestamp, metadata}` is a tuple containing information about the logged message: + the first element is always the atom [`Logger`](#content) + `message` is the actual message (as chardata) + `timestamp` is the timestamp for when the message was logged, as a `{{year, month, day}, {hour, minute, second, millisecond}}` tuple + `metadata` is a keyword list of metadata used when logging the message It is recommended that handlers ignore messages where the group leader is in a different node than the one where the handler is installed. For example: ``` def handle_event({_level, gl, {Logger, _, _, _}}, state) when node(gl) != node() do {:ok, state} end ``` In the case of the `:flush` event, handlers should flush any pending data. This event is triggered by [`flush/0`](#flush/0). Furthermore, backends can be configured via the [`configure_backend/2`](#configure_backend/2) function, which requires event handlers to handle calls of the following format: ``` {:configure, options} ``` where `options` is a keyword list. The result of the call is the result returned by [`configure_backend/2`](#configure_backend/2). The recommended return value for successful configuration is `:ok`. It is recommended that backends support at least the following configuration options: * `:level` - the logging level for that backend * `:format` - the logging format for that backend * `:metadata` - the metadata to include in that backend Check the implementation of [`Logger.Backends.Console`](https://hexdocs.pm/logger/Logger.Backends.Console.html) for examples of how to handle the recommendations in this section and how to process the existing options.
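To make the event-handler shape concrete, here is a minimal sketch of a custom backend (the `MyBackend` name is hypothetical; it prints formatted messages to standard error and supports only the `:level` option):

```
defmodule MyBackend do
  @behaviour :gen_event

  @impl true
  def init(_args) do
    # Default state; a real backend would usually read its configuration
    # from the application environment here.
    {:ok, %{level: :debug}}
  end

  @impl true
  def handle_call({:configure, options}, state) do
    # The middle element is what Logger.configure_backend/2 returns.
    {:ok, :ok, %{state | level: Keyword.get(options, :level, state.level)}}
  end

  @impl true
  def handle_event({_level, gl, {Logger, _, _, _}}, state) when node(gl) != node() do
    # As recommended above, ignore messages from other nodes.
    {:ok, state}
  end

  def handle_event({level, _gl, {Logger, message, _timestamp, _metadata}}, state) do
    if Logger.compare_levels(level, state.level) != :lt do
      IO.puts(:stderr, ["[#{level}] ", message])
    end

    {:ok, state}
  end

  def handle_event(:flush, state) do
    # Nothing is buffered in this sketch, so there is nothing to flush.
    {:ok, state}
  end
end
```

With such a module compiled, `Logger.add_backend(MyBackend)` would install it, and `Logger.configure_backend(MyBackend, level: :warn)` would update its level at runtime.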
[flush()](#flush/0) Flushes the logger. [info(chardata\_or\_fun, metadata \\ [])](#info/2) Logs an info message. [level()](#level/0) Retrieves the [`Logger`](#content) level. [log(level, chardata\_or\_fun, metadata \\ [])](#log/3) Logs a message with the given `level`. [metadata()](#metadata/0) Reads the current process metadata. [metadata(keyword)](#metadata/1) Alters the current process metadata according to the given keyword list. [remove\_backend(backend, opts \\ [])](#remove_backend/2) Removes a backend. [remove\_translator(translator)](#remove_translator/1) Removes a translator. [reset\_metadata(keywords \\ [])](#reset_metadata/1) Resets the current process metadata to the given keyword list. [warn(chardata\_or\_fun, metadata \\ [])](#warn/2) Logs a warning message. Types ====== ### backend() #### Specs ``` backend() :: :gen_event.handler() ``` ### level() #### Specs ``` level() :: :error | :info | :warn | :debug ``` ### message() #### Specs ``` message() :: IO.chardata() | String.Chars.t() ``` ### metadata() #### Specs ``` metadata() :: keyword() ``` Functions ========== ### add\_backend(backend, opts \\ []) #### Specs ``` add_backend(backend(), keyword()) :: Supervisor.on_start_child() ``` Adds a new backend. Backends added by this function are not persisted. Therefore, if the Logger application or supervision tree is restarted, the backend won't be available. If you need this guarantee, then configure the backend via the application environment. #### Options * `:flush` - when `true`, guarantees all messages currently sent to [`Logger`](#content) are processed before the backend is added ### add\_translator(translator) #### Specs ``` add_translator({module(), function :: atom()}) :: :ok ``` Adds a new translator. ### bare\_log(level, chardata\_or\_fun, metadata \\ []) #### Specs ``` bare_log( level(), message() | (() -> message() | {message(), keyword()}), keyword() ) :: :ok | {:error, :noproc} | {:error, term()} ``` Logs a message dynamically. Unlike [`log/3`](#log/3), [`debug/2`](#debug/2), [`info/2`](#info/2), and friends, the arguments given to [`bare_log/3`](#bare_log/3) are always evaluated. However, you can pass anonymous functions to [`bare_log/3`](#bare_log/3) and they will only be evaluated if there is something to be logged. ### compare\_levels(left, right) #### Specs ``` compare_levels(level(), level()) :: :lt | :eq | :gt ``` Compares log levels. Receives two log levels, compares the `left` level against the `right` level, and returns: * `:lt` if `left` is less than `right` * `:eq` if `left` and `right` are equal * `:gt` if `left` is greater than `right` #### Examples ``` iex> Logger.compare_levels(:debug, :warn) :lt iex> Logger.compare_levels(:error, :info) :gt ``` ### configure(options) #### Specs ``` configure(keyword()) :: :ok ``` Configures the logger. See the "Runtime Configuration" section in the [`Logger`](#content) module documentation for the available options. The changes done here are automatically persisted to the `:logger` application environment. ### configure\_backend(backend, options) #### Specs ``` configure_backend(backend(), keyword()) :: term() ``` Configures the given backend. The backend needs to be started and running in order to be configured at runtime. ### debug(chardata\_or\_fun, metadata \\ []) Logs a debug message. Returns `:ok` or an `{:error, reason}` tuple. 
#### Examples ``` Logger.debug("hello?") Logger.debug(fn -> "dynamically calculated debug" end) Logger.debug(fn -> {"dynamically calculated debug", [additional: :metadata]} end) ``` ### disable(pid) #### Specs ``` disable(pid()) :: :ok ``` Disables logging for the current process. Currently the only accepted PID is `self()`. ### enable(pid) #### Specs ``` enable(pid()) :: :ok ``` Enables logging for the current process. Currently the only accepted PID is `self()`. ### error(chardata\_or\_fun, metadata \\ []) Logs an error message. Returns `:ok` or an `{:error, reason}` tuple. #### Examples ``` Logger.error("oops") Logger.error(fn -> "dynamically calculated error" end) Logger.error(fn -> {"dynamically calculated error", [additional: :metadata]} end) ``` ### flush() #### Specs ``` flush() :: :ok ``` Flushes the logger. This guarantees all messages sent to [`Logger`](#content) prior to this call will be processed. This is useful for testing and it should not be called in production code. ### info(chardata\_or\_fun, metadata \\ []) Logs an info message. Returns `:ok` or an `{:error, reason}` tuple. #### Examples ``` Logger.info("mission accomplished") Logger.info(fn -> "dynamically calculated info" end) Logger.info(fn -> {"dynamically calculated info", [additional: :metadata]} end) ``` ### level() #### Specs ``` level() :: level() ``` Retrieves the [`Logger`](#content) level. The [`Logger`](#content) level can be changed via [`configure/1`](#configure/1). ### log(level, chardata\_or\_fun, metadata \\ []) Logs a message with the given `level`. Returns `:ok` or an `{:error, reason}` tuple. The macros [`debug/2`](#debug/2), [`warn/2`](#warn/2), [`info/2`](#info/2), and [`error/2`](#error/2) are preferred over this macro as they can automatically eliminate the call to [`Logger`](#content) altogether at compile time if desired (see the documentation for the [`Logger`](#content) module). ### metadata() #### Specs ``` metadata() :: metadata() ``` Reads the current process metadata. ### metadata(keyword) #### Specs ``` metadata(metadata()) :: :ok ``` Alters the current process metadata according to the given keyword list. This function will merge the given keyword list into the existing metadata, with the exception of setting a key to `nil`, which will remove that key from the metadata. ### remove\_backend(backend, opts \\ []) #### Specs ``` remove_backend(backend(), keyword()) :: :ok | {:error, term()} ``` Removes a backend. #### Options * `:flush` - when `true`, guarantees all messages currently sent to [`Logger`](#content) are processed before the backend is removed ### remove\_translator(translator) #### Specs ``` remove_translator({module(), function :: atom()}) :: :ok ``` Removes a translator. ### reset\_metadata(keywords \\ []) #### Specs ``` reset_metadata(metadata()) :: :ok ``` Resets the current process metadata to the given keyword list. ### warn(chardata\_or\_fun, metadata \\ []) Logs a warning message. Returns `:ok` or an `{:error, reason}` tuple. #### Examples ``` Logger.warn("knob turned too far to the right") Logger.warn(fn -> "dynamically calculated warning" end) Logger.warn(fn -> {"dynamically calculated warning", [additional: :metadata]} end) ```
elixir Naming Conventions Naming Conventions ================== This document covers some naming conventions in Elixir code, from casing to punctuation characters. Casing ------- Elixir developers must use `snake_case` when defining variables, function names, module attributes, etc.: ``` some_map = %{this_is_a_key: "and a value"} is_map(some_map) ``` Aliases, commonly used as module names, are an exception as they must be capitalized and written in `CamelCase`, like [`OptionParser`](optionparser). For aliases, capital letters are kept in acronyms, like [`ExUnit.CaptureIO`](https://hexdocs.pm/ex_unit/ExUnit.CaptureIO.html) or [`Mix.SCM`](https://hexdocs.pm/mix/Mix.SCM.html). Atoms can be written either in `:snake_case` or `:CamelCase`, although the convention is to use the snake case version throughout Elixir. Generally speaking, filenames follow the `snake_case` convention of the module they define. For example, `MyApp` should be defined inside the `my_app.ex` file. However, this is only a convention. At the end of the day, any filename can be used as they do not affect the compiled code in any way. Underscore (`_foo`) -------------------- Elixir relies on underscores in different situations. For example, a value that is not meant to be used must be assigned to `_` or to a variable starting with an underscore: ``` iex> {:ok, _contents} = File.read("README.md") ``` Function names may also start with an underscore. Such functions are never imported by default: ``` iex> defmodule Example do ...> def _wont_be_imported do ...> :oops ...> end ...> end iex> import Example iex> _wont_be_imported() ** (CompileError) iex:1: undefined function _wont_be_imported/0 ``` Due to this property, Elixir relies on functions starting with an underscore to attach compile-time metadata to modules. Such functions are most often in the `__foo__` format. For example, every module in Elixir has an [`__info__/1`](module#c:__info__/1) function: ``` iex> String.__info__(:functions) [at: 2, capitalize: 1, chunk: 2, ...] ``` Elixir also includes five special forms that follow the double underscore format: [`__CALLER__/0`](kernel.specialforms#__CALLER__/0), [`__DIR__/0`](kernel.specialforms#__DIR__/0), [`__ENV__/0`](kernel.specialforms#__ENV__/0), and [`__MODULE__/0`](kernel.specialforms#__MODULE__/0) retrieve compile-time information about the current environment, while [`__STACKTRACE__/0`](kernel.specialforms#__STACKTRACE__/0) retrieves the stacktrace for the current exception. Trailing bang (`foo!`) ----------------------- A trailing bang (exclamation mark) signifies a function or macro where failure cases raise an exception. Many functions come in pairs, such as [`File.read/1`](file#read/1) and [`File.read!/1`](file#read!/1). [`File.read/1`](file#read/1) will return a success or failure tuple, whereas [`File.read!/1`](file#read!/1) will return a plain value or else raise an exception: ``` iex> File.read("file.txt") {:ok, "file contents"} iex> File.read("no_such_file.txt") {:error, :enoent} iex> File.read!("file.txt") "file contents" iex> File.read!("no_such_file.txt") ** (File.Error) could not read file no_such_file.txt: no such file or directory ``` The version without `!` is preferred when you want to handle different outcomes using pattern matching: ``` case File.read(file) do {:ok, body} -> # do something with the `body` {:error, reason} -> # handle the error caused by `reason` end ``` However, if you expect the outcome to always be successful (e.g. 
if you expect the file always to exist), the bang variation can be more convenient and will raise a more helpful error message (than a failed pattern match) on failure. More examples of paired functions: [`Base.decode16/2`](base#decode16/2) and [`Base.decode16!/2`](base#decode16!/2), [`File.cwd/0`](file#cwd/0) and [`File.cwd!/0`](file#cwd!/0). There are also some non-paired functions, with no non-bang variant. The bang still signifies that it will raise an exception on failure. Example: [`Protocol.assert_protocol!/1`](protocol#assert_protocol!/1). In macro code, the bang on [`Kernel.alias!/1`](kernel#alias!/1) and [`Kernel.var!/2`](kernel#var!/2) signifies that [macro hygiene](https://elixir-lang.org/getting-started/meta/macros.html#macros-hygiene) is set aside. Trailing question mark (`foo?`) -------------------------------- Functions that return a boolean are named with a trailing question mark. Examples: [`Keyword.keyword?/1`](keyword#keyword?/1), [`Mix.debug?/0`](https://hexdocs.pm/mix/Mix.html#debug?/0), [`String.contains?/2`](string#contains?/2). However, functions that return booleans and are valid in guards follow another convention, described next. `is_` prefix (`is_foo`) ------------------------ Type checks and other boolean checks that are allowed in guard clauses are named with an `is_` prefix. Examples: [`Integer.is_even/1`](integer#is_even/1), [`Kernel.is_list/1`](kernel#is_list/1). These functions and macros follow the Erlang convention of an `is_` prefix, instead of a trailing question mark, precisely to indicate that they are allowed in guard clauses. Note that type checks that are not valid in guard clauses do not follow this convention. Examples: [`Keyword.keyword?/1`](keyword#keyword?/1), [`Regex.regex?/1`](regex#regex?/1). Special names -------------- Some names have specific meaning in Elixir. We detail those cases below. ### length and size When you see `size` in a function name, it means the operation runs in constant time (also written as "O(1) time") because the size is stored alongside the data structure. Examples: [`Kernel.map_size/1`](kernel#map_size/1), [`Kernel.tuple_size/1`](kernel#tuple_size/1). When you see `length`, the operation runs in linear time ("O(n) time") because the entire data structure has to be traversed. Examples: [`Kernel.length/1`](kernel#length/1), [`String.length/1`](string#length/1). In other words, functions using the word "size" in their name will take the same amount of time whether the data structure is tiny or huge. Conversely, functions having "length" in their name will take more time as the data structure grows in size. elixir Mix Mix ==== Mix is a build tool that provides tasks for creating, compiling, and testing Elixir projects, managing their dependencies, and more. Mix.Project ------------ The foundation of Mix is a project. A project can be defined by using [`Mix.Project`](mix.project) in a module, usually placed in a file named `mix.exs`: ``` defmodule MyApp.MixProject do use Mix.Project def project do [ app: :my_app, version: "1.0.0" ] end end ``` See the [`Mix.Project`](mix.project) module for detailed documentation on Mix projects. 
Once the project is defined, a number of default Mix tasks can be run directly from the command line: * [`mix compile`](mix.tasks.compile) - compiles the current project * [`mix test`](mix.tasks.test) - runs tests for the given project * [`mix run`](mix.tasks.run) - runs a particular command inside the project Each task has its own options and sometimes specific configuration to be defined in the `project/0` function. You can use [`mix help`](mix.tasks.help) to list all available tasks and `mix help NAME` to show help for a particular task. The best way to get started with your first project is by calling `mix new my_project` from the command line. Mix.Task --------- Tasks are what make Mix extensible. Projects can extend Mix behaviour by adding their own tasks. For example, adding the task below inside your project will make it available to everyone that uses your project: ``` defmodule Mix.Tasks.Hello do use Mix.Task def run(_) do Mix.shell().info("Hello world") end end ``` The task can now be invoked with `mix hello`. See the [`Mix.Task`](mix.task) behaviour for detailed documentation on Mix tasks. Dependencies ------------- Mix also manages your dependencies and integrates nicely with the [Hex package manager](https://hex.pm). In order to use dependencies, you need to add a `:deps` key to your project configuration. We often extract the list of dependencies into its own function: ``` defmodule MyApp.MixProject do use Mix.Project def project do [ app: :my_app, version: "1.0.0", deps: deps() ] end defp deps do [ {:ecto, "~> 2.0"}, {:plug, github: "elixir-lang/plug"} ] end end ``` You can run [`mix help deps`](mix.tasks.deps) to learn more about dependencies in Mix. Environments ------------- Mix supports different environments. Environments allow developers to prepare and organize their project specifically for different scenarios. By default, Mix provides three environments: * `:dev` - the default environment * `:test` - the environment [`mix test`](mix.tasks.test) runs on * `:prod` - the environment your dependencies run on The environment can be changed via the command line by setting the `MIX_ENV` environment variable, for example: ``` $ MIX_ENV=prod mix run server.exs ``` You can also specify that certain dependencies are available only for certain environments: ``` {:some_test_dependency, "~> 1.0", only: :test} ``` The environment can be read via [`Mix.env/0`](mix#env/0). Targets -------- Besides environments, Mix supports targets. Targets are useful when a project needs to compile to different architectures and some of the dependencies are only available to some of them. By default, the target is `:host` but it can be set via the `MIX_TARGET` environment variable. The target can be read via [`Mix.target/0`](mix#target/0). This feature is considered experimental and may change in future releases. Aliases -------- Aliases are shortcuts or tasks specific to the current project. In the "Mix.Task" section, we have defined a task that would be available to everyone using our project as a dependency. What if we wanted the task to only be available for our project? Just define an alias: ``` defmodule MyApp.MixProject do use Mix.Project def project do [ app: :my_app, version: "1.0.0", aliases: aliases() ] end defp aliases do [ c: "compile", hello: &hello/1 ] end defp hello(_) do Mix.shell().info("Hello world") end end ``` In the example above, we have defined two aliases. One is `mix c` which is a shortcut for [`mix compile`](mix.tasks.compile). 
The other is `mix hello`, which is equivalent to the `Mix.Tasks.Hello` task we defined in the "Mix.Task" section. Aliases may also be lists, specifying multiple tasks to be run consecutively: ``` [all: [&hello/1, "deps.get --only #{Mix.env()}", "compile"]] ``` In the example above, we have defined an alias named `mix all` that prints "Hello world", then fetches dependencies specific to the current environment, and compiles the project. Arguments given to the alias will be appended to the arguments of the last task in the list. If the last task is a function, they will be passed to it as a list of strings. Finally, aliases can also be used to augment existing tasks. Let's suppose you want to augment [`mix clean`](mix.tasks.clean) to clean another directory Mix does not know about: ``` [clean: ["clean", &clean_extra/1]] ``` Where `&clean_extra/1` would be a function in your `mix.exs` with extra cleanup logic. Aliases defined in the current project do not affect its dependencies, and aliases defined in dependencies are not accessible from the current project. Aliases can also be used to run Elixir scripts and shell commands, for example: ``` # priv/hello1.exs IO.puts("Hello One") # priv/hello2.exs IO.puts("Hello Two") # priv/world.sh #!/bin/sh echo "world!" # mix.exs defp aliases do [ some_alias: ["hex.info", "run priv/hello1.exs", "cmd priv/world.sh"] ] end ``` In the example above, we created the alias `some_alias`, which runs the task `mix hex.info`, then [`mix run`](mix.tasks.run) to run an Elixir script, then [`mix cmd`](mix.tasks.cmd) to execute a command-line shell script. This shows how powerful aliases mixed with Mix tasks can be. Mix tasks are designed to run only once. This prevents the same task from being executed multiple times. For example, if there are several tasks depending on [`mix compile`](mix.tasks.compile), the code will be compiled once. Tasks can be executed again if they are explicitly reenabled using [`Mix.Task.reenable/1`](mix.task#reenable/1): ``` another_alias: [ "format --check-formatted priv/hello1.exs", "cmd priv/world.sh", fn _ -> Mix.Task.reenable("format") end, "format --check-formatted priv/hello2.exs" ] ``` The following tasks are automatically reenabled: [`mix cmd`](mix.tasks.cmd), [`mix do`](mix.tasks.do), [`mix loadconfig`](mix.tasks.loadconfig), [`mix profile.cprof`](mix.tasks.profile.cprof), [`mix profile.eprof`](mix.tasks.profile.eprof), [`mix profile.fprof`](mix.tasks.profile.fprof), [`mix run`](mix.tasks.run), and [`mix xref`](mix.tasks.xref). It is worth mentioning that some tasks, such as the `format` command in the example above, can accept multiple files, so the alias could be rewritten as: ``` another_alias: ["format --check-formatted priv/hello1.exs priv/hello2.exs"] ``` Environment variables ---------------------- Several environment variables can be used to modify Mix's behaviour. Mix responds to the following variables: * `MIX_ARCHIVES` - specifies the directory into which the archives should be installed * `MIX_BUILD_PATH` - sets the project build\_path config * `MIX_DEBUG` - outputs debug information about each task before running it * `MIX_ENV` - specifies which environment should be used. See [Environments](#module-environments) * `MIX_TARGET` - specifies which target should be used.
See [Targets](#module-targets) * `MIX_EXS` - changes the full path to the `mix.exs` file * `MIX_HOME` - path to Mix's home directory, stores configuration files and scripts used by Mix * `MIX_PATH` - appends extra code paths * `MIX_QUIET` - does not print information messages to the terminal * `MIX_REBAR` - path to rebar command that overrides the one Mix installs * `MIX_REBAR3` - path to rebar3 command that overrides the one Mix installs Mix also falls back to the `XDG_DATA_HOME` and `XDG_CONFIG_HOME` environment variables when storing its contents and configuration. Environment variables that are not meant to hold a value (and act basically as flags) should be set to either `1` or `true`, for example: ``` $ MIX_DEBUG=1 mix compile ``` Summary ======== Functions ---------- [compilers()](#compilers/0) Returns the default compilers used by Mix. [debug(debug)](#debug/1) Sets Mix debug mode. [debug?()](#debug?/0) Returns `true` if Mix is in debug mode, `false` otherwise. [env()](#env/0) Returns the current Mix environment. [env(env)](#env/1) Changes the current Mix environment to `env`. [raise(message)](#raise/1) Raises a Mix error that is nicely formatted. [shell()](#shell/0) Returns the current shell. [shell(shell)](#shell/1) Sets the current shell. [target()](#target/0) Returns the Mix target. [target(target)](#target/1) Changes the current Mix target to `target`. Functions ========== ### compilers() #### Specs ``` compilers() :: [atom()] ``` Returns the default compilers used by Mix. It can be used in your `mix.exs` to prepend or append new compilers to Mix: ``` def project do [compilers: Mix.compilers() ++ [:foo, :bar]] end ``` ### debug(debug) #### Specs ``` debug(boolean()) :: :ok ``` Sets Mix debug mode. ### debug?() #### Specs ``` debug?() :: boolean() ``` Returns `true` if Mix is in debug mode, `false` otherwise. ### env() #### Specs ``` env() :: atom() ``` Returns the current Mix environment. This function should not be used at runtime in application code (as opposed to infrastructure and build code like Mix tasks). Mix is a build tool and may not be available after the code is compiled (for example in a release). To differentiate the program behavior depending on the environment, it is recommended to use application environment through [`Application.get_env/3`](https://hexdocs.pm/elixir/Application.html#get_env/3). Proper configuration can be set in config files, often per-environment (see the [`Config`](https://hexdocs.pm/elixir/Config.html) module for more information). ### env(env) #### Specs ``` env(atom()) :: :ok ``` Changes the current Mix environment to `env`. Be careful when invoking this function as any project configuration won't be reloaded. This function should not be used at runtime in application code (see [`env/0`](#env/0) for more information). ### raise(message) #### Specs ``` raise(binary()) :: no_return() ``` Raises a Mix error that is nicely formatted. ### shell() #### Specs ``` shell() :: module() ``` Returns the current shell. [`shell/0`](#shell/0) can be used as a wrapper for the current shell. It contains conveniences for requesting information from the user, printing to the shell and so forth. The Mix shell is swappable (see [`shell/1`](#shell/1)), allowing developers to use a test shell that simply sends messages to the current process instead of performing IO (see [`Mix.Shell.Process`](mix.shell.process)). By default, this returns [`Mix.Shell.IO`](mix.shell.io). ### shell(shell) #### Specs ``` shell(module()) :: :ok ``` Sets the current shell. 
After calling this function, `shell` becomes the shell that is returned by [`shell/0`](#shell/0). ### target() #### Specs ``` target() :: atom() ``` Returns the Mix target. ### target(target) #### Specs ``` target(atom()) :: :ok ``` Changes the current Mix target to `target`. Be careful when invoking this function as any project configuration won't be reloaded. elixir Access behaviour Access behaviour ================= Key-based access to data structures. The [`Access`](#content) module defines a behaviour for dynamically accessing keys of any type in a data structure via the `data[key]` syntax. [`Access`](#content) supports keyword lists ([`Keyword`](keyword)) and maps ([`Map`](map)) out of the box. The key can be of any type, and `nil` is returned if the key does not exist: ``` iex> keywords = [a: 1, b: 2] iex> keywords[:a] 1 iex> keywords[:c] nil iex> map = %{a: 1, b: 2} iex> map[:a] 1 iex> star_ratings = %{1.0 => "★", 1.5 => "★☆", 2.0 => "★★"} iex> star_ratings[1.5] "★☆" ``` This syntax is very convenient as it can be nested arbitrarily: ``` iex> keywords = [a: 1, b: 2] iex> keywords[:c][:unknown] nil ``` This works because accessing anything on a `nil` value returns `nil` itself: ``` iex> nil[:a] nil ``` The access syntax can also be used with the [`Kernel.put_in/2`](kernel#put_in/2), [`Kernel.update_in/2`](kernel#update_in/2) and [`Kernel.get_and_update_in/2`](kernel#get_and_update_in/2) macros to allow values to be set in nested data structures: ``` iex> users = %{"john" => %{age: 27}, "meg" => %{age: 23}} iex> put_in(users["john"][:age], 28) %{"john" => %{age: 28}, "meg" => %{age: 23}} ``` > > Attention! While the access syntax is allowed in maps via `map[key]`, if your map is made of predefined atom keys, you should prefer to access those atom keys with `map.key` instead of `map[key]`, as `map.key` will raise if the key is missing. This is important because, if a map has a predefined set of keys and a key is missing, it is most likely a bug in your software or a typo in the key name. For this reason, because structs are predefined in nature, they only allow the `struct.key` syntax and they do not allow the `struct[key]` access syntax. See the [`Map`](map) module for more information. > > Nested data structures ----------------------- Both key-based access syntaxes can be used with the nested update functions and macros in [`Kernel`](kernel), such as [`Kernel.get_in/2`](kernel#get_in/2), [`Kernel.put_in/3`](kernel#put_in/3), [`Kernel.update_in/3`](kernel#update_in/3), [`Kernel.pop_in/2`](kernel#pop_in/2), and [`Kernel.get_and_update_in/3`](kernel#get_and_update_in/3). For example, to update a map inside another map: ``` iex> users = %{"john" => %{age: 27}, "meg" => %{age: 23}} iex> put_in(users["john"].age, 28) %{"john" => %{age: 28}, "meg" => %{age: 23}} ``` This module provides convenience functions for traversing other structures, like tuples and lists. These functions can be used in all the [`Access`](#content)-related functions and macros in [`Kernel`](kernel).
For instance, given a user map with the `:name` and `:languages` keys, here is how to deeply traverse the map and convert all language names to uppercase: ``` iex> languages = [ ...> %{name: "elixir", type: :functional}, ...> %{name: "c", type: :procedural} ...> ] iex> user = %{name: "john", languages: languages} iex> update_in(user, [:languages, Access.all(), :name], &String.upcase/1) %{ name: "john", languages: [ %{name: "ELIXIR", type: :functional}, %{name: "C", type: :procedural} ] } ``` See the functions [`key/1`](#key/1), [`key!/1`](#key!/1), [`elem/1`](#elem/1), and [`all/0`](#all/0) for some of the available accessors. Summary ======== Types ------ [access\_fun(data, get\_value)](#t:access_fun/2) [any\_container()](#t:any_container/0) [container()](#t:container/0) [get\_and\_update\_fun(data, get\_value)](#t:get_and_update_fun/2) [get\_fun(data, get\_value)](#t:get_fun/2) [key()](#t:key/0) [nil\_container()](#t:nil_container/0) [t()](#t:t/0) [value()](#t:value/0) Functions ---------- [all()](#all/0) Returns a function that accesses all the elements in a list. [at(index)](#at/1) Returns a function that accesses the element at `index` (zero based) of a list. [elem(index)](#elem/1) Returns a function that accesses the element at the given index in a tuple. [fetch(container, key)](#fetch/2) Fetches the value for the given key in a container (a map, keyword list, or struct that implements the [`Access`](#content) behaviour). [filter(func)](#filter/1) Returns a function that accesses all elements of a list that match the provided predicate. [get(container, key, default \\ nil)](#get/3) Gets the value for the given key in a container (a map, keyword list, or struct that implements the [`Access`](#content) behaviour). [get\_and\_update(container, key, fun)](#get_and_update/3) Gets and updates the given key in a `container` (a map, a keyword list, a struct that implements the [`Access`](#content) behaviour). [key(key, default \\ nil)](#key/2) Returns a function that accesses the given key in a map/struct. [key!(key)](#key!/1) Returns a function that accesses the given key in a map/struct. [pop(container, key)](#pop/2) Removes the entry with a given key from a container (a map, keyword list, or struct that implements the [`Access`](#content) behaviour). Callbacks ---------- [fetch(term, key)](#c:fetch/2) Invoked in order to access the value stored under `key` in the given term `term`. [get\_and\_update(data, key, function)](#c:get_and_update/3) Invoked in order to access the value under `key` and update it at the same time. [pop(data, key)](#c:pop/2) Invoked to "pop" the value under `key` out of the given data structure. 
Types ====== ### access\_fun(data, get\_value) #### Specs ``` access_fun(data, get_value) :: get_fun(data, get_value) | get_and_update_fun(data, get_value) ``` ### any\_container() #### Specs ``` any_container() :: any() ``` ### container() #### Specs ``` container() :: keyword() | struct() | map() ``` ### get\_and\_update\_fun(data, get\_value) #### Specs ``` get_and_update_fun(data, get_value) :: (:get_and_update, data, (term() -> term()) -> {get_value, new_data :: container()} | :pop) ``` ### get\_fun(data, get\_value) #### Specs ``` get_fun(data, get_value) :: (:get, data, (term() -> term()) -> {get_value, new_data :: container()}) ``` ### key() #### Specs ``` key() :: any() ``` ### nil\_container() #### Specs ``` nil_container() :: nil ``` ### t() #### Specs ``` t() :: container() | nil_container() | any_container() ``` ### value() #### Specs ``` value() :: any() ``` Functions ========== ### all() #### Specs ``` all() :: access_fun(data :: list(), get_value :: list()) ``` Returns a function that accesses all the elements in a list. The returned function is typically passed as an accessor to [`Kernel.get_in/2`](kernel#get_in/2), [`Kernel.get_and_update_in/3`](kernel#get_and_update_in/3), and friends. #### Examples ``` iex> list = [%{name: "john"}, %{name: "mary"}] iex> get_in(list, [Access.all(), :name]) ["john", "mary"] iex> get_and_update_in(list, [Access.all(), :name], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {["john", "mary"], [%{name: "JOHN"}, %{name: "MARY"}]} iex> pop_in(list, [Access.all(), :name]) {["john", "mary"], [%{}, %{}]} ``` Here is an example that traverses the list dropping even numbers and multiplying odd numbers by 2: ``` iex> require Integer iex> get_and_update_in([1, 2, 3, 4, 5], [Access.all()], fn num -> ...> if Integer.is_even(num), do: :pop, else: {num, num * 2} ...> end) {[1, 2, 3, 4, 5], [2, 6, 10]} ``` An error is raised if the accessed structure is not a list: ``` iex> get_in(%{}, [Access.all()]) ** (RuntimeError) Access.all/0 expected a list, got: %{} ``` ### at(index) #### Specs ``` at(integer()) :: access_fun(data :: list(), get_value :: term()) ``` Returns a function that accesses the element at `index` (zero based) of a list. The returned function is typically passed as an accessor to [`Kernel.get_in/2`](kernel#get_in/2), [`Kernel.get_and_update_in/3`](kernel#get_and_update_in/3), and friends. 
#### Examples ``` iex> list = [%{name: "john"}, %{name: "mary"}] iex> get_in(list, [Access.at(1), :name]) "mary" iex> get_in(list, [Access.at(-1), :name]) "mary" iex> get_and_update_in(list, [Access.at(0), :name], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {"john", [%{name: "JOHN"}, %{name: "mary"}]} iex> get_and_update_in(list, [Access.at(-1), :name], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {"mary", [%{name: "john"}, %{name: "MARY"}]} ``` [`at/1`](#at/1) can also be used to pop elements out of a list or a key inside of a list: ``` iex> list = [%{name: "john"}, %{name: "mary"}] iex> pop_in(list, [Access.at(0)]) {%{name: "john"}, [%{name: "mary"}]} iex> pop_in(list, [Access.at(0), :name]) {"john", [%{}, %{name: "mary"}]} ``` When the index is out of bounds, `nil` is returned and the update function is never called: ``` iex> list = [%{name: "john"}, %{name: "mary"}] iex> get_in(list, [Access.at(10), :name]) nil iex> get_and_update_in(list, [Access.at(10), :name], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {nil, [%{name: "john"}, %{name: "mary"}]} ``` An error is raised if the accessed structure is not a list: ``` iex> get_in(%{}, [Access.at(1)]) ** (RuntimeError) Access.at/1 expected a list, got: %{} ``` ### elem(index) #### Specs ``` elem(non_neg_integer()) :: access_fun(data :: tuple(), get_value :: term()) ``` Returns a function that accesses the element at the given index in a tuple. The returned function is typically passed as an accessor to [`Kernel.get_in/2`](kernel#get_in/2), [`Kernel.get_and_update_in/3`](kernel#get_and_update_in/3), and friends. The returned function raises if `index` is out of bounds. Note that popping elements out of tuples is not possible and raises an error. #### Examples ``` iex> map = %{user: {"john", 27}} iex> get_in(map, [:user, Access.elem(0)]) "john" iex> get_and_update_in(map, [:user, Access.elem(0)], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {"john", %{user: {"JOHN", 27}}} iex> pop_in(map, [:user, Access.elem(0)]) ** (RuntimeError) cannot pop data from a tuple ``` An error is raised if the accessed structure is not a tuple: ``` iex> get_in(%{}, [Access.elem(0)]) ** (RuntimeError) Access.elem/1 expected a tuple, got: %{} ``` ### fetch(container, key) #### Specs ``` fetch(container(), term()) :: {:ok, term()} | :error ``` ``` fetch(nil_container(), any()) :: :error ``` Fetches the value for the given key in a container (a map, keyword list, or struct that implements the [`Access`](#content) behaviour). Returns `{:ok, value}` where `value` is the value under `key` if there is such a key, or `:error` if `key` is not found. #### Examples ``` iex> Access.fetch(%{name: "meg", age: 26}, :name) {:ok, "meg"} iex> Access.fetch([ordered: true, on_timeout: :exit], :timeout) :error ``` ### filter(func) #### Specs ``` filter((term() -> boolean())) :: access_fun(data :: list(), get_value :: list()) ``` Returns a function that accesses all elements of a list that match the provided predicate. The returned function is typically passed as an accessor to [`Kernel.get_in/2`](kernel#get_in/2), [`Kernel.get_and_update_in/3`](kernel#get_and_update_in/3), and friends. 
#### Examples ``` iex> list = [%{name: "john", salary: 10}, %{name: "francine", salary: 30}] iex> get_in(list, [Access.filter(&(&1.salary > 20)), :name]) ["francine"] iex> get_and_update_in(list, [Access.filter(&(&1.salary <= 20)), :name], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {["john"], [%{name: "JOHN", salary: 10}, %{name: "francine", salary: 30}]} ``` [`filter/1`](#filter/1) can also be used to pop elements out of a list or a key inside of a list: ``` iex> list = [%{name: "john", salary: 10}, %{name: "francine", salary: 30}] iex> pop_in(list, [Access.filter(&(&1.salary >= 20))]) {[%{name: "francine", salary: 30}], [%{name: "john", salary: 10}]} iex> pop_in(list, [Access.filter(&(&1.salary >= 20)), :name]) {["francine"], [%{name: "john", salary: 10}, %{salary: 30}]} ``` When no match is found, an empty list is returned and the update function is never called ``` iex> list = [%{name: "john", salary: 10}, %{name: "francine", salary: 30}] iex> get_in(list, [Access.filter(&(&1.salary >= 50)), :name]) [] iex> get_and_update_in(list, [Access.filter(&(&1.salary >= 50)), :name], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {[], [%{name: "john", salary: 10}, %{name: "francine", salary: 30}]} ``` An error is raised if the predicate is not a function or is of the incorrect arity: ``` iex> get_in([], [Access.filter(5)]) ** (FunctionClauseError) no function clause matching in Access.filter/1 ``` An error is raised if the accessed structure is not a list: ``` iex> get_in(%{}, [Access.filter(fn a -> a == 10 end)]) ** (RuntimeError) Access.filter/1 expected a list, got: %{} ``` ### get(container, key, default \\ nil) #### Specs ``` get(container(), term(), term()) :: term() ``` ``` get(nil_container(), any(), default) :: default when default: var ``` Gets the value for the given key in a container (a map, keyword list, or struct that implements the [`Access`](#content) behaviour). Returns the value under `key` if there is such a key, or `default` if `key` is not found. #### Examples ``` iex> Access.get(%{name: "john"}, :name, "default name") "john" iex> Access.get(%{name: "john"}, :age, 25) 25 iex> Access.get([ordered: true], :timeout) nil ``` ### get\_and\_update(container, key, fun) #### Specs ``` get_and_update(data, key(), (value() -> {get_value, value()} | :pop)) :: {get_value, data} when data: container(), get_value: var ``` Gets and updates the given key in a `container` (a map, a keyword list, a struct that implements the [`Access`](#content) behaviour). The `fun` argument receives the value of `key` (or `nil` if `key` is not present in `container`) and must return a two-element tuple `{get_value, update_value}`: the "get" value `get_value` (the retrieved value, which can be operated on before being returned) and the new value to be stored under `key` (`update_value`). `fun` may also return `:pop`, which means the current value should be removed from the container and returned. The returned value is a two-element tuple with the "get" value returned by `fun` and a new container with the updated value under `key`. ### key(key, default \\ nil) #### Specs ``` key(key(), term()) :: access_fun(data :: struct() | map(), get_value :: term()) ``` Returns a function that accesses the given key in a map/struct. The returned function is typically passed as an accessor to [`Kernel.get_in/2`](kernel#get_in/2), [`Kernel.get_and_update_in/3`](kernel#get_and_update_in/3), and friends. The returned function uses the default value if the key does not exist. 
This can be used to specify defaults and safely traverse missing keys: ``` iex> get_in(%{}, [Access.key(:user, %{name: "meg"}), Access.key(:name)]) "meg" ``` This is also useful with update functions, allowing us to introduce values as we traverse the data structure for updates: ``` iex> put_in(%{}, [Access.key(:user, %{}), Access.key(:name)], "Mary") %{user: %{name: "Mary"}} ``` #### Examples ``` iex> map = %{user: %{name: "john"}} iex> get_in(map, [Access.key(:unknown, %{}), Access.key(:name, "john")]) "john" iex> get_and_update_in(map, [Access.key(:user), Access.key(:name)], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {"john", %{user: %{name: "JOHN"}}} iex> pop_in(map, [Access.key(:user), Access.key(:name)]) {"john", %{user: %{}}} ``` An error is raised if the accessed structure is not a map or a struct: ``` iex> get_in(nil, [Access.key(:foo)]) ** (BadMapError) expected a map, got: nil iex> get_in([], [Access.key(:foo)]) ** (BadMapError) expected a map, got: [] ``` ### key!(key) #### Specs ``` key!(key()) :: access_fun(data :: struct() | map(), get_value :: term()) ``` Returns a function that accesses the given key in a map/struct. The returned function is typically passed as an accessor to [`Kernel.get_in/2`](kernel#get_in/2), [`Kernel.get_and_update_in/3`](kernel#get_and_update_in/3), and friends. Similar to [`key/2`](#key/2), but the returned function raises if the key does not exist. #### Examples ``` iex> map = %{user: %{name: "john"}} iex> get_in(map, [Access.key!(:user), Access.key!(:name)]) "john" iex> get_and_update_in(map, [Access.key!(:user), Access.key!(:name)], fn prev -> ...> {prev, String.upcase(prev)} ...> end) {"john", %{user: %{name: "JOHN"}}} iex> pop_in(map, [Access.key!(:user), Access.key!(:name)]) {"john", %{user: %{}}} iex> get_in(map, [Access.key!(:user), Access.key!(:unknown)]) ** (KeyError) key :unknown not found in: %{name: "john"} ``` An error is raised if the accessed structure is not a map/struct: ``` iex> get_in([], [Access.key!(:foo)]) ** (RuntimeError) Access.key!/1 expected a map/struct, got: [] ``` ### pop(container, key) #### Specs ``` pop(data, key()) :: {value(), data} when data: container() ``` Removes the entry with a given key from a container (a map, keyword list, or struct that implements the [`Access`](#content) behaviour). Returns a tuple containing the value associated with the key and the updated container. `nil` is returned for the value if the key isn't in the container. #### Examples With a map: ``` iex> Access.pop(%{name: "Elixir", creator: "Valim"}, :name) {"Elixir", %{creator: "Valim"}} ``` A keyword list: ``` iex> Access.pop([name: "Elixir", creator: "Valim"], :name) {"Elixir", [creator: "Valim"]} ``` An unknown key: ``` iex> Access.pop(%{name: "Elixir", creator: "Valim"}, :year) {nil, %{creator: "Valim", name: "Elixir"}} ``` Callbacks ========== ### fetch(term, key) #### Specs ``` fetch(term :: t(), key()) :: {:ok, value()} | :error ``` Invoked in order to access the value stored under `key` in the given term `term`. This function should return `{:ok, value}` where `value` is the value under `key` if the key exists in the term, or `:error` if the key does not exist in the term. Many of the functions defined in the [`Access`](#content) module internally call this function.
This function is also used when the square-brackets access syntax (`structure[key]`) is used: the [`fetch/2`](#fetch/2) callback implemented by the module that defines the `structure` struct is invoked; if it returns `{:ok, value}` then `value` is returned, and if it returns `:error` then `nil` is returned. See the [`Map.fetch/2`](map#fetch/2) and [`Keyword.fetch/2`](keyword#fetch/2) implementations for examples of how to implement this callback. ### get\_and\_update(data, key, function) #### Specs ``` get_and_update(data, key(), (value() -> {get_value, value()} | :pop)) :: {get_value, data} when data: container() | any_container(), get_value: var ``` Invoked in order to access the value under `key` and update it at the same time. The implementation of this callback should invoke `fun` with the value under `key` in the passed structure `data`, or with `nil` if `key` is not present in it. `fun` must return either `{get_value, update_value}` or `:pop`. If the passed function returns `{get_value, update_value}`, the return value of this callback should be `{get_value, new_data}`, where: * `get_value` is the retrieved value (which can be operated on before being returned) * `update_value` is the new value to be stored under `key` * `new_data` is `data` after updating the value of `key` with `update_value`. If the passed function returns `:pop`, the return value of this callback must be `{value, new_data}` where `value` is the value under `key` (or `nil` if not present) and `new_data` is `data` without `key`. See the implementations of [`Map.get_and_update/3`](map#get_and_update/3) or [`Keyword.get_and_update/3`](keyword#get_and_update/3) for more examples. ### pop(data, key) #### Specs ``` pop(data, key()) :: {value(), data} when data: container() | any_container() ``` Invoked to "pop" the value under `key` out of the given data structure. When `key` exists in the given structure `data`, the implementation should return a `{value, new_data}` tuple where `value` is the value that was under `key` and `new_data` is `data` without `key`. When `key` is not present in the given structure, a tuple `{value, data}` should be returned, where `value` is implementation-defined. See the implementations for [`Map.pop/3`](map#pop/3) or [`Keyword.pop/3`](keyword#pop/3) for more examples.
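Implementing these three callbacks makes a custom data structure work with `get_in/2`, `put_in/3`, `pop_in/2`, and friends. Here is a minimal sketch that delegates to an underlying map (the `Settings` struct is a hypothetical example, not part of the standard library):

```
defmodule Settings do
  @behaviour Access

  defstruct values: %{}

  # Delegate the three Access callbacks to the underlying map,
  # rewrapping the updated map in the struct where needed.
  @impl Access
  def fetch(%Settings{values: values}, key), do: Map.fetch(values, key)

  @impl Access
  def get_and_update(%Settings{values: values} = settings, key, fun) do
    {get_value, new_values} = Map.get_and_update(values, key, fun)
    {get_value, %{settings | values: new_values}}
  end

  @impl Access
  def pop(%Settings{values: values} = settings, key) do
    {value, new_values} = Map.pop(values, key)
    {value, %{settings | values: new_values}}
  end
end
```

With these callbacks in place, `get_in(%Settings{values: %{theme: :dark}}, [:theme])` returns `:dark`, and `put_in/3` and `pop_in/2` update or remove entries while preserving the struct.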
elixir ExUnit.Callbacks ExUnit.Callbacks ================= Defines ExUnit callbacks. This module defines the [`setup/1`](#setup/1), [`setup/2`](#setup/2), [`setup_all/1`](#setup_all/1), and [`setup_all/2`](#setup_all/2) callbacks, as well as the [`on_exit/2`](#on_exit/2), [`start_supervised/2`](#start_supervised/2) and [`stop_supervised/1`](#stop_supervised/1) functions. The setup callbacks are defined via macros and each one can optionally receive a map with test state and metadata, usually referred to as `context`. The context to be used in the tests can be optionally extended by the setup callbacks by returning a properly structured value (see below). The `setup_all` callbacks are invoked only once per module, before any test is run. All `setup` callbacks are run before each test. No callback is run if the test case has no tests or all tests have been filtered out. `setup` and `setup_all` callbacks can be defined by a block, by passing an atom naming a one-arity function, or by passing a list of such atoms. Both can opt to receive the current context by specifying it as a parameter if defined by a block. Functions used to define a test setup must accept the context as a single argument. A test module can define multiple `setup` and `setup_all` callbacks, and they are invoked in order of appearance. [`start_supervised/2`](#start_supervised/2) is used to start processes under a supervisor. The supervisor is linked to the current test process. The supervisor as well as all child processes are guaranteed to terminate before any [`on_exit/2`](#on_exit/2) callback runs. [`on_exit/2`](#on_exit/2) callbacks are registered on demand, usually to undo an action performed by a setup callback. [`on_exit/2`](#on_exit/2) may also take a reference, allowing the callback to be overridden in the future. A registered [`on_exit/2`](#on_exit/2) callback will always run, while failures in `setup` and `setup_all` will stop all remaining setup callbacks from executing. Finally, `setup_all` callbacks run in a separate process per module, while all `setup` callbacks run in the same process as the test itself. [`on_exit/2`](#on_exit/2) callbacks always run in a separate process, as implied by their name. The test process always exits with reason `:shutdown`, which means any process linked to the test process will also exit, although asynchronously. Therefore, it is preferred to use [`start_supervised/2`](#start_supervised/2) to guarantee synchronous termination. Here is a rundown of the life-cycle of the test process: 1. the test process is spawned 2. it runs [`setup/2`](#setup/2) callbacks 3. it runs the test itself 4. it stops all supervised processes 5. the test process exits with reason `:shutdown` 6. [`on_exit/2`](#on_exit/2) callbacks are executed in a separate process Context -------- If `setup_all` or `setup` return a keyword list, a map, or `{:ok, keywords | map}`, the keyword list or map will be merged into the current context and will be available in all subsequent `setup_all`, `setup`, and the `test` itself. Returning `:ok` leaves the context unchanged (in `setup` and `setup_all` callbacks). Returning anything else from `setup_all` will force all tests to fail, while a bad response from `setup` causes the current test to fail.
Examples --------- ``` defmodule AssertionTest do use ExUnit.Case, async: true # "setup_all" is called once per module before any test runs setup_all do IO.puts("Starting AssertionTest") # Context is not updated here :ok end # "setup" is called before each test setup do IO.puts("This is a setup callback for #{inspect(self())}") on_exit(fn -> IO.puts("This is invoked once the test is done. Process: #{inspect(self())}") end) # Returns extra metadata to be merged into context [hello: "world"] # Similarly, any of the following would work: # {:ok, [hello: "world"]} # %{hello: "world"} # {:ok, %{hello: "world"}} end # Same as above, but receives the context as an argument setup context do IO.puts("Setting up: #{context.test}") :ok end # Setups can also invoke a local or imported function that returns a context setup :invoke_local_or_imported_function test "always pass" do assert true end test "uses metadata from setup", context do assert context[:hello] == "world" assert context[:from_named_setup] == true end defp invoke_local_or_imported_function(context) do [from_named_setup: true] end end ``` Summary ======== Functions ---------- [on\_exit(name\_or\_ref \\ make\_ref(), callback)](#on_exit/2) Defines a callback that runs once the test exits. [setup(block)](#setup/1) Defines a callback to be run before each test in a case. [setup(context, block)](#setup/2) Defines a callback to be run before each test in a case. [setup\_all(block)](#setup_all/1) Defines a callback to be run before all tests in a case. [setup\_all(context, block)](#setup_all/2) Defines a callback to be run before all tests in a case. [start\_supervised(child\_spec\_or\_module, opts \\ [])](#start_supervised/2) Starts a child process under the test supervisor. [start\_supervised!(child\_spec\_or\_module, opts \\ [])](#start_supervised!/2) Same as [`start_supervised/2`](#start_supervised/2) but returns the PID on success and raises if not started properly. [stop\_supervised(id)](#stop_supervised/1) Stops a child process started via [`start_supervised/2`](#start_supervised/2). Functions ========== ### on\_exit(name\_or\_ref \\ make\_ref(), callback) #### Specs ``` on_exit(term(), (() -> term())) :: :ok ``` Defines a callback that runs once the test exits. `callback` is a function that receives no arguments and runs in a separate process from the caller. [`on_exit/2`](#on_exit/2) is usually called from `setup` and `setup_all` callbacks, often to undo the action performed during the setup. However, [`on_exit/2`](#on_exit/2) may also be called dynamically, where a reference can be used to guarantee the callback will be invoked only once. ### setup(block) Defines a callback to be run before each test in a case. Accepts a block or the name of a one-arity function in the form of an atom, or a list of such atoms. Can return values to be merged into the context, to set up the state for tests. For more details, see the "Context" section shown above. #### Examples ``` def clean_up_tmp_directory(context) do # perform setup :ok end setup :clean_up_tmp_directory setup do [conn: Plug.Conn.build_conn()] end ``` ### setup(context, block) Defines a callback to be run before each test in a case. Accepts a block or the name of a one-arity function in the form of an atom, or a list of such atoms. Can return values to be merged into the `context`, to set up the state for tests. For more details, see the "Context" section shown above.
#### Examples ``` setup context do [conn: Plug.Conn.build_conn()] end ``` ### setup\_all(block) Defines a callback to be run before all tests in a case. Accepts a block or the name of a one-arity function in the form of an atom, or a list of such atoms. Can return values to be merged into the `context`, to set up the state for tests. For more details, see the "Context" section shown above. #### Examples ``` def clean_up_tmp_directory(context) do # perform setup :ok end # block setup_all do [conn: Plug.Conn.build_conn()] end # one-arity function name setup_all :clean_up_tmp_directory ``` ### setup\_all(context, block) Defines a callback to be run before all tests in a case. Accepts a block or the name of a one-arity function in the form of an atom, or a list of such atoms. Can return values to be merged into the `context`, to set up the state for tests. For more details, see the "Context" section shown above. #### Examples ``` setup_all context do [conn: Plug.Conn.build_conn()] end ``` ### start\_supervised(child\_spec\_or\_module, opts \\ []) #### Specs ``` start_supervised( Supervisor.child_spec() | module() | {module(), term()}, keyword() ) :: Supervisor.on_start_child() ``` Starts a child process under the test supervisor. It expects a child specification or a module, similar to the ones given to [`Supervisor.start_link/2`](https://hexdocs.pm/elixir/Supervisor.html#start_link/2). For example, if your application starts a supervision tree by running: ``` Supervisor.start_link([MyServer, {OtherSupervisor, ...}], ...) ``` You can start those processes under test in isolation by running: ``` start_supervised(MyServer) start_supervised({OtherSupervisor, :initial_value}) ``` A keyword list can also be given if there is a need to change the child specification for the given child process: ``` start_supervised({MyServer, :initial_value}, restart: :temporary) ``` See the [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html) module for a discussion on child specifications and the available specification keys. The advantage of starting a process under the test supervisor is that it is guaranteed to exit before the next test starts. Therefore, you don't need to remove the process at the end of your tests via [`stop_supervised/1`](#stop_supervised/1). You only need to use [`stop_supervised/1`](#stop_supervised/1) if you want to remove a process from the supervision tree in the middle of a test, as simply shutting down the process would cause it to be restarted according to its `:restart` value. This function returns `{:ok, pid}` in case of success, otherwise it returns `{:error, reason}`. ### start\_supervised!(child\_spec\_or\_module, opts \\ []) #### Specs ``` start_supervised!( Supervisor.child_spec() | module() | {module(), term()}, keyword() ) :: pid() ``` Same as [`start_supervised/2`](#start_supervised/2) but returns the PID on success and raises if not started properly. ### stop\_supervised(id) #### Specs ``` stop_supervised(id :: term()) :: :ok | {:error, :not_found} ``` Stops a child process started via [`start_supervised/2`](#start_supervised/2). This function expects the `id` in the child specification. For example: ``` {:ok, _} = start_supervised(MyServer) :ok = stop_supervised(MyServer) ``` It returns `:ok` if there is a supervised process with such `id`, `{:error, :not_found}` otherwise. 
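To tie these pieces together, here is a minimal sketch of a test module using `start_supervised!/2` in a setup callback (assuming a hypothetical `MyServer` GenServer with a standard child specification):

```
defmodule MyServerTest do
  use ExUnit.Case, async: true

  setup do
    # The supervised process is guaranteed to terminate before the next test.
    pid = start_supervised!(MyServer)
    %{server: pid}
  end

  test "the server starts and is alive", %{server: pid} do
    assert Process.alive?(pid)
  end
end
```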
elixir Library Guidelines Library Guidelines ================== This document outlines general guidelines, anti-patterns, and rules for those writing and publishing Elixir libraries meant to be consumed by other developers. Getting started ---------------- You can create a new Elixir library by running the [`mix new`](https://hexdocs.pm/mix/Mix.Tasks.New.html) command: ``` $ mix new my_library ``` The project name is given in the `snake_case` convention where all letters are lowercase and words are separated with underscores. This is the same convention used by variables, function names and atoms in Elixir. See the [Naming Conventions](naming-conventions) document for more information. Every project has a `mix.exs` file, with instructions on how to build, compile, run tests, and so on. Libraries commonly have a `lib` directory, which includes Elixir source code, and a `test` directory. A `src` directory may also exist for Erlang sources. For more information on running your project, see the official [Mix & OTP guide](https://elixir-lang.org/getting-started/mix-otp/introduction-to-mix.html) or [Mix documentation](https://hexdocs.pm/mix/Mix.html). ### Applications with supervision tree The [`mix new`](https://hexdocs.pm/mix/Mix.Tasks.New.html) command also allows the `--sup` option to scaffold an application with a supervision tree out of the box. We talk about supervision trees later on when discussing one of the common anti-patterns when writing libraries. Publishing ----------- Writing code is only the first of many steps to publish a package. We strongly recommend that developers: * Choose a versioning scheme. Elixir requires versions to be in the format `MAJOR.MINOR.PATCH`, but the meaning of those numbers is up to you. Most projects choose [Semantic Versioning](https://semver.org/). * Choose a [license](https://choosealicense.com/). The most common licenses in the Elixir community are the [MIT License](https://choosealicense.com/licenses/mit/) and the [Apache License 2.0](https://choosealicense.com/licenses/apache-2.0/). The latter is also the one used by Elixir itself. * Run the [code formatter](https://hexdocs.pm/mix/Mix.Tasks.Format.html). The code formatter formats your code according to a consistent style shared by your library and the whole community, making it easier for other developers to understand your code and contribute. * Write tests. Elixir ships with a test framework named [ExUnit](https://hexdocs.pm/ex_unit/ExUnit.html). The project generated by [`mix new`](https://hexdocs.pm/mix/Mix.Tasks.New.html) includes sample tests and doctests. * Write documentation. The Elixir community is proud of treating documentation as a first-class citizen and making documentation easily accessible. Libraries contribute to the status quo by providing complete API documentation with examples for their modules, types and functions. See the [Writing Documentation](writing-documentation) guide for more information. Projects like [ExDoc](https://github.com/elixir-lang/ex_doc) can be used to generate HTML and EPUB documents from the documentation. ExDoc also supports "extra pages", like this one that you are reading. Such pages augment the documentation with tutorials, guides and references. Projects are often made available to other developers [by publishing a Hex package](https://hex.pm/docs/publish). Hex also [supports private packages for organizations](https://hex.pm/pricing).
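As a brief sketch, the package metadata for Hex is typically declared in `mix.exs` alongside the rest of the project configuration (the field values below are illustrative, not prescriptive):

```
def project do
  [
    app: :my_library,
    version: "0.1.0",
    # Illustrative metadata; replace with your own description and links.
    description: "A short description of the library",
    package: [
      licenses: ["MIT"],
      links: %{"GitHub" => "https://github.com/me/my_library"}
    ]
  ]
end
```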
If ExDoc is configured for the Mix project, publishing a package on Hex will also automatically publish the generated documentation to [HexDocs](https://hexdocs.pm). Anti-patterns -------------- In this section we document common anti-patterns to avoid when writing libraries. ### Avoid using exceptions for control-flow You should avoid using exceptions for control-flow. For example, instead of: ``` try do contents = File.read!("some_path_that_may_or_may_not_exist") {:it_worked, contents} rescue File.Error -> :it_failed end ``` you should prefer: ``` case File.read("some_path_that_may_or_may_not_exist") do {:ok, contents} -> {:it_worked, contents} {:error, _} -> :it_failed end ``` As a library author, it is your responsibility to make sure users are not required to use exceptions for control-flow in their applications. You can follow the same convention as Elixir here, using the name without `!` for returning `:ok`/`:error` tuples and appending `!` for a version of the function which raises an exception. It is important to note that a name without `!` does not mean a function will never raise. For example, even [`File.read/1`](file#read/1) can fail in case of bad arguments: ``` iex> File.read(1) ** (FunctionClauseError) no function clause matching in IO.chardata_to_string/1 ``` The usage of `:ok`/`:error` tuples is about the domain that the function works on, in this case, file system access. Bad arguments, logical errors, and invalid options should raise regardless of the function name. If in doubt, prefer to return tuples instead of raising, as users of your library can always match on the results and raise if necessary. ### Avoid working with invalid data Elixir programs should prefer to validate data as close to the end user as possible, so the errors are easy to locate and fix. This practice also saves you from writing defensive code in the internals of the library. For example, imagine you have an API that receives a filename as a binary. At some point you will want to write to this file. You could have a function like this: ``` def my_fun(some_arg, file_to_write_to, options \\ []) do ...some code... AnotherModuleInLib.invoke_something_that_will_eventually_write_to_file(file_to_write_to) ...more code... end ``` The problem with the code above is that, if the user supplies an invalid input, the error will be raised deep inside the library, which makes it confusing for users. Furthermore, when you don't validate the values at the boundary, the internals of your library are never quite sure which kind of values they are working with. A better function definition would be: ``` def my_fun(some_arg, file_to_write_to, options \\ []) when is_binary(file_to_write_to) do ``` Elixir also leverages pattern matching and guards in function clauses to provide clear error messages in case invalid arguments are given. This advice applies not only to libraries but to any Elixir code. Every time you receive multiple options or work with external data, you should validate the data at the boundary and convert it to structured data. For example, if you provide a [`GenServer`](genserver) that can be started with multiple options, you want to validate those options when the server starts and rely only on structured data throughout the process life cycle. Similarly, if a database or a socket gives you a map of strings, after you receive the data, you should validate it and potentially convert it to a struct or a map of atoms.
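As a sketch of this principle, user-supplied options can be validated once at the boundary and converted into structured data for the internals to rely on (the `MyLib.Config` struct, its fields, and its defaults are hypothetical):

```
defmodule MyLib.Config do
  defstruct timeout: 5_000, retries: 3

  # Validate options once, at the boundary, and return structured data.
  def new!(opts) when is_list(opts) do
    timeout = Keyword.get(opts, :timeout, 5_000)

    unless is_integer(timeout) and timeout > 0 do
      raise ArgumentError, "expected :timeout to be a positive integer, got: #{inspect(timeout)}"
    end

    %MyLib.Config{timeout: timeout, retries: Keyword.get(opts, :retries, 3)}
  end
end
```

After this point, every internal function can pattern match on `%MyLib.Config{}` instead of re-checking raw options.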
### Avoid application configuration You should avoid using the application environment (see [`Application.get_env/2`](application#get_env/2)) as the configuration mechanism for libraries. The application environment is **global**, which means it becomes impossible for two dependencies to use your library in two different ways. Let's see a simple example. Imagine that you implement a library that breaks a string in two parts based on the first occurrence of the dash `-` character: ``` defmodule DashSplitter do def split(string) when is_binary(string) do String.split(string, "-", parts: 2) end end ``` Now imagine someone wants to split the string in three parts. You decide to make the number of parts configurable via the application environment: ``` def split(string) when is_binary(string) do parts = Application.get_env(:dash_splitter, :parts, 2) String.split(string, "-", parts: parts) end ``` Now users can configure your library in their `config/config.exs` file as follows: ``` config :dash_splitter, :parts, 3 ``` Once your library is configured, it will change the behaviour of all users of your library. If another library was expecting it to split the string in 2 parts, since the configuration is global, it will now split it in 3 parts. The solution is to provide configuration as close as possible to where it is used and not via the application environment. In the case of a function, you could expect keyword lists as a new argument: ``` def split(string, opts \\ []) when is_binary(string) and is_list(opts) do parts = Keyword.get(opts, :parts, 2) String.split(string, "-", parts: parts) end ``` In case you need to configure a process, the options should be passed when starting that process. The application environment should be reserved only for configurations that are truly global, for example, to control your application boot process and its supervision tree. For all remaining scenarios, libraries should not force their users to use the application environment for configuration. If the user of a library believes that a certain parameter should be configured globally, then they can wrap the library functionality with their own application environment configuration. ### Avoid `use` when an `import` is enough A library should not provide `use MyLib` functionality if all `use MyLib` does is to `import`/`alias` the module itself. For example, this is an anti-pattern: ``` defmodule MyLib do defmacro __using__(_) do quote do import MyLib end end def some_fun(arg1, arg2) do ... end end ``` The reason why defining the `__using__` macro above should be avoided is because when a developer writes: ``` defmodule MyApp do use MyLib end ``` it allows `use MyLib` to run *any* code in the `MyApp` module. For someone reading the code, it is impossible to assess the impact that `use MyLib` has on a module without looking at the implementation of `__using__`. The following code is clearer: ``` defmodule MyApp do import MyLib end ``` The code above says we are only bringing in the functions from `MyLib` so we can invoke `some_fun(arg1, arg2)` directly without the `MyLib.` prefix. Even more importantly, `import MyLib` says that we have the option not to `import MyLib` at all, as we can simply invoke the function as `MyLib.some_fun(arg1, arg2)`. If the module you want to invoke a function on has a long name, such as `SomeLibrary.Namespace.MyLib`, and you find it verbose, you can leverage the [`alias/2`](kernel.specialforms#alias/2) special form and still refer to the module as `MyLib`.
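For example (a minimal sketch reusing the hypothetical module names above):

```
defmodule MyApp do
  alias SomeLibrary.Namespace.MyLib

  # MyLib now refers to SomeLibrary.Namespace.MyLib within this module only.
  def run(arg1, arg2), do: MyLib.some_fun(arg1, arg2)
end
```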
While there are situations where `use SomeModule` is necessary, `use` should be skipped when all it does is `import` or `alias` other modules. In a nutshell, `alias` should be preferred, as it is simpler and clearer than `import`, while `import` is simpler and clearer than `use`. ### Avoid macros Although the previous section could be summarized as "avoid macros", both topics are important enough to deserve their own sections. To quote [the official guide on Macros](https://elixir-lang.org/getting-started/meta/macros.html): > > Even though Elixir attempts its best to provide a safe environment for macros, the major responsibility of writing clean code with macros falls on developers. Macros are harder to write than ordinary Elixir functions and it's considered to be bad style to use them when they're not necessary. So write macros responsibly. > > Elixir already provides mechanisms to write your everyday code in a simple and readable fashion by using its data structures and functions. Macros should only be used as a last resort. Remember that **explicit is better than implicit**. **Clear code is better than concise code**. > > When you absolutely have to use a macro, make sure that a macro is not the only way the user can interface with your library and keep the amount of code generated by a macro to a minimum. For example, the [`Logger`](https://hexdocs.pm/logger/Logger.html) module provides [`Logger.debug/2`](https://hexdocs.pm/logger/Logger.html#debug/2), [`Logger.info/2`](https://hexdocs.pm/logger/Logger.html#info/2) and friends as macros that are capable of extracting environment information, but a low-level mechanism for logging is still available with [`Logger.bare_log/3`](https://hexdocs.pm/logger/Logger.html#bare_log/3). ### Avoid using processes for code organization A developer must never use a process for code organization purposes. A process must be used to model runtime properties such as: * Mutable state and access to shared resources (such as ETS, files, etc.) * Concurrency and distribution * Initialization, shutdown and restart logic (as seen in supervisors) * System messages such as timer messages and monitoring events In Elixir, code organization is done by modules and functions; processes are not necessary. For example, imagine you are implementing a calculator and you decide to put all the calculator operations behind a [`GenServer`](genserver): ``` def add(a, b) do GenServer.call(__MODULE__, {:add, a, b}) end def handle_call({:add, a, b}, _from, state) do {:reply, a + b, state} end def handle_call({:subtract, a, b}, _from, state) do {:reply, a - b, state} end ``` This is an anti-pattern not only because it convolutes the calculator logic but also because you put the calculator logic behind a single process that will potentially become a bottleneck in your system, especially as the number of calls grows. Instead, just define the functions directly: ``` def add(a, b) do a + b end def subtract(a, b) do a - b end ``` Use processes only to model runtime properties, never for code organization. And even when you think something could be done in parallel with processes, often it is best to let the callers of your library decide how to parallelize, rather than impose a certain execution flow on users of your code. ### Avoid spawning unsupervised processes You should avoid spawning processes outside of a supervision tree, especially long-running ones. Instead, processes must be started inside supervision trees.
This guarantees developers have full control over the initialization, restarts, and shutdown of the system. If your application does not have a supervision tree, one can be added by changing `def application` inside `mix.exs` to include a `:mod` key with the application callback name: ``` def application do [ extra_applications: [:logger], mod: {MyApp.Application, []} ] end ``` and then defining a `lib/my_app/application.ex` file with the following template: ``` defmodule MyApp.Application do # See https://hexdocs.pm/elixir/Application.html # for more information on OTP Applications @moduledoc false use Application def start(_type, _args) do children = [ # Starts a worker by calling: MyApp.Worker.start_link(arg) # {MyApp.Worker, arg} ] # See https://hexdocs.pm/elixir/Supervisor.html # for other strategies and supported options opts = [strategy: :one_for_one, name: MyApp.Supervisor] Supervisor.start_link(children, opts) end end ``` This is the same template generated by `mix new --sup`. Each process started with the application must be listed as a child under the [`Supervisor`](supervisor) above. We call those "static processes" because they are known upfront. For handling dynamic processes, such as the ones started during requests and other user inputs, look at the [`DynamicSupervisor`](dynamicsupervisor) module. One of the few times where it is acceptable to start a process outside of a supervision tree is with [`Task.async/1`](task#async/1) and [`Task.await/2`](task#await/2). In contrast to [`Task.start_link/1`](task#start_link/1), the `async/await` mechanism gives you full control over the spawned process life cycle - which is also why you must always call [`Task.await/2`](task#await/2) after starting a task with [`Task.async/1`](task#async/1). Even so, if your application spawns multiple async processes, you should consider using [`Task.Supervisor`](task.supervisor) for better visibility when instrumenting and monitoring the system.
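As a minimal sketch of that last suggestion (the `MyApp.TaskSupervisor` name is a hypothetical choice), a `Task.Supervisor` can be placed in the supervision tree and then used to run supervised async work:

```
# In MyApp.Application.start/2, add the task supervisor as a child:
children = [
  {Task.Supervisor, name: MyApp.TaskSupervisor}
]

Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)

# Elsewhere, spawn supervised tasks and await their results:
task = Task.Supervisor.async(MyApp.TaskSupervisor, fn -> 1 + 1 end)
Task.await(task)
#=> 2
```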
elixir mix profile.fprof mix profile.fprof ================== Profiles the given file or expression using Erlang's `fprof` tool. `fprof` can be useful when you want to discover the bottlenecks of sequential code. Before running the code, it invokes the `app.start` task, which compiles and loads your project. Then the target expression is profiled, together with all processes which are spawned by it. Other processes (e.g. those residing in the OTP application supervision tree) are not profiled. To profile the code, you can use syntax similar to the [`mix run`](mix.tasks.run) task: ``` mix profile.fprof -e Hello.world mix profile.fprof my_script.exs arg1 arg2 arg3 ``` This task is automatically reenabled, so you can profile multiple times in the same Mix invocation. Command line options --------------------- * `--callers` - prints detailed information about immediate callers and called functions * `--details` - includes profile data for each profiled process * `--sort key` - sorts the output by given key: `acc` (default) or `own` * `--eval`, `-e` - evaluates the given code * `--require`, `-r` - requires pattern before running the command * `--parallel`, `-p` - makes all requires parallel * `--no-compile` - does not compile even if files require compilation * `--no-deps-check` - does not check dependencies * `--no-archives-check` - does not check archives * `--no-start` - does not start applications after compilation * `--no-elixir-version-check` - does not check the Elixir version from mix.exs * `--no-warmup` - does not execute code once before profiling Profile output --------------- Example output: ``` # CNT ACC (ms) OWN (ms) Total 200279 1972.188 1964.579 :fprof.apply_start_stop/4 0 1972.188 0.012 anonymous fn/0 in :elixir_compiler_2 1 1972.167 0.001 Test.run/0 1 1972.166 0.007 Test.do_something/1 3 1972.131 0.040 Test.bottleneck/0 1 1599.490 0.007 ... ``` The default output contains data gathered from all profiled processes. All times are wall clock milliseconds. The columns have the following meaning: * CNT - total number of invocations of the given function * ACC - total time spent in the function * OWN - time spent in the function, excluding the time of called functions The first row (Total) is the sum of all functions executed in all profiled processes. For the given output, we had a total of 200279 function calls and spent about 2 seconds running the code. More detailed information is returned if you provide the `--callers` and `--details` options. When the `--callers` option is specified, you'll see expanded function entries: ``` Mod.caller1/0 3 200.000 0.017 Mod.caller2/0 2 100.000 0.017 Mod.some_function/0 5 300.000 0.017 <-- Mod.called1/0 4 250.000 0.010 Mod.called2/0 1 50.000 0.030 ``` Here, the arrow (`<--`) indicates the **marked** function - the function described by this paragraph. You also see its immediate callers (above) and called functions (below). All the values of caller functions describe the marked function. For example, the first row means that `Mod.caller1/0` invoked `Mod.some_function/0` 3 times. 200ms of the total time spent in `Mod.some_function/0` was spent processing calls from this particular caller. In contrast, the values for the called functions describe those functions, but in the context of the marked function. For example, the last row means that `Mod.called2/0` was called once by `Mod.some_function/0`, and in that case the total time spent in the function was 50ms.
For a detailed explanation, it's worth reading the analysis in the [Erlang/OTP documentation for fprof](http://www.erlang.org/doc/man/fprof.html#analysis). Caveats -------- You should be aware that the code being profiled is running in an anonymous function which is invoked by the [`:fprof` module](http://www.erlang.org/doc/man/fprof.html). Thus, you'll see some additional entries in your profile output, such as `:fprof` calls, an anonymous function with high ACC time, or an `:undefined` function which represents the outer caller (non-profiled code which started the profiler). Also, keep in mind that profiling might significantly increase the running time of the profiled processes. This might skew your results if, for example, those processes perform some I/O operations, since the running time of those operations will remain unchanged, while CPU-bound operations of the profiled processes might take significantly longer. Thus, when profiling some intensive program, try to reduce such dependencies, or be aware of the resulting bias. Finally, it's advised to profile your program in the `prod` environment, since this should provide more realistic insights into bottlenecks. Summary ======== Functions ---------- [profile(fun, opts \\ [])](#profile/2) Allows you to programmatically run the `fprof` profiler on the expression in `fun`. Functions ========== ### profile(fun, opts \\ []) Allows you to programmatically run the `fprof` profiler on the expression in `fun`. #### Options * `:callers` - prints detailed information about immediate callers and called functions * `:details` - includes profile data for each profiled process * `:sort` - sorts the output by given key: `:acc` (default) or `:own` elixir DynamicSupervisor behaviour DynamicSupervisor behaviour ============================ A supervisor that starts children dynamically. The [`Supervisor`](supervisor) module was designed to handle mostly static children that are started in the given order when the supervisor starts. A [`DynamicSupervisor`](#content) starts with no children. Instead, children are started on demand via [`start_child/2`](#start_child/2). When a dynamic supervisor terminates, all children are shut down at the same time, with no guarantee of ordering. Examples --------- A dynamic supervisor is started with no children, often under a supervisor with the supervision strategy (the only strategy currently supported is `:one_for_one`) and a name: ``` children = [ {DynamicSupervisor, strategy: :one_for_one, name: MyApp.DynamicSupervisor} ] Supervisor.start_link(children, strategy: :one_for_one) ``` The options given in the child specification are documented in [`start_link/1`](#start_link/1). Once the dynamic supervisor is running, we can start children with [`start_child/2`](#start_child/2), which receives a child specification: ``` {:ok, agent1} = DynamicSupervisor.start_child(MyApp.DynamicSupervisor, {Agent, fn -> %{} end}) Agent.update(agent1, &Map.put(&1, :key, "value")) Agent.get(agent1, & &1) #=> %{key: "value"} {:ok, agent2} = DynamicSupervisor.start_child(MyApp.DynamicSupervisor, {Agent, fn -> %{} end}) Agent.get(agent2, & &1) #=> %{} DynamicSupervisor.count_children(MyApp.DynamicSupervisor) #=> %{active: 2, specs: 2, supervisors: 0, workers: 2} ``` Module-based supervisors ------------------------- Similar to [`Supervisor`](supervisor), dynamic supervisors also support module-based supervisors.
``` defmodule MyApp.DynamicSupervisor do # Automatically defines child_spec/1 use DynamicSupervisor def start_link(init_arg) do DynamicSupervisor.start_link(__MODULE__, init_arg, name: __MODULE__) end @impl true def init(_init_arg) do DynamicSupervisor.init(strategy: :one_for_one) end end ``` See the [`Supervisor`](supervisor) docs for a discussion of when you may want to use module-based supervisors. A `@doc` annotation immediately preceding `use DynamicSupervisor` will be attached to the generated [`child_spec/1`](#child_spec/1) function. Name registration ------------------ A supervisor is bound to the same name registration rules as a [`GenServer`](genserver). Read more about these rules in the documentation for [`GenServer`](genserver). Migrating from Supervisor's :simple\_one\_for\_one --------------------------------------------------- In case you were using the deprecated `:simple_one_for_one` strategy from the [`Supervisor`](supervisor) module, you can migrate to the [`DynamicSupervisor`](#content) in a few steps. Imagine the given "old" code: ``` defmodule MySupervisor do use Supervisor def start_link(init_arg) do Supervisor.start_link(__MODULE__, init_arg, name: __MODULE__) end def start_child(foo, bar, baz) do # This will start a child by calling MyWorker.start_link(init_arg, foo, bar, baz) Supervisor.start_child(__MODULE__, [foo, bar, baz]) end @impl true def init(init_arg) do children = [ # Or the deprecated: worker(MyWorker, [init_arg]) %{id: MyWorker, start: {MyWorker, :start_link, [init_arg]}} ] Supervisor.init(children, strategy: :simple_one_for_one) end end ``` It can be upgraded to the DynamicSupervisor like this: ``` defmodule MySupervisor do use DynamicSupervisor def start_link(init_arg) do DynamicSupervisor.start_link(__MODULE__, init_arg, name: __MODULE__) end def start_child(foo, bar, baz) do # If MyWorker is not using the new child specs, we need to pass a map: # spec = %{id: MyWorker, start: {MyWorker, :start_link, [foo, bar, baz]}} spec = {MyWorker, foo: foo, bar: bar, baz: baz} DynamicSupervisor.start_child(__MODULE__, spec) end @impl true def init(init_arg) do DynamicSupervisor.init( strategy: :one_for_one, extra_arguments: [init_arg] ) end end ``` The difference is that the [`DynamicSupervisor`](#content) expects the child specification at the moment [`start_child/2`](#start_child/2) is called, and no longer on the init callback. If there are any initial arguments given on initialization, such as `[initial_arg]`, they can be given in the `:extra_arguments` flag on [`DynamicSupervisor.init/1`](dynamicsupervisor#init/1). Summary ======== Types ------ [init\_option()](#t:init_option/0) Options given to [`start_link/2`](#start_link/2) and [`init/1`](#init/1) [on\_start\_child()](#t:on_start_child/0) Return values of `start_child` functions [option()](#t:option/0) Option values used by the `start*` functions [options()](#t:options/0) Options used by the `start*` functions [strategy()](#t:strategy/0) Supported strategies [sup\_flags()](#t:sup_flags/0) The supervisor flags returned on init Functions ---------- [child\_spec(opts)](#child_spec/1) Returns a specification to start a dynamic supervisor under a supervisor. [count\_children(supervisor)](#count_children/1) Returns a map containing count values for the supervisor. [init(options)](#init/1) Receives a set of `options` that initializes a dynamic supervisor. [start\_child(supervisor, child\_spec)](#start_child/2) Dynamically adds a child specification to `supervisor` and starts that child.
[start\_link(options)](#start_link/1) Starts a supervisor with the given options. [start\_link(mod, init\_arg, opts \\ [])](#start_link/3) Starts a module-based supervisor process with the given `module` and `arg`. [stop(supervisor, reason \\ :normal, timeout \\ :infinity)](#stop/3) Synchronously stops the given supervisor with the given `reason`. [terminate\_child(supervisor, pid)](#terminate_child/2) Terminates the given child identified by `pid`. [which\_children(supervisor)](#which_children/1) Returns a list with information about all children. Callbacks ---------- [init(init\_arg)](#c:init/1) Callback invoked to start the supervisor and during hot code upgrades. Types ====== ### init\_option() #### Specs ``` init_option() :: {:strategy, strategy()} | {:max_restarts, non_neg_integer()} | {:max_seconds, pos_integer()} | {:max_children, non_neg_integer() | :infinity} | {:extra_arguments, [term()]} ``` Options given to [`start_link/2`](#start_link/2) and [`init/1`](#init/1) ### on\_start\_child() #### Specs ``` on_start_child() :: {:ok, pid()} | {:ok, pid(), info :: term()} | :ignore | {:error, {:already_started, pid()} | :max_children | term()} ``` Return values of `start_child` functions ### option() #### Specs ``` option() :: {:name, Supervisor.name()} | init_option() ``` Option values used by the `start*` functions ### options() #### Specs ``` options() :: [option(), ...] ``` Options used by the `start*` functions ### strategy() #### Specs ``` strategy() :: :one_for_one ``` Supported strategies ### sup\_flags() #### Specs ``` sup_flags() :: %{ strategy: strategy(), intensity: non_neg_integer(), period: pos_integer(), max_children: non_neg_integer() | :infinity, extra_arguments: [term()] } ``` The supervisor flags returned on init Functions ========== ### child\_spec(opts) Returns a specification to start a dynamic supervisor under a supervisor. See [`Supervisor`](supervisor). ### count\_children(supervisor) #### Specs ``` count_children(Supervisor.supervisor()) :: %{ specs: non_neg_integer(), active: non_neg_integer(), supervisors: non_neg_integer(), workers: non_neg_integer() } ``` Returns a map containing count values for the supervisor. The map contains the following keys: * `:specs` - the number of child processes * `:active` - the count of all actively running child processes managed by this supervisor * `:supervisors` - the count of all supervisors, whether or not those child processes are still alive * `:workers` - the count of all workers, whether or not those child processes are still alive ### init(options) #### Specs ``` init([init_option()]) :: {:ok, sup_flags()} ``` Receives a set of `options` that initializes a dynamic supervisor. This is typically invoked at the end of the [`init/1`](#c:init/1) callback of module-based supervisors. See the "Module-based supervisors" section in the module documentation for more information. The `options` received by this function are also supported by [`start_link/2`](#start_link/2). This function returns a tuple containing the supervisor options. #### Examples ``` def init(_arg) do DynamicSupervisor.init(max_children: 1000, strategy: :one_for_one) end ``` #### Options * `:strategy` - the restart strategy option. The only supported value is `:one_for_one` which means that no other child is terminated if a child process terminates. You can learn more about strategies in the [`Supervisor`](supervisor) module docs. * `:max_restarts` - the maximum number of restarts allowed in a time frame. Defaults to `3`.
* `:max_seconds` - the time frame in which `:max_restarts` applies. Defaults to `5`. * `:max_children` - the maximum number of children to be running under this supervisor at the same time. When `:max_children` is exceeded, [`start_child/2`](#start_child/2) returns `{:error, :max_children}`. Defaults to `:infinity`. * `:extra_arguments` - arguments that are prepended to the arguments specified in the child spec given to [`start_child/2`](#start_child/2). Defaults to an empty list. ### start\_child(supervisor, child\_spec) #### Specs ``` start_child( Supervisor.supervisor(), Supervisor.child_spec() | {module(), term()} | module() ) :: on_start_child() ``` Dynamically adds a child specification to `supervisor` and starts that child. `child_spec` should be a valid child specification as detailed in the "child\_spec/1" section of the documentation for [`Supervisor`](supervisor). The child process will be started as defined in the child specification. If the child process start function returns `{:ok, child}` or `{:ok, child, info}`, then the child specification and PID are added to the supervisor and this function returns the same value. If the child process start function returns `:ignore`, then no child is added to the supervision tree and this function returns `:ignore` too. If the child process start function returns an error tuple or an erroneous value, or if it fails, the child specification is discarded and this function returns `{:error, error}`, where `error` is the error or erroneous value returned from the child process start function, or the failure reason if it fails. If the supervisor already has N children and N exceeds the `:max_children` limit set on supervisor initialization (see [`init/1`](#init/1)), then this function returns `{:error, :max_children}`. ### start\_link(options) #### Specs ``` start_link(options()) :: Supervisor.on_start() ``` Starts a supervisor with the given options. The `:strategy` is a required option and the currently supported value is `:one_for_one`. The remaining options can be found in the [`init/1`](#init/1) docs. The `:name` option can also be used to register a supervisor name. The supported values are described under the "Name registration" section in the [`GenServer`](genserver) module docs. If the supervisor is successfully spawned, this function returns `{:ok, pid}`, where `pid` is the PID of the supervisor. If the supervisor is given a name and a process with the specified name already exists, the function returns `{:error, {:already_started, pid}}`, where `pid` is the PID of that process. Note that a supervisor started with this function is linked to the parent process and exits not only on crashes but also if the parent process exits with `:normal` reason. ### start\_link(mod, init\_arg, opts \\ []) #### Specs ``` start_link(module(), term(), GenServer.options()) :: Supervisor.on_start() ``` Starts a module-based supervisor process with the given `module` and `arg`. To start the supervisor, the [`init/1`](#c:init/1) callback will be invoked in the given `module`, with `arg` as its argument. The [`init/1`](#c:init/1) callback must return a supervisor specification which can be created with the help of the [`init/1`](#init/1) function. If the [`init/1`](#c:init/1) callback returns `:ignore`, this function returns `:ignore` as well and the supervisor terminates with reason `:normal`.
If it fails or returns an incorrect value, this function returns `{:error, term}` where `term` is a term with information about the error, and the supervisor terminates with reason `term`. The `:name` option can also be given in order to register a supervisor name; the supported values are described in the "Name registration" section in the [`GenServer`](genserver) module docs. ### stop(supervisor, reason \\ :normal, timeout \\ :infinity) #### Specs ``` stop(Supervisor.supervisor(), reason :: term(), timeout()) :: :ok ``` Synchronously stops the given supervisor with the given `reason`. It returns `:ok` if the supervisor terminates with the given reason. If it terminates with another reason, the call exits. This function keeps OTP semantics regarding error reporting. If the reason is anything other than `:normal`, `:shutdown` or `{:shutdown, _}`, an error report is logged. ### terminate\_child(supervisor, pid) #### Specs ``` terminate_child(Supervisor.supervisor(), pid()) :: :ok | {:error, :not_found} ``` Terminates the given child identified by `pid`. If successful, this function returns `:ok`. If there is no process with the given PID, this function returns `{:error, :not_found}`. ### which\_children(supervisor) #### Specs ``` which_children(Supervisor.supervisor()) :: [ {:undefined, pid() | :restarting, :worker | :supervisor, :supervisor.modules()} ] ``` Returns a list with information about all children. Note that calling this function when supervising a large number of children under low memory conditions can cause an out of memory exception. This function returns a list of tuples containing: * `id` - it is always `:undefined` for dynamic supervisors * `child` - the PID of the corresponding child process or the atom `:restarting` if the process is about to be restarted * `type` - `:worker` or `:supervisor` as defined in the child specification * `modules` - as defined in the child specification Callbacks ========== ### init(init\_arg) #### Specs ``` init(init_arg :: term()) :: {:ok, sup_flags()} | :ignore ``` Callback invoked to start the supervisor and during hot code upgrades. Developers typically invoke [`DynamicSupervisor.init/1`](dynamicsupervisor#init/1) at the end of their init callback to return the proper supervision flags.
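As a short end-to-end sketch (assuming the `MyApp.DynamicSupervisor` from the examples above is running), a dynamically started child can later be stopped through the supervisor; per [`terminate_child/2`](#terminate_child/2), a second call with the same PID returns `{:error, :not_found}`:

```
{:ok, pid} =
  DynamicSupervisor.start_child(MyApp.DynamicSupervisor, {Agent, fn -> %{} end})

# Terminate the child by its PID...
:ok = DynamicSupervisor.terminate_child(MyApp.DynamicSupervisor, pid)

# ...after which the PID is no longer known to the supervisor.
DynamicSupervisor.terminate_child(MyApp.DynamicSupervisor, pid)
#=> {:error, :not_found}
```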
elixir Behaviour Behaviour ========== This module is deprecated. Use @callback and @macrocallback attributes instead. Mechanism for handling behaviours. This module is deprecated. Instead of [`defcallback/1`](#defcallback/1) and [`defmacrocallback/1`](#defmacrocallback/1), the `@callback` and `@macrocallback` module attributes can be used (respectively). See the documentation for [`Module`](module) for more information on these attributes. Instead of `MyModule.__behaviour__(:callbacks)`, `MyModule.behaviour_info(:callbacks)` can be used. Summary ======== Functions ---------- [defcallback(spec)](#defcallback/1) Defines a function callback according to the given type specification. [defmacrocallback(spec)](#defmacrocallback/1) Defines a macro callback according to the given type specification. Functions ========== ### defcallback(spec) Defines a function callback according to the given type specification. ### defmacrocallback(spec) Defines a macro callback according to the given type specification. elixir Float Float ====== Functions for working with floating-point numbers. Kernel functions ----------------- There are functions related to floating-point numbers on the [`Kernel`](kernel) module too. Here is a list of them: * [`Kernel.round/1`](kernel#round/1): rounds a number to the nearest integer. * [`Kernel.trunc/1`](kernel#trunc/1): returns the integer part of a number. Known issues ------------- There are some very well-known problems with floating-point numbers and arithmetic, due to the fact that most decimal fractions cannot be represented exactly as a binary floating point, and most operations are not exact but operate on approximations. Those issues are not specific to Elixir; they are a property of floating-point representation itself. For example, the numbers 0.1 and 0.01 are two of them, which means the result of squaring 0.1 is neither 0.01 nor the closest representable number to 0.01. Here is what happens in this case: * The closest representable number to 0.1 is 0.1000000014 * The closest representable number to 0.01 is 0.0099999997 * Doing 0.1 \* 0.1 should return 0.01, but because 0.1 is actually 0.1000000014, the result is 0.010000000000000002, and because this is not the closest representable number to 0.01, you'll get the wrong result for this operation (a short `iex` demonstration of this appears after the summary below) There are also other known problems like flooring or rounding numbers. See [`round/2`](#round/2) and [`floor/2`](#floor/2) for more details about them. To learn more about floating-point arithmetic visit: * [0.30000000000000004.com](http://0.30000000000000004.com/) * [What Every Programmer Should Know About Floating-Point Arithmetic](https://floating-point-gui.de/) Summary ======== Types ------ [precision\_range()](#t:precision_range/0) Functions ---------- [ceil(number, precision \\ 0)](#ceil/2) Rounds a float to the smallest integer greater than or equal to `num`. [floor(number, precision \\ 0)](#floor/2) Rounds a float to the largest number less than or equal to `num`. [parse(binary)](#parse/1) Parses a binary into a float. [ratio(float)](#ratio/1) Returns a pair of integers whose ratio is exactly equal to the original float and with a positive denominator. [round(float, precision \\ 0)](#round/2) Rounds a floating-point value to an arbitrary number of fractional digits (between 0 and 15). [to\_charlist(float)](#to_charlist/1) Returns a charlist which corresponds to the text representation of the given float. [to\_string(float)](#to_string/1) Returns a binary which corresponds to the text representation of the given float.
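As promised in the "Known issues" section above, the squaring example can be checked directly in `iex` (results shown for 64-bit IEEE 754 floats):

```
iex> 0.1 * 0.1
0.010000000000000002
iex> 0.1 * 0.1 == 0.01
false
```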
Types ====== ### precision\_range() #### Specs ``` precision_range() :: 0..15 ``` Functions ========== ### ceil(number, precision \\ 0) #### Specs ``` ceil(float(), precision_range()) :: float() ``` Rounds a float to the smallest integer greater than or equal to `num`. [`ceil/2`](#ceil/2) also accepts a precision to round a floating-point value down to an arbitrary number of fractional digits (between 0 and 15). The operation is performed on the binary floating point, without a conversion to decimal. The behaviour of [`ceil/2`](#ceil/2) for floats can be surprising. For example: ``` iex> Float.ceil(-12.52, 2) -12.51 ``` One may have expected it to ceil to -12.52. This is not a bug. Most decimal fractions cannot be represented as a binary floating point and therefore the number above is internally represented as -12.51999999, which explains the behaviour above. This function always returns a float. [`Kernel.trunc/1`](kernel#trunc/1) may be used instead to truncate the result to an integer afterwards. #### Examples ``` iex> Float.ceil(34.25) 35.0 iex> Float.ceil(-56.5) -56.0 iex> Float.ceil(34.251, 2) 34.26 ``` ### floor(number, precision \\ 0) #### Specs ``` floor(float(), precision_range()) :: float() ``` Rounds a float to the largest number less than or equal to `num`. [`floor/2`](#floor/2) also accepts a precision to round a floating-point value down to an arbitrary number of fractional digits (between 0 and 15). The operation is performed on the binary floating point, without a conversion to decimal. This function always returns a float. [`Kernel.trunc/1`](kernel#trunc/1) may be used instead to truncate the result to an integer afterwards. #### Known issues The behaviour of [`floor/2`](#floor/2) for floats can be surprising. For example: ``` iex> Float.floor(12.52, 2) 12.51 ``` One may have expected it to floor to 12.52. This is not a bug. Most decimal fractions cannot be represented as a binary floating point and therefore the number above is internally represented as 12.51999999, which explains the behaviour above. #### Examples ``` iex> Float.floor(34.25) 34.0 iex> Float.floor(-56.5) -57.0 iex> Float.floor(34.259, 2) 34.25 ``` ### parse(binary) #### Specs ``` parse(binary()) :: {float(), binary()} | :error ``` Parses a binary into a float. If successful, returns a tuple in the form of `{float, remainder_of_binary}`; when the binary cannot be coerced into a valid float, the atom `:error` is returned. If the size of the float exceeds the maximum size of `1.7976931348623157e+308`, the [`ArgumentError`](argumenterror) exception is raised. If you want to convert a string-formatted float directly to a float, [`String.to_float/1`](string#to_float/1) can be used instead. #### Examples ``` iex> Float.parse("34") {34.0, ""} iex> Float.parse("34.25") {34.25, ""} iex> Float.parse("56.5xyz") {56.5, "xyz"} iex> Float.parse("pi") :error ``` ### ratio(float) #### Specs ``` ratio(float()) :: {integer(), pos_integer()} ``` Returns a pair of integers whose ratio is exactly equal to the original float and with a positive denominator.
#### Examples ``` iex> Float.ratio(0.0) {0, 1} iex> Float.ratio(3.14) {7070651414971679, 2251799813685248} iex> Float.ratio(-3.14) {-7070651414971679, 2251799813685248} iex> Float.ratio(1.5) {3, 2} iex> Float.ratio(-1.5) {-3, 2} iex> Float.ratio(16.0) {16, 1} iex> Float.ratio(-16.0) {-16, 1} ``` ### round(float, precision \\ 0) #### Specs ``` round(float(), precision_range()) :: float() ``` Rounds a floating-point value to an arbitrary number of fractional digits (between 0 and 15). Ties always round half up. The operation is performed on the binary floating point, without a conversion to decimal. This function only accepts floats and always returns a float. Use [`Kernel.round/1`](kernel#round/1) if you want a function that accepts both floats and integers and always returns an integer. #### Known issues The behaviour of [`round/2`](#round/2) for floats can be surprising. For example: ``` iex> Float.round(5.5675, 3) 5.567 ``` One may have expected it to round half up to 5.568. This is not a bug. Most decimal fractions cannot be represented as a binary floating point and therefore the number above is internally represented as 5.567499999, which explains the behaviour above. If you want exact rounding for decimals, you must use a decimal library. The behaviour above is also in accordance with reference implementations, such as "Correctly Rounded Binary-Decimal and Decimal-Binary Conversions" by David M. Gay. #### Examples ``` iex> Float.round(12.5) 13.0 iex> Float.round(5.5674, 3) 5.567 iex> Float.round(5.5675, 3) 5.567 iex> Float.round(-5.5674, 3) -5.567 iex> Float.round(-5.5675) -6.0 iex> Float.round(12.341444444444441, 15) 12.341444444444441 ``` ### to\_charlist(float) #### Specs ``` to_charlist(float()) :: charlist() ``` Returns a charlist which corresponds to the text representation of the given float. It uses the shortest representation according to the algorithm described in "Printing Floating-Point Numbers Quickly and Accurately" in Proceedings of the SIGPLAN '96 Conference on Programming Language Design and Implementation. #### Examples ``` iex> Float.to_charlist(7.0) '7.0' ``` ### to\_string(float) #### Specs ``` to_string(float()) :: String.t() ``` Returns a binary which corresponds to the text representation of the given float. It uses the shortest representation according to the algorithm described in "Printing Floating-Point Numbers Quickly and Accurately" in Proceedings of the SIGPLAN '96 Conference on Programming Language Design and Implementation. #### Examples ``` iex> Float.to_string(7.0) "7.0" ``` elixir mix compile.yecc mix compile.yecc ================= Compiles Yecc source files. When this task runs, it will check the modification time of every file, and if it has changed, the file will be compiled. Files will be compiled in the same source directory with a .erl extension. You can force compilation regardless of modification times by passing the `--force` option. Command line options --------------------- * `--force` - forces compilation regardless of modification times * `--all-warnings` - prints warnings even from files that do not need to be recompiled Configuration -------------- * `:erlc_paths` - directories to find source files. Defaults to `["src"]`. * `:yecc_options` - compilation options that apply to Yecc's compiler. For a complete list of options, see [`:yecc.file/1`](http://www.erlang.org/doc/man/yecc.html#file-1).
Note that the `:report`, `:return_errors`, and `:return_warnings` options are overridden by this compiler; thus, setting them has no effect. elixir HashDict HashDict ========= This module is deprecated. Use Map instead. Tuple-based HashDict implementation. This module is deprecated. Use the [`Map`](map) module instead. Summary ======== Types ------ [t()](#t:t/0) Functions ---------- [delete(dict, key)](#delete/2) deprecated [drop(dict, keys)](#drop/2) deprecated [equal?(dict1, dict2)](#equal?/2) deprecated [fetch(hash\_dict, key)](#fetch/2) deprecated [fetch!(dict, key)](#fetch!/2) deprecated [get(dict, key, default \\ nil)](#get/3) deprecated [get\_and\_update(dict, key, fun)](#get_and_update/3) deprecated [get\_lazy(dict, key, fun)](#get_lazy/3) deprecated [has\_key?(dict, key)](#has_key?/2) deprecated [keys(dict)](#keys/1) deprecated [merge(dict1, dict2, fun \\ fn \_k, \_v1, v2 -> v2 end)](#merge/3) deprecated [new()](#new/0) deprecated Creates a new empty dict. [pop(dict, key, default \\ nil)](#pop/3) deprecated [pop\_lazy(dict, key, fun)](#pop_lazy/3) deprecated [put(hash\_dict, key, value)](#put/3) deprecated [put\_new(dict, key, value)](#put_new/3) deprecated [put\_new\_lazy(dict, key, fun)](#put_new_lazy/3) deprecated [size(hash\_dict)](#size/1) deprecated [split(dict, keys)](#split/2) deprecated [take(dict, keys)](#take/2) deprecated [to\_list(dict)](#to_list/1) deprecated [update(dict, key, initial, fun)](#update/4) deprecated [update!(dict, key, fun)](#update!/3) deprecated [values(dict)](#values/1) deprecated Types ====== ### t() #### Specs ``` t() ``` Functions ========== ### delete(dict, key) This function is deprecated. Use maps and the Map module instead. ### drop(dict, keys) This function is deprecated. Use maps and the Map module instead. ### equal?(dict1, dict2) This function is deprecated. Use maps and the Map module instead. ### fetch(hash\_dict, key) This function is deprecated. Use maps and the Map module instead. ### fetch!(dict, key) This function is deprecated. Use maps and the Map module instead. ### get(dict, key, default \\ nil) This function is deprecated. Use maps and the Map module instead. ### get\_and\_update(dict, key, fun) This function is deprecated. Use maps and the Map module instead. ### get\_lazy(dict, key, fun) This function is deprecated. Use maps and the Map module instead. ### has\_key?(dict, key) This function is deprecated. Use maps and the Map module instead. ### keys(dict) This function is deprecated. Use maps and the Map module instead. ### merge(dict1, dict2, fun \\ fn \_k, \_v1, v2 -> v2 end) This function is deprecated. Use maps and the Map module instead. ### new() This function is deprecated. Use maps and the Map module instead. #### Specs ``` new() :: Dict.t() ``` Creates a new empty dict. ### pop(dict, key, default \\ nil) This function is deprecated. Use maps and the Map module instead. ### pop\_lazy(dict, key, fun) This function is deprecated. Use maps and the Map module instead. ### put(hash\_dict, key, value) This function is deprecated. Use maps and the Map module instead. ### put\_new(dict, key, value) This function is deprecated. Use maps and the Map module instead. ### put\_new\_lazy(dict, key, fun) This function is deprecated. Use maps and the Map module instead. ### size(hash\_dict) This function is deprecated. Use maps and the Map module instead. ### split(dict, keys) This function is deprecated. Use maps and the Map module instead. ### take(dict, keys) This function is deprecated. Use maps and the Map module instead.
### to\_list(dict) This function is deprecated. Use maps and the Map module instead. ### update(dict, key, initial, fun) This function is deprecated. Use maps and the Map module instead. ### update!(dict, key, fun) This function is deprecated. Use maps and the Map module instead. ### values(dict) This function is deprecated. Use maps and the Map module instead. elixir Macro.Env Macro.Env ========== A struct that holds compile time environment information. The current environment can be accessed at any time as [`__ENV__/0`](kernel.specialforms#__ENV__/0). Inside macros, the caller environment can be accessed as [`__CALLER__/0`](kernel.specialforms#__CALLER__/0). An instance of [`Macro.Env`](#content) must not be modified by hand. If you need to create a custom environment to pass to [`Code.eval_quoted/3`](code#eval_quoted/3), use the following trick: ``` def make_custom_env do import SomeModule, only: [some_function: 2] alias A.B.C __ENV__ end ``` You may then call `make_custom_env()` to get a struct with the desired imports and aliases included. It contains the following fields: * `module` - the current module name * `file` - the current file name as a binary * `line` - the current line as an integer * `function` - a tuple as `{atom, integer}`, where the first element is the function name and the second its arity; returns `nil` if not inside a function * `context` - the context of the environment; it can be `nil` (default context), `:guard` (inside a guard) or `:match` (inside a match) * `aliases` - a list of two-element tuples, where the first element is the aliased name and the second one the actual name * `requires` - the list of required modules * `functions` - a list of functions imported from each module * `macros` - a list of macros imported from each module * `macro_aliases` - a list of aliases defined inside the current macro * `context_modules` - a list of modules defined in the current context * `lexical_tracker` - PID of the lexical tracker which is responsible for keeping user info The following fields pertain to variable handling and must not be accessed or relied on. To get a list of all variables, see [`vars/1`](#vars/1): * `current_vars` * `unused_vars` * `prematch_vars` * `contextual_vars` The following fields are deprecated and must not be accessed or relied on: * `vars` - a list keeping all defined variables as `{var, context}` Summary ======== Types ------ [aliases()](#t:aliases/0) [context()](#t:context/0) [context\_modules()](#t:context_modules/0) [file()](#t:file/0) [functions()](#t:functions/0) [lexical\_tracker()](#t:lexical_tracker/0) [line()](#t:line/0) [macro\_aliases()](#t:macro_aliases/0) [macros()](#t:macros/0) [name\_arity()](#t:name_arity/0) [requires()](#t:requires/0) [t()](#t:t/0) [variable()](#t:variable/0) Functions ---------- [has\_var?(env, var)](#has_var?/2) Checks if a variable belongs to the environment. [in\_guard?(env)](#in_guard?/1) Returns whether the compilation environment is currently inside a guard. [in\_match?(env)](#in_match?/1) Returns whether the compilation environment is currently inside a match clause. [location(env)](#location/1) Returns a keyword list containing the file and line information as keys. [stacktrace(env)](#stacktrace/1) Returns the environment stacktrace. [to\_match(env)](#to_match/1) Returns a [`Macro.Env`](#content) in the match context. [vars(env)](#vars/1) Returns a list of variables in the current environment. 
Types ====== ### aliases() #### Specs ``` aliases() :: [{module(), module()}] ``` ### context() #### Specs ``` context() :: :match | :guard | nil ``` ### context\_modules() #### Specs ``` context_modules() :: [module()] ``` ### file() #### Specs ``` file() :: binary() ``` ### functions() #### Specs ``` functions() :: [{module(), [name_arity()]}] ``` ### lexical\_tracker() #### Specs ``` lexical_tracker() :: pid() | nil ``` ### line() #### Specs ``` line() :: non_neg_integer() ``` ### macro\_aliases() #### Specs ``` macro_aliases() :: [{module(), {term(), module()}}] ``` ### macros() #### Specs ``` macros() :: [{module(), [name_arity()]}] ``` ### name\_arity() #### Specs ``` name_arity() :: {atom(), arity()} ``` ### requires() #### Specs ``` requires() :: [module()] ``` ### t() #### Specs ``` t() :: %Macro.Env{ module: atom(), file: file(), line: line(), function: name_arity() | nil, context: context(), requires: requires(), aliases: aliases(), functions: functions(), macros: macros(), macro_aliases: macro_aliases(), context_modules: context_modules(), vars: vars(), unused_vars: unused_vars(), current_vars: current_vars(), prematch_vars: prematch_vars(), lexical_tracker: lexical_tracker(), contextual_vars: contextual_vars() } ``` ### variable() #### Specs ``` variable() :: {atom(), atom() | term()} ``` Functions ========== ### has\_var?(env, var) #### Specs ``` has_var?(t(), variable()) :: boolean() ``` Checks if a variable belongs to the environment. ### in\_guard?(env) #### Specs ``` in_guard?(t()) :: boolean() ``` Returns whether the compilation environment is currently inside a guard. ### in\_match?(env) #### Specs ``` in_match?(t()) :: boolean() ``` Returns whether the compilation environment is currently inside a match clause. ### location(env) #### Specs ``` location(t()) :: keyword() ``` Returns a keyword list containing the file and line information as keys. ### stacktrace(env) #### Specs ``` stacktrace(t()) :: list() ``` Returns the environment stacktrace. ### to\_match(env) #### Specs ``` to_match(t()) :: t() ``` Returns a [`Macro.Env`](#content) in the match context. ### vars(env) #### Specs ``` vars(t()) :: [variable()] ``` Returns a list of variables in the current environment. Each variable is identified by a tuple of two elements, where the first element is the variable name as an atom and the second element is its context, which may be an atom or an integer.
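As a small sketch evaluated in `iex` (exact values depend on where the code runs), the helpers above can be used to inspect the current environment:

```
iex> env = __ENV__
iex> Macro.Env.in_guard?(env)
false
iex> env |> Macro.Env.location() |> Keyword.keys() |> Enum.sort()
[:file, :line]
```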
elixir Time Time ===== A Time struct and functions. The Time struct contains the fields hour, minute, second and microsecond. New times can be built with the [`new/4`](#new/4) function or using the `~T` (see [`Kernel.sigil_T/2`](kernel#sigil_T/2)) sigil: ``` iex> ~T[23:00:07.001] ~T[23:00:07.001] ``` Both [`new/4`](#new/4) and the sigil return a struct where the time fields can be accessed directly: ``` iex> time = ~T[23:00:07.001] iex> time.hour 23 iex> time.microsecond {1000, 3} ``` The functions on this module work with the [`Time`](#content) struct as well as any struct that contains the same fields as the [`Time`](#content) struct, such as [`NaiveDateTime`](naivedatetime) and [`DateTime`](datetime). Such functions expect [`Calendar.time/0`](calendar#t:time/0) in their typespecs (instead of [`t/0`](#t:t/0)). Developers should avoid creating Time structs directly and instead rely on the functions provided by this module as well as the ones in third-party calendar libraries. Comparing times ---------------- Comparisons in Elixir using [`==/2`](kernel#==/2), [`>/2`](kernel#%3E/2), [`</2`](kernel#%3C/2) and similar are structural and based on the [`Time`](#content) struct fields. For proper comparison between times, use the [`compare/2`](#compare/2) function; a short demonstration follows the summary below. Summary ======== Types ------ [t()](#t:t/0) Functions ---------- [add(time, number, unit \\ :second)](#add/3) Adds the `number` of `unit`s to the given `time`. [compare(time1, time2)](#compare/2) Compares two time structs. [convert(time, calendar)](#convert/2) Converts given `time` to a different calendar. [convert!(time, calendar)](#convert!/2) Similar to [`Time.convert/2`](time#convert/2), but raises an [`ArgumentError`](argumenterror) if the conversion between the two calendars is not possible. [diff(time1, time2, unit \\ :second)](#diff/3) Returns the difference between two times, considering only the hour, minute, second and microsecond. [from\_erl(tuple, microsecond \\ {0, 0}, calendar \\ Calendar.ISO)](#from_erl/3) Converts an Erlang time tuple to a [`Time`](#content) struct. [from\_erl!(tuple, microsecond \\ {0, 0}, calendar \\ Calendar.ISO)](#from_erl!/3) Converts an Erlang time tuple to a [`Time`](#content) struct. [from\_iso8601(string, calendar \\ Calendar.ISO)](#from_iso8601/2) Parses the extended "Local time" format described by [ISO 8601:2004](https://en.wikipedia.org/wiki/ISO_8601). [from\_iso8601!(string, calendar \\ Calendar.ISO)](#from_iso8601!/2) Parses the extended "Local time" format described by [ISO 8601:2004](https://en.wikipedia.org/wiki/ISO_8601). [new(hour, minute, second, microsecond \\ {0, 0}, calendar \\ Calendar.ISO)](#new/5) Builds a new time. [to\_erl(time)](#to_erl/1) Converts given `time` to an Erlang time tuple. [to\_iso8601(time, format \\ :extended)](#to_iso8601/2) Converts the given time to [ISO 8601:2004](https://en.wikipedia.org/wiki/ISO_8601). [to\_string(time)](#to_string/1) Converts the given `time` to a string. [truncate(time, precision)](#truncate/2) Returns the given time with the microsecond field truncated to the given precision (`:microsecond`, `:millisecond` or `:second`). [utc\_now(calendar \\ Calendar.ISO)](#utc_now/1) Returns the current time in UTC.
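The pitfall from the "Comparing times" section can be made concrete with a short sketch. Structural operators compare the struct fields in field order, where `:microsecond` sorts before `:minute`, so they can disagree with [`compare/2`](#compare/2):

```
iex> ~T[00:01:00] < ~T[00:00:59.999999]  # structural comparison is misleading here
true
iex> Time.compare(~T[00:01:00], ~T[00:00:59.999999])
:gt
```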
Types ====== ### t() #### Specs ``` t() :: %Time{ calendar: Calendar.calendar(), hour: Calendar.hour(), microsecond: Calendar.microsecond(), minute: Calendar.minute(), second: Calendar.second() } ``` Functions ========== ### add(time, number, unit \\ :second) #### Specs ``` add(Calendar.time(), integer(), System.time_unit()) :: t() ``` Adds the `number` of `unit`s to the given `time`. This function accepts the `number` measured according to [`Calendar.ISO`](calendar.iso). The time is returned in the same calendar as it was given in. Note the result value represents the time of day, meaning that it is cyclic: for instance, it will never go over 24 hours for the ISO calendar. #### Examples ``` iex> Time.add(~T[10:00:00], 27000) ~T[17:30:00.000000] iex> Time.add(~T[11:00:00.005], 2400) ~T[11:40:00.005000] iex> Time.add(~T[00:00:00], 86_399_999, :millisecond) ~T[23:59:59.999000] iex> Time.add(~T[17:10:05], 86400) ~T[17:10:05.000000] iex> Time.add(~T[23:00:00], -60) ~T[22:59:00.000000] ``` ### compare(time1, time2) #### Specs ``` compare(Calendar.time(), Calendar.time()) :: :lt | :eq | :gt ``` Compares two time structs. Returns `:gt` if the first time is later than the second and `:lt` for vice versa. If the two times are equal, `:eq` is returned. #### Examples ``` iex> Time.compare(~T[16:04:16], ~T[16:04:28]) :lt iex> Time.compare(~T[16:04:16], ~T[16:04:16]) :eq iex> Time.compare(~T[16:04:16.01], ~T[16:04:16.001]) :gt ``` This function can also be used to compare across more complex calendar types by considering only the time fields: ``` iex> Time.compare(~N[1900-01-01 16:04:16], ~N[2015-01-01 16:04:16]) :eq iex> Time.compare(~N[2015-01-01 16:04:16], ~N[2015-01-01 16:04:28]) :lt iex> Time.compare(~N[2015-01-01 16:04:16.01], ~N[2000-01-01 16:04:16.001]) :gt ``` ### convert(time, calendar) #### Specs ``` convert(Calendar.time(), Calendar.calendar()) :: {:ok, t()} | {:error, atom()} ``` Converts the given `time` to a different calendar. Returns `{:ok, time}` if the conversion was successful, or `{:error, reason}` if it was not. #### Examples Imagine someone implements `Calendar.Holocene`, a calendar based on the Gregorian calendar that adds exactly 10,000 years to the current Gregorian year: ``` iex> Time.convert(~T[13:30:15], Calendar.Holocene) {:ok, %Time{calendar: Calendar.Holocene, hour: 13, minute: 30, second: 15, microsecond: {0, 0}}} ``` ### convert!(time, calendar) #### Specs ``` convert!(Calendar.time(), Calendar.calendar()) :: t() ``` Similar to [`Time.convert/2`](time#convert/2), but raises an [`ArgumentError`](argumenterror) if the conversion between the two calendars is not possible. #### Examples Imagine someone implements `Calendar.Holocene`, a calendar based on the Gregorian calendar that adds exactly 10,000 years to the current Gregorian year: ``` iex> Time.convert!(~T[13:30:15], Calendar.Holocene) %Time{calendar: Calendar.Holocene, hour: 13, minute: 30, second: 15, microsecond: {0, 0}} ``` ### diff(time1, time2, unit \\ :second) #### Specs ``` diff(Calendar.time(), Calendar.time(), System.time_unit()) :: integer() ``` Returns the difference between two times, considering only the hour, minute, second and microsecond. As with the [`compare/2`](#compare/2) function, both [`Time`](#content) structs and other structures containing time can be used. If for instance a [`NaiveDateTime`](naivedatetime) or [`DateTime`](datetime) is passed, only the hour, minute, second, and microsecond are considered.
Any additional information about a date or time zone is ignored when calculating the difference. The answer can be returned in any `unit` available from [`System.time_unit/0`](system#t:time_unit/0). If the first time is earlier than the second, a negative number is returned. This function returns the difference in seconds where seconds are measured according to [`Calendar.ISO`](calendar.iso). #### Examples ``` iex> Time.diff(~T[00:29:12], ~T[00:29:10]) 2 # When passing a NaiveDateTime the date part is ignored. iex> Time.diff(~N[2017-01-01 00:29:12], ~T[00:29:10]) 2 # Two NaiveDateTime structs could have big differences in the date # but only the time part is considered. iex> Time.diff(~N[2017-01-01 00:29:12], ~N[1900-02-03 00:29:10]) 2 iex> Time.diff(~T[00:29:12], ~T[00:29:10], :microsecond) 2_000_000 iex> Time.diff(~T[00:29:10], ~T[00:29:12], :microsecond) -2_000_000 ``` ### from\_erl(tuple, microsecond \\ {0, 0}, calendar \\ Calendar.ISO) #### Specs ``` from_erl(:calendar.time(), Calendar.microsecond(), Calendar.calendar()) :: {:ok, t()} | {:error, atom()} ``` Converts an Erlang time tuple to a [`Time`](#content) struct. #### Examples ``` iex> Time.from_erl({23, 30, 15}, {5000, 3}) {:ok, ~T[23:30:15.005]} iex> Time.from_erl({24, 30, 15}) {:error, :invalid_time} ``` ### from\_erl!(tuple, microsecond \\ {0, 0}, calendar \\ Calendar.ISO) #### Specs ``` from_erl!(:calendar.time(), Calendar.microsecond(), Calendar.calendar()) :: t() ``` Converts an Erlang time tuple to a [`Time`](#content) struct. #### Examples ``` iex> Time.from_erl!({23, 30, 15}) ~T[23:30:15] iex> Time.from_erl!({23, 30, 15}, {5000, 3}) ~T[23:30:15.005] iex> Time.from_erl!({24, 30, 15}) ** (ArgumentError) cannot convert {24, 30, 15} to time, reason: :invalid_time ``` ### from\_iso8601(string, calendar \\ Calendar.ISO) #### Specs ``` from_iso8601(String.t(), Calendar.calendar()) :: {:ok, t()} | {:error, atom()} ``` Parses the extended "Local time" format described by [ISO 8601:2004](https://en.wikipedia.org/wiki/ISO_8601). A time zone offset may be included in the string, but it will simply be discarded, as such information is not included in times. As specified in the standard, the separator "T" may be omitted if desired as there is no ambiguity within this function. Time representations with reduced accuracy are not supported. Note that while ISO 8601 allows times to specify 24:00:00 as the zero hour of the next day, this notation is not supported by Elixir. Leap seconds are not supported by the built-in Calendar.ISO either. #### Examples ``` iex> Time.from_iso8601("23:50:07") {:ok, ~T[23:50:07]} iex> Time.from_iso8601("23:50:07Z") {:ok, ~T[23:50:07]} iex> Time.from_iso8601("T23:50:07Z") {:ok, ~T[23:50:07]} iex> Time.from_iso8601("23:50:07,0123456") {:ok, ~T[23:50:07.012345]} iex> Time.from_iso8601("23:50:07.0123456") {:ok, ~T[23:50:07.012345]} iex> Time.from_iso8601("23:50:07.123Z") {:ok, ~T[23:50:07.123]} iex> Time.from_iso8601("2015:01:23 23-50-07") {:error, :invalid_format} iex> Time.from_iso8601("23:50:07A") {:error, :invalid_format} iex> Time.from_iso8601("23:50:07.") {:error, :invalid_format} iex> Time.from_iso8601("23:50:61") {:error, :invalid_time} ``` ### from\_iso8601!(string, calendar \\ Calendar.ISO) #### Specs ``` from_iso8601!(String.t(), Calendar.calendar()) :: t() ``` Parses the extended "Local time" format described by [ISO 8601:2004](https://en.wikipedia.org/wiki/ISO_8601). Raises if the format is invalid.
#### Examples ``` iex> Time.from_iso8601!("23:50:07,123Z") ~T[23:50:07.123] iex> Time.from_iso8601!("23:50:07.123Z") ~T[23:50:07.123] iex> Time.from_iso8601!("2015:01:23 23-50-07") ** (ArgumentError) cannot parse "2015:01:23 23-50-07" as time, reason: :invalid_format ``` ### new(hour, minute, second, microsecond \\ {0, 0}, calendar \\ Calendar.ISO) #### Specs ``` new( Calendar.hour(), Calendar.minute(), Calendar.second(), Calendar.microsecond() | integer(), Calendar.calendar() ) :: {:ok, t()} | {:error, atom()} ``` Builds a new time. Expects all values to be integers. Returns `{:ok, time}` if each entry fits its appropriate range, or `{:error, reason}` otherwise. Microseconds can also be given with a precision, which must be an integer between 0 and 6. The built-in calendar does not support leap seconds. #### Examples ``` iex> Time.new(0, 0, 0, 0) {:ok, ~T[00:00:00.000000]} iex> Time.new(23, 59, 59, 999_999) {:ok, ~T[23:59:59.999999]} iex> Time.new(24, 59, 59, 999_999) {:error, :invalid_time} iex> Time.new(23, 60, 59, 999_999) {:error, :invalid_time} iex> Time.new(23, 59, 60, 999_999) {:error, :invalid_time} iex> Time.new(23, 59, 59, 1_000_000) {:error, :invalid_time} # Invalid precision Time.new(23, 59, 59, {999_999, 10}) {:error, :invalid_time} ``` ### to\_erl(time) #### Specs ``` to_erl(Calendar.time()) :: :calendar.time() ``` Converts the given `time` to an Erlang time tuple. WARNING: Loss of precision may occur, as Erlang time tuples only contain hours/minutes/seconds. #### Examples ``` iex> Time.to_erl(~T[23:30:15.999]) {23, 30, 15} iex> Time.to_erl(~N[2010-04-17 23:30:15.999]) {23, 30, 15} ``` ### to\_iso8601(time, format \\ :extended) #### Specs ``` to_iso8601(Calendar.time(), :extended | :basic) :: String.t() ``` Converts the given time to [ISO 8601:2004](https://en.wikipedia.org/wiki/ISO_8601). By default, [`Time.to_iso8601/2`](time#to_iso8601/2) returns times formatted in the "extended" format, for human readability. It also supports the "basic" format through passing the `:basic` option. #### Examples ``` iex> Time.to_iso8601(~T[23:00:13]) "23:00:13" iex> Time.to_iso8601(~T[23:00:13.001]) "23:00:13.001" iex> Time.to_iso8601(~T[23:00:13.001], :basic) "230013.001" iex> Time.to_iso8601(~N[2010-04-17 23:00:13]) "23:00:13" ``` ### to\_string(time) #### Specs ``` to_string(Calendar.time()) :: String.t() ``` Converts the given `time` to a string. #### Examples ``` iex> Time.to_string(~T[23:00:00]) "23:00:00" iex> Time.to_string(~T[23:00:00.001]) "23:00:00.001" iex> Time.to_string(~T[23:00:00.123456]) "23:00:00.123456" iex> Time.to_string(~N[2015-01-01 23:00:00.001]) "23:00:00.001" iex> Time.to_string(~N[2015-01-01 23:00:00.123456]) "23:00:00.123456" ``` ### truncate(time, precision) #### Specs ``` truncate(t(), :microsecond | :millisecond | :second) :: t() ``` Returns the given time with the microsecond field truncated to the given precision (`:microsecond`, `:millisecond` or `:second`). The given time is returned unchanged if it already has lower precision than the given precision. #### Examples ``` iex> Time.truncate(~T[01:01:01.123456], :microsecond) ~T[01:01:01.123456] iex> Time.truncate(~T[01:01:01.123456], :millisecond) ~T[01:01:01.123] iex> Time.truncate(~T[01:01:01.123456], :second) ~T[01:01:01] ``` ### utc\_now(calendar \\ Calendar.ISO) #### Specs ``` utc_now(Calendar.calendar()) :: t() ``` Returns the current time in UTC.
#### Examples ``` iex> time = Time.utc_now() iex> time.hour >= 0 true ``` elixir Where to go next Getting Started Where to go next ================ Eager to learn more? Keep reading! Build your first Elixir project ------------------------------- In order to get your first project started, Elixir ships with a build tool called Mix. You can get your new project started by running: ``` $ mix new path/to/new/project ``` We have written a guide that covers how to build an Elixir application, with its own supervision tree, configuration, tests, and more. The application works as a distributed key-value store where we organize key-value pairs into buckets and distribute those buckets across multiple nodes: * [Mix and OTP](mix-otp/introduction-to-mix) If you are planning to write your first library for other developers to use, don’t forget to read our [Library Guidelines](https://hexdocs.pm/elixir/library-guidelines.html). Meta-programming ---------------- Elixir is an extensible and very customizable programming language thanks to its meta-programming support. Most meta-programming in Elixir is done through macros, which are very useful in several situations, especially for writing DSLs. We have written a short guide that explains the basic mechanisms behind macros, shows how to write macros, and how to use macros to create DSLs: * [Meta-programming in Elixir](meta/quote-and-unquote) Community and other resources ----------------------------- We have a [Learning](https://elixir-lang.org/learning.html) section that suggests books, screencasts, and other resources for learning Elixir and exploring the ecosystem. There are also plenty of Elixir resources out there, like conference talks, open source projects, and other learning material produced by the community. Don’t forget that you can also check the [source code of Elixir itself](https://github.com/elixir-lang/elixir), which is mostly written in Elixir (mainly the `lib` directory), or [explore Elixir’s documentation](https://elixir-lang.org/docs.html). A byte of Erlang ---------------- Elixir runs on the Erlang Virtual Machine and, sooner or later, an Elixir developer will want to interface with existing Erlang libraries. Here’s a list of online resources that cover Erlang’s fundamentals and its more advanced features: * This [Erlang Syntax: A Crash Course](https://elixir-lang.org/crash-course.html) provides a concise intro to Erlang’s syntax. Each code snippet is accompanied by equivalent code in Elixir. This is an opportunity for you to not only get some exposure to Erlang’s syntax but also review some of the things you have learned in this guide. * Erlang’s official website has a short [tutorial](https://www.erlang.org/course). There is a chapter with pictures briefly describing Erlang’s primitives for [concurrent programming](https://www.erlang.org/course/concurrent_programming.html). * [Learn You Some Erlang for Great Good!](http://learnyousomeerlang.com/) is an excellent introduction to Erlang, its design principles, standard library, best practices, and much more. Once you have read through the crash course mentioned above, you’ll be able to safely skip the first couple of chapters in the book that mostly deal with the syntax. When you reach [The Hitchhiker’s Guide to Concurrency](http://learnyousomeerlang.com/the-hitchhikers-guide-to-concurrency) chapter, that’s where the real fun starts. elixir Logger.Formatter Logger.Formatter ================= Conveniences for formatting data for logs.
This module allows developers to specify a string that serves as a template for log messages, for example: ``` $time $metadata[$level] $message\n ``` This will print error messages as: ``` 18:43:12.439 user_id=13 [error] Hello\n ``` The valid parameters you can use are: * `$time` - the time the log message was sent * `$date` - the date the log message was sent * `$message` - the log message * `$level` - the log level * `$node` - the node that prints the message * `$metadata` - user controlled data presented in `"key=val key2=val2 "` format * `$levelpad` - expands to a single space if the level name is 4 characters long, otherwise to the empty string. Used to align the message after the level. Backends typically allow developers to supply such control strings via configuration files. This module provides [`compile/1`](#compile/1), which compiles the string into a format for fast operations at runtime, and [`format/5`](#format/5), which formats the compiled pattern into actual IO data. Metadata --------- Metadata to be sent to the logger can be read and written with the [`Logger.metadata/0`](logger#metadata/0) and [`Logger.metadata/1`](logger#metadata/1) functions. For example, you can set `Logger.metadata([user_id: 13])` to add user\_id metadata to the current process. The user can configure the backend to choose which metadata it wants to print and it will replace the `$metadata` value. Summary ======== Types ------ [pattern()](#t:pattern/0) [time()](#t:time/0) Functions ---------- [compile(pattern)](#compile/1) Compiles a format string into a data structure that [`format/5`](#format/5) can handle. [format(config, level, msg, timestamp, metadata)](#format/5) Takes a compiled format and injects the level, timestamp, message, and metadata keyword list and returns a properly formatted string. [format\_date(arg)](#format_date/1) Formats date as chardata. [format\_time(arg)](#format_time/1) Formats time as chardata. [prune(binary)](#prune/1) Prunes non-valid UTF-8 code points. Types ====== ### pattern() #### Specs ``` pattern() :: :date | :level | :levelpad | :message | :metadata | :node | :time ``` ### time() #### Specs ``` time() :: {{1970..10000, 1..12, 1..31}, {0..23, 0..59, 0..59, 0..999}} ``` Functions ========== ### compile(pattern) #### Specs ``` compile(binary() | nil) :: [pattern() | binary()] ``` ``` compile(pattern) :: pattern when pattern: {module(), function :: atom()} ``` Compiles a format string into a data structure that [`format/5`](#format/5) can handle. Check the module doc for documentation on the valid parameters that will be interpolated in the pattern. If you pass `nil` as the pattern, the pattern defaults to: ``` "\n$time $metadata[$level] $levelpad$message\n" ``` If you want to customize formatting through a custom formatter, you can pass a `{module, function}` tuple as the `pattern`. ``` iex> Logger.Formatter.compile("$time $metadata [$level] $message\n") [:time, " ", :metadata, " [", :level, "] ", :message, "\n"] iex> Logger.Formatter.compile({MyLoggerFormatter, :format}) {MyLoggerFormatter, :format} ``` ### format(config, level, msg, timestamp, metadata) #### Specs ``` format( {atom(), atom()} | [pattern() | binary()], Logger.level(), Logger.message(), time(), keyword() ) :: IO.chardata() ``` Takes a compiled format and injects the level, timestamp, message, and metadata keyword list and returns a properly formatted string.
#### Examples ``` iex> pattern = Logger.Formatter.compile("[$level] $message") iex> timestamp = {{1977, 01, 28}, {13, 29, 00, 000}} iex> formatted = Logger.Formatter.format(pattern, :info, "hello", timestamp, []) iex> IO.chardata_to_string(formatted) "[info] hello" ``` ### format\_date(arg) #### Specs ``` format_date({1970..10000, 1..12, 1..31}) :: IO.chardata() ``` Formats date as chardata. ### format\_time(arg) #### Specs ``` format_time({0..23, 0..59, 0..59, 0..999}) :: IO.chardata() ``` Formats time as chardata. ### prune(binary) #### Specs ``` prune(IO.chardata()) :: IO.chardata() ``` Prunes non-valid UTF-8 code points. Typically called after formatting when the data cannot be printed.
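As one more sketch tying [`compile/1`](#compile/1) and [`format/5`](#format/5) together (the pattern and values here are hypothetical), metadata passed to `format/5` replaces `$metadata` in the `key=val` form described above:

```
iex> pattern = Logger.Formatter.compile("$metadata[$level] $message")
iex> timestamp = {{2019, 1, 1}, {12, 0, 0, 0}}
iex> formatted = Logger.Formatter.format(pattern, :error, "oops", timestamp, user_id: 13)
iex> IO.chardata_to_string(formatted)
"user_id=13 [error] oops"
```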
elixir IO.Stream IO.Stream ========== Defines an [`IO.Stream`](#content) struct returned by [`IO.stream/2`](io#stream/2) and [`IO.binstream/2`](io#binstream/2). The following fields are public: * `device` - the IO device * `raw` - a boolean indicating if bin functions should be used * `line_or_bytes` - if reading should read lines or a given number of bytes It is worth noting that an IO stream has side effects and every time you go over the stream you may get different results. Summary ======== Types ------ [t()](#t:t/0) Types ====== ### t() #### Specs ``` t() :: %IO.Stream{device: term(), line_or_bytes: term(), raw: term()} ``` elixir mix do mix do ======= Executes the tasks separated by commas. The comma should be followed by a space. This task is automatically reenabled, so it can be called multiple times. Examples --------- The example below prints the available compilers and then the list of dependencies. ``` mix do compile --list, deps ``` elixir mix compile.xref mix compile.xref ================= Performs remote dispatch checking. It uses [`mix xref`](mix.tasks.xref) to check if any remote call does not exist or is deprecated, and emits warnings in such cases. This task does not show deprecated local calls (a call to a deprecated function or macro in the same module) nor calls to deprecated functionality in Elixir itself. When this task runs, it will check if the source code has been modified. If it has changed, [`mix xref`](mix.tasks.xref) will be run to check remote dispatches. You can force checking regardless of modification time by passing the `--force` option. Command line options --------------------- * `--force` - forces checking regardless of modification time * `--warnings-as-errors` - treats warnings as errors and returns a non-zero exit code elixir Keyword lists and maps Getting Started Keyword lists and maps ====================== So far we haven’t discussed any associative data structures, i.e. data structures that are able to associate a certain value (or multiple values) with a key. Different languages call these by different names, such as dictionaries, hashes, associative arrays, etc. In Elixir, we have two main associative data structures: keyword lists and maps. It’s time to learn more about them! Keyword lists ------------- In many functional programming languages, it is common to use a list of 2-item tuples as the representation of a key-value data structure. In Elixir, when we have a list of tuples and the first item of the tuple (i.e. the key) is an atom, we call it a keyword list: ``` iex> list = [{:a, 1}, {:b, 2}] [a: 1, b: 2] iex> list == [a: 1, b: 2] true ``` As you can see above, Elixir supports a special syntax for defining such lists: `[key: value]`. Underneath it maps to the same list of tuples as above. Since keyword lists are lists, we can use all operations available to lists. For example, we can use `++` to add new values to a keyword list: ``` iex> list ++ [c: 3] [a: 1, b: 2, c: 3] iex> [a: 0] ++ list [a: 0, a: 1, b: 2] ``` Note that values added to the front are the ones fetched on lookup: ``` iex> new_list = [a: 0] ++ list [a: 0, a: 1, b: 2] iex> new_list[:a] 0 ``` Keyword lists are important because they have three special characteristics: * Keys must be atoms. * Keys are ordered, as specified by the developer. * Keys can be given more than once (see the sketch below).
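Because keys can be given more than once, the `Keyword` module can retrieve all values stored under a key; here is a quick sketch:

```
iex> Keyword.get_values([a: 1, a: 2, b: 3], :a)
[1, 2]
```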
For example, [the Ecto library](https://github.com/elixir-lang/ecto) makes use of these features to provide an elegant DSL for writing database queries: ``` query = from w in Weather, where: w.prcp > 0, where: w.temp < 20, select: w ``` These characteristics are what prompted keyword lists to be the default mechanism for passing options to functions in Elixir. In chapter 5, when we discussed the `if/2` macro, we mentioned that the following syntax is supported: ``` iex> if false, do: :this, else: :that :that ``` The `do:` and `else:` pairs form a keyword list! In fact, the call above is equivalent to: ``` iex> if(false, [do: :this, else: :that]) :that ``` Which, as we have seen above, is the same as: ``` iex> if(false, [{:do, :this}, {:else, :that}]) :that ``` In general, when the keyword list is the last argument of a function, the square brackets are optional. Although we can pattern match on keyword lists, it is rarely done in practice since pattern matching on lists requires the number of items and their order to match: ``` iex> [a: a] = [a: 1] [a: 1] iex> a 1 iex> [a: a] = [a: 1, b: 2] ** (MatchError) no match of right hand side value: [a: 1, b: 2] iex> [b: b, a: a] = [a: 1, b: 2] ** (MatchError) no match of right hand side value: [a: 1, b: 2] ``` In order to manipulate keyword lists, Elixir provides [the `Keyword` module](https://hexdocs.pm/elixir/Keyword.html). Remember, though, keyword lists are simply lists, and as such they provide the same linear performance characteristics as lists. The longer the list, the longer it will take to find a key, to count the number of items, and so on. For this reason, keyword lists are used in Elixir mainly for passing optional values. If you need to store many items, or to guarantee that each key associates with at most one value, you should use maps instead. Maps ---- Whenever you need a key-value store, maps are the “go to” data structure in Elixir. A map is created using the `%{}` syntax: ``` iex> map = %{:a => 1, 2 => :b} %{2 => :b, :a => 1} iex> map[:a] 1 iex> map[2] :b iex> map[:c] nil ``` Compared to keyword lists, we can already see two differences: * Maps allow any value as a key. * Maps’ keys do not follow any ordering. In contrast to keyword lists, maps are very useful with pattern matching. When a map is used in a pattern, it will always match on a subset of the given value: ``` iex> %{} = %{:a => 1, 2 => :b} %{2 => :b, :a => 1} iex> %{:a => a} = %{:a => 1, 2 => :b} %{2 => :b, :a => 1} iex> a 1 iex> %{:c => c} = %{:a => 1, 2 => :b} ** (MatchError) no match of right hand side value: %{2 => :b, :a => 1} ``` As shown above, a map matches as long as the keys in the pattern exist in the given map. Therefore, an empty map matches all maps.
Variables can be used when accessing, matching and adding map keys:

```
iex> n = 1
1
iex> map = %{n => :one}
%{1 => :one}
iex> map[n]
:one
iex> %{^n => :one} = %{1 => :one, 2 => :two, 3 => :three}
%{1 => :one, 2 => :two, 3 => :three}
```

[The `Map` module](https://hexdocs.pm/elixir/Map.html) provides a very similar API to the `Keyword` module with convenience functions to manipulate maps:

```
iex> Map.get(%{:a => 1, 2 => :b}, :a)
1
iex> Map.put(%{:a => 1, 2 => :b}, :c, 3)
%{2 => :b, :a => 1, :c => 3}
iex> Map.to_list(%{:a => 1, 2 => :b})
[{2, :b}, {:a, 1}]
```

Maps have the following syntax for updating a key’s value:

```
iex> map = %{:a => 1, 2 => :b}
%{2 => :b, :a => 1}
iex> %{map | 2 => "two"}
%{2 => "two", :a => 1}
iex> %{map | :c => 3}
** (KeyError) key :c not found in: %{2 => :b, :a => 1}
```

The syntax above requires the given key to exist. It cannot be used to add new keys. For example, using it with the `:c` key failed because there is no `:c` in the map.

When all the keys in a map are atoms, you can use the keyword syntax for convenience:

```
iex> map = %{a: 1, b: 2}
%{a: 1, b: 2}
```

Another interesting property of maps is that they provide their own syntax for accessing atom keys:

```
iex> map = %{:a => 1, 2 => :b}
%{2 => :b, :a => 1}
iex> map.a
1
iex> map.c
** (KeyError) key :c not found in: %{2 => :b, :a => 1}
```

Elixir developers typically prefer to use the `map.field` syntax and pattern matching instead of the functions in the `Map` module when working with maps because they lead to an assertive style of programming. [This blog post by José Valim](https://dashbit.co/blog/writing-assertive-code-with-elixir) provides insight and examples on how you get more concise and faster software by writing assertive code in Elixir.

Nested data structures
----------------------

Often we will have maps inside maps, or even keyword lists inside maps, and so forth. Elixir provides conveniences for manipulating nested data structures via the `put_in/2`, `update_in/2` and other macros giving the same conveniences you would find in imperative languages while keeping the immutable properties of the language.

Imagine you have the following structure:

```
iex> users = [
  john: %{name: "John", age: 27, languages: ["Erlang", "Ruby", "Elixir"]},
  mary: %{name: "Mary", age: 29, languages: ["Elixir", "F#", "Clojure"]}
]
[john: %{age: 27, languages: ["Erlang", "Ruby", "Elixir"], name: "John"},
 mary: %{age: 29, languages: ["Elixir", "F#", "Clojure"], name: "Mary"}]
```

We have a keyword list of users where each value is a map containing the name, age and a list of programming languages each user likes. If we wanted to access the age for john, we could write:

```
iex> users[:john].age
27
```

It happens we can also use this same syntax for updating the value:

```
iex> users = put_in users[:john].age, 31
[john: %{age: 31, languages: ["Erlang", "Ruby", "Elixir"], name: "John"},
 mary: %{age: 29, languages: ["Elixir", "F#", "Clojure"], name: "Mary"}]
```

The `update_in/2` macro is similar but allows us to pass a function that controls how the value changes.
For example, let’s remove “Clojure” from Mary’s list of languages:

```
iex> users = update_in users[:mary].languages, fn languages -> List.delete(languages, "Clojure") end
[john: %{age: 31, languages: ["Erlang", "Ruby", "Elixir"], name: "John"},
 mary: %{age: 29, languages: ["Elixir", "F#"], name: "Mary"}]
```

There is more to learn about `put_in/2` and `update_in/2`, including `get_and_update_in/2`, which allows us to extract a value and update the data structure at once. There are also `put_in/3`, `update_in/3` and `get_and_update_in/3` which allow dynamic access into the data structure. [Check their respective documentation in the `Kernel` module for more information](https://hexdocs.pm/elixir/Kernel.html).

This concludes our introduction to associative data structures in Elixir. You will find out that, given keyword lists and maps, you will always have the right tool to tackle problems that require associative data structures in Elixir.

elixir Processes Getting Started
Processes
=========

In Elixir, all code runs inside processes. Processes are isolated from each other, run concurrently with one another and communicate via message passing. Processes are not only the basis for concurrency in Elixir, but they also provide the means for building distributed and fault-tolerant programs.

Elixir’s processes should not be confused with operating system processes. Processes in Elixir are extremely lightweight in terms of memory and CPU (even compared to threads as used in many other programming languages). Because of this, it is not uncommon to have tens or even hundreds of thousands of processes running simultaneously.

In this chapter, we will learn about the basic constructs for spawning new processes, as well as sending and receiving messages between processes.

`spawn`
-------

The basic mechanism for spawning new processes is the auto-imported `spawn/1` function:

```
iex> spawn fn -> 1 + 2 end
#PID<0.43.0>
```

`spawn/1` takes a function which it will execute in another process.

Notice `spawn/1` returns a PID (process identifier). At this point, the process you spawned is very likely dead. The spawned process will execute the given function and exit after the function is done:

```
iex> pid = spawn fn -> 1 + 2 end
#PID<0.44.0>
iex> Process.alive?(pid)
false
```

> Note: you will likely get different process identifiers than the ones we are getting in this guide.
>
>

We can retrieve the PID of the current process by calling `self/0`:

```
iex> self()
#PID<0.41.0>
iex> Process.alive?(self())
true
```

Processes get much more interesting when we are able to send and receive messages.

`send` and `receive`
---------------------

We can send messages to a process with `send/2` and receive them with `receive/1`:

```
iex> send self(), {:hello, "world"}
{:hello, "world"}
iex> receive do
...>   {:hello, msg} -> msg
...>   {:world, _msg} -> "won't match"
...> end
"world"
```

When a message is sent to a process, the message is stored in the process mailbox. The `receive/1` block goes through the current process mailbox searching for a message that matches any of the given patterns. `receive/1` supports guards and multiple clauses, just like `case/2`.

The process that sends the message does not block on `send/2`, it puts the message in the recipient’s mailbox and continues. In particular, a process can send messages to itself.

If there is no message in the mailbox matching any of the patterns, the current process will wait until a matching message arrives.
A timeout can also be specified:

```
iex> receive do
...>   {:hello, msg}  -> msg
...> after
...>   1_000 -> "nothing after 1s"
...> end
"nothing after 1s"
```

A timeout of 0 can be given when you already expect the message to be in the mailbox.

Let’s put it all together and send messages between processes:

```
iex> parent = self()
#PID<0.41.0>
iex> spawn fn -> send(parent, {:hello, self()}) end
#PID<0.48.0>
iex> receive do
...>   {:hello, pid} -> "Got hello from #{inspect pid}"
...> end
"Got hello from #PID<0.48.0>"
```

The `inspect/1` function is used to convert a data structure’s internal representation into a string, typically for printing. Notice that when the `receive` block gets executed the sender process we have spawned may already be dead, as its only instruction was to send a message.

While in the shell, you may find the helper `flush/0` quite useful. It flushes and prints all the messages in the mailbox.

```
iex> send self(), :hello
:hello
iex> flush()
:hello
:ok
```

Links
-----

The majority of times we spawn processes in Elixir, we spawn them as linked processes. Before we show an example with `spawn_link/1`, let’s see what happens when a process started with `spawn/1` fails:

```
iex> spawn fn -> raise "oops" end
#PID<0.58.0>

[error] Process #PID<0.58.0> raised an exception
** (RuntimeError) oops
    (stdlib) erl_eval.erl:668: :erl_eval.do_apply/6
```

It merely logged an error but the parent process is still running. That’s because processes are isolated. If we want the failure in one process to propagate to another one, we should link them. This can be done with `spawn_link/1`:

```
iex> self()
#PID<0.41.0>
iex> spawn_link fn -> raise "oops" end

** (EXIT from #PID<0.41.0>) evaluator process exited with reason: an exception was raised:
    ** (RuntimeError) oops
        (stdlib) erl_eval.erl:668: :erl_eval.do_apply/6

[error] Process #PID<0.289.0> raised an exception
** (RuntimeError) oops
    (stdlib) erl_eval.erl:668: :erl_eval.do_apply/6
```

Because processes are linked, we now see a message saying the parent process, which is the shell process, has received an EXIT signal from another process causing the shell to terminate. IEx detects this situation and starts a new shell session.

Linking can also be done manually by calling `Process.link/1`. We recommend that you take a look at [the `Process` module](https://hexdocs.pm/elixir/Process.html) for other functionality provided by processes.

Processes and links play an important role when building fault-tolerant systems. Elixir processes are isolated and don’t share anything by default. Therefore, a failure in a process will never crash or corrupt the state of another process. Links, however, allow processes to establish a relationship in case of failure. We often link our processes to supervisors which will detect when a process dies and start a new process in its place.

While other languages would require us to catch/handle exceptions, in Elixir we are actually fine with letting processes fail because we expect supervisors to properly restart our systems. “Failing fast” is a common philosophy when writing Elixir software!

`spawn/1` and `spawn_link/1` are the basic primitives for creating processes in Elixir. Although we have used them exclusively so far, most of the time we are going to use abstractions that build on top of them. Let’s see the most common one, called tasks.
Tasks
-----

Tasks build on top of the spawn functions to provide better error reports and introspection:

```
iex(1)> Task.start fn -> raise "oops" end
{:ok, #PID<0.55.0>}

15:22:33.046 [error] Task #PID<0.55.0> started from #PID<0.53.0> terminating
** (RuntimeError) oops
    (stdlib) erl_eval.erl:668: :erl_eval.do_apply/6
    (elixir) lib/task/supervised.ex:85: Task.Supervised.do_apply/2
    (stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
Function: #Function<20.99386804/0 in :erl_eval.expr/5>
    Args: []
```

Instead of `spawn/1` and `spawn_link/1`, we use `Task.start/1` and `Task.start_link/1` which return `{:ok, pid}` rather than just the PID. This is what enables tasks to be used in supervision trees. Furthermore, `Task` provides convenience functions, like `Task.async/1` and `Task.await/1`, and functionality to ease distribution.

We will explore those functionalities in the ***Mix and OTP guide***; for now, it is enough to remember to use `Task` to get better error reports.

State
-----

We haven’t talked about state so far in this guide. If you are building an application that requires state, for example, to keep your application configuration, or you need to parse a file and keep it in memory, where would you store it?

Processes are the most common answer to this question. We can write processes that loop infinitely, maintain state, and send and receive messages. As an example, let’s write a module that starts new processes that work as a key-value store in a file named `kv.exs`:

```
defmodule KV do
  def start_link do
    Task.start_link(fn -> loop(%{}) end)
  end

  defp loop(map) do
    receive do
      {:get, key, caller} ->
        send caller, Map.get(map, key)
        loop(map)
      {:put, key, value} ->
        loop(Map.put(map, key, value))
    end
  end
end
```

Note that the `start_link` function starts a new process that runs the `loop/1` function, starting with an empty map. The `loop/1` (private) function then waits for messages and performs the appropriate action for each message. We made `loop/1` private by using `defp` instead of `def`. In the case of a `:get` message, it sends a message back to the caller and calls `loop/1` again, to wait for a new message. The `:put` message, in turn, invokes `loop/1` with a new version of the map, with the given `key` and `value` stored.

Let’s give it a try by running `iex kv.exs`:

```
iex> {:ok, pid} = KV.start_link
{:ok, #PID<0.62.0>}
iex> send pid, {:get, :hello, self()}
{:get, :hello, #PID<0.41.0>}
iex> flush()
nil
:ok
```

At first, the process map has no keys, so sending a `:get` message and then flushing the current process inbox returns `nil`. Let’s send a `:put` message and try it again:

```
iex> send pid, {:put, :hello, :world}
{:put, :hello, :world}
iex> send pid, {:get, :hello, self()}
{:get, :hello, #PID<0.41.0>}
iex> flush()
:world
:ok
```

Notice how the process is keeping a state and we can get and update this state by sending the process messages. In fact, any process that knows the `pid` above will be able to send it messages and manipulate the state.

It is also possible to register the `pid`, giving it a name, and allowing everyone that knows the name to send it messages:

```
iex> Process.register(pid, :kv)
true
iex> send :kv, {:get, :hello, self()}
{:get, :hello, #PID<0.41.0>}
iex> flush()
:world
:ok
```

Using processes to maintain state and name registration are very common patterns in Elixir applications. However, most of the time, we won’t implement those patterns manually as above, but by using one of the many abstractions that ship with Elixir.
For example, Elixir provides [agents](https://hexdocs.pm/elixir/Agent.html), which are simple abstractions around state:

```
iex> {:ok, pid} = Agent.start_link(fn -> %{} end)
{:ok, #PID<0.72.0>}
iex> Agent.update(pid, fn map -> Map.put(map, :hello, :world) end)
:ok
iex> Agent.get(pid, fn map -> Map.get(map, :hello) end)
:world
```

A `:name` option could also be given to `Agent.start_link/2` and it would be automatically registered; a short sketch follows at the end of this chapter.

Besides agents, Elixir provides an API for building generic servers (called `GenServer`), tasks, and more, all powered by processes underneath. Those, along with supervision trees, will be explored in more detail in the ***Mix and OTP guide*** which will build a complete Elixir application from start to finish.

For now, let’s move on and explore the world of I/O in Elixir.
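Before we do, here is the `:name` registration mentioned above in action (a minimal sketch; the `:kv_agent` name is just an illustration):

```
iex> {:ok, _pid} = Agent.start_link(fn -> %{} end, name: :kv_agent)
iex> Agent.update(:kv_agent, fn map -> Map.put(map, :hello, :world) end)
:ok
iex> Agent.get(:kv_agent, fn map -> Map.get(map, :hello) end)
:world
```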
elixir String
String
=======

A String in Elixir is a UTF-8 encoded binary.

Code points and grapheme cluster
---------------------------------

The functions in this module act according to the Unicode Standard, version 11.0.0.

As per the standard, a code point is a single Unicode character, which may be represented by one or more bytes.

For example, the code point "é" is two bytes:

```
iex> byte_size("é")
2
```

However, this module returns the proper length:

```
iex> String.length("é")
1
```

Furthermore, this module also presents the concept of grapheme cluster (from now on referenced as graphemes). Graphemes can consist of multiple code points that may be perceived as a single character by readers. For example, "é" can be represented either as a single "e with acute" code point or as the letter "e" followed by a "combining acute accent" (two code points):

```
iex> string = "\u0065\u0301"
iex> byte_size(string)
3
iex> String.length(string)
1
iex> String.codepoints(string)
["e", "́"]
iex> String.graphemes(string)
["é"]
```

Although the example above is made of two characters, it is perceived by users as one.

Graphemes can also be two characters that are interpreted as one by some languages. For example, some languages may consider "ch" as a single character. However, since this information depends on the locale, it is not taken into account by this module.

In general, the functions in this module rely on the Unicode Standard, but do not contain any of the locale specific behaviour. More information about graphemes can be found in the [Unicode Standard Annex #29](https://www.unicode.org/reports/tr29/). The current Elixir version implements the Extended Grapheme Cluster algorithm.

For converting a binary to a different encoding and for Unicode normalization mechanisms, see Erlang's `:unicode` module.

String and binary operations
-----------------------------

To act according to the Unicode Standard, many functions in this module run in linear time, as they need to traverse the whole string considering the proper Unicode code points.

For example, [`String.length/1`](string#length/1) will take longer as the input grows. On the other hand, [`Kernel.byte_size/1`](kernel#byte_size/1) always runs in constant time (i.e. regardless of the input size).

This means often there are performance costs in using the functions in this module, compared to the more low-level operations that work directly with binaries:

* [`Kernel.binary_part/3`](kernel#binary_part/3) - retrieves part of the binary
* [`Kernel.bit_size/1`](kernel#bit_size/1) and [`Kernel.byte_size/1`](kernel#byte_size/1) - size related functions
* [`Kernel.is_bitstring/1`](kernel#is_bitstring/1) and [`Kernel.is_binary/1`](kernel#is_binary/1) - type checking functions
* Plus a number of functions for working with binaries (bytes) in the [`:binary` module](http://www.erlang.org/doc/man/binary.html)

There are many situations where using the [`String`](#content) module can be avoided in favor of binary functions or pattern matching. For example, imagine you have a string `prefix` and you want to remove this prefix from another string named `full`. One may be tempted to write:

```
iex> take_prefix = fn full, prefix ->
...>   base = String.length(prefix)
...>   String.slice(full, base, String.length(full) - base)
...> end
iex> take_prefix.("Mr. John", "Mr. ")
"John"
```

Although the function above works, it performs poorly.
To calculate the length of the string, we need to traverse it fully, so we traverse both `prefix` and `full` strings, then slice the `full` one, traversing it again. A first attempt at improving it could be with ranges: ``` iex> take_prefix = fn full, prefix -> ...> base = String.length(prefix) ...> String.slice(full, base..-1) ...> end iex> take_prefix.("Mr. John", "Mr. ") "John" ``` While this is much better (we don't traverse `full` twice), it could still be improved. In this case, since we want to extract a substring from a string, we can use [`Kernel.byte_size/1`](kernel#byte_size/1) and [`Kernel.binary_part/3`](kernel#binary_part/3) as there is no chance we will slice in the middle of a code point made of more than one byte: ``` iex> take_prefix = fn full, prefix -> ...> base = byte_size(prefix) ...> binary_part(full, base, byte_size(full) - base) ...> end iex> take_prefix.("Mr. John", "Mr. ") "John" ``` Or simply use pattern matching: ``` iex> take_prefix = fn full, prefix -> ...> base = byte_size(prefix) ...> <<_::binary-size(base), rest::binary>> = full ...> rest ...> end iex> take_prefix.("Mr. John", "Mr. ") "John" ``` On the other hand, if you want to dynamically slice a string based on an integer value, then using [`String.slice/3`](string#slice/3) is the best option as it guarantees we won't incorrectly split a valid code point into multiple bytes. Integer code points -------------------- Although code points could be represented as integers, this module represents all code points as strings. For example: ``` iex> String.codepoints("olá") ["o", "l", "á"] ``` There are a couple of ways to retrieve a character integer code point. One may use the `?` construct: ``` iex> ?o 111 iex> ?á 225 ``` Or also via pattern matching: ``` iex> <<aacute::utf8>> = "á" iex> aacute 225 ``` As we have seen above, code points can be inserted into a string by their hexadecimal code: ``` "ol\u0061\u0301" #=> "olá" ``` Self-synchronization --------------------- The UTF-8 encoding is self-synchronizing. This means that if malformed data (i.e., data that is not possible according to the definition of the encoding) is encountered, only one code point needs to be rejected. This module relies on this behaviour to ignore such invalid characters. For example, [`length/1`](#length/1) will return a correct result even if an invalid code point is fed into it. In other words, this module expects invalid data to be detected elsewhere, usually when retrieving data from the external source. For example, a driver that reads strings from a database will be responsible to check the validity of the encoding. [`String.chunk/2`](string#chunk/2) can be used for breaking a string into valid and invalid parts. Patterns --------- Many functions in this module work with patterns. For example, [`String.split/2`](string#split/2) can split a string into multiple strings given a pattern. This pattern can be a string, a list of strings or a compiled pattern: ``` iex> String.split("foo bar", " ") ["foo", "bar"] iex> String.split("foo bar!", [" ", "!"]) ["foo", "bar", ""] iex> pattern = :binary.compile_pattern([" ", "!"]) iex> String.split("foo bar!", pattern) ["foo", "bar", ""] ``` The compiled pattern is useful when the same match will be done over and over again. Note though that the compiled pattern cannot be stored in a module attribute as the pattern is generated at runtime and does not survive compile time. Summary ======== Types ------ [codepoint()](#t:codepoint/0) A UTF-8 code point. It may be one or more bytes. 
[grapheme()](#t:grapheme/0) Multiple code points that may be perceived as a single character by readers

[pattern()](#t:pattern/0) Pattern used in functions like [`replace/3`](#replace/3) and [`split/2`](#split/2)

[t()](#t:t/0) A UTF-8 encoded binary.

Functions
----------

[at(string, position)](#at/2) Returns the grapheme at the `position` of the given UTF-8 `string`. If `position` is greater than `string` length, then it returns `nil`.

[bag\_distance(string1, string2)](#bag_distance/2) Computes the bag distance between two strings.

[capitalize(string, mode \\ :default)](#capitalize/2) Converts the first character in the given string to uppercase and the remainder to lowercase according to `mode`.

[chunk(string, trait)](#chunk/2) Splits the string into chunks of characters that share a common trait.

[codepoints(string)](#codepoints/1) Returns all code points in the string.

[contains?(string, contents)](#contains?/2) Checks if `string` contains any of the given `contents`.

[downcase(string, mode \\ :default)](#downcase/2) Converts all characters in the given string to lowercase according to `mode`.

[duplicate(subject, n)](#duplicate/2) Returns a string `subject` duplicated `n` times.

[ends\_with?(string, suffix)](#ends_with?/2) Returns `true` if `string` ends with any of the suffixes given.

[equivalent?(string1, string2)](#equivalent?/2) Returns `true` if `string1` is canonically equivalent to `string2`.

[first(string)](#first/1) Returns the first grapheme from a UTF-8 string, `nil` if the string is empty.

[graphemes(string)](#graphemes/1) Returns Unicode graphemes in the string as per Extended Grapheme Cluster algorithm.

[jaro\_distance(string1, string2)](#jaro_distance/2) Computes the Jaro distance (similarity) between two strings.

[last(string)](#last/1) Returns the last grapheme from a UTF-8 string, `nil` if the string is empty.

[length(string)](#length/1) Returns the number of Unicode graphemes in a UTF-8 string.

[match?(string, regex)](#match?/2) Checks if `string` matches the given regular expression.

[myers\_difference(string1, string2)](#myers_difference/2) Returns a keyword list that represents an edit script.

[next\_codepoint(string)](#next_codepoint/1) Returns the next code point in a string.

[next\_grapheme(binary)](#next_grapheme/1) Returns the next grapheme in a string.

[next\_grapheme\_size(string)](#next_grapheme_size/1) Returns the size of the next grapheme.

[normalize(string, form)](#normalize/2) deprecated Converts all characters in `string` to Unicode normalization form identified by `form`.

[pad\_leading(string, count, padding \\ [" "])](#pad_leading/3) Returns a new string padded with a leading filler which is made of elements from the `padding`.

[pad\_trailing(string, count, padding \\ [" "])](#pad_trailing/3) Returns a new string padded with a trailing filler which is made of elements from the `padding`.

[printable?(string, character\_limit \\ :infinity)](#printable?/2) Checks if a string contains only printable characters up to `character_limit`.

[replace(subject, pattern, replacement, options \\ [])](#replace/4) Returns a new string created by replacing occurrences of `pattern` in `subject` with `replacement`.

[replace\_leading(string, match, replacement)](#replace_leading/3) Replaces all leading occurrences of `match` by `replacement` in `string`.

[replace\_prefix(string, match, replacement)](#replace_prefix/3) Replaces prefix in `string` by `replacement` if it matches `match`.
[replace\_suffix(string, match, replacement)](#replace_suffix/3) Replaces suffix in `string` by `replacement` if it matches `match`.

[replace\_trailing(string, match, replacement)](#replace_trailing/3) Replaces all trailing occurrences of `match` by `replacement` in `string`.

[reverse(string)](#reverse/1) Reverses the graphemes in given string.

[slice(string, range)](#slice/2) Returns a substring from the offset given by the start of the range to the offset given by the end of the range.

[slice(string, start, len)](#slice/3) Returns a substring starting at the offset `start`, and of length `len`.

[split(binary)](#split/1) Divides a string into substrings at each Unicode whitespace occurrence with leading and trailing whitespace ignored. Groups of whitespace are treated as a single occurrence. Divisions do not occur on non-breaking whitespace.

[split(string, pattern, options \\ [])](#split/3) Divides a string into parts based on a pattern.

[split\_at(string, position)](#split_at/2) Splits a string into two at the specified offset. When the offset given is negative, location is counted from the end of the string.

[splitter(string, pattern, options \\ [])](#splitter/3) Returns an enumerable that splits a string on demand.

[starts\_with?(string, prefix)](#starts_with?/2) Returns `true` if `string` starts with any of the prefixes given.

[to\_atom(string)](#to_atom/1) Converts a string to an atom.

[to\_charlist(string)](#to_charlist/1) Converts a string into a charlist.

[to\_existing\_atom(string)](#to_existing_atom/1) Converts a string to an existing atom.

[to\_float(string)](#to_float/1) Returns a float whose text representation is `string`.

[to\_integer(string)](#to_integer/1) Returns an integer whose text representation is `string`.

[to\_integer(string, base)](#to_integer/2) Returns an integer whose text representation is `string` in base `base`.

[trim(string)](#trim/1) Returns a string where all leading and trailing Unicode whitespaces have been removed.

[trim(string, to\_trim)](#trim/2) Returns a string where all leading and trailing `to_trim` characters have been removed.

[trim\_leading(string)](#trim_leading/1) Returns a string where all leading Unicode whitespaces have been removed.

[trim\_leading(string, to\_trim)](#trim_leading/2) Returns a string where all leading `to_trim` characters have been removed.

[trim\_trailing(string)](#trim_trailing/1) Returns a string where all trailing Unicode whitespaces have been removed.

[trim\_trailing(string, to\_trim)](#trim_trailing/2) Returns a string where all trailing `to_trim` characters have been removed.

[upcase(string, mode \\ :default)](#upcase/2) Converts all characters in the given string to uppercase according to `mode`.

[valid?(string)](#valid?/1) Checks whether `string` contains only valid characters.

Types
======

### codepoint()

#### Specs

```
codepoint() :: t()
```

A UTF-8 code point. It may be one or more bytes.

### grapheme()

#### Specs

```
grapheme() :: t()
```

Multiple code points that may be perceived as a single character by readers

### pattern()

#### Specs

```
pattern() :: t() | [t()] | :binary.cp()
```

Pattern used in functions like [`replace/3`](#replace/3) and [`split/2`](#split/2)

### t()

#### Specs

```
t() :: binary()
```

A UTF-8 encoded binary. The types `String.t()` and `binary()` are equivalent to analysis tools. However, for those reading the documentation, `String.t()` implies it is a UTF-8 encoded binary.
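In practice, `String.t()` is the type to use in your own typespecs whenever a UTF-8 string is expected. A minimal sketch (the `Greeter` module is hypothetical, not part of the standard library):

```
defmodule Greeter do
  @spec hello(String.t()) :: String.t()
  def hello(name), do: "Hello, " <> name
end
```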
Functions ========== ### at(string, position) #### Specs ``` at(t(), integer()) :: grapheme() | nil ``` Returns the grapheme at the `position` of the given UTF-8 `string`. If `position` is greater than `string` length, then it returns `nil`. #### Examples ``` iex> String.at("elixir", 0) "e" iex> String.at("elixir", 1) "l" iex> String.at("elixir", 10) nil iex> String.at("elixir", -1) "r" iex> String.at("elixir", -10) nil ``` ### bag\_distance(string1, string2) #### Specs ``` bag_distance(t(), t()) :: float() ``` Computes the bag distance between two strings. Returns a float value between 0 and 1 representing the bag distance between `string1` and `string2`. The bag distance is meant to be an efficient approximation of the distance between two strings to quickly rule out strings that are largely different. The algorithm is outlined in the "String Matching with Metric Trees Using an Approximate Distance" paper by Ilaria Bartolini, Paolo Ciaccia, and Marco Patella. #### Examples ``` iex> String.bag_distance("abc", "") 0.0 iex> String.bag_distance("abcd", "a") 0.25 iex> String.bag_distance("abcd", "ab") 0.5 iex> String.bag_distance("abcd", "abc") 0.75 iex> String.bag_distance("abcd", "abcd") 1.0 ``` ### capitalize(string, mode \\ :default) #### Specs ``` capitalize(t(), :default | :ascii | :greek) :: t() ``` Converts the first character in the given string to uppercase and the remainder to lowercase according to `mode`. `mode` may be `:default`, `:ascii` or `:greek`. The `:default` mode considers all non-conditional transformations outlined in the Unicode standard. `:ascii` lowercases only the letters A to Z. `:greek` includes the context sensitive mappings found in Greek. #### Examples ``` iex> String.capitalize("abcd") "Abcd" iex> String.capitalize("fin") "Fin" iex> String.capitalize("olá") "Olá" ``` ### chunk(string, trait) #### Specs ``` chunk(t(), :valid | :printable) :: [t()] ``` Splits the string into chunks of characters that share a common trait. The trait can be one of two options: * `:valid` - the string is split into chunks of valid and invalid character sequences * `:printable` - the string is split into chunks of printable and non-printable character sequences Returns a list of binaries each of which contains only one kind of characters. If the given string is empty, an empty list is returned. #### Examples ``` iex> String.chunk(<<?a, ?b, ?c, 0>>, :valid) ["abc\0"] iex> String.chunk(<<?a, ?b, ?c, 0, 0xFFFF::utf16>>, :valid) ["abc\0", <<0xFFFF::utf16>>] iex> String.chunk(<<?a, ?b, ?c, 0, 0x0FFFF::utf8>>, :printable) ["abc", <<0, 0x0FFFF::utf8>>] ``` ### codepoints(string) #### Specs ``` codepoints(t()) :: [codepoint()] ``` Returns all code points in the string. For details about code points and graphemes, see the [`String`](#content) module documentation. #### Examples ``` iex> String.codepoints("olá") ["o", "l", "á"] iex> String.codepoints("оптими зации") ["о", "п", "т", "и", "м", "и", " ", "з", "а", "ц", "и", "и"] iex> String.codepoints("ἅἪῼ") ["ἅ", "Ἢ", "ῼ"] iex> String.codepoints("é") ["é"] iex> String.codepoints("é") ["e", "́"] ``` ### contains?(string, contents) #### Specs ``` contains?(t(), pattern()) :: boolean() ``` Checks if `string` contains any of the given `contents`. `contents` can be either a string, a list of strings, or a compiled pattern. 
#### Examples

```
iex> String.contains?("elixir of life", "of")
true
iex> String.contains?("elixir of life", ["life", "death"])
true
iex> String.contains?("elixir of life", ["death", "mercury"])
false
```

The argument can also be a compiled pattern:

```
iex> pattern = :binary.compile_pattern(["life", "death"])
iex> String.contains?("elixir of life", pattern)
true
```

An empty string will always match:

```
iex> String.contains?("elixir of life", "")
true
iex> String.contains?("elixir of life", ["", "other"])
true
```

Be aware that this function can match within or across grapheme boundaries. For example, take the grapheme "é" which is made of the characters "e" and the acute accent. The following returns `true`:

```
iex> String.contains?(:unicode.characters_to_nfd_binary("é"), "e")
true
```

However, if "é" is represented by the single character "e with acute" accent, then it will return `false`:

```
iex> String.contains?(:unicode.characters_to_nfc_binary("é"), "e")
false
```

### downcase(string, mode \\ :default)

#### Specs

```
downcase(t(), :default | :ascii | :greek) :: t()
```

Converts all characters in the given string to lowercase according to `mode`.

`mode` may be `:default`, `:ascii` or `:greek`. The `:default` mode considers all non-conditional transformations outlined in the Unicode standard. `:ascii` lowercases only the letters A to Z. `:greek` includes the context sensitive mappings found in Greek.

#### Examples

```
iex> String.downcase("ABCD")
"abcd"
iex> String.downcase("AB 123 XPTO")
"ab 123 xpto"
iex> String.downcase("OLÁ")
"olá"
```

The `:ascii` mode ignores Unicode characters and provides a more performant implementation when you know the string contains only ASCII characters:

```
iex> String.downcase("OLÁ", :ascii)
"olÁ"
```

And `:greek` properly handles the context sensitive sigma in Greek:

```
iex> String.downcase("ΣΣ")
"σσ"
iex> String.downcase("ΣΣ", :greek)
"σς"
```

### duplicate(subject, n)

#### Specs

```
duplicate(t(), non_neg_integer()) :: t()
```

Returns a string `subject` duplicated `n` times.

Inlined by the compiler.

#### Examples

```
iex> String.duplicate("abc", 0)
""
iex> String.duplicate("abc", 1)
"abc"
iex> String.duplicate("abc", 2)
"abcabc"
```

### ends\_with?(string, suffix)

#### Specs

```
ends_with?(t(), t() | [t()]) :: boolean()
```

Returns `true` if `string` ends with any of the suffixes given.

`suffixes` can be either a single suffix or a list of suffixes.

#### Examples

```
iex> String.ends_with?("language", "age")
true
iex> String.ends_with?("language", ["youth", "age"])
true
iex> String.ends_with?("language", ["youth", "elixir"])
false
```

An empty suffix will always match:

```
iex> String.ends_with?("language", "")
true
iex> String.ends_with?("language", ["", "other"])
true
```

### equivalent?(string1, string2)

#### Specs

```
equivalent?(t(), t()) :: boolean()
```

Returns `true` if `string1` is canonically equivalent to `string2`.

It performs Normalization Form Canonical Decomposition (NFD) on the strings before comparing them. This function is equivalent to:

```
String.normalize(string1, :nfd) == String.normalize(string2, :nfd)
```

Therefore, if you plan to compare multiple strings, multiple times in a row, you may normalize them upfront and compare them directly to avoid multiple normalization passes.
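A sketch of such upfront normalization, using Erlang's `:unicode` module (since [`normalize/2`](#normalize/2) is deprecated):

```
iex> nfd1 = :unicode.characters_to_nfd_binary("mañana")
iex> nfd2 = :unicode.characters_to_nfd_binary("man\u0303ana")
iex> nfd1 == nfd2
true
```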
#### Examples ``` iex> String.equivalent?("abc", "abc") true iex> String.equivalent?("man\u0303ana", "mañana") true iex> String.equivalent?("abc", "ABC") false iex> String.equivalent?("nø", "nó") false ``` ### first(string) #### Specs ``` first(t()) :: grapheme() | nil ``` Returns the first grapheme from a UTF-8 string, `nil` if the string is empty. #### Examples ``` iex> String.first("elixir") "e" iex> String.first("եոգլի") "ե" ``` ### graphemes(string) #### Specs ``` graphemes(t()) :: [grapheme()] ``` Returns Unicode graphemes in the string as per Extended Grapheme Cluster algorithm. The algorithm is outlined in the [Unicode Standard Annex #29, Unicode Text Segmentation](https://www.unicode.org/reports/tr29/). For details about code points and graphemes, see the [`String`](#content) module documentation. #### Examples ``` iex> String.graphemes("Ńaïve") ["Ń", "a", "ï", "v", "e"] iex> String.graphemes("\u00e9") ["é"] iex> String.graphemes("\u0065\u0301") ["é"] ``` ### jaro\_distance(string1, string2) #### Specs ``` jaro_distance(t(), t()) :: float() ``` Computes the Jaro distance (similarity) between two strings. Returns a float value between `0.0` (equates to no similarity) and `1.0` (is an exact match) representing [Jaro](https://en.wikipedia.org/wiki/Jaro-Winkler_distance) distance between `string1` and `string2`. The Jaro distance metric is designed and best suited for short strings such as person names. Elixir itself uses this function to provide the "did you mean?" functionality. For instance, when you are calling a function in a module and you have a typo in the function name, we attempt to suggest the most similar function name available, if any, based on the [`jaro_distance/2`](#jaro_distance/2) score. #### Examples ``` iex> String.jaro_distance("Dwayne", "Duane") 0.8222222222222223 iex> String.jaro_distance("even", "odd") 0.0 iex> String.jaro_distance("same", "same") 1.0 ``` ### last(string) #### Specs ``` last(t()) :: grapheme() | nil ``` Returns the last grapheme from a UTF-8 string, `nil` if the string is empty. #### Examples ``` iex> String.last("elixir") "r" iex> String.last("եոգլի") "ի" ``` ### length(string) #### Specs ``` length(t()) :: non_neg_integer() ``` Returns the number of Unicode graphemes in a UTF-8 string. #### Examples ``` iex> String.length("elixir") 6 iex> String.length("եոգլի") 5 ``` ### match?(string, regex) #### Specs ``` match?(t(), Regex.t()) :: boolean() ``` Checks if `string` matches the given regular expression. #### Examples ``` iex> String.match?("foo", ~r/foo/) true iex> String.match?("bar", ~r/foo/) false ``` ### myers\_difference(string1, string2) #### Specs ``` myers_difference(t(), t()) :: [{:eq | :ins | :del, t()}] ``` Returns a keyword list that represents an edit script. Check [`List.myers_difference/2`](list#myers_difference/2) for more information. #### Examples ``` iex> string1 = "fox hops over the dog" iex> string2 = "fox jumps over the lazy cat" iex> String.myers_difference(string1, string2) [eq: "fox ", del: "ho", ins: "jum", eq: "ps over the ", del: "dog", ins: "lazy cat"] ``` ### next\_codepoint(string) #### Specs ``` next_codepoint(t()) :: {codepoint(), t()} | nil ``` Returns the next code point in a string. The result is a tuple with the code point and the remainder of the string or `nil` in case the string reached its end. As with other functions in the [`String`](#content) module, [`next_codepoint/1`](#next_codepoint/1) works with binaries that are invalid UTF-8. 
If the string starts with a sequence of bytes that is not valid in UTF-8 encoding, the first element of the returned tuple is a binary with the first byte.

#### Examples

```
iex> String.next_codepoint("olá")
{"o", "lá"}
iex> invalid = "\x80\x80OK" # first two bytes are invalid in UTF-8
iex> {_, rest} = String.next_codepoint(invalid)
{<<128>>, <<128, 79, 75>>}
iex> String.next_codepoint(rest)
{<<128>>, "OK"}
```

#### Comparison with binary pattern matching

Binary pattern matching provides a similar way to decompose a string:

```
iex> <<codepoint::utf8, rest::binary>> = "Elixir"
"Elixir"
iex> codepoint
69
iex> rest
"lixir"
```

though not entirely equivalent because `codepoint` comes as an integer, and the pattern won't match invalid UTF-8.

Binary pattern matching, however, is simpler and more efficient, so pick the option that better suits your use case.

### next\_grapheme(binary)

#### Specs

```
next_grapheme(t()) :: {grapheme(), t()} | nil
```

Returns the next grapheme in a string.

The result is a tuple with the grapheme and the remainder of the string or `nil` in case the string reached its end.

#### Examples

```
iex> String.next_grapheme("olá")
{"o", "lá"}
```

### next\_grapheme\_size(string)

#### Specs

```
next_grapheme_size(t()) :: {pos_integer(), t()} | nil
```

Returns the size of the next grapheme.

The result is a tuple with the next grapheme size and the remainder of the string or `nil` in case the string reached its end.

#### Examples

```
iex> String.next_grapheme_size("olá")
{1, "lá"}
```

### normalize(string, form)

This function is deprecated. Use :unicode.characters\_to\_nfc\_binary/1 or :unicode.characters\_to\_nfd\_binary/1 instead.

Converts all characters in `string` to Unicode normalization form identified by `form`.

#### Forms

The supported forms are:

* `:nfd` - Normalization Form Canonical Decomposition. Characters are decomposed by canonical equivalence, and multiple combining characters are arranged in a specific order.
* `:nfc` - Normalization Form Canonical Composition. Characters are decomposed and then recomposed by canonical equivalence.

#### Examples

```
iex> String.normalize("yêṩ", :nfd)
"yêṩ"
iex> String.normalize("leña", :nfc)
"leña"
```

### pad\_leading(string, count, padding \\ [" "])

#### Specs

```
pad_leading(t(), non_neg_integer(), t() | [t()]) :: t()
```

Returns a new string padded with a leading filler which is made of elements from the `padding`.

Passing a list of strings as `padding` will take one element of the list for every missing entry.
If the list is shorter than the number of inserts, the filling will start again from the beginning of the list. Passing a string `padding` is equivalent to passing the list of graphemes in it. If no `padding` is given, it defaults to whitespace.

When `count` is less than or equal to the length of `string`, given `string` is returned.

Raises [`ArgumentError`](argumenterror) if the given `padding` contains a non-string element.

#### Examples

```
iex> String.pad_leading("abc", 5)
"  abc"
iex> String.pad_leading("abc", 4, "12")
"1abc"
iex> String.pad_leading("abc", 6, "12")
"121abc"
iex> String.pad_leading("abc", 5, ["1", "23"])
"123abc"
```

### pad\_trailing(string, count, padding \\ [" "])

#### Specs

```
pad_trailing(t(), non_neg_integer(), t() | [t()]) :: t()
```

Returns a new string padded with a trailing filler which is made of elements from the `padding`.

Passing a list of strings as `padding` will take one element of the list for every missing entry. If the list is shorter than the number of inserts, the filling will start again from the beginning of the list. Passing a string `padding` is equivalent to passing the list of graphemes in it. If no `padding` is given, it defaults to whitespace.

When `count` is less than or equal to the length of `string`, given `string` is returned.

Raises [`ArgumentError`](argumenterror) if the given `padding` contains a non-string element.

#### Examples

```
iex> String.pad_trailing("abc", 5)
"abc  "
iex> String.pad_trailing("abc", 4, "12")
"abc1"
iex> String.pad_trailing("abc", 6, "12")
"abc121"
iex> String.pad_trailing("abc", 5, ["1", "23"])
"abc123"
```

### printable?(string, character\_limit \\ :infinity)

#### Specs

```
printable?(t(), 0) :: true
```

```
printable?(t(), pos_integer() | :infinity) :: boolean()
```

Checks if a string contains only printable characters up to `character_limit`.

Takes an optional `character_limit` as a second argument. If `character_limit` is `0`, this function will return `true`.

#### Examples

```
iex> String.printable?("abc")
true
iex> String.printable?("abc" <> <<0>>)
false
iex> String.printable?("abc" <> <<0>>, 2)
true
iex> String.printable?("abc" <> <<0>>, 0)
true
```

### replace(subject, pattern, replacement, options \\ [])

#### Specs

```
replace(t(), pattern() | Regex.t(), t() | (t() -> t() | iodata()), keyword()) :: t()
```

Returns a new string created by replacing occurrences of `pattern` in `subject` with `replacement`.

The `subject` is always a string.

The `pattern` may be a string, a regular expression, or a compiled pattern.

The `replacement` may be a string or a function that receives the matched pattern and must return the replacement as a string or iodata.

By default it replaces all occurrences but this behaviour can be controlled through the `:global` option; see the "Options" section below.

#### Options

* `:global` - (boolean) if `true`, all occurrences of `pattern` are replaced with `replacement`, otherwise only the first occurrence is replaced. Defaults to `true`

#### Examples

```
iex> String.replace("a,b,c", ",", "-")
"a-b-c"
iex> String.replace("a,b,c", ",", "-", global: false)
"a-b,c"
```

The pattern may also be a list of strings and the replacement may also be a function that receives the matched patterns:

```
iex> String.replace("a,b,c", ["a", "c"], fn <<char>> -> <<char + 1>> end)
"b,b,d"
```

When the pattern is a regular expression, one can give `\N` or `\g{N}` in the `replacement` string to access a specific capture in the regular expression:

```
iex> String.replace("a,b,c", ~r/,(.)/, ",\\1\\g{1}")
"a,bb,cc"
```

Notice we had to escape the backslash escape character (i.e., we used `\\N` instead of just `\N` to escape the backslash; same thing for `\\g{N}`). By giving `\0`, one can inject the whole matched pattern in the replacement string.

A compiled pattern can also be given:

```
iex> pattern = :binary.compile_pattern(",")
iex> String.replace("a,b,c", pattern, "[]")
"a[]b[]c"
```

When an empty string is provided as a `pattern`, the function will treat it as an implicit empty string between each grapheme and the string will be interspersed. If an empty string is provided as `replacement` the `subject` will be returned:

```
iex> String.replace("ELIXIR", "", ".")
".E.L.I.X.I.R."
iex> String.replace("ELIXIR", "", "")
"ELIXIR"
```

### replace\_leading(string, match, replacement)

#### Specs

```
replace_leading(t(), t(), t()) :: t()
```

Replaces all leading occurrences of `match` by `replacement` in `string`.
Returns the string untouched if there are no occurrences. If `match` is `""`, this function raises an [`ArgumentError`](argumenterror) exception: this happens because this function replaces **all** the occurrences of `match` at the beginning of `string`, and it's impossible to replace "multiple" occurrences of `""`. #### Examples ``` iex> String.replace_leading("hello world", "hello ", "") "world" iex> String.replace_leading("hello hello world", "hello ", "") "world" iex> String.replace_leading("hello world", "hello ", "ola ") "ola world" iex> String.replace_leading("hello hello world", "hello ", "ola ") "ola ola world" ``` ### replace\_prefix(string, match, replacement) #### Specs ``` replace_prefix(t(), t(), t()) :: t() ``` Replaces prefix in `string` by `replacement` if it matches `match`. Returns the string untouched if there is no match. If `match` is an empty string (`""`), `replacement` is just prepended to `string`. #### Examples ``` iex> String.replace_prefix("world", "hello ", "") "world" iex> String.replace_prefix("hello world", "hello ", "") "world" iex> String.replace_prefix("hello hello world", "hello ", "") "hello world" iex> String.replace_prefix("world", "hello ", "ola ") "world" iex> String.replace_prefix("hello world", "hello ", "ola ") "ola world" iex> String.replace_prefix("hello hello world", "hello ", "ola ") "ola hello world" iex> String.replace_prefix("world", "", "hello ") "hello world" ``` ### replace\_suffix(string, match, replacement) #### Specs ``` replace_suffix(t(), t(), t()) :: t() ``` Replaces suffix in `string` by `replacement` if it matches `match`. Returns the string untouched if there is no match. If `match` is an empty string (`""`), `replacement` is just appended to `string`. #### Examples ``` iex> String.replace_suffix("hello", " world", "") "hello" iex> String.replace_suffix("hello world", " world", "") "hello" iex> String.replace_suffix("hello world world", " world", "") "hello world" iex> String.replace_suffix("hello", " world", " mundo") "hello" iex> String.replace_suffix("hello world", " world", " mundo") "hello mundo" iex> String.replace_suffix("hello world world", " world", " mundo") "hello world mundo" iex> String.replace_suffix("hello", "", " world") "hello world" ``` ### replace\_trailing(string, match, replacement) #### Specs ``` replace_trailing(t(), t(), t()) :: t() ``` Replaces all trailing occurrences of `match` by `replacement` in `string`. Returns the string untouched if there are no occurrences. If `match` is `""`, this function raises an [`ArgumentError`](argumenterror) exception: this happens because this function replaces **all** the occurrences of `match` at the end of `string`, and it's impossible to replace "multiple" occurrences of `""`. #### Examples ``` iex> String.replace_trailing("hello world", " world", "") "hello" iex> String.replace_trailing("hello world world", " world", "") "hello" iex> String.replace_trailing("hello world", " world", " mundo") "hello mundo" iex> String.replace_trailing("hello world world", " world", " mundo") "hello mundo mundo" ``` ### reverse(string) #### Specs ``` reverse(t()) :: t() ``` Reverses the graphemes in given string. 
#### Examples

```
iex> String.reverse("abcd")
"dcba"
iex> String.reverse("hello world")
"dlrow olleh"
iex> String.reverse("hello ∂og")
"go∂ olleh"
```

Keep in mind reversing the same string twice does not necessarily yield the original string:

```
iex> "̀e"
"̀e"
iex> String.reverse("̀e")
"è"
iex> String.reverse(String.reverse("̀e"))
"è"
```

In the first example the accent is before the vowel, so it is considered two graphemes. However, when you reverse it once, you have the vowel followed by the accent, which becomes one grapheme. Reversing it again will keep it as one single grapheme.

### slice(string, range)

#### Specs

```
slice(t(), Range.t()) :: t()
```

Returns a substring from the offset given by the start of the range to the offset given by the end of the range.

If the start of the range is not a valid offset for the given string or if the range is in reverse order, returns `""`.

If the start or end of the range is negative, the whole string is traversed first in order to convert the negative indices into positive ones.

Remember this function works with Unicode graphemes and considers the slices to represent grapheme offsets. If you want to split on raw bytes, check [`Kernel.binary_part/3`](kernel#binary_part/3) instead.

#### Examples

```
iex> String.slice("elixir", 1..3)
"lix"
iex> String.slice("elixir", 1..10)
"lixir"
iex> String.slice("elixir", 10..3)
""
iex> String.slice("elixir", -4..-1)
"ixir"
iex> String.slice("elixir", 2..-1)
"ixir"
iex> String.slice("elixir", -4..6)
"ixir"
iex> String.slice("elixir", -1..-4)
""
iex> String.slice("elixir", -10..-7)
""
iex> String.slice("a", 0..1500)
"a"
iex> String.slice("a", 1..1500)
""
```

### slice(string, start, len)

#### Specs

```
slice(t(), integer(), non_neg_integer()) :: t()
```

Returns a substring starting at the offset `start`, and of length `len`.

If the offset is greater than string length, then it returns `""`.

Remember this function works with Unicode graphemes and considers the slices to represent grapheme offsets. If you want to split on raw bytes, check [`Kernel.binary_part/3`](kernel#binary_part/3) instead.

#### Examples

```
iex> String.slice("elixir", 1, 3)
"lix"
iex> String.slice("elixir", 1, 10)
"lixir"
iex> String.slice("elixir", 10, 3)
""
iex> String.slice("elixir", -4, 4)
"ixir"
iex> String.slice("elixir", -10, 3)
""
iex> String.slice("a", 0, 1500)
"a"
iex> String.slice("a", 1, 1500)
""
iex> String.slice("a", 2, 1500)
""
```

### split(binary)

#### Specs

```
split(t()) :: [t()]
```

Divides a string into substrings at each Unicode whitespace occurrence with leading and trailing whitespace ignored. Groups of whitespace are treated as a single occurrence. Divisions do not occur on non-breaking whitespace.

#### Examples

```
iex> String.split("foo bar")
["foo", "bar"]
iex> String.split("foo" <> <<194, 133>> <> "bar")
["foo", "bar"]
iex> String.split(" foo bar ")
["foo", "bar"]
iex> String.split("no\u00a0break")
["no\u00a0break"]
```

### split(string, pattern, options \\ [])

#### Specs

```
split(t(), pattern() | Regex.t(), keyword()) :: [t()]
```

Divides a string into parts based on a pattern.

Returns a list of these parts.

The pattern can be a string, a list of strings, a regular expression, or a compiled pattern.

The string is split into as many parts as possible by default, but can be controlled via the `:parts` option.

Empty strings are only removed from the result if the `:trim` option is set to `true`.

When the pattern used is a regular expression, the string is split using [`Regex.split/3`](regex#split/3).
#### Options * `:parts` (positive integer or `:infinity`) - the string is split into at most as many parts as this option specifies. If `:infinity`, the string will be split into all possible parts. Defaults to `:infinity`. * `:trim` (boolean) - if `true`, empty strings are removed from the resulting list. This function also accepts all options accepted by [`Regex.split/3`](regex#split/3) if `pattern` is a regular expression. #### Examples Splitting with a string pattern: ``` iex> String.split("a,b,c", ",") ["a", "b", "c"] iex> String.split("a,b,c", ",", parts: 2) ["a", "b,c"] iex> String.split(" a b c ", " ", trim: true) ["a", "b", "c"] ``` A list of patterns: ``` iex> String.split("1,2 3,4", [" ", ","]) ["1", "2", "3", "4"] ``` A regular expression: ``` iex> String.split("a,b,c", ~r{,}) ["a", "b", "c"] iex> String.split("a,b,c", ~r{,}, parts: 2) ["a", "b,c"] iex> String.split(" a b c ", ~r{\s}, trim: true) ["a", "b", "c"] iex> String.split("abc", ~r{b}, include_captures: true) ["a", "b", "c"] ``` A compiled pattern: ``` iex> pattern = :binary.compile_pattern([" ", ","]) iex> String.split("1,2 3,4", pattern) ["1", "2", "3", "4"] ``` Splitting on empty string returns graphemes: ``` iex> String.split("abc", "") ["", "a", "b", "c", ""] iex> String.split("abc", "", trim: true) ["a", "b", "c"] iex> String.split("abc", "", parts: 1) ["abc"] iex> String.split("abc", "", parts: 3) ["", "a", "bc"] ``` Be aware that this function can split within or across grapheme boundaries. For example, take the grapheme "é" which is made of the characters "e" and the acute accent. The following will split the string into two parts: ``` iex> String.split(String.normalize("é", :nfd), "e") ["", "́"] ``` However, if "é" is represented by the single character "e with acute" accent, then it will split the string into just one part: ``` iex> String.split(String.normalize("é", :nfc), "e") ["é"] ``` ### split\_at(string, position) #### Specs ``` split_at(t(), integer()) :: {t(), t()} ``` Splits a string into two at the specified offset. When the offset given is negative, location is counted from the end of the string. The offset is capped to the length of the string. Returns a tuple with two elements. Note: keep in mind this function splits on graphemes and for such it has to linearly traverse the string. If you want to split a string or a binary based on the number of bytes, use [`Kernel.binary_part/3`](kernel#binary_part/3) instead. #### Examples ``` iex> String.split_at("sweetelixir", 5) {"sweet", "elixir"} iex> String.split_at("sweetelixir", -6) {"sweet", "elixir"} iex> String.split_at("abc", 0) {"", "abc"} iex> String.split_at("abc", 1000) {"abc", ""} iex> String.split_at("abc", -1000) {"", "abc"} ``` ### splitter(string, pattern, options \\ []) #### Specs ``` splitter(t(), pattern(), keyword()) :: Enumerable.t() ``` Returns an enumerable that splits a string on demand. This is in contrast to [`split/3`](#split/3) which splits the entire string upfront. This function does not support regular expressions by design. When using regular expressions, it is often more efficient to have the regular expressions traverse the string at once than in parts, like this function does. 
#### Options

* `:trim` - when `true`, does not emit empty patterns

#### Examples

```
iex> String.splitter("1,2 3,4 5,6 7,8,...,99999", [" ", ","]) |> Enum.take(4)
["1", "2", "3", "4"]
iex> String.splitter("abcd", "") |> Enum.take(10)
["", "a", "b", "c", "d", ""]
iex> String.splitter("abcd", "", trim: true) |> Enum.take(10)
["a", "b", "c", "d"]
```

A compiled pattern can also be given:

```
iex> pattern = :binary.compile_pattern([" ", ","])
iex> String.splitter("1,2 3,4 5,6 7,8,...,99999", pattern) |> Enum.take(4)
["1", "2", "3", "4"]
```

### starts\_with?(string, prefix)

#### Specs

```
starts_with?(t(), pattern()) :: boolean()
```

Returns `true` if `string` starts with any of the prefixes given.

`prefix` can be either a string, a list of strings, or a compiled pattern.

#### Examples

```
iex> String.starts_with?("elixir", "eli")
true
iex> String.starts_with?("elixir", ["erlang", "elixir"])
true
iex> String.starts_with?("elixir", ["erlang", "ruby"])
false
```

A compiled pattern can also be given:

```
iex> pattern = :binary.compile_pattern(["erlang", "elixir"])
iex> String.starts_with?("elixir", pattern)
true
```

An empty string will always match:

```
iex> String.starts_with?("elixir", "")
true
iex> String.starts_with?("elixir", ["", "other"])
true
```

### to\_atom(string)

#### Specs

```
to_atom(String.t()) :: atom()
```

Converts a string to an atom.

Warning: this function creates atoms dynamically and atoms are not garbage-collected. Therefore, `string` should not be an untrusted value, such as input received from a socket or during a web request. Consider using [`to_existing_atom/1`](#to_existing_atom/1) instead.

By default, the maximum number of atoms is `1_048_576`. This limit can be raised or lowered using the VM option `+t`.

The maximum atom size is 255 Unicode code points.

Inlined by the compiler.

#### Examples

```
iex> String.to_atom("my_atom")
:my_atom
```

### to\_charlist(string)

#### Specs

```
to_charlist(t()) :: charlist()
```

Converts a string into a charlist.

Specifically, this function takes a UTF-8 encoded binary and returns a list of its integer code points. It is similar to [`codepoints/1`](#codepoints/1) except that the latter returns a list of code points as strings.

In case you need to work with bytes, take a look at the [`:binary` module](http://www.erlang.org/doc/man/binary.html).

#### Examples

```
iex> String.to_charlist("æß")
'æß'
```

### to\_existing\_atom(string)

#### Specs

```
to_existing_atom(String.t()) :: atom()
```

Converts a string to an existing atom.

The maximum atom size is 255 Unicode code points.

Inlined by the compiler.

#### Examples

```
iex> _ = :my_atom
iex> String.to_existing_atom("my_atom")
:my_atom

iex> String.to_existing_atom("this_atom_will_never_exist")
** (ArgumentError) argument error
```

### to\_float(string)

#### Specs

```
to_float(String.t()) :: float()
```

Returns a float whose text representation is `string`.

`string` must be the string representation of a float including a decimal point. To parse a string without a decimal point as a float, [`Float.parse/1`](float#parse/1) should be used. Otherwise, an [`ArgumentError`](argumenterror) will be raised.

Inlined by the compiler.

#### Examples

```
iex> String.to_float("2.2017764e+0")
2.2017764

iex> String.to_float("3.0")
3.0

String.to_float("3")
#=> ** (ArgumentError) argument error
```

### to\_integer(string)

#### Specs

```
to_integer(String.t()) :: integer()
```

Returns an integer whose text representation is `string`.

`string` must be the string representation of an integer.
Otherwise, an [`ArgumentError`](argumenterror) will be raised. If you want to parse a string that may contain an ill-formatted integer, use [`Integer.parse/1`](integer#parse/1).

Inlined by the compiler.

#### Examples

```
iex> String.to_integer("123")
123
```

Passing a string that does not represent an integer leads to an error:

```
String.to_integer("invalid data")
#=> ** (ArgumentError) argument error
```

### to\_integer(string, base)

#### Specs

```
to_integer(String.t(), 2..36) :: integer()
```

Returns an integer whose text representation is `string` in base `base`.

Inlined by the compiler.

#### Examples

```
iex> String.to_integer("3FF", 16)
1023
```

### trim(string)

#### Specs

```
trim(t()) :: t()
```

Returns a string where all leading and trailing Unicode whitespaces have been removed.

#### Examples

```
iex> String.trim("\n abc\n ")
"abc"
```

### trim(string, to\_trim)

#### Specs

```
trim(t(), t()) :: t()
```

Returns a string where all leading and trailing `to_trim` characters have been removed.

#### Examples

```
iex> String.trim("a abc a", "a")
" abc "
```

### trim\_leading(string)

#### Specs

```
trim_leading(t()) :: t()
```

Returns a string where all leading Unicode whitespaces have been removed.

#### Examples

```
iex> String.trim_leading("\n abc ")
"abc "
```

### trim\_leading(string, to\_trim)

#### Specs

```
trim_leading(t(), t()) :: t()
```

Returns a string where all leading `to_trim` characters have been removed.

#### Examples

```
iex> String.trim_leading("__ abc _", "_")
" abc _"

iex> String.trim_leading("1 abc", "11")
"1 abc"
```

### trim\_trailing(string)

#### Specs

```
trim_trailing(t()) :: t()
```

Returns a string where all trailing Unicode whitespaces have been removed.

#### Examples

```
iex> String.trim_trailing(" abc\n ")
" abc"
```

### trim\_trailing(string, to\_trim)

#### Specs

```
trim_trailing(t(), t()) :: t()
```

Returns a string where all trailing `to_trim` characters have been removed.

#### Examples

```
iex> String.trim_trailing("_ abc __", "_")
"_ abc "

iex> String.trim_trailing("abc 1", "11")
"abc 1"
```

### upcase(string, mode \\ :default)

#### Specs

```
upcase(t(), :default | :ascii | :greek) :: t()
```

Converts all characters in the given string to uppercase according to `mode`.

`mode` may be `:default`, `:ascii` or `:greek`. The `:default` mode considers all non-conditional transformations outlined in the Unicode standard. `:ascii` uppercases only the letters a to z. `:greek` includes the context sensitive mappings found in Greek.

#### Examples

```
iex> String.upcase("abcd")
"ABCD"

iex> String.upcase("ab 123 xpto")
"AB 123 XPTO"

iex> String.upcase("olá")
"OLÁ"
```

The `:ascii` mode ignores Unicode characters and provides a more performant implementation when you know the string contains only ASCII characters:

```
iex> String.upcase("olá", :ascii)
"OLá"
```

### valid?(string)

#### Specs

```
valid?(t()) :: boolean()
```

Checks whether `string` contains only valid characters.

#### Examples

```
iex> String.valid?("a")
true

iex> String.valid?("ø")
true

iex> String.valid?(<<0xFFFF::16>>)
false

iex> String.valid?(<<0xEF, 0xB7, 0x90>>)
true

iex> String.valid?("asd" <> <<0xFFFF::16>>)
false
```
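Since `valid?/1` checks an entire binary, one practical use (shown here as an illustrative sketch, not part of the official docs) is screening untrusted input before treating it as text:

```
# Hypothetical inputs: <<0xFF, 0xFE>> is not valid UTF-8, so it is dropped.
iex> inputs = ["héllo", <<0xFF, 0xFE>>, "ok"]
iex> Enum.filter(inputs, &String.valid?/1)
["héllo", "ok"]
```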
elixir Bitwise Bitwise ======== A set of macros that perform calculations on bits. The macros in this module come in two flavors: named or operators. For example: ``` iex> use Bitwise iex> bnot(1) # named -2 iex> 1 &&& 1 # operator 1 ``` If you prefer to use only operators or skip them, you can pass the following options: * `:only_operators` - includes only operators * `:skip_operators` - skips operators For example: ``` iex> use Bitwise, only_operators: true iex> 1 &&& 1 1 ``` When invoked with no options, `use Bitwise` is equivalent to `import Bitwise`. All bitwise macros can be used in guards: ``` iex> use Bitwise iex> odd? = fn ...> int when band(int, 1) == 1 -> true ...> _ -> false ...> end iex> odd?.(1) true ``` Summary ======== Guards ------- [left &&& right](#&&&/2) Infix operator; calculates the bitwise AND of its arguments. [left <<< right](#%3C%3C%3C/2) Infix operator; calculates the result of an arithmetic left bitshift. [left >>> right](#%3E%3E%3E/2) Infix operator; calculates the result of an arithmetic right bitshift. [left ^^^ right](#%5E%5E%5E/2) Infix operator; calculates the bitwise XOR of its arguments. [band(left, right)](#band/2) Calculates the bitwise AND of its arguments. [bnot(expr)](#bnot/1) Calculates the bitwise NOT of its argument. [bor(left, right)](#bor/2) Calculates the bitwise OR of its arguments. [bsl(left, right)](#bsl/2) Calculates the result of an arithmetic left bitshift. [bsr(left, right)](#bsr/2) Calculates the result of an arithmetic right bitshift. [bxor(left, right)](#bxor/2) Calculates the bitwise XOR of its arguments. [left ||| right](#%7C%7C%7C/2) Infix operator; calculates the bitwise OR of its arguments. [~~~expr](#~~~/1) Prefix (unary) operator; calculates the bitwise NOT of its argument. Guards ======= ### left &&& right Infix operator; calculates the bitwise AND of its arguments. ``` iex> 9 &&& 3 1 ``` ### left <<< right Infix operator; calculates the result of an arithmetic left bitshift. ``` iex> 1 <<< 2 4 iex> 1 <<< -2 0 iex> -1 <<< 2 -4 iex> -1 <<< -2 -1 ``` ### left >>> right Infix operator; calculates the result of an arithmetic right bitshift. ``` iex> 1 >>> 2 0 iex> 1 >>> -2 4 iex> -1 >>> 2 -1 iex> -1 >>> -2 -4 ``` ### left ^^^ right Infix operator; calculates the bitwise XOR of its arguments. ``` iex> 9 ^^^ 3 10 ``` ### band(left, right) Calculates the bitwise AND of its arguments. ``` iex> band(9, 3) 1 ``` ### bnot(expr) Calculates the bitwise NOT of its argument. ``` iex> bnot(2) -3 iex> bnot(2) &&& 3 1 ``` ### bor(left, right) Calculates the bitwise OR of its arguments. ``` iex> bor(9, 3) 11 ``` ### bsl(left, right) Calculates the result of an arithmetic left bitshift. ``` iex> bsl(1, 2) 4 iex> bsl(1, -2) 0 iex> bsl(-1, 2) -4 iex> bsl(-1, -2) -1 ``` ### bsr(left, right) Calculates the result of an arithmetic right bitshift. ``` iex> bsr(1, 2) 0 iex> bsr(1, -2) 4 iex> bsr(-1, 2) -1 iex> bsr(-1, -2) -4 ``` ### bxor(left, right) Calculates the bitwise XOR of its arguments. ``` iex> bxor(9, 3) 10 ``` ### left ||| right Infix operator; calculates the bitwise OR of its arguments. ``` iex> 9 ||| 3 11 ``` ### ~~~expr Prefix (unary) operator; calculates the bitwise NOT of its argument. ``` iex> ~~~2 -3 iex> ~~~2 &&& 3 1 ``` elixir Inspect.Algebra Inspect.Algebra ================ A set of functions for creating and manipulating algebra documents. 
This module implements the functionality described in ["Strictly Pretty" (2000) by Christian Lindig](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.2200) with small additions, like support for binary nodes and a break mode that maximises use of horizontal space. ``` iex> Inspect.Algebra.empty() :doc_nil iex> "foo" "foo" ``` With the functions in this module, we can concatenate different elements together and render them: ``` iex> doc = Inspect.Algebra.concat(Inspect.Algebra.empty(), "foo") iex> Inspect.Algebra.format(doc, 80) ["foo"] ``` The functions [`nest/2`](#nest/2), [`space/2`](#space/2) and [`line/2`](#line/2) help you put the document together into a rigid structure. However, the document algebra gets interesting when using functions like [`glue/3`](#glue/3) and [`group/1`](#group/1). A glue inserts a break between two documents. A group indicates a document that must fit the current line, otherwise breaks are rendered as new lines. Let's glue two docs together with a break, group it and then render it: ``` iex> doc = Inspect.Algebra.glue("a", " ", "b") iex> doc = Inspect.Algebra.group(doc) iex> Inspect.Algebra.format(doc, 80) ["a", " ", "b"] ``` Notice the break was represented as is, because we haven't reached a line limit. Once we do, it is replaced by a newline: ``` iex> doc = Inspect.Algebra.glue(String.duplicate("a", 20), " ", "b") iex> doc = Inspect.Algebra.group(doc) iex> Inspect.Algebra.format(doc, 10) ["aaaaaaaaaaaaaaaaaaaa", "\n", "b"] ``` This module uses the byte size to compute how much space there is left. If your document contains strings, then those need to be wrapped in [`string/1`](#string/1), which then relies on [`String.length/1`](string#length/1) to precompute the document size. Finally, this module also contains Elixir related functions, a bit tied to Elixir formatting, such as [`to_doc/2`](#to_doc/2). Implementation details ----------------------- The implementation of [`Inspect.Algebra`](#content) is based on the Strictly Pretty paper by [Lindig](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.2200) which builds on top of previous pretty printing algorithms but is tailored to strict languages, such as Elixir. The core idea in the paper is the use of explicit document groups which are rendered as flat (breaks as spaces) or as break (breaks as newlines). This implementation provides two types of breaks: `:strict` and `:flex`. When a group does not fit, all strict breaks are treated as newlines. Flex breaks however are re-evaluated on every occurrence and may still be rendered flat. See [`break/1`](#break/1) and [`flex_break/1`](#flex_break/1) for more information. This implementation also adds [`force_unfit/1`](#force_unfit/1) and [`next_break_fits/2`](#next_break_fits/2) which give more control over the document fitting. Summary ======== Types ------ [t()](#t:t/0) Guards ------- [is\_doc(doc)](#is_doc/1) Functions ---------- [break(string \\ " ")](#break/1) Returns a break document based on the given `string`. [collapse\_lines(max)](#collapse_lines/1) Collapse any new lines and whitespace following this node, emitting up to `max` new lines. [color(doc, color\_key, opts)](#color/3) Colors a document if the `color_key` has a color in the options. [concat(docs)](#concat/1) Concatenates a list of documents returning a new document. [concat(doc1, doc2)](#concat/2) Concatenates two document entities returning a new document. 
[container\_doc(left, collection, right, inspect\_opts, fun, opts \\ [])](#container_doc/6) Wraps `collection` in `left` and `right` according to limit and contents. [empty()](#empty/0) Returns a document entity used to represent nothingness. [flex\_break(string \\ " ")](#flex_break/1) Returns a flex break document based on the given `string`. [flex\_glue(doc1, break\_string \\ " ", doc2)](#flex_glue/3) Glues two documents (`doc1` and `doc2`) inserting a [`flex_break/1`](#flex_break/1) given by `break_string` between them. [fold\_doc(docs, folder\_fun)](#fold_doc/2) Folds a list of documents into a document using the given folder function. [force\_unfit(doc)](#force_unfit/1) Forces the current group to be unfit. [format(doc, width)](#format/2) Formats a given document for a given width. [glue(doc1, break\_string \\ " ", doc2)](#glue/3) Glues two documents (`doc1` and `doc2`) inserting the given break `break_string` between them. [group(doc, mode \\ :self)](#group/2) Returns a group containing the specified document `doc`. [line()](#line/0) A mandatory linebreak. [line(doc1, doc2)](#line/2) Inserts a mandatory linebreak between two documents. [nest(doc, level, mode \\ :always)](#nest/3) Nests the given document at the given `level`. [next\_break\_fits(doc, mode \\ :enabled)](#next_break_fits/2) Considers the next break as fit. [space(doc1, doc2)](#space/2) Inserts a mandatory single space between two documents. [string(string)](#string/1) Creates a document represented by string. [to\_doc(term, opts)](#to_doc/2) Converts an Elixir term to an algebra document according to the [`Inspect`](inspect) protocol. Types ====== ### t() #### Specs ``` t() :: binary() | :doc_nil | :doc_line | doc_string() | doc_cons() | doc_nest() | doc_break() | doc_group() | doc_color() | doc_force() | doc_fits() | doc_collapse() ``` Guards ======= ### is\_doc(doc) Functions ========== ### break(string \\ " ") #### Specs ``` break(binary()) :: doc_break() ``` Returns a break document based on the given `string`. This break can be rendered as a linebreak or as the given `string`, depending on the `mode` of the chosen layout. #### Examples Let's create a document by concatenating two strings with a break between them: ``` iex> doc = Inspect.Algebra.concat(["a", Inspect.Algebra.break("\t"), "b"]) iex> Inspect.Algebra.format(doc, 80) ["a", "\t", "b"] ``` Notice the break was represented with the given string, because we didn't reach a line limit. Once we do, it is replaced by a newline: ``` iex> break = Inspect.Algebra.break("\t") iex> doc = Inspect.Algebra.concat([String.duplicate("a", 20), break, "b"]) iex> doc = Inspect.Algebra.group(doc) iex> Inspect.Algebra.format(doc, 10) ["aaaaaaaaaaaaaaaaaaaa", "\n", "b"] ``` ### collapse\_lines(max) #### Specs ``` collapse_lines(pos_integer()) :: doc_collapse() ``` Collapse any new lines and whitespace following this node, emitting up to `max` new lines. ### color(doc, color\_key, opts) #### Specs ``` color(t(), Inspect.Opts.color_key(), Inspect.Opts.t()) :: doc_color() ``` Colors a document if the `color_key` has a color in the options. ### concat(docs) #### Specs ``` concat([t()]) :: t() ``` Concatenates a list of documents returning a new document. #### Examples ``` iex> doc = Inspect.Algebra.concat(["a", "b", "c"]) iex> Inspect.Algebra.format(doc, 80) ["a", "b", "c"] ``` ### concat(doc1, doc2) #### Specs ``` concat(t(), t()) :: t() ``` Concatenates two document entities returning a new document. 
#### Examples

```
iex> doc = Inspect.Algebra.concat("hello", "world")
iex> Inspect.Algebra.format(doc, 80)
["hello", "world"]
```

### container\_doc(left, collection, right, inspect\_opts, fun, opts \\ [])

#### Specs

```
container_doc(
  t(),
  [any()],
  t(),
  Inspect.Opts.t(),
  (term(), Inspect.Opts.t() -> t()),
  keyword()
) :: t()
```

Wraps `collection` in `left` and `right` according to limit and contents.

It uses the given `left` and `right` documents to surround the collection and the separator document given in the `:separator` option to separate items in `collection`. If all entries in the collection are simple documents (texts or strings), then this function attempts to put as much as possible on the same line. If they are not simple, only one entry is shown per line if they do not fit.

The limit in the given `inspect_opts` is respected and when reached this function stops processing and outputs `"..."` instead.

#### Options

  * `:separator` - the separator used between each doc
  * `:break` - if `:strict`, always breaks between each element. If `:flex`, breaks only when necessary. If `:maybe`, chooses `:flex` only if all elements are text-based, otherwise is `:strict`

#### Examples

```
iex> inspect_opts = %Inspect.Opts{limit: :infinity}
iex> fun = fn i, _opts -> to_string(i) end
iex> doc = Inspect.Algebra.container_doc("[", Enum.to_list(1..5), "]", inspect_opts, fun)
iex> Inspect.Algebra.format(doc, 5) |> IO.iodata_to_binary()
"[1,\n 2,\n 3,\n 4,\n 5]"

iex> inspect_opts = %Inspect.Opts{limit: 3}
iex> fun = fn i, _opts -> to_string(i) end
iex> doc = Inspect.Algebra.container_doc("[", Enum.to_list(1..5), "]", inspect_opts, fun)
iex> Inspect.Algebra.format(doc, 20) |> IO.iodata_to_binary()
"[1, 2, 3, ...]"

iex> inspect_opts = %Inspect.Opts{limit: 3}
iex> fun = fn i, _opts -> to_string(i) end
iex> opts = [separator: "!"]
iex> doc = Inspect.Algebra.container_doc("[", Enum.to_list(1..5), "]", inspect_opts, fun, opts)
iex> Inspect.Algebra.format(doc, 20) |> IO.iodata_to_binary()
"[1! 2! 3! ...]"
```

### empty()

#### Specs

```
empty() :: :doc_nil
```

Returns a document entity used to represent nothingness.

#### Examples

```
iex> Inspect.Algebra.empty()
:doc_nil
```

### flex\_break(string \\ " ")

#### Specs

```
flex_break(binary()) :: doc_break()
```

Returns a flex break document based on the given `string`.

A flex break still causes a group to break, like [`break/1`](#break/1), but it is re-evaluated when the document is rendered.

For example, take a group document represented as `[1, 2, 3]` where the space after every comma is a break. When the document above does not fit a single line, all breaks are enabled, causing the document to be rendered as:

```
[1,
 2,
 3]
```

However, if flex breaks are used, then each break is re-evaluated when rendered, so the document could possibly be rendered as:

```
[1, 2,
 3]
```

Hence the name "flex": they are more flexible when it comes to the document fitting. On the other hand, they are more expensive since each break needs to be re-evaluated.

This function is used by [`container_doc/6`](#container_doc/6) and friends to fit the maximum number of entries on the same line.

### flex\_glue(doc1, break\_string \\ " ", doc2)

#### Specs

```
flex_glue(t(), binary(), t()) :: t()
```

Glues two documents (`doc1` and `doc2`) inserting a [`flex_break/1`](#flex_break/1) given by `break_string` between them.

This function is used by [`container_doc/6`](#container_doc/6) and friends to fit the maximum number of entries on the same line.
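To make the contrast concrete, here is a hedged sketch (not from the official docs) comparing the two on the same content. When the group does not fit the width, `glue/3` breaks at every break point, while `flex_glue/3` keeps packing entries that still fit on the current line:

```
iex> strict = Inspect.Algebra.glue(Inspect.Algebra.glue("aaa", "bbb"), "ccc")
iex> strict |> Inspect.Algebra.group() |> Inspect.Algebra.format(7) |> IO.iodata_to_binary()
"aaa\nbbb\nccc"

iex> flex = Inspect.Algebra.flex_glue(Inspect.Algebra.flex_glue("aaa", "bbb"), "ccc")
iex> flex |> Inspect.Algebra.group() |> Inspect.Algebra.format(7) |> IO.iodata_to_binary()
"aaa bbb\nccc"
```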
### fold\_doc(docs, folder\_fun) #### Specs ``` fold_doc([t()], (t(), t() -> t())) :: t() ``` Folds a list of documents into a document using the given folder function. The list of documents is folded "from the right"; in that, this function is similar to [`List.foldr/3`](list#foldr/3), except that it doesn't expect an initial accumulator and uses the last element of `docs` as the initial accumulator. #### Examples ``` iex> docs = ["A", "B", "C"] iex> docs = ...> Inspect.Algebra.fold_doc(docs, fn doc, acc -> ...> Inspect.Algebra.concat([doc, "!", acc]) ...> end) iex> Inspect.Algebra.format(docs, 80) ["A", "!", "B", "!", "C"] ``` ### force\_unfit(doc) #### Specs ``` force_unfit(t()) :: doc_force() ``` Forces the current group to be unfit. ### format(doc, width) #### Specs ``` format(t(), non_neg_integer() | :infinity) :: iodata() ``` Formats a given document for a given width. Takes the maximum width and a document to print as its arguments and returns an IO data representation of the best layout for the document to fit in the given width. The document starts flat (without breaks) until a group is found. #### Examples ``` iex> doc = Inspect.Algebra.glue("hello", " ", "world") iex> doc = Inspect.Algebra.group(doc) iex> doc |> Inspect.Algebra.format(30) |> IO.iodata_to_binary() "hello world" iex> doc |> Inspect.Algebra.format(10) |> IO.iodata_to_binary() "hello\nworld" ``` ### glue(doc1, break\_string \\ " ", doc2) #### Specs ``` glue(t(), binary(), t()) :: t() ``` Glues two documents (`doc1` and `doc2`) inserting the given break `break_string` between them. For more information on how the break is inserted, see [`break/1`](#break/1). #### Examples ``` iex> doc = Inspect.Algebra.glue("hello", "world") iex> Inspect.Algebra.format(doc, 80) ["hello", " ", "world"] iex> doc = Inspect.Algebra.glue("hello", "\t", "world") iex> Inspect.Algebra.format(doc, 80) ["hello", "\t", "world"] ``` ### group(doc, mode \\ :self) #### Specs ``` group(t(), :self | :inherit) :: doc_group() ``` Returns a group containing the specified document `doc`. Documents in a group are attempted to be rendered together to the best of the renderer ability. The group mode can also be set to `:inherit`, which means it automatically breaks if the parent group has broken too. #### Examples ``` iex> doc = ...> Inspect.Algebra.group( ...> Inspect.Algebra.concat( ...> Inspect.Algebra.group( ...> Inspect.Algebra.concat( ...> "Hello,", ...> Inspect.Algebra.concat( ...> Inspect.Algebra.break(), ...> "A" ...> ) ...> ) ...> ), ...> Inspect.Algebra.concat( ...> Inspect.Algebra.break(), ...> "B" ...> ) ...> ) ...> ) iex> Inspect.Algebra.format(doc, 80) ["Hello,", " ", "A", " ", "B"] iex> Inspect.Algebra.format(doc, 6) ["Hello,", "\n", "A", "\n", "B"] ``` ### line() #### Specs ``` line() :: t() ``` A mandatory linebreak. A group with linebreaks will fit if all lines in the group fit. #### Examples ``` iex> doc = ...> Inspect.Algebra.concat( ...> Inspect.Algebra.concat( ...> "Hughes", ...> Inspect.Algebra.line() ...> ), ...> "Wadler" ...> ) iex> Inspect.Algebra.format(doc, 80) ["Hughes", "\n", "Wadler"] ``` ### line(doc1, doc2) #### Specs ``` line(t(), t()) :: t() ``` Inserts a mandatory linebreak between two documents. See [`line/0`](#line/0). 
#### Examples ``` iex> doc = Inspect.Algebra.line("Hughes", "Wadler") iex> Inspect.Algebra.format(doc, 80) ["Hughes", "\n", "Wadler"] ``` ### nest(doc, level, mode \\ :always) #### Specs ``` nest(t(), non_neg_integer() | :cursor | :reset, :always | :break) :: doc_nest() ``` Nests the given document at the given `level`. If `level` is an integer, that's the indentation appended to line breaks whenever they occur. If the level is `:cursor`, the current position of the "cursor" in the document becomes the nesting. If the level is `:reset`, it is set back to 0. `mode` can be `:always`, which means nesting always happen, or `:break`, which means nesting only happens inside a group that has been broken. #### Examples ``` iex> doc = Inspect.Algebra.nest(Inspect.Algebra.glue("hello", "world"), 5) iex> doc = Inspect.Algebra.group(doc) iex> Inspect.Algebra.format(doc, 5) ["hello", "\n ", "world"] ``` ### next\_break\_fits(doc, mode \\ :enabled) #### Specs ``` next_break_fits(t(), :enabled | :disabled) :: doc_fits() ``` Considers the next break as fit. `mode` can be `:enabled` or `:disabled`. When `:enabled`, it will consider the document as fit as soon as it finds the next break, effectively cancelling the break. It will also ignore any [`force_unfit/1`](#force_unfit/1) in search of the next break. When disabled, it behaves as usual and it will ignore any further [`next_break_fits/2`](#next_break_fits/2) instruction. #### Examples This is used by Elixir's code formatter to avoid breaking code at some specific locations. For example, consider this code: ``` some_function_call(%{..., key: value, ...}) ``` Now imagine that this code does not fit its line. The code formatter introduces breaks inside `(` and `)` and inside `%{` and `}`. Therefore the document would break as: ``` some_function_call( %{ ..., key: value, ... } ) ``` The formatter wraps the algebra document representing the map in [`next_break_fits/1`](#next_break_fits/1) so the code is formatted as: ``` some_function_call(%{ ..., key: value, ... }) ``` ### space(doc1, doc2) #### Specs ``` space(t(), t()) :: t() ``` Inserts a mandatory single space between two documents. #### Examples ``` iex> doc = Inspect.Algebra.space("Hughes", "Wadler") iex> Inspect.Algebra.format(doc, 5) ["Hughes", " ", "Wadler"] ``` ### string(string) #### Specs ``` string(String.t()) :: doc_string() ``` Creates a document represented by string. While [`Inspect.Algebra`](#content) accepts binaries as documents, those are counted by binary size. On the other hand, `string` documents are measured in terms of graphemes towards the document size. #### Examples The following document has 10 bytes and therefore it does not format to width 9 without breaks: ``` iex> doc = Inspect.Algebra.glue("olá", " ", "mundo") iex> doc = Inspect.Algebra.group(doc) iex> Inspect.Algebra.format(doc, 9) ["olá", "\n", "mundo"] ``` However, if we use `string`, then the string length is used, instead of byte size, correctly fitting: ``` iex> string = Inspect.Algebra.string("olá") iex> doc = Inspect.Algebra.glue(string, " ", "mundo") iex> doc = Inspect.Algebra.group(doc) iex> Inspect.Algebra.format(doc, 9) ["olá", " ", "mundo"] ``` ### to\_doc(term, opts) #### Specs ``` to_doc(any(), Inspect.Opts.t()) :: t() ``` Converts an Elixir term to an algebra document according to the [`Inspect`](inspect) protocol.
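As a brief illustration (a sketch, not taken from the official docs), combining `to_doc/2` with [`format/2`](#format/2) reproduces what `inspect/1` would print:

```
iex> doc = Inspect.Algebra.to_doc(%{a: 1}, %Inspect.Opts{})
iex> doc |> Inspect.Algebra.format(80) |> IO.iodata_to_binary()
"%{a: 1}"
```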
elixir Collectable protocol

Collectable protocol
=====================

A protocol to traverse data structures.

The [`Enum.into/2`](enum#into/2) function uses this protocol to insert an enumerable into a collection:

```
iex> Enum.into([a: 1, b: 2], %{})
%{a: 1, b: 2}
```

Why Collectable?
-----------------

The [`Enumerable`](enumerable) protocol is useful to take values out of a collection. In order to support a wide range of values, the functions provided by the [`Enumerable`](enumerable) protocol do not keep shape. For example, passing a map to [`Enum.map/2`](enum#map/2) always returns a list.

This design is intentional. [`Enumerable`](enumerable) was designed to support infinite collections, resources and other structures with fixed shape. For example, it doesn't make sense to insert values into a range, as it has a fixed shape where just the range limits are stored.

The [`Collectable`](#content) module was designed to fill the gap left by the [`Enumerable`](enumerable) protocol. [`Collectable.into/1`](collectable#into/1) can be seen as the opposite of [`Enumerable.reduce/3`](enumerable#reduce/3). If the functions in [`Enumerable`](enumerable) are about taking values out, then [`Collectable.into/1`](collectable#into/1) is about collecting those values into a structure.

Examples
---------

To show how to manually use the [`Collectable`](#content) protocol, let's play with its implementation for [`MapSet`](mapset).

```
iex> {initial_acc, collector_fun} = Collectable.into(MapSet.new())
iex> updated_acc = Enum.reduce([1, 2, 3], initial_acc, fn elem, acc ->
...>   collector_fun.(acc, {:cont, elem})
...> end)
iex> collector_fun.(updated_acc, :done)
#MapSet<[1, 2, 3]>
```

To show how the protocol can be implemented, we can take again a look at the implementation for [`MapSet`](mapset). In this implementation "collecting" elements simply means inserting them in the set through [`MapSet.put/2`](mapset#put/2).

```
defimpl Collectable, for: MapSet do
  def into(original) do
    collector_fun = fn
      set, {:cont, elem} -> MapSet.put(set, elem)
      set, :done -> set
      _set, :halt -> :ok
    end

    {original, collector_fun}
  end
end
```

Summary
========

Types
------

[command()](#t:command/0)

[t()](#t:t/0)

Functions
----------

[into(collectable)](#into/1) Returns an initial accumulator and a "collector" function.

Types
======

### command()

#### Specs

```
command() :: {:cont, term()} | :done | :halt
```

### t()

#### Specs

```
t() :: term()
```

Functions
==========

### into(collectable)

#### Specs

```
into(t()) :: {term(), (term(), command() -> t() | term())}
```

Returns an initial accumulator and a "collector" function.

The returned function receives a term and a command and injects the term into the collectable on every `{:cont, term}` command.

`:done` is passed as a command when no further values will be injected. This is useful when there's a need to close resources or to normalize values. A collectable must be returned when the command is `:done`.

If injection is suddenly interrupted, `:halt` is passed and the function can return any value as it won't be used.

For examples on how to use the [`Collectable`](#content) protocol and [`into/1`](#into/1) see the module documentation.

elixir mix profile.cprof

mix profile.cprof
==================

Profiles the given file or expression using Erlang's `cprof` tool.

`cprof` can be useful when you want to discover the bottlenecks related to function calls.

Before running the code, it invokes the `app.start` task which compiles and loads your project.
Then the target expression is profiled, together with all matching function calls, by setting breakpoints containing counters. These can only be set on BEAM code, so BIFs cannot be call count traced.

To profile the code, you can use syntax similar to the [`mix run`](mix.tasks.run) task:

```
mix profile.cprof -e Hello.world
mix profile.cprof -e "[1, 2, 3] |> Enum.reverse |> Enum.map(&Integer.to_string/1)"
mix profile.cprof my_script.exs arg1 arg2 arg3
```

This task is automatically reenabled, so you can profile multiple times in the same Mix invocation.

Command line options
---------------------

* `--matching` - only profile calls matching the given `Module.function/arity` pattern
* `--limit` - filters out any results with a call count less than the limit
* `--module` - filters out any results not pertaining to the given module
* `--eval`, `-e` - evaluate the given code
* `--require`, `-r` - requires pattern before running the command
* `--parallel`, `-p` - makes all requires parallel
* `--no-compile` - does not compile even if files require compilation
* `--no-deps-check` - does not check dependencies
* `--no-archives-check` - does not check archives
* `--no-halt` - does not halt the system after running the command
* `--no-start` - does not start applications after compilation
* `--no-elixir-version-check` - does not check the Elixir version from mix.exs

Profile output
---------------

Example output:

```
                                                 CNT
Total                                             15
Enum                                               6  <--
  Enum."-map/2-lists^map/1-0-"/2                   4
  Enum.reverse/1                                   1
  Enum.map/2                                       1
:elixir_compiler                                   4  <--
  anonymous fn/1 in :elixir_compiler.__FILE__/1    3
  anonymous fn/0 in :elixir_compiler.__FILE__/1    1
String.Chars.Integer                               3  <--
  String.Chars.Integer.to_string/1                 3
:erlang                                            2  <--
  :erlang.trace_pattern/3                          2
Profile done over 20229 matching functions
```

The default output contains data gathered from all matching functions. The left column lists each module, with its functions discriminated below it; the call count for each entry is presented on the right. The `<--` symbol is meant to help visualize where a new module call count begins.

The first row (Total) is the sum of all function calls. The last row shows the number of matching functions that were considered for profiling.

When the `--matching` option is specified, call count tracing will be started only for the functions matching the given pattern:

```
                                                 CNT
String.Chars.Integer                               3  <--
  String.Chars.Integer.to_string/1                 3
Profile done over 1 matching functions
```

The pattern can be a module name, such as [`String`](https://hexdocs.pm/elixir/String.html) to count all calls to that module, a call without arity, such as `String.split`, to count all calls to that function regardless of arity, or a call with arity, such as [`String.split/2`](https://hexdocs.pm/elixir/String.html#split/2), to count all calls to that exact module, function and arity.

Caveats
--------

You should be aware that the profiler is stopped as soon as the code has finished running. This may require special attention when: running asynchronous code, as function calls that happen after the profiler has stopped will not be counted; running synchronous code, as long-running computations and a profiler without a proper MFA trace pattern or filter may lead to a result set that is difficult to comprehend.
Other caveats: BIFs cannot be call count traced, since breakpoints can only be set on BEAM code; function calls performed by `:cprof` itself are not traced; and the maximum size of a call counter is equal to the host machine's word size (for example, 2147483647 in a 32-bit host).

Summary
========

Functions
----------

[profile(fun, opts \\ [])](#profile/2) Programmatically runs the `cprof` profiler on the expression in `fun`.

Functions
==========

### profile(fun, opts \\ [])

Programmatically runs the `cprof` profiler on the expression in `fun`.

#### Options

* `:matching` - only profile calls matching the given pattern in form of `{module, function, arity}`, where each element may be replaced by `:_` to allow any value
* `:limit` - filters out any results with a call count less than the limit
* `:module` - filters out any results not pertaining to the given module

elixir mix compile.elixir

mix compile.elixir
===================

Compiles Elixir source files.

Elixir is smart enough to recompile only files that have changed and their dependencies. This means if `lib/a.ex` is invoking a function defined over `lib/b.ex`, whenever `lib/b.ex` changes, `lib/a.ex` is also recompiled. Note it is important to recompile a file's dependencies as there are often compile time dependencies between them.

Command line options
---------------------

* `--force` - forces compilation regardless of modification times
* `--docs` (`--no-docs`) - attaches (or not) documentation to compiled modules
* `--debug-info` (`--no-debug-info`) - attaches (or not) debug info to compiled modules
* `--ignore-module-conflict` - does not emit warnings if a module was previously defined
* `--warnings-as-errors` - treats warnings in the current project as errors and returns a non-zero exit code
* `--long-compilation-threshold N` - sets the "long compilation" threshold (in seconds) to `N` (see the docs for [`Kernel.ParallelCompiler.compile/2`](https://hexdocs.pm/elixir/Kernel.ParallelCompiler.html#compile/2))
* `--all-warnings` - prints warnings even from files that do not need to be recompiled

Configuration
--------------

* `:elixirc_paths` - directories to find source files. Defaults to `["lib"]`.
* `:elixirc_options` - compilation options that apply to Elixir's compiler. They are the same as the command line options listed above. They must be specified as atoms and use underscores instead of dashes (for example, `:debug_info`). These options can always be overridden from the command line and they have the same defaults as their command line counterparts, as documented above.

elixir Stream

Stream
=======

Functions for creating and composing streams.

Streams are composable, lazy enumerables (for an introduction on enumerables, see the [`Enum`](enum) module). Any enumerable that generates elements one by one during enumeration is called a stream. For example, Elixir's [`Range`](range) is a stream:

```
iex> range = 1..5
1..5
iex> Enum.map(range, &(&1 * 2))
[2, 4, 6, 8, 10]
```

In the example above, as we mapped over the range, the elements being enumerated were created one by one, during enumeration. The [`Stream`](#content) module allows us to map the range, without triggering its enumeration:

```
iex> range = 1..3
iex> stream = Stream.map(range, &(&1 * 2))
iex> Enum.map(stream, &(&1 + 1))
[3, 5, 7]
```

Notice we started with a range and then we created a stream that is meant to multiply each element in the range by 2. At this point, no computation was done.
Only when [`Enum.map/2`](enum#map/2) is called do we actually enumerate over each element in the range, multiplying it by 2 and adding 1. We say the functions in [`Stream`](#content) are *lazy* and the functions in [`Enum`](enum) are *eager*.

Due to their laziness, streams are useful when working with large (or even infinite) collections. When chaining many operations with [`Enum`](enum), intermediate lists are created, while [`Stream`](#content) creates a recipe of computations that are executed at a later moment. Let's see another example:

```
1..3
|> Enum.map(&IO.inspect(&1))
|> Enum.map(&(&1 * 2))
|> Enum.map(&IO.inspect(&1))
1
2
3
2
4
6
#=> [2, 4, 6]
```

Notice that we first printed each element in the list, then multiplied each element by 2 and finally printed each new value. In this example, the list was enumerated three times. Let's see an example with streams:

```
stream =
  1..3
  |> Stream.map(&IO.inspect(&1))
  |> Stream.map(&(&1 * 2))
  |> Stream.map(&IO.inspect(&1))

Enum.to_list(stream)
1
2
2
4
3
6
#=> [2, 4, 6]
```

Although the end result is the same, the order in which the elements were printed changed! With streams, we print the first element and then print its double. In this example, the list was enumerated just once!

That's what we meant when we said earlier that streams are composable, lazy enumerables. Notice we could call [`Stream.map/2`](stream#map/2) multiple times, effectively composing the streams and keeping them lazy. The computations are only performed when you call a function from the [`Enum`](enum) module.

Like with [`Enum`](enum), the functions in this module work in linear time. This means that the time it takes to perform an operation grows at the same rate as the length of the list. This is expected for operations such as [`Stream.map/2`](stream#map/2). After all, if we want to traverse every element of a stream, the longer the stream, the more elements we need to traverse, and the longer it will take.

Creating Streams
-----------------

There are many functions in Elixir's standard library that return streams, some examples are:

* [`IO.stream/2`](io#stream/2) - streams input lines, one by one
* [`URI.query_decoder/1`](uri#query_decoder/1) - decodes a query string, pair by pair

This module also provides many convenience functions for creating streams, like [`Stream.cycle/1`](stream#cycle/1), [`Stream.unfold/2`](stream#unfold/2), [`Stream.resource/3`](stream#resource/3) and more.

Note the functions in this module are guaranteed to return enumerables. Since enumerables can have different shapes (structs, anonymous functions, and so on), the functions in this module may return any of those shapes and this may change at any time. For example, a function that today returns an anonymous function may return a struct in future releases.

Summary
========

Types
------

[acc()](#t:acc/0)

[default()](#t:default/0)

[element()](#t:element/0)

[index()](#t:index/0) Zero-based index.

Functions
----------

[chunk\_by(enum, fun)](#chunk_by/2) Chunks the `enum` by buffering elements for which `fun` returns the same value.

[chunk\_every(enum, count)](#chunk_every/2) Shortcut to `chunk_every(enum, count, count)`.

[chunk\_every(enum, count, step, leftover \\ [])](#chunk_every/4) Streams the enumerable in chunks, containing `count` elements each, where each new chunk starts `step` elements into the enumerable.

[chunk\_while(enum, acc, chunk\_fun, after\_fun)](#chunk_while/4) Chunks the `enum` with fine grained control when every chunk is emitted.
[concat(enumerables)](#concat/1) Creates a stream that enumerates each enumerable in an enumerable. [concat(first, second)](#concat/2) Creates a stream that enumerates the first argument, followed by the second. [cycle(enumerable)](#cycle/1) Creates a stream that cycles through the given enumerable, infinitely. [dedup(enum)](#dedup/1) Creates a stream that only emits elements if they are different from the last emitted element. [dedup\_by(enum, fun)](#dedup_by/2) Creates a stream that only emits elements if the result of calling `fun` on the element is different from the (stored) result of calling `fun` on the last emitted element. [drop(enum, n)](#drop/2) Lazily drops the next `n` elements from the enumerable. [drop\_every(enum, nth)](#drop_every/2) Creates a stream that drops every `nth` element from the enumerable. [drop\_while(enum, fun)](#drop_while/2) Lazily drops elements of the enumerable while the given function returns a truthy value. [each(enum, fun)](#each/2) Executes the given function for each element. [filter(enum, fun)](#filter/2) Creates a stream that filters elements according to the given function on enumeration. [flat\_map(enum, mapper)](#flat_map/2) Maps the given `fun` over `enumerable` and flattens the result. [intersperse(enumerable, intersperse\_element)](#intersperse/2) Lazily intersperses `intersperse_element` between each element of the enumeration. [interval(n)](#interval/1) Creates a stream that emits a value after the given period `n` in milliseconds. [into(enum, collectable, transform \\ fn x -> x end)](#into/3) Injects the stream values into the given collectable as a side-effect. [iterate(start\_value, next\_fun)](#iterate/2) Emits a sequence of values, starting with `start_value`. Successive values are generated by calling `next_fun` on the previous value. [map(enum, fun)](#map/2) Creates a stream that will apply the given function on enumeration. [map\_every(enum, nth, fun)](#map_every/3) Creates a stream that will apply the given function on every `nth` element from the enumerable. [reject(enum, fun)](#reject/2) Creates a stream that will reject elements according to the given function on enumeration. [repeatedly(generator\_fun)](#repeatedly/1) Returns a stream generated by calling `generator_fun` repeatedly. [resource(start\_fun, next\_fun, after\_fun)](#resource/3) Emits a sequence of values for the given resource. [run(stream)](#run/1) Runs the given stream. [scan(enum, fun)](#scan/2) Creates a stream that applies the given function to each element, emits the result and uses the same result as the accumulator for the next computation. Uses the first element in the enumerable as the starting value. [scan(enum, acc, fun)](#scan/3) Creates a stream that applies the given function to each element, emits the result and uses the same result as the accumulator for the next computation. Uses the given `acc` as the starting value. [take(enum, count)](#take/2) Lazily takes the next `count` elements from the enumerable and stops enumeration. [take\_every(enum, nth)](#take_every/2) Creates a stream that takes every `nth` element from the enumerable. [take\_while(enum, fun)](#take_while/2) Lazily takes elements of the enumerable while the given function returns a truthy value. [timer(n)](#timer/1) Creates a stream that emits a single value after `n` milliseconds. [transform(enum, acc, reducer)](#transform/3) Transforms an existing stream. 
[transform(enum, start\_fun, reducer, after\_fun)](#transform/4) Transforms an existing stream with function-based start and finish. [unfold(next\_acc, next\_fun)](#unfold/2) Emits a sequence of values for the given accumulator. [uniq(enum)](#uniq/1) Creates a stream that only emits elements if they are unique. [uniq\_by(enum, fun)](#uniq_by/2) Creates a stream that only emits elements if they are unique, by removing the elements for which function `fun` returned duplicate elements. [with\_index(enum, offset \\ 0)](#with_index/2) Creates a stream where each element in the enumerable will be wrapped in a tuple alongside its index. [zip(enumerables)](#zip/1) Zips corresponding elements from a finite collection of enumerables into one stream of tuples. [zip(left, right)](#zip/2) Zips two collections together, lazily. Types ====== ### acc() #### Specs ``` acc() :: any() ``` ### default() #### Specs ``` default() :: any() ``` ### element() #### Specs ``` element() :: any() ``` ### index() #### Specs ``` index() :: non_neg_integer() ``` Zero-based index. Functions ========== ### chunk\_by(enum, fun) #### Specs ``` chunk_by(Enumerable.t(), (element() -> any())) :: Enumerable.t() ``` Chunks the `enum` by buffering elements for which `fun` returns the same value. Elements are only emitted when `fun` returns a new value or the `enum` finishes. #### Examples ``` iex> stream = Stream.chunk_by([1, 2, 2, 3, 4, 4, 6, 7, 7], &(rem(&1, 2) == 1)) iex> Enum.to_list(stream) [[1], [2, 2], [3], [4, 4, 6], [7, 7]] ``` ### chunk\_every(enum, count) #### Specs ``` chunk_every(Enumerable.t(), pos_integer()) :: Enumerable.t() ``` Shortcut to `chunk_every(enum, count, count)`. ### chunk\_every(enum, count, step, leftover \\ []) #### Specs ``` chunk_every( Enumerable.t(), pos_integer(), pos_integer(), Enumerable.t() | :discard ) :: Enumerable.t() ``` Streams the enumerable in chunks, containing `count` elements each, where each new chunk starts `step` elements into the enumerable. `step` is optional and, if not passed, defaults to `count`, i.e. chunks do not overlap. If the last chunk does not have `count` elements to fill the chunk, elements are taken from `leftover` to fill in the chunk. If `leftover` does not have enough elements to fill the chunk, then a partial chunk is returned with less than `count` elements. If `:discard` is given in `leftover`, the last chunk is discarded unless it has exactly `count` elements. #### Examples ``` iex> Stream.chunk_every([1, 2, 3, 4, 5, 6], 2) |> Enum.to_list() [[1, 2], [3, 4], [5, 6]] iex> Stream.chunk_every([1, 2, 3, 4, 5, 6], 3, 2, :discard) |> Enum.to_list() [[1, 2, 3], [3, 4, 5]] iex> Stream.chunk_every([1, 2, 3, 4, 5, 6], 3, 2, [7]) |> Enum.to_list() [[1, 2, 3], [3, 4, 5], [5, 6, 7]] iex> Stream.chunk_every([1, 2, 3, 4, 5, 6], 3, 3, []) |> Enum.to_list() [[1, 2, 3], [4, 5, 6]] ``` ### chunk\_while(enum, acc, chunk\_fun, after\_fun) #### Specs ``` chunk_while( Enumerable.t(), acc(), (element(), acc() -> {:cont, chunk, acc()} | {:cont, acc()} | {:halt, acc()}), (acc() -> {:cont, chunk, acc()} | {:cont, acc()}) ) :: Enumerable.t() when chunk: any() ``` Chunks the `enum` with fine grained control when every chunk is emitted. `chunk_fun` receives the current element and the accumulator and must return `{:cont, element, acc}` to emit the given chunk and continue with accumulator or `{:cont, acc}` to not emit any chunk and continue with the return accumulator. `after_fun` is invoked when iteration is done and must also return `{:cont, element, acc}` or `{:cont, acc}`. 
#### Examples ``` iex> chunk_fun = fn element, acc -> ...> if rem(element, 2) == 0 do ...> {:cont, Enum.reverse([element | acc]), []} ...> else ...> {:cont, [element | acc]} ...> end ...> end iex> after_fun = fn ...> [] -> {:cont, []} ...> acc -> {:cont, Enum.reverse(acc), []} ...> end iex> stream = Stream.chunk_while(1..10, [], chunk_fun, after_fun) iex> Enum.to_list(stream) [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] ``` ### concat(enumerables) #### Specs ``` concat(Enumerable.t()) :: Enumerable.t() ``` Creates a stream that enumerates each enumerable in an enumerable. #### Examples ``` iex> stream = Stream.concat([1..3, 4..6, 7..9]) iex> Enum.to_list(stream) [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` ### concat(first, second) #### Specs ``` concat(Enumerable.t(), Enumerable.t()) :: Enumerable.t() ``` Creates a stream that enumerates the first argument, followed by the second. #### Examples ``` iex> stream = Stream.concat(1..3, 4..6) iex> Enum.to_list(stream) [1, 2, 3, 4, 5, 6] iex> stream1 = Stream.cycle([1, 2, 3]) iex> stream2 = Stream.cycle([4, 5, 6]) iex> stream = Stream.concat(stream1, stream2) iex> Enum.take(stream, 6) [1, 2, 3, 1, 2, 3] ``` ### cycle(enumerable) #### Specs ``` cycle(Enumerable.t()) :: Enumerable.t() ``` Creates a stream that cycles through the given enumerable, infinitely. #### Examples ``` iex> stream = Stream.cycle([1, 2, 3]) iex> Enum.take(stream, 5) [1, 2, 3, 1, 2] ``` ### dedup(enum) #### Specs ``` dedup(Enumerable.t()) :: Enumerable.t() ``` Creates a stream that only emits elements if they are different from the last emitted element. This function only ever needs to store the last emitted element. Elements are compared using [`===/2`](kernel#===/2). #### Examples ``` iex> Stream.dedup([1, 2, 3, 3, 2, 1]) |> Enum.to_list() [1, 2, 3, 2, 1] ``` ### dedup\_by(enum, fun) #### Specs ``` dedup_by(Enumerable.t(), (element() -> term())) :: Enumerable.t() ``` Creates a stream that only emits elements if the result of calling `fun` on the element is different from the (stored) result of calling `fun` on the last emitted element. #### Examples ``` iex> Stream.dedup_by([{1, :x}, {2, :y}, {2, :z}, {1, :x}], fn {x, _} -> x end) |> Enum.to_list() [{1, :x}, {2, :y}, {1, :x}] ``` ### drop(enum, n) #### Specs ``` drop(Enumerable.t(), integer()) :: Enumerable.t() ``` Lazily drops the next `n` elements from the enumerable. If a negative `n` is given, it will drop the last `n` elements from the collection. Note that the mechanism by which this is implemented will delay the emission of any element until `n` additional elements have been emitted by the enum. #### Examples ``` iex> stream = Stream.drop(1..10, 5) iex> Enum.to_list(stream) [6, 7, 8, 9, 10] iex> stream = Stream.drop(1..10, -5) iex> Enum.to_list(stream) [1, 2, 3, 4, 5] ``` ### drop\_every(enum, nth) #### Specs ``` drop_every(Enumerable.t(), non_neg_integer()) :: Enumerable.t() ``` Creates a stream that drops every `nth` element from the enumerable. The first element is always dropped, unless `nth` is 0. `nth` must be a non-negative integer. #### Examples ``` iex> stream = Stream.drop_every(1..10, 2) iex> Enum.to_list(stream) [2, 4, 6, 8, 10] iex> stream = Stream.drop_every(1..1000, 1) iex> Enum.to_list(stream) [] iex> stream = Stream.drop_every([1, 2, 3, 4, 5], 0) iex> Enum.to_list(stream) [1, 2, 3, 4, 5] ``` ### drop\_while(enum, fun) #### Specs ``` drop_while(Enumerable.t(), (element() -> as_boolean(term()))) :: Enumerable.t() ``` Lazily drops elements of the enumerable while the given function returns a truthy value. 
#### Examples ``` iex> stream = Stream.drop_while(1..10, &(&1 <= 5)) iex> Enum.to_list(stream) [6, 7, 8, 9, 10] ``` ### each(enum, fun) #### Specs ``` each(Enumerable.t(), (element() -> term())) :: Enumerable.t() ``` Executes the given function for each element. Useful for adding side effects (like printing) to a stream. #### Examples ``` iex> stream = Stream.each([1, 2, 3], fn x -> send(self(), x) end) iex> Enum.to_list(stream) iex> receive do: (x when is_integer(x) -> x) 1 iex> receive do: (x when is_integer(x) -> x) 2 iex> receive do: (x when is_integer(x) -> x) 3 ``` ### filter(enum, fun) #### Specs ``` filter(Enumerable.t(), (element() -> as_boolean(term()))) :: Enumerable.t() ``` Creates a stream that filters elements according to the given function on enumeration. #### Examples ``` iex> stream = Stream.filter([1, 2, 3], fn x -> rem(x, 2) == 0 end) iex> Enum.to_list(stream) [2] ``` ### flat\_map(enum, mapper) #### Specs ``` flat_map(Enumerable.t(), (element() -> Enumerable.t())) :: Enumerable.t() ``` Maps the given `fun` over `enumerable` and flattens the result. This function returns a new stream built by appending the result of invoking `fun` on each element of `enumerable` together. #### Examples ``` iex> stream = Stream.flat_map([1, 2, 3], fn x -> [x, x * 2] end) iex> Enum.to_list(stream) [1, 2, 2, 4, 3, 6] iex> stream = Stream.flat_map([1, 2, 3], fn x -> [[x]] end) iex> Enum.to_list(stream) [[1], [2], [3]] ``` ### intersperse(enumerable, intersperse\_element) #### Specs ``` intersperse(Enumerable.t(), any()) :: Enumerable.t() ``` Lazily intersperses `intersperse_element` between each element of the enumeration. #### Examples ``` iex> Stream.intersperse([1, 2, 3], 0) |> Enum.to_list() [1, 0, 2, 0, 3] iex> Stream.intersperse([1], 0) |> Enum.to_list() [1] iex> Stream.intersperse([], 0) |> Enum.to_list() [] ``` ### interval(n) #### Specs ``` interval(non_neg_integer()) :: Enumerable.t() ``` Creates a stream that emits a value after the given period `n` in milliseconds. The values emitted are an increasing counter starting at `0`. This operation will block the caller by the given interval every time a new element is streamed. Do not use this function to generate a sequence of numbers. If blocking the caller process is not necessary, use `Stream.iterate(0, & &1 + 1)` instead. #### Examples ``` iex> Stream.interval(10) |> Enum.take(10) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` ### into(enum, collectable, transform \\ fn x -> x end) #### Specs ``` into(Enumerable.t(), Collectable.t(), (term() -> term())) :: Enumerable.t() ``` Injects the stream values into the given collectable as a side-effect. This function is often used with [`run/1`](#run/1) since any evaluation is delayed until the stream is executed. See [`run/1`](#run/1) for an example. ### iterate(start\_value, next\_fun) #### Specs ``` iterate(element(), (element() -> element())) :: Enumerable.t() ``` Emits a sequence of values, starting with `start_value`. Successive values are generated by calling `next_fun` on the previous value. #### Examples ``` iex> Stream.iterate(0, &(&1 + 1)) |> Enum.take(5) [0, 1, 2, 3, 4] ``` ### map(enum, fun) #### Specs ``` map(Enumerable.t(), (element() -> any())) :: Enumerable.t() ``` Creates a stream that will apply the given function on enumeration. 
#### Examples ``` iex> stream = Stream.map([1, 2, 3], fn x -> x * 2 end) iex> Enum.to_list(stream) [2, 4, 6] ``` ### map\_every(enum, nth, fun) #### Specs ``` map_every(Enumerable.t(), non_neg_integer(), (element() -> any())) :: Enumerable.t() ``` Creates a stream that will apply the given function on every `nth` element from the enumerable. The first element is always passed to the given function. `nth` must be a non-negative integer. #### Examples ``` iex> stream = Stream.map_every(1..10, 2, fn x -> x * 2 end) iex> Enum.to_list(stream) [2, 2, 6, 4, 10, 6, 14, 8, 18, 10] iex> stream = Stream.map_every([1, 2, 3, 4, 5], 1, fn x -> x * 2 end) iex> Enum.to_list(stream) [2, 4, 6, 8, 10] iex> stream = Stream.map_every(1..5, 0, fn x -> x * 2 end) iex> Enum.to_list(stream) [1, 2, 3, 4, 5] ``` ### reject(enum, fun) #### Specs ``` reject(Enumerable.t(), (element() -> as_boolean(term()))) :: Enumerable.t() ``` Creates a stream that will reject elements according to the given function on enumeration. #### Examples ``` iex> stream = Stream.reject([1, 2, 3], fn x -> rem(x, 2) == 0 end) iex> Enum.to_list(stream) [1, 3] ``` ### repeatedly(generator\_fun) #### Specs ``` repeatedly((() -> element())) :: Enumerable.t() ``` Returns a stream generated by calling `generator_fun` repeatedly. #### Examples ``` # Although not necessary, let's seed the random algorithm iex> :rand.seed(:exsplus, {1, 2, 3}) iex> Stream.repeatedly(&:rand.uniform/0) |> Enum.take(3) [0.40502929729990744, 0.45336720247823126, 0.04094511692041057] ``` ### resource(start\_fun, next\_fun, after\_fun) #### Specs ``` resource( (() -> acc()), (acc() -> {[element()], acc()} | {:halt, acc()}), (acc() -> term()) ) :: Enumerable.t() ``` Emits a sequence of values for the given resource. Similar to [`transform/3`](#transform/3) but the initial accumulated value is computed lazily via `start_fun` and executes an `after_fun` at the end of enumeration (both in cases of success and failure). Successive values are generated by calling `next_fun` with the previous accumulator (the initial value being the result returned by `start_fun`) and it must return a tuple containing a list of elements to be emitted and the next accumulator. The enumeration finishes if it returns `{:halt, acc}`. As the name says, this function is useful to stream values from resources. #### Examples ``` Stream.resource( fn -> File.open!("sample") end, fn file -> case IO.read(file, :line) do data when is_binary(data) -> {[data], file} _ -> {:halt, file} end end, fn file -> File.close(file) end ) ``` ### run(stream) #### Specs ``` run(Enumerable.t()) :: :ok ``` Runs the given stream. This is useful when a stream needs to be run, for side effects, and there is no interest in its return result. #### Examples Open up a file, replace all `#` by `%` and stream to another file without loading the whole file in memory: ``` File.stream!("/path/to/file") |> Stream.map(&String.replace(&1, "#", "%")) |> Stream.into(File.stream!("/path/to/other/file")) |> Stream.run() ``` No computation will be done until we call one of the [`Enum`](enum) functions or [`run/1`](#run/1). ### scan(enum, fun) #### Specs ``` scan(Enumerable.t(), (element(), acc() -> any())) :: Enumerable.t() ``` Creates a stream that applies the given function to each element, emits the result and uses the same result as the accumulator for the next computation. Uses the first element in the enumerable as the starting value. 
#### Examples

```
iex> stream = Stream.scan(1..5, &(&1 + &2))
iex> Enum.to_list(stream)
[1, 3, 6, 10, 15]
```

### scan(enum, acc, fun)

#### Specs

```
scan(Enumerable.t(), acc(), (element(), acc() -> any())) :: Enumerable.t()
```

Creates a stream that applies the given function to each element, emits the result and uses the same result as the accumulator for the next computation. Uses the given `acc` as the starting value.

#### Examples

```
iex> stream = Stream.scan(1..5, 0, &(&1 + &2))
iex> Enum.to_list(stream)
[1, 3, 6, 10, 15]
```

### take(enum, count)

#### Specs

```
take(Enumerable.t(), integer()) :: Enumerable.t()
```

Lazily takes the next `count` elements from the enumerable and stops enumeration.

If a negative `count` is given, the last `count` values will be taken. To do so, the collection is fully enumerated, keeping up to `2 * count` elements in memory. Once the end of the collection is reached, the last `count` elements are emitted. Therefore, using a negative `count` on an infinite collection will never return.

#### Examples

```
iex> stream = Stream.take(1..100, 5)
iex> Enum.to_list(stream)
[1, 2, 3, 4, 5]

iex> stream = Stream.take(1..100, -5)
iex> Enum.to_list(stream)
[96, 97, 98, 99, 100]

iex> stream = Stream.cycle([1, 2, 3]) |> Stream.take(5)
iex> Enum.to_list(stream)
[1, 2, 3, 1, 2]
```

### take\_every(enum, nth)

#### Specs

```
take_every(Enumerable.t(), non_neg_integer()) :: Enumerable.t()
```

Creates a stream that takes every `nth` element from the enumerable.

The first element is always included, unless `nth` is 0.

`nth` must be a non-negative integer.

#### Examples

```
iex> stream = Stream.take_every(1..10, 2)
iex> Enum.to_list(stream)
[1, 3, 5, 7, 9]

iex> stream = Stream.take_every([1, 2, 3, 4, 5], 1)
iex> Enum.to_list(stream)
[1, 2, 3, 4, 5]

iex> stream = Stream.take_every(1..1000, 0)
iex> Enum.to_list(stream)
[]
```

### take\_while(enum, fun)

#### Specs

```
take_while(Enumerable.t(), (element() -> as_boolean(term()))) :: Enumerable.t()
```

Lazily takes elements of the enumerable while the given function returns a truthy value.

#### Examples

```
iex> stream = Stream.take_while(1..100, &(&1 <= 5))
iex> Enum.to_list(stream)
[1, 2, 3, 4, 5]
```

### timer(n)

#### Specs

```
timer(non_neg_integer()) :: Enumerable.t()
```

Creates a stream that emits a single value after `n` milliseconds.

The value emitted is `0`. This operation will block the caller by the given time until the element is streamed.

#### Examples

```
iex> Stream.timer(10) |> Enum.to_list()
[0]
```

### transform(enum, acc, reducer)

#### Specs

```
transform(Enumerable.t(), acc, fun) :: Enumerable.t()
when fun: (element(), acc -> {Enumerable.t(), acc} | {:halt, acc}),
     acc: any()
```

Transforms an existing stream.

It expects an accumulator and a function that receives each stream element and an accumulator, and must return a tuple containing a new stream (often a list) with the new accumulator or a tuple with `:halt` as first element and the accumulator as second.

Note: this function is similar to [`Enum.flat_map_reduce/3`](enum#flat_map_reduce/3) except the latter returns both the flat list and accumulator, while this one returns only the stream.

#### Examples

[`Stream.transform/3`](stream#transform/3) is useful as it can be used as the basis to implement many of the functions defined in this module.
For example, we can implement `Stream.take(enum, n)` as follows: ``` iex> enum = 1..100 iex> n = 3 iex> stream = Stream.transform(enum, 0, fn i, acc -> ...> if acc < n, do: {[i], acc + 1}, else: {:halt, acc} ...> end) iex> Enum.to_list(stream) [1, 2, 3] ``` ### transform(enum, start\_fun, reducer, after\_fun) #### Specs ``` transform(Enumerable.t(), (() -> acc), fun, (acc -> term())) :: Enumerable.t() when fun: (element(), acc -> {Enumerable.t(), acc} | {:halt, acc}), acc: any() ``` Transforms an existing stream with function-based start and finish. The accumulator is only calculated when transformation starts. It also allows an after function to be given which is invoked when the stream halts or completes. This function can be seen as a combination of [`Stream.resource/3`](stream#resource/3) with [`Stream.transform/3`](stream#transform/3). A sketch illustrating this four-arity variant is included after the `zip/2` examples below. ### unfold(next\_acc, next\_fun) #### Specs ``` unfold(acc(), (acc() -> {element(), acc()} | nil)) :: Enumerable.t() ``` Emits a sequence of values for the given accumulator. Successive values are generated by calling `next_fun` with the previous accumulator, and it must return a tuple with the current value and the next accumulator. The enumeration finishes if it returns `nil`. #### Examples ``` iex> Stream.unfold(5, fn ...> 0 -> nil ...> n -> {n, n - 1} ...> end) |> Enum.to_list() [5, 4, 3, 2, 1] ``` ### uniq(enum) #### Specs ``` uniq(Enumerable.t()) :: Enumerable.t() ``` Creates a stream that only emits elements if they are unique. Keep in mind that, in order to know if an element is unique or not, this function needs to store all unique values emitted by the stream. Therefore, if the stream is infinite, the number of elements stored will grow infinitely, never being garbage-collected. #### Examples ``` iex> Stream.uniq([1, 2, 3, 3, 2, 1]) |> Enum.to_list() [1, 2, 3] ``` ### uniq\_by(enum, fun) #### Specs ``` uniq_by(Enumerable.t(), (element() -> term())) :: Enumerable.t() ``` Creates a stream that only emits elements if they are unique, by removing the elements for which the function `fun` returned duplicate terms. The function `fun` maps every element to a term which is used to determine if two elements are duplicates. Keep in mind that, in order to know if an element is unique or not, this function needs to store all unique values emitted by the stream. Therefore, if the stream is infinite, the number of elements stored will grow infinitely, never being garbage-collected. #### Example ``` iex> Stream.uniq_by([{1, :x}, {2, :y}, {1, :z}], fn {x, _} -> x end) |> Enum.to_list() [{1, :x}, {2, :y}] iex> Stream.uniq_by([a: {:tea, 2}, b: {:tea, 2}, c: {:coffee, 1}], fn {_, y} -> y end) |> Enum.to_list() [a: {:tea, 2}, c: {:coffee, 1}] ``` ### with\_index(enum, offset \\ 0) #### Specs ``` with_index(Enumerable.t(), integer()) :: Enumerable.t() ``` Creates a stream where each element in the enumerable will be wrapped in a tuple alongside its index. If an `offset` is given, we will index from the given offset instead of from zero. #### Examples ``` iex> stream = Stream.with_index([1, 2, 3]) iex> Enum.to_list(stream) [{1, 0}, {2, 1}, {3, 2}] iex> stream = Stream.with_index([1, 2, 3], 3) iex> Enum.to_list(stream) [{1, 3}, {2, 4}, {3, 5}] ``` ### zip(enumerables) #### Specs ``` zip(enumerables) :: Enumerable.t() when enumerables: [Enumerable.t()] | Enumerable.t() ``` Zips corresponding elements from a finite collection of enumerables into one stream of tuples. The zipping finishes as soon as any enumerable in the given collection completes.
#### Examples ``` iex> concat = Stream.concat(1..3, 4..6) iex> cycle = Stream.cycle(["foo", "bar", "baz"]) iex> Stream.zip([concat, [:a, :b, :c], cycle]) |> Enum.to_list() [{1, :a, "foo"}, {2, :b, "bar"}, {3, :c, "baz"}] ``` ### zip(left, right) #### Specs ``` zip(Enumerable.t(), Enumerable.t()) :: Enumerable.t() ``` Zips two collections together, lazily. The zipping finishes as soon as any enumerable completes. #### Examples ``` iex> concat = Stream.concat(1..3, 4..6) iex> cycle = Stream.cycle([:a, :b, :c]) iex> Stream.zip(concat, cycle) |> Enum.to_list() [{1, :a}, {2, :b}, {3, :c}, {4, :a}, {5, :b}, {6, :c}] ```
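Since `transform/4` above is documented without an example, here is a minimal sketch (not from the official docs; the numbers and the `IO.puts/1` message are illustrative) showing the lazily computed accumulator and the `after_fun` that runs when the stream halts or completes:

```
stream =
  Stream.transform(
    1..100,
    fn -> 0 end,                          # start_fun: runs only once enumeration starts
    fn i, acc ->
      # reducer: emit a list of elements with the new accumulator, or halt
      if acc < 3, do: {[i * 10], acc + 1}, else: {:halt, acc}
    end,
    fn _acc -> IO.puts("cleaning up") end # after_fun: runs on halt or completion
  )

Enum.to_list(stream)
#=> prints "cleaning up" and returns [10, 20, 30]
```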
elixir mix test mix test ========= Runs the tests for a project. This task starts the current application, loads up `test/test_helper.exs` and then requires all files matching the `test/**/*_test.exs` pattern in parallel. A list of files can be given after the task name in order to select the files to run: ``` mix test test/some/particular/file_test.exs ``` Tests in umbrella projects can be run from the root by specifying the full suite path, including `apps/my_app/test`, in which case recursive tests for other child apps will be skipped completely: ``` # To run all tests for my_app from the umbrella root mix test apps/my_app/test # To run a given test file on my_app from the umbrella root mix test apps/my_app/test/some/particular/file_test.exs ``` Command line options --------------------- * `--color` - enables color in the output * `--cover` - runs coverage tool. See "Coverage" section below * `--exclude` - excludes tests that match the filter * `--failed` - runs only tests that failed the last time they ran * `--force` - forces compilation regardless of modification times * `--formatter` - sets the formatter module that will print the results. Defaults to [`ExUnit.CLIFormatter`](https://hexdocs.pm/ex_unit/ExUnit.CLIFormatter.html) * `--include` - includes tests that match the filter * `--listen-on-stdin` - runs tests, and then listens on stdin. Receiving a newline will result in the tests being run again. Very useful when combined with `--stale` and external commands which produce output on stdout upon file system modifications * `--max-cases` - sets the maximum number of tests running asynchronously. Only tests from different modules run in parallel. Defaults to twice the number of cores * `--max-failures` - the suite stops evaluating tests when this number of test failures is reached. It runs all tests if omitted * `--no-archives-check` - does not check archives * `--no-color` - disables color in the output * `--no-compile` - does not compile, even if files require compilation * `--no-deps-check` - does not check dependencies * `--no-elixir-version-check` - does not check the Elixir version from `mix.exs` * `--no-start` - does not start applications after compilation * `--only` - runs only tests that match the filter * `--preload-modules` - preloads all modules defined in applications * `--raise` - raises if the test suite failed * `--seed` - seeds the random number generator used to randomize the order of tests; `--seed 0` disables randomization * `--slowest` - prints timing information for the N slowest tests. Automatically sets `--trace` and `--preload-modules` * `--stale` - runs only tests which reference modules that changed since the last time tests were run with `--stale`. You can read more about this option in the "Stale" section below * `--timeout` - sets the timeout for the tests * `--trace` - runs tests with detailed reporting. Automatically sets `--max-cases` to `1`. Note that in trace mode test timeouts will be ignored as the timeout is set to `:infinity` See [`ExUnit.configure/1`](https://hexdocs.pm/ex_unit/ExUnit.html#configure/1) for more information on configuration options. Filters -------- ExUnit provides tags and filtering functionality that allow developers to select which tests to run.
The most common functionality is to exclude some particular tests from running by default in your test helper file: ``` # Exclude all external tests from running ExUnit.configure(exclude: [external: true]) ``` Then, whenever desired, those tests could be included in the run via the `--include` option: ``` mix test --include external:true ``` The example above will run all tests that have the external option set to `true`. It is also possible to include all examples that have a given tag, regardless of its value: ``` mix test --include external ``` Note that all tests are included by default, so unless they are excluded first (either in the test helper or via the `--exclude` option) the `--include` option has no effect. For this reason, Mix also provides an `--only` option that excludes all tests and includes only the given ones: ``` mix test --only external ``` Which is similar to: ``` mix test --include external --exclude test ``` It differs in that the test suite will fail if no tests are executed when the `--only` option is used. In case a single file is being tested, it is possible to pass one or more specific line numbers to run only those given tests: ``` mix test test/some/particular/file_test.exs:12 ``` Which is equivalent to: ``` mix test --exclude test --include line:12 test/some/particular/file_test.exs ``` Or: ``` mix test test/some/particular/file_test.exs:12:24 ``` Which is equivalent to: ``` mix test --exclude test --include line:12 --include line:24 test/some/particular/file_test.exs ``` If a given line starts a `describe` block, that line filter runs all tests in it. Otherwise, it runs the closest test on or before the given line number. Configuration -------------- * `:test_paths` - list of paths containing test files. Defaults to `["test"]` if the `test` directory exists; otherwise, it defaults to `[]`. It is expected that all test paths contain a `test_helper.exs` file * `:test_pattern` - a pattern to load test files. Defaults to `*_test.exs` * `:warn_test_pattern` - a pattern to match potentially misnamed test files and display a warning. Defaults to `*_test.ex` * `:test_coverage` - a set of options to be passed down to the coverage mechanism Coverage --------- The `:test_coverage` configuration accepts the following options: * `:output` - the output directory for cover results. Defaults to `"cover"` * `:tool` - the coverage tool * `:summary` - summary output configuration; can be either a boolean or a keyword list. When a keyword list is passed, it can specify a `:threshold`, which is a boolean or numeric value that enables coloring of code coverage results in red or green depending on whether the percentage is below or above the specified threshold, respectively. Defaults to `[threshold: 90]` By default, a very simple wrapper around OTP's `cover` is used as a tool, but it can be overridden as follows: ``` def project() do [ ... test_coverage: [tool: CoverModule] ... ] end ``` `CoverModule` can be any module that exports `start/2`, receiving the compilation path and the `test_coverage` options as arguments. It must return either `nil` or an anonymous function of zero arity that will be run after the test suite is done. "Stale" -------- The `--stale` command line option attempts to run only those test files which reference modules that have changed since the last time you ran this task with `--stale`. The first time this task is run with `--stale`, all tests are run and a manifest is generated. 
On subsequent runs, a test file is marked "stale" if any modules it references (and any modules those modules reference, recursively) were modified since the last run with `--stale`. A test file is also marked "stale" if it has been changed since the last run with `--stale`. elixir Agent Agent ====== Agents are a simple abstraction around state. Often in Elixir there is a need to share or store state that must be accessed from different processes or by the same process at different points in time. The [`Agent`](#content) module provides a basic server implementation that allows state to be retrieved and updated via a simple API. Examples --------- For example, the following agent implements a counter: ``` defmodule Counter do use Agent def start_link(initial_value) do Agent.start_link(fn -> initial_value end, name: __MODULE__) end def value do Agent.get(__MODULE__, & &1) end def increment do Agent.update(__MODULE__, &(&1 + 1)) end end ``` Usage would be: ``` Counter.start_link(0) #=> {:ok, #PID<0.123.0>} Counter.value() #=> 0 Counter.increment() #=> :ok Counter.increment() #=> :ok Counter.value() #=> 2 ``` Thanks to the agent server process, the counter can be safely incremented concurrently. Agents provide a segregation between the client and server APIs (similar to [`GenServer`](genserver)s). In particular, the functions passed as arguments to the calls to [`Agent`](#content) functions are invoked inside the agent (the server). This distinction is important because you may want to avoid expensive operations inside the agent, as they will effectively block the agent until the request is fulfilled. Consider these two examples: ``` # Compute in the agent/server def get_something(agent) do Agent.get(agent, fn state -> do_something_expensive(state) end) end # Compute in the agent/client def get_something(agent) do Agent.get(agent, & &1) |> do_something_expensive() end ``` The first function blocks the agent. The second function copies all the state to the client and then executes the operation in the client. One aspect to consider is whether the data is large enough to require processing in the server, at least initially, or small enough to be sent to the client cheaply. Another factor is whether the data needs to be processed atomically: getting the state and calling `do_something_expensive(state)` outside of the agent means that the agent's state can be updated in the meantime. This is especially important in case of updates, as computing the new state in the client rather than in the server can lead to race conditions if multiple clients are trying to update the same state to different values. How to supervise ----------------- An [`Agent`](#content) is most commonly started under a supervision tree. When we invoke `use Agent`, it automatically defines a [`child_spec/1`](#child_spec/1) function that allows us to start the agent directly under a supervisor. To start an agent under a supervisor with an initial counter of 0, one may do: ``` children = [ {Counter, 0} ] Supervisor.start_link(children, strategy: :one_for_all) ``` One could also simply pass the `Counter` as a child to the supervisor, such as: ``` children = [ Counter # Same as {Counter, []} ] Supervisor.start_link(children, strategy: :one_for_all) ``` The definition above wouldn't work for this particular example, though, as it would attempt to start the counter with an initial value of an empty list. However, this may be a viable option in your own agents.
A common approach is to use a keyword list, as that would allow setting the initial value and giving a name to the counter process, for example: ``` def start_link(opts) do {initial_value, opts} = Keyword.pop(opts, :initial_value, 0) Agent.start_link(fn -> initial_value end, opts) end ``` and then you can use `Counter`, `{Counter, name: :my_counter}` or even `{Counter, initial_value: 0, name: :my_counter}` as a child specification. `use Agent` also accepts a list of options which configures the child specification and therefore how it runs under a supervisor. The generated [`child_spec/1`](#child_spec/1) can be customized with the following options: * `:id` - the child specification identifier, defaults to the current module * `:restart` - when the child should be restarted, defaults to `:permanent` * `:shutdown` - how to shut down the child, either immediately or by giving it time to shut down For example: ``` use Agent, restart: :transient, shutdown: 10_000 ``` See the "Child specification" section in the [`Supervisor`](supervisor) module for more detailed information. The `@doc` annotation immediately preceding `use Agent` will be attached to the generated [`child_spec/1`](#child_spec/1) function. Name registration ------------------ An agent is bound to the same name registration rules as GenServers. Read more about it in the [`GenServer`](genserver) documentation. A word on distributed agents ----------------------------- It is important to consider the limitations of distributed agents. Agents provide two APIs, one that works with anonymous functions and another that expects an explicit module, function, and arguments. In a distributed setup with multiple nodes, the API that accepts anonymous functions only works if the caller (client) and the agent have the same version of the caller module. Keep in mind this issue also shows up when performing "rolling upgrades" with agents. By rolling upgrades we mean the following situation: you wish to deploy a new version of your software by *shutting down* some of your nodes and replacing them with nodes running a new version of the software. In this setup, part of your environment will have one version of a given module and the other part another version (the newer one) of the same module. The best solution is to simply use the explicit module, function, and arguments APIs when working with distributed agents. Hot code swapping ------------------ An agent can have its code hot swapped live by simply passing a module, function, and arguments tuple to the update instruction. For example, imagine you have an agent named `:sample` and you want to convert its inner state from a keyword list to a map. It can be done with the following instruction: ``` {:update, :sample, {:advanced, {Enum, :into, [%{}]}}} ``` The agent's state will be added to the given list of arguments (`[%{}]`) as the first argument. Summary ======== Types ------ [agent()](#t:agent/0) The agent reference [name()](#t:name/0) The agent name [on\_start()](#t:on_start/0) Return values of `start*` functions [state()](#t:state/0) The agent state Functions ---------- [cast(agent, fun)](#cast/2) Performs a cast (*fire and forget*) operation on the agent state. [cast(agent, module, fun, args)](#cast/4) Performs a cast (*fire and forget*) operation on the agent state. [child\_spec(arg)](#child_spec/1) Returns a specification to start an agent under a supervisor. [get(agent, fun, timeout \\ 5000)](#get/3) Gets an agent value via the given anonymous function. 
[get(agent, module, fun, args, timeout \\ 5000)](#get/5) Gets an agent value via the given function. [get\_and\_update(agent, fun, timeout \\ 5000)](#get_and_update/3) Gets and updates the agent state in one operation via the given anonymous function. [get\_and\_update(agent, module, fun, args, timeout \\ 5000)](#get_and_update/5) Gets and updates the agent state in one operation via the given function. [start(fun, options \\ [])](#start/2) Starts an agent process without links (outside of a supervision tree). [start(module, fun, args, options \\ [])](#start/4) Starts an agent without links with the given module, function, and arguments. [start\_link(fun, options \\ [])](#start_link/2) Starts an agent linked to the current process with the given function. [start\_link(module, fun, args, options \\ [])](#start_link/4) Starts an agent linked to the current process. [stop(agent, reason \\ :normal, timeout \\ :infinity)](#stop/3) Synchronously stops the agent with the given `reason`. [update(agent, fun, timeout \\ 5000)](#update/3) Updates the agent state via the given anonymous function. [update(agent, module, fun, args, timeout \\ 5000)](#update/5) Updates the agent state via the given function. Types ====== ### agent() #### Specs ``` agent() :: pid() | {atom(), node()} | name() ``` The agent reference ### name() #### Specs ``` name() :: atom() | {:global, term()} | {:via, module(), term()} ``` The agent name ### on\_start() #### Specs ``` on_start() :: {:ok, pid()} | {:error, {:already_started, pid()} | term()} ``` Return values of `start*` functions ### state() #### Specs ``` state() :: term() ``` The agent state Functions ========== ### cast(agent, fun) #### Specs ``` cast(agent(), (state() -> state())) :: :ok ``` Performs a cast (*fire and forget*) operation on the agent state. The function `fun` is sent to the `agent` which invokes the function passing the agent state. The return value of `fun` becomes the new state of the agent. Note that `cast` returns `:ok` immediately, regardless of whether `agent` (or the node it should live on) exists. ### cast(agent, module, fun, args) #### Specs ``` cast(agent(), module(), atom(), [term()]) :: :ok ``` Performs a cast (*fire and forget*) operation on the agent state. Same as [`cast/2`](#cast/2) but a module, function, and arguments are expected instead of an anonymous function. The state is added as first argument to the given list of arguments. ### child\_spec(arg) Returns a specification to start an agent under a supervisor. See the "Child specification" section in the [`Supervisor`](supervisor) module for more detailed information. ### get(agent, fun, timeout \\ 5000) #### Specs ``` get(agent(), (state() -> a), timeout()) :: a when a: var ``` Gets an agent value via the given anonymous function. The function `fun` is sent to the `agent` which invokes the function passing the agent state. The result of the function invocation is returned from this function. `timeout` is an integer greater than zero which specifies how many milliseconds are allowed before the agent executes the function and returns the result value, or the atom `:infinity` to wait indefinitely. If no result is received within the specified time, the function call fails and the caller exits. #### Examples ``` iex> {:ok, pid} = Agent.start_link(fn -> 42 end) iex> Agent.get(pid, fn state -> state end) 42 ``` ### get(agent, module, fun, args, timeout \\ 5000) #### Specs ``` get(agent(), module(), atom(), [term()], timeout()) :: any() ``` Gets an agent value via the given function. 
Same as [`get/3`](#get/3) but a module, function, and arguments are expected instead of an anonymous function. The state is added as first argument to the given list of arguments. ### get\_and\_update(agent, fun, timeout \\ 5000) #### Specs ``` get_and_update(agent(), (state() -> {a, state()}), timeout()) :: a when a: var ``` Gets and updates the agent state in one operation via the given anonymous function. The function `fun` is sent to the `agent` which invokes the function passing the agent state. The function must return a tuple with two elements, the first being the value to return (that is, the "get" value) and the second one being the new state of the agent. `timeout` is an integer greater than zero which specifies how many milliseconds are allowed before the agent executes the function and returns the result value, or the atom `:infinity` to wait indefinitely. If no result is received within the specified time, the function call fails and the caller exits. #### Examples ``` iex> {:ok, pid} = Agent.start_link(fn -> 42 end) iex> Agent.get_and_update(pid, fn state -> {state, state + 1} end) 42 iex> Agent.get(pid, fn state -> state end) 43 ``` ### get\_and\_update(agent, module, fun, args, timeout \\ 5000) #### Specs ``` get_and_update(agent(), module(), atom(), [term()], timeout()) :: any() ``` Gets and updates the agent state in one operation via the given function. Same as [`get_and_update/3`](#get_and_update/3) but a module, function, and arguments are expected instead of an anonymous function. The state is added as first argument to the given list of arguments. ### start(fun, options \\ []) #### Specs ``` start((() -> term()), GenServer.options()) :: on_start() ``` Starts an agent process without links (outside of a supervision tree). See [`start_link/2`](#start_link/2) for more information. #### Examples ``` iex> {:ok, pid} = Agent.start(fn -> 42 end) iex> Agent.get(pid, fn state -> state end) 42 ``` ### start(module, fun, args, options \\ []) #### Specs ``` start(module(), atom(), [any()], GenServer.options()) :: on_start() ``` Starts an agent without links with the given module, function, and arguments. See [`start_link/4`](#start_link/4) for more information. ### start\_link(fun, options \\ []) #### Specs ``` start_link((() -> term()), GenServer.options()) :: on_start() ``` Starts an agent linked to the current process with the given function. This is often used to start the agent as part of a supervision tree. Once the agent is spawned, the given function `fun` is invoked in the server process, and should return the initial agent state. Note that [`start_link/2`](#start_link/2) does not return until the given function has returned. #### Options The `:name` option is used for registration as described in the module documentation. If the `:timeout` option is present, the agent is allowed to spend at most the given number of milliseconds on initialization or it will be terminated and the start function will return `{:error, :timeout}`. If the `:debug` option is present, the corresponding function in the [`:sys` module](http://www.erlang.org/doc/man/sys.html) will be invoked. If the `:spawn_opt` option is present, its value will be passed as options to the underlying process as in [`Process.spawn/4`](process#spawn/4). #### Return values If the server is successfully created and initialized, the function returns `{:ok, pid}`, where `pid` is the PID of the server. 
If an agent with the specified name already exists, the function returns `{:error, {:already_started, pid}}` with the PID of that process. If the given function callback fails, the function returns `{:error, reason}`. #### Examples ``` iex> {:ok, pid} = Agent.start_link(fn -> 42 end) iex> Agent.get(pid, fn state -> state end) 42 iex> {:error, {exception, _stacktrace}} = Agent.start(fn -> raise "oops" end) iex> exception %RuntimeError{message: "oops"} ``` ### start\_link(module, fun, args, options \\ []) #### Specs ``` start_link(module(), atom(), [any()], GenServer.options()) :: on_start() ``` Starts an agent linked to the current process. Same as [`start_link/2`](#start_link/2) but a module, function, and arguments are expected instead of an anonymous function; `fun` in `module` will be called with the given arguments `args` to initialize the state. ### stop(agent, reason \\ :normal, timeout \\ :infinity) #### Specs ``` stop(agent(), reason :: term(), timeout()) :: :ok ``` Synchronously stops the agent with the given `reason`. It returns `:ok` if the agent terminates with the given reason. If the agent terminates with another reason, the call will exit. This function keeps OTP semantics regarding error reporting. If the reason is any other than `:normal`, `:shutdown` or `{:shutdown, _}`, an error report will be logged. #### Examples ``` iex> {:ok, pid} = Agent.start_link(fn -> 42 end) iex> Agent.stop(pid) :ok ``` ### update(agent, fun, timeout \\ 5000) #### Specs ``` update(agent(), (state() -> state()), timeout()) :: :ok ``` Updates the agent state via the given anonymous function. The function `fun` is sent to the `agent` which invokes the function passing the agent state. The return value of `fun` becomes the new state of the agent. This function always returns `:ok`. `timeout` is an integer greater than zero which specifies how many milliseconds are allowed before the agent executes the function and returns the result value, or the atom `:infinity` to wait indefinitely. If no result is received within the specified time, the function call fails and the caller exits. #### Examples ``` iex> {:ok, pid} = Agent.start_link(fn -> 42 end) iex> Agent.update(pid, fn state -> state + 1 end) :ok iex> Agent.get(pid, fn state -> state end) 43 ``` ### update(agent, module, fun, args, timeout \\ 5000) #### Specs ``` update(agent(), module(), atom(), [term()], timeout()) :: :ok ``` Updates the agent state via the given function. Same as [`update/3`](#update/3) but a module, function, and arguments are expected instead of an anonymous function. The state is added as first argument to the given list of arguments.
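As a hedged sketch of the module-function-arguments form (the docs above show none; `Kernel.+/2` is used here only for illustration), note how the state is prepended to the argument list:

```
iex> {:ok, pid} = Agent.start_link(fn -> 41 end)
iex> Agent.update(pid, Kernel, :+, [1])  # computes Kernel.+(41, 1)
:ok
iex> Agent.get(pid, Kernel, :+, [0])     # computes Kernel.+(42, 0)
42
```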
elixir ExUnit.Case ExUnit.Case ============ Helpers for defining test cases. This module must be used in other modules as a way to configure and prepare them for testing. When used, it accepts the following options: * `:async` - configures tests in this module to run concurrently with tests in other modules. Tests in the same module never run concurrently. It should be enabled only if tests do not change any global state. Defaults to `false`. This module automatically includes all callbacks defined in [`ExUnit.Callbacks`](exunit.callbacks). See that module for more information on `setup`, `start_supervised`, `on_exit` and the test process lifecycle. For grouping tests together, see [`describe/2`](#describe/2) in this module. Examples --------- ``` defmodule AssertionTest do # Use the module use ExUnit.Case, async: true # The "test" macro is imported by ExUnit.Case test "always pass" do assert true end end ``` Context -------- All tests receive a context as an argument. The context is particularly useful for sharing information between callbacks and tests: ``` defmodule KVTest do use ExUnit.Case setup do {:ok, pid} = KV.start_link() {:ok, pid: pid} end test "stores key-value pairs", context do assert KV.put(context[:pid], :hello, :world) == :ok assert KV.get(context[:pid], :hello) == :world end end ``` As the context is a map, it can be pattern matched on to extract information: ``` test "stores key-value pairs", %{pid: pid} = _context do assert KV.put(pid, :hello, :world) == :ok assert KV.get(pid, :hello) == :world end ``` Tags ----- The context is used to pass information from the callbacks to the test. In order to pass information from the test to the callback, ExUnit provides tags. When a test is tagged, the tag value can be accessed in the context, allowing the developer to customize the test. Let's see an example: ``` defmodule FileTest do # Changing directory cannot be async use ExUnit.Case, async: false setup context do # Read the :cd tag value if cd = context[:cd] do prev_cd = File.cwd!() File.cd!(cd) on_exit(fn -> File.cd!(prev_cd) end) end :ok end @tag cd: "fixtures" test "reads UTF-8 fixtures" do File.read("README.md") end end ``` In the example above, we have defined a tag called `:cd` that is read in the setup callback to configure the working directory the test is going to run in. Tags are also very effective when used with case templates ([`ExUnit.CaseTemplate`](exunit.casetemplate)), allowing callbacks in the case template to customize the test behaviour. Note a tag can be set in two different ways: ``` @tag key: value @tag :key # equivalent to setting @tag key: true ``` If a tag is given more than once, the last value wins. ### Module and describe tags A tag can be set for all tests in a module or describe block by setting `@moduletag` or `@describetag` inside each context respectively: ``` defmodule ApiTest do use ExUnit.Case @moduletag :external describe "makes calls to the right endpoint" do @describetag :endpoint # ... end end ``` If you are setting a `@moduletag`, you must set that after your call to `use ExUnit.Case`, otherwise you will see compilation errors. If the same key is set via `@tag`, the `@tag` value has higher precedence.
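As a hedged sketch of this precedence rule (the module name and the `:level` tag are hypothetical):

```
defmodule TagPrecedenceTest do
  use ExUnit.Case, async: true

  @moduletag level: :module

  # For the same key, @tag wins over @moduletag
  @tag level: :test
  test "per-test tag takes precedence", context do
    assert context[:level] == :test
  end

  test "the module tag applies otherwise", context do
    assert context[:level] == :module
  end
end
```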
### Known tags The following tags are set automatically by ExUnit and are therefore reserved: * `:module` - the module on which the test was defined * `:file` - the file on which the test was defined * `:line` - the line on which the test was defined * `:test` - the test name * `:async` - if the test case is in async mode * `:registered` - used for [`ExUnit.Case.register_attribute/3`](exunit.case#register_attribute/3) values * `:describe` - the describe block the test belongs to The following tags customize how tests behave: * `:capture_log` - see the "Log Capture" section below * `:skip` - skips the test with the given reason * `:timeout` - customizes the test timeout in milliseconds (defaults to 60000). Accepts `:infinity` as a timeout value. The `:test_type` tag is automatically set by ExUnit, but is *not* reserved. This tag is available for users to customize if they desire. Filters -------- Tags can also be used to identify specific tests, which can then be included or excluded using filters. The most common functionality is to exclude some particular tests from running, which can be done via [`ExUnit.configure/1`](exunit#configure/1): ``` # Exclude all external tests from running ExUnit.configure(exclude: [external: true]) ``` From now on, ExUnit will not run any test that has the `:external` option set to `true`. This behaviour can be reversed with the `:include` option which is usually passed through the command line: ``` mix test --include external:true ``` Run [`mix help test`](https://hexdocs.pm/mix/Mix.Tasks.Test.html) for more information on how to run filters via Mix. Another use case for tags and filters is to exclude all tests that have a particular tag by default, regardless of its value, and include only a certain subset: ``` ExUnit.configure(exclude: :os, include: [os: :unix]) ``` A given include/exclude filter can be given more than once: ``` ExUnit.configure(exclude: [os: :unix, os: :windows]) ``` Keep in mind that all tests are included by default, so unless they are excluded first, the `include` option has no effect. Log Capture ------------ ExUnit can optionally suppress printing of log messages that are generated during a test. Log messages generated while running a test are captured and only if the test fails are they printed to aid with debugging. You can opt into this behaviour for individual tests by tagging them with `:capture_log` or enable log capture for all tests in the ExUnit configuration: ``` ExUnit.start(capture_log: true) ``` This default can be overridden by `@tag capture_log: false` or `@moduletag capture_log: false`. Since `setup_all` blocks don't belong to a specific test, log messages generated in them (or between tests) are never captured. If you want to suppress these messages as well, remove the console backend globally by setting: ``` config :logger, backends: [] ``` Summary ======== Functions ---------- [describe(message, list)](#describe/2) Describes tests together. [register\_attribute(env, name, opts \\ [])](#register_attribute/3) Registers a new attribute to be used during [`ExUnit.Case`](#content) tests. [register\_test(map, test\_type, name, tags)](#register_test/4) Registers a function to run as part of this case. [test(message)](#test/1) Defines a not implemented test with a string. [test(message, var \\ quote do \_ end, contents)](#test/3) Defines a test with a string. Functions ========== ### describe(message, list) Describes tests together. Every describe block receives a name which is used as prefix for upcoming tests. 
Inside a block, [`ExUnit.Callbacks.setup/1`](exunit.callbacks#setup/1) may be invoked and it will define a setup callback to run only for the current block. The describe name is also added as a tag, allowing developers to run tests for specific blocks. #### Examples ``` defmodule StringTest do use ExUnit.Case, async: true describe "String.capitalize/1" do test "first grapheme is in uppercase" do assert String.capitalize("hello") == "Hello" end test "converts remaining graphemes to lowercase" do assert String.capitalize("HELLO") == "Hello" end end end ``` When using Mix, you can run all tests in a describe block by name: ``` mix test --only describe:"String.capitalize/1" ``` or by passing the exact line the describe block starts on: ``` mix test path/to/file:123 ``` Note describe blocks cannot be nested. Instead of relying on hierarchy for composition, developers should build on top of named setups. For example: ``` defmodule UserManagementTest do use ExUnit.Case, async: true describe "when user is logged in and is an admin" do setup [:log_user_in, :set_type_to_admin] test ... end describe "when user is logged in and is a manager" do setup [:log_user_in, :set_type_to_manager] test ... end defp log_user_in(context) do # ... end end ``` By forbidding hierarchies in favor of named setups, it is straightforward for the developer to glance at each describe block and know exactly the setup steps involved. ### register\_attribute(env, name, opts \\ []) Registers a new attribute to be used during [`ExUnit.Case`](#content) tests. The attribute values will be available through `context.registered`. Registered values are cleared after each [`ExUnit.Case.test/3`](exunit.case#test/3), similarly to `@tag`. [`Module.register_attribute/3`](https://hexdocs.pm/elixir/Module.html#register_attribute/3) is used to register the attribute; this function takes the same options. #### Examples ``` defmodule MyTest do use ExUnit.Case ExUnit.Case.register_attribute(__MODULE__, :fixtures, accumulate: true) @fixtures :user @fixtures {:post, insert: false} test "using custom attribute", context do assert context.registered.fixtures == [{:post, insert: false}, :user] end test "custom attributes are cleared per test", context do assert context.registered.fixtures == [] end end ``` ### register\_test(map, test\_type, name, tags) Registers a function to run as part of this case. This is used by third-party projects, like QuickCheck, to implement macros like `property/3` that work like `test` but instead define a property. See the [`test/3`](#test/3) implementation for an example of invoking this function. The test type will be converted to a string and pluralized for display. You can use [`ExUnit.plural_rule/2`](exunit#plural_rule/2) to set a custom pluralization. ### test(message) Defines a not implemented test with a string. Provides a convenient macro that allows a test to be defined with a string, but not yet implemented. The resulting test will always fail and print a "Not implemented" error message. The resulting test case is also tagged with `:not_implemented`. #### Examples ``` test "this will be a test in future" ``` ### test(message, var \\ quote do \_ end, contents) Defines a test with a string. Provides a convenient macro that allows a test to be defined with a string. This macro automatically inserts the atom `:ok` as the last line of the test.
That said, a passing test always returns `:ok`, but, more importantly, it forces Elixir to not tail call optimize the test and therefore avoids hiding lines from the backtrace. #### Examples ``` test "true is equal to true" do assert true == true end ``` elixir ExUnit.TestModule ExUnit.TestModule ================== A struct that keeps information about the test case. It is received by formatters and contains the following fields: * `:name` - the test case name * `:state` - the test error state (see [`ExUnit.state/0`](exunit#t:state/0)) * `:tests` - all tests for this case Summary ======== Types ------ [t()](#t:t/0) Types ====== ### t() #### Specs ``` t() :: %ExUnit.TestModule{ name: module(), state: ExUnit.state(), tests: [ExUnit.Test.t()] } ``` elixir IO and the file system Getting Started IO and the file system ====================== This chapter is a quick introduction to input/output mechanisms and file-system-related tasks, as well as to related modules like [`IO`](https://hexdocs.pm/elixir/IO.html), [`File`](https://hexdocs.pm/elixir/File.html) and [`Path`](https://hexdocs.pm/elixir/Path.html). We had originally sketched this chapter to come much earlier in the getting started guide. However, we noticed the IO system provides a great opportunity to shed some light on some philosophies and curiosities of Elixir and the VM. The `IO` module --------------- The [`IO`](https://hexdocs.pm/elixir/IO.html) module is the main mechanism in Elixir for reading and writing to standard input/output (`:stdio`), standard error (`:stderr`), files, and other IO devices. Usage of the module is pretty straightforward: ``` iex> IO.puts("hello world") hello world :ok iex> IO.gets("yes or no? ") yes or no? yes "yes\n" ``` By default, functions in the `IO` module read from the standard input and write to the standard output. We can change that by passing, for example, `:stderr` as an argument (in order to write to the standard error device): ``` iex> IO.puts(:stderr, "hello world") hello world :ok ``` The `File` module ----------------- The [`File`](https://hexdocs.pm/elixir/File.html) module contains functions that allow us to open files as IO devices. By default, files are opened in binary mode, which requires developers to use the specific `IO.binread/2` and `IO.binwrite/2` functions from the `IO` module: ``` iex> {:ok, file} = File.open("hello", [:write]) {:ok, #PID<0.47.0>} iex> IO.binwrite(file, "world") :ok iex> File.close(file) :ok iex> File.read("hello") {:ok, "world"} ``` A file can also be opened with `:utf8` encoding, which tells the `File` module to interpret the bytes read from the file as UTF-8-encoded bytes. Besides functions for opening, reading and writing files, the `File` module has many functions to work with the file system. Those functions are named after their UNIX equivalents. For example, `File.rm/1` can be used to remove files, `File.mkdir/1` to create directories, `File.mkdir_p/1` to create directories and all their parent chain. There are even `File.cp_r/2` and `File.rm_rf/1` to respectively copy and remove files and directories recursively (i.e., copying and removing the contents of the directories too). You will also notice that functions in the `File` module have two variants: one “regular” variant and another variant with a trailing bang (`!`). For example, when we read the `"hello"` file in the example above, we use `File.read/1`. 
Alternatively, we can use `File.read!/1`: ``` iex> File.read("hello") {:ok, "world"} iex> File.read!("hello") "world" iex> File.read("unknown") {:error, :enoent} iex> File.read!("unknown") ** (File.Error) could not read file "unknown": no such file or directory ``` Notice that the version with `!` returns the contents of the file instead of a tuple, and if anything goes wrong the function raises an error. The version without `!` is preferred when you want to handle different outcomes using pattern matching: ``` case File.read(file) do {:ok, body} -> # do something with the `body` {:error, reason} -> # handle the error caused by `reason` end ``` However, if you expect the file to be there, the bang variation is more useful as it raises a meaningful error message. Avoid writing: ``` {:ok, body} = File.read(file) ``` as, in case of an error, `File.read/1` will return `{:error, reason}` and the pattern matching will fail. You will still get the desired result (a raised error), but the message will be about the pattern which doesn’t match (thus being cryptic with respect to what the error actually is about). Therefore, if you don’t want to handle the error outcomes, prefer using `File.read!/1`. The `Path` module ----------------- The majority of the functions in the `File` module expect paths as arguments. Most commonly, those paths will be regular binaries. The [`Path`](https://hexdocs.pm/elixir/Path.html) module provides facilities for working with such paths: ``` iex> Path.join("foo", "bar") "foo/bar" iex> Path.expand("~/hello") "/Users/jose/hello" ``` Using functions from the `Path` module as opposed to directly manipulating strings is preferred since the `Path` module takes care of different operating systems transparently. Finally, keep in mind that Elixir will automatically convert slashes (`/`) into backslashes (`\`) on Windows when performing file operations. With this, we have covered the main modules that Elixir provides for dealing with IO and interacting with the file system. In the next sections, we will discuss some advanced topics regarding IO. Those sections are not necessary in order to write Elixir code, so feel free to skip them, but they do provide a nice overview of how the IO system is implemented in the VM and other curiosities. Processes --------- You may have noticed that `File.open/2` returns a tuple like `{:ok, pid}`: ``` iex> {:ok, file} = File.open("hello", [:write]) {:ok, #PID<0.47.0>} ``` That happens because the `IO` module actually works with processes (see [chapter 11](processes)). Given a file is a process, when you write to a file that has been closed, you are actually sending a message to a process which has been terminated: ``` iex> File.close(file) :ok iex> IO.write(file, "is anybody out there") {:error, :terminated} ``` Let’s see in more detail what happens when you request `IO.write(pid, binary)`. The `IO` module sends a message to the process identified by `pid` with the desired operation. A small ad-hoc process can help us see it: ``` iex> pid = spawn fn -> ...> receive do: (msg -> IO.inspect msg) ...> end #PID<0.57.0> iex> IO.write(pid, "hello") {:io_request, #PID<0.41.0>, #Reference<0.0.8.91>, {:put_chars, :unicode, "hello"}} ** (ErlangError) erlang error: :terminated ``` After `IO.write/2`, we can see the request sent by the `IO` module (a four-element tuple) printed out. Soon after that, we see that it fails since the `IO` module expected some kind of result, which we did not supply.
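To complete the picture, here is a hedged sketch of the reply side of the protocol: a toy device that acknowledges a single `:put_chars` request with `{:io_reply, reply_as, :ok}`, which is enough for `IO.write/2` to return `:ok` (a real device must handle the full I/O protocol, and output interleaving in IEx may vary):

```
iex> pid = spawn(fn ->
...>   receive do
...>     {:io_request, from, reply_as, {:put_chars, :unicode, chars}} ->
...>       IO.puts("device got: #{chars}")
...>       send(from, {:io_reply, reply_as, :ok})
...>   end
...> end)
iex> IO.write(pid, "hello")
device got: hello
:ok
```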
By modeling IO devices with processes, the Erlang VM allows I/O messages to be routed between different nodes running Distributed Erlang, and even allows files to be exchanged to perform read/write operations across nodes. `iodata` and `chardata` ------------------------ In all of the examples above, we used binaries when writing to files. In the chapter [“Binaries, strings, and charlists”](binaries-strings-and-char-lists), we mentioned how strings are made of bytes while charlists are lists with Unicode codepoints. The functions in `IO` and `File` also allow lists to be given as arguments. Not only that, they also allow a mixed list of lists, integers, and binaries to be given: ``` iex> IO.puts('hello world') hello world :ok iex> IO.puts(['hello', ?\s, "world"]) hello world :ok ``` However, using lists in IO operations requires some attention. A list may represent either a bunch of bytes or a bunch of characters, and which one to use depends on the encoding of the IO device. If the file is opened without encoding, the file is expected to be in raw mode, and the functions in the `IO` module starting with `bin*` must be used. Those functions expect an `iodata` as an argument; i.e., they expect a list of integers representing bytes or binaries to be given. On the other hand, `:stdio` and files opened with `:utf8` encoding work with the remaining functions in the `IO` module. Those functions expect a `char_data` as an argument, that is, a list of characters or strings. Although this is a subtle difference, you only need to worry about these details if you intend to pass lists to those functions. Binaries are already represented by the underlying bytes and as such their representation is always “raw”. This finishes our tour of IO devices and IO-related functionality. We have learned about three Elixir modules - [`IO`](https://hexdocs.pm/elixir/IO.html), [`File`](https://hexdocs.pm/elixir/File.html), and [`Path`](https://hexdocs.pm/elixir/Path.html) - as well as how the VM uses processes for the underlying IO mechanisms and how to use `chardata` and `iodata` for IO operations. elixir Writing Documentation Writing Documentation ===================== Elixir treats documentation as a first-class citizen. This means documentation should be easy to write and easy to read. In this document you will learn how to write documentation in Elixir, covering constructs like module attributes, style practices and doctests. Markdown --------- Elixir documentation is written using Markdown. There are plenty of guides on Markdown online; we recommend the ones available at GitHub as a starting point: * [Basic writing and formatting syntax](https://help.github.com/articles/basic-writing-and-formatting-syntax/) * [Mastering Markdown](https://guides.github.com/features/mastering-markdown/) Module Attributes ------------------ Documentation in Elixir is usually attached to module attributes. Let's see an example: ``` defmodule MyApp.Hello do @moduledoc """ This is the Hello module. """ @moduledoc since: "1.0.0" @doc """ Says hello to the given `name`. Returns `:ok`. ## Examples iex> MyApp.Hello.world(:john) :ok """ @doc since: "1.3.0" def world(name) do IO.puts("hello #{name}") end end ``` The `@moduledoc` attribute is used to add documentation to the module. `@doc` is used before a function to provide documentation for it. Besides the attributes above, `@typedoc` can also be used to attach documentation to types defined as part of typespecs.
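For completeness, a minimal sketch of `@typedoc` (the module and type names are hypothetical):

```
defmodule MyApp.Calendar do
  @typedoc """
  A four-digit year, for example: 1984.
  """
  @type year :: integer
end
```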
Elixir also allows metadata to be attached to documentation, by passing a keyword list to `@doc` and friends. Function Arguments ------------------- When a function is documented, its argument names are inferred by the compiler. For example: ``` def size(%{size: size}) do size end ``` The compiler will infer this argument as `map`. Sometimes the inference will be suboptimal, especially if the function contains multiple clauses with the argument matching on different values each time. You can specify the proper names for documentation by declaring only the function head at any moment before the implementation: ``` def size(map_with_size) def size(%{size: size}) do size end ``` Documentation metadata ----------------------- Elixir allows developers to attach arbitrary metadata to the documentation. This is done by passing a keyword list to the relevant attribute (such as `@moduledoc`, `@typedoc`, and `@doc`). A commonly used metadata key is `:since`, which annotates in which version that particular module, function, type, or callback was added, as shown in the example above. Another common metadata key is `:deprecated`, which emits a warning in the documentation, explaining that its usage is discouraged: ``` @doc deprecated: "Use Foo.bar/2 instead" ``` Note the `:deprecated` key does not warn when a developer invokes the functions. If you want the code to also emit a warning, you can use the `@deprecated` attribute: ``` @deprecated "Use Foo.bar/2 instead" ``` Metadata can have any key. Documentation tools often use metadata to provide more data to readers and to enrich the user experience. Recommendations ---------------- When writing documentation: * Keep the first paragraph of the documentation concise and simple, typically one line. Tools like [ExDoc](https://github.com/elixir-lang/ex_doc/) use the first line to generate a summary. * Reference modules by their full name. Markdown uses backticks to quote code. Elixir builds on top of that to automatically generate links when module or function names are referenced. For this reason, always use full module names. If you have a module called `MyApp.Hello`, always reference it as ``MyApp.Hello`` and never as ``Hello``. * Reference functions by name and arity if they are local, as in ``world/1``, or by module, name and arity if pointing to an external module: ``MyApp.Hello.world/1``. * Reference a `@callback` by prepending `c:`, as in ``c:world/1``. * Reference a `@type` by prepending `t:`, as in ``t:values/0``. * Start new sections with second level Markdown headers `##`. First level headers are reserved for module and function names. * Place documentation before the first clause of multi-clause functions. Documentation is always per function and arity and not per clause. * Use the `:since` key in the documentation metadata to annotate whenever new functions or modules are added to your API. Doctests --------- We recommend that developers include examples in their documentation, often under their own `## Examples` heading. To ensure examples do not get out of date, Elixir's test framework (ExUnit) provides a feature called doctests that allows developers to test the examples in their documentation. Doctests work by parsing out code samples starting with `iex>` from the documentation. You can read more about it at [`ExUnit.DocTest`](https://hexdocs.pm/ex_unit/ExUnit.DocTest.html). Notice doctests have limitations.
When you cannot doctest a function because it relies on state or side-effects, we recommend developers include examples directly, without the `iex>` prompt. Documentation != Code comments ------------------------------- Elixir treats documentation and code comments as different concepts. Documentation is an explicit contract between you and users of your Application Programming Interface (API), be they third-party developers, co-workers, or your future self. Modules and functions must always be documented if they are part of your API. Code comments are aimed at developers reading the code. They are useful for marking improvements, leaving notes (for example, why you had to resort to a workaround due to a bug in a library), and so forth. They are tied to the source code: you can completely rewrite a function and remove all existing code comments, and it will continue to behave the same, with no change to either its behaviour or its documentation. Because private functions cannot be accessed externally, Elixir will warn if a private function has a `@doc` attribute and will discard its content. However, you can add code comments to private functions, as with any other piece of code, and we recommend developers do so whenever they believe it will add relevant information to the readers and maintainers of such code. Finally, beware of redundant code comments, such as the ones describing exactly what the code does: ``` # Total is the sum of the batch and individual entries total = batch_sum + individual_sum ``` In summary, documentation is a contract with users of your API, who may not necessarily have access to the source code; whereas code comments are for those who interact directly with the source. You can learn and express different guarantees about your software by separating those two concepts. Hiding Internal Modules and Functions -------------------------------------- Besides the modules and functions libraries provide as part of their public interface, libraries may also implement important functionality that is not part of their API. While these modules and functions can be accessed, they are meant to be internal to the library and thus should not have documentation for end users. Conveniently, Elixir allows developers to hide modules and functions from the documentation, by setting `@doc false` to hide a particular function, or `@moduledoc false` to hide the whole module. If a module is hidden, you may even document the functions in the module, but the module itself won't be listed in the documentation: ``` defmodule MyApp.Hidden do @moduledoc false @doc """ This function won't be listed in docs. """ def function_that_wont_be_listed_in_docs do # ... end end ``` However, keep in mind that hiding a function from the documentation does not make it private. The hidden function above can still be invoked as `MyApp.Hidden.function_that_wont_be_listed_in_docs()`. Not only that, if `MyApp.Hidden` is imported, that function will also be imported into the caller. For those reasons, be cautious when adding `@doc false` to functions; instead, use one of these two options: * Move the undocumented function to a module with `@moduledoc false`, like `MyApp.Hidden`, ensuring the function won't be accidentally exposed or imported. Remember you can use `@moduledoc false` to hide a whole module and still document each function with `@doc`. Tools will still ignore the module. * Start the function name with one or two underscores, for example, `__add__/2`, and add `@doc false`.
The compiler does not import functions with leading underscores, and such names hint to anyone reading the code that they are intended for private usage. Code.fetch\_docs/1 ------------------- Elixir stores documentation inside pre-defined chunks in the bytecode. It can be accessed from Elixir by using the [`Code.fetch_docs/1`](code#fetch_docs/1) function. This also means documentation is only accessed when required and not when modules are loaded by the Virtual Machine. The only downside is that modules defined in-memory, like the ones defined in IEx, cannot have their documentation accessed as they do not have their bytecode written to disk.
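As a hedged sketch of the above (the exact contents of the returned tuple vary across Elixir versions), [`Code.fetch_docs/1`](code#fetch_docs/1) returns an EEP 48 `{:docs_v1, ...}` tuple when the module's bytecode is available on disk:

```
# Sketch: fetch the docs chunk for a module compiled to disk
{:docs_v1, _annotation, :elixir, format, %{"en" => moduledoc}, _metadata, _docs} =
  Code.fetch_docs(Atom)

format                #=> "text/markdown"
is_binary(moduledoc)  #=> true
```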
elixir case, cond, and if Getting Started case, cond, and if ================== In this chapter, we will learn about the `case`, `cond`, and `if` control flow structures. `case` ------ `case` allows us to compare a value against many patterns until we find a matching one: ``` iex> case {1, 2, 3} do ...> {4, 5, 6} -> ...> "This clause won't match" ...> {1, x, 3} -> ...> "This clause will match and bind x to 2 in this clause" ...> _ -> ...> "This clause would match any value" ...> end "This clause will match and bind x to 2 in this clause" ``` If you want to pattern match against an existing variable, you need to use the `^` operator: ``` iex> x = 1 1 iex> case 10 do ...> ^x -> "Won't match" ...> _ -> "Will match" ...> end "Will match" ``` Clauses also allow extra conditions to be specified via guards: ``` iex> case {1, 2, 3} do ...> {1, x, 3} when x > 0 -> ...> "Will match" ...> _ -> ...> "Would match, if guard condition were not satisfied" ...> end "Will match" ``` The first clause above will only match when `x` is positive. Keep in mind errors in guards do not leak but simply make the guard fail: ``` iex> hd(1) ** (ArgumentError) argument error iex> case 1 do ...> x when hd(x) -> "Won't match" ...> x -> "Got #{x}" ...> end "Got 1" ``` If none of the clauses match, an error is raised: ``` iex> case :ok do ...> :error -> "Won't match" ...> end ** (CaseClauseError) no case clause matching: :ok ``` Consult [the full documentation for guards](https://hexdocs.pm/elixir/patterns-and-guards.html#guards) for more information about guards, how they are used, and what expressions are allowed in them. Note anonymous functions can also have multiple clauses and guards: ``` iex> f = fn ...> x, y when x > 0 -> x + y ...> x, y -> x * y ...> end #Function<12.71889879/2 in :erl_eval.expr/5> iex> f.(1, 3) 4 iex> f.(-1, 3) -3 ``` The number of arguments in each anonymous function clause needs to be the same; otherwise, an error is raised. ``` iex> f2 = fn ...> x, y when x > 0 -> x + y ...> x, y, z -> x * y + z ...> end ** (CompileError) iex:1: cannot mix clauses with different arities in anonymous functions ``` `cond` ------ `case` is useful when you need to match against different values. However, in many circumstances, we want to check different conditions and find the first one that does not evaluate to `nil` or `false`. In such cases, one may use `cond`: ``` iex> cond do ...> 2 + 2 == 5 -> ...> "This will not be true" ...> 2 * 2 == 3 -> ...> "Nor this" ...> 1 + 1 == 2 -> ...> "But this will" ...> end "But this will" ``` This is equivalent to `else if` clauses in many imperative languages (although used much less frequently here). If all of the conditions return `nil` or `false`, an error (`CondClauseError`) is raised. For this reason, it may be necessary to add a final condition, equal to `true`, which will always match: ``` iex> cond do ...> 2 + 2 == 5 -> ...> "This is never true" ...> 2 * 2 == 3 -> ...> "Nor this" ...> true -> ...> "This is always true (equivalent to else)" ...> end "This is always true (equivalent to else)" ``` Finally, note `cond` considers any value besides `nil` and `false` to be true: ``` iex> cond do ...> hd([1, 2, 3]) -> ...> "1 is considered as true" ...> end "1 is considered as true" ``` `if` and `unless` ------------------ Besides `case` and `cond`, Elixir also provides the macros `if/2` and `unless/2` which are useful when you need to check for only one condition: ``` iex> if true do ...> "This works!" ...> end "This works!"
iex> unless true do ...> "This will never be seen" ...> end nil ``` If the condition given to `if/2` returns `false` or `nil`, the body given between `do/end` is not executed and instead it returns `nil`. The opposite happens with `unless/2`. They also support `else` blocks: ``` iex> if nil do ...> "This won't be seen" ...> else ...> "This will" ...> end "This will" ``` > Note: An interesting note regarding `if/2` and `unless/2` is that they are implemented as macros in the language; they aren’t special language constructs as they would be in many languages. You can check the documentation and the source of `if/2` in [the `Kernel` module docs](https://hexdocs.pm/elixir/Kernel.html). The `Kernel` module is also where operators like `+/2` and functions like `is_function/2` are defined, all automatically imported and available in your code by default. > > `do/end` blocks ---------------- At this point, we have learned four control structures: `case`, `cond`, `if`, and `unless`, and they were all wrapped in `do/end` blocks. As it happens, we could also write `if` as follows: ``` iex> if true, do: 1 + 2 3 ``` Notice how the example above has a comma between `true` and `do:`; that’s because it is using Elixir’s regular syntax where each argument is separated by a comma. We say this syntax is using *keyword lists*. We can pass `else` using keywords too: ``` iex> if false, do: :this, else: :that :that ``` `do/end` blocks are a syntactic convenience built on top of the keyword list syntax. That’s why `do/end` blocks do not require a comma between the previous argument and the block. They are useful exactly because they remove the verbosity when writing blocks of code. These are equivalent: ``` iex> if true do ...> a = 1 + 2 ...> a + 10 ...> end 13 iex> if true, do: ( ...> a = 1 + 2 ...> a + 10 ...> ) 13 ``` One thing to keep in mind when using `do/end` blocks is that they are always bound to the outermost function call. For example, the following expression: ``` iex> is_number if true do ...> 1 + 2 ...> end ** (CompileError) iex:1: undefined function is_number/2 ``` would be parsed as: ``` iex> is_number(if true) do ...> 1 + 2 ...> end ** (CompileError) iex:1: undefined function is_number/2 ``` which leads to an undefined function error because that invocation passes two arguments, and `is_number/2` does not exist. The `if true` expression is invalid in itself because it needs the block, but since the arity of `is_number/2` does not match, Elixir does not even reach its evaluation. Adding explicit parentheses is enough to bind the block to `if`: ``` iex> is_number(if true do ...> 1 + 2 ...> end) true ``` Keyword lists play an important role in the language and are quite common in many functions and macros. We will explore them a bit more in a future chapter. Now it is time to talk about “Binaries, strings, and char lists”. elixir Calendar.UTCOnlyTimeZoneDatabase Calendar.UTCOnlyTimeZoneDatabase ================================= Built-in time zone database that works only in Etc/UTC. For all other time zones, it returns `{:error, :utc_only_time_zone_database}`. elixir File File ===== This module contains functions to manipulate files. Some of those functions are low-level, allowing the user to interact with files or IO devices, like [`open/2`](#open/2), [`copy/3`](#copy/3) and others. This module also provides higher-level functions that work with filenames and have their naming based on UNIX variants.
For example, one can copy a file via [`cp/3`](#cp/3) and remove files and directories recursively via [`rm_rf/1`](#rm_rf/1). Paths given to functions in this module can be either relative to the current working directory (as returned by [`File.cwd/0`](file#cwd/0)), or absolute paths. Shell conventions like `~` are not expanded automatically. To use paths like `~/Downloads`, you can use [`Path.expand/1`](path#expand/1) or [`Path.expand/2`](path#expand/2) to expand your path to an absolute path. Encoding --------- In order to write and read files, one must use the functions in the [`IO`](io) module. By default, a file is opened in binary mode, which requires the functions [`IO.binread/2`](io#binread/2) and [`IO.binwrite/2`](io#binwrite/2) to interact with the file. If a developer passes `:utf8` as an option when opening the file, the slower [`IO.read/2`](io#read/2) and [`IO.write/2`](io#write/2) functions must be used instead, as they are responsible for doing the proper conversions and providing the proper data guarantees. Note that filenames given as charlists in Elixir are always treated as UTF-8. In particular, we expect that the shell and the operating system are configured to use UTF-8 encoding. Binary filenames are considered raw and passed to the operating system as is. API ---- Most of the functions in this module return `:ok` or `{:ok, result}` in case of success, `{:error, reason}` otherwise. Those functions also have a variant that ends with `!` which returns the result (instead of the `{:ok, result}` tuple) in case of success or raises an exception in case it fails. For example: ``` File.read("hello.txt") #=> {:ok, "World"} File.read("invalid.txt") #=> {:error, :enoent} File.read!("hello.txt") #=> "World" File.read!("invalid.txt") #=> raises File.Error ``` In general, a developer should use the former in case they want to react if the file does not exist. The latter should be used when the developer expects their software to fail in case the file cannot be read (i.e. it is literally an exception). Processes and raw files ------------------------ Every time a file is opened, Elixir spawns a new process. Writing to a file is equivalent to sending messages to the process that writes to the file descriptor. This means files can be passed between nodes and message passing guarantees they can write to the same file in a network. However, you may not always want to pay the price for this abstraction. In such cases, a file can be opened in `:raw` mode. The options `:read_ahead` and `:delayed_write` are also useful when operating on large files or working with files in tight loops. Check [`:file.open/2`](http://www.erlang.org/doc/man/file.html#open-2) for more information about such options and other performance considerations. Summary ======== Types ------ [encoding\_mode()](#t:encoding_mode/0) [erlang\_time()](#t:erlang_time/0) [io\_device()](#t:io_device/0) [mode()](#t:mode/0) [posix()](#t:posix/0) [posix\_time()](#t:posix_time/0) [stat\_options()](#t:stat_options/0) [stream\_mode()](#t:stream_mode/0) Functions ---------- [cd(path)](#cd/1) Sets the current working directory. [cd!(path)](#cd!/1) The same as [`cd/1`](#cd/1), but raises a [`File.Error`](file.error) exception if it fails. [cd!(path, function)](#cd!/2) Changes the current directory to the given `path`, executes the given function and then reverts back to the previous path regardless of whether there is an exception. [chgrp(path, gid)](#chgrp/2) Changes the group given by the group ID `gid` for a given `file`.
Returns `:ok` on success, or `{:error, reason}` on failure. [chgrp!(path, gid)](#chgrp!/2) Same as [`chgrp/2`](#chgrp/2), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. [chmod(path, mode)](#chmod/2) Changes the `mode` for a given `file`. [chmod!(path, mode)](#chmod!/2) Same as [`chmod/2`](#chmod/2), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. [chown(path, uid)](#chown/2) Changes the owner given by the user ID `uid` for a given `file`. Returns `:ok` on success, or `{:error, reason}` on failure. [chown!(path, uid)](#chown!/2) Same as [`chown/2`](#chown/2), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. [close(io\_device)](#close/1) Closes the file referenced by `io_device`. It mostly returns `:ok`, except for some severe errors such as out of memory. [copy(source, destination, bytes\_count \\ :infinity)](#copy/3) Copies the contents of `source` to `destination`. [copy!(source, destination, bytes\_count \\ :infinity)](#copy!/3) The same as [`copy/3`](#copy/3) but raises a [`File.CopyError`](file.copyerror) exception if it fails. Returns the `bytes_copied` otherwise. [cp(source\_file, destination\_file, callback \\ fn \_, \_ -> true end)](#cp/3) Copies the contents in `source_file` to `destination_file` preserving its modes. [cp!(source\_file, destination\_file, callback \\ fn \_, \_ -> true end)](#cp!/3) The same as [`cp/3`](#cp/3), but raises a [`File.CopyError`](file.copyerror) exception if it fails. Returns `:ok` otherwise. [cp\_r(source, destination, callback \\ fn \_, \_ -> true end)](#cp_r/3) Copies the contents in `source` to `destination` recursively, maintaining the source directory structure and modes. [cp\_r!(source, destination, callback \\ fn \_, \_ -> true end)](#cp_r!/3) The same as [`cp_r/3`](#cp_r/3), but raises a [`File.CopyError`](file.copyerror) exception if it fails. Returns the list of copied files otherwise. [cwd()](#cwd/0) Gets the current working directory. [cwd!()](#cwd!/0) The same as [`cwd/0`](#cwd/0), but raises a [`File.Error`](file.error) exception if it fails. [dir?(path, opts \\ [])](#dir?/2) Returns `true` if the given path is a directory. [exists?(path, opts \\ [])](#exists?/2) Returns `true` if the given path exists. [ln(existing, new)](#ln/2) Creates a hard link `new` to the file `existing`. [ln!(existing, new)](#ln!/2) Same as [`ln/2`](#ln/2) but raises a [`File.LinkError`](file.linkerror) exception if it fails. Returns `:ok` otherwise. [ln\_s(existing, new)](#ln_s/2) Creates a symbolic link `new` to the file or directory `existing`. [ln\_s!(existing, new)](#ln_s!/2) Same as [`ln_s/2`](#ln_s/2) but raises a [`File.LinkError`](file.linkerror) exception if it fails. Returns `:ok` otherwise. [ls(path \\ ".")](#ls/1) Returns the list of files in the given directory. [ls!(path \\ ".")](#ls!/1) The same as [`ls/1`](#ls/1) but raises a [`File.Error`](file.error) exception in case of an error. [lstat(path, opts \\ [])](#lstat/2) Returns information about the `path`. If the file is a symlink, sets the `type` to `:symlink` and returns a [`File.Stat`](file.stat) struct for the link. For any other file, returns exactly the same values as [`stat/2`](#stat/2). [lstat!(path, opts \\ [])](#lstat!/2) Same as [`lstat/2`](#lstat/2) but returns the [`File.Stat`](file.stat) struct directly, or raises a [`File.Error`](file.error) exception if an error is returned. [mkdir(path)](#mkdir/1) Tries to create the directory `path`. 
[mkdir!(path)](#mkdir!/1) Same as [`mkdir/1`](#mkdir/1), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. [mkdir\_p(path)](#mkdir_p/1) Tries to create the directory `path`. [mkdir\_p!(path)](#mkdir_p!/1) Same as [`mkdir_p/1`](#mkdir_p/1), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. [open(path, modes\_or\_function \\ [])](#open/2) Opens the given `path`. [open(path, modes, function)](#open/3) Similar to [`open/2`](#open/2) but expects a function as its last argument. [open!(path, modes\_or\_function \\ [])](#open!/2) Similar to [`open/2`](#open/2) but raises a [`File.Error`](file.error) exception if the file could not be opened. Returns the IO device otherwise. [open!(path, modes, function)](#open!/3) Similar to [`open/3`](#open/3) but raises a [`File.Error`](file.error) exception if the file could not be opened. [read(path)](#read/1) Returns `{:ok, binary}`, where `binary` is a binary data object that contains the contents of `path`, or `{:error, reason}` if an error occurs. [read!(path)](#read!/1) Returns a binary with the contents of the given filename, or raises a [`File.Error`](file.error) exception if an error occurs. [read\_link(path)](#read_link/1) Reads the symbolic link at `path`. [read\_link!(path)](#read_link!/1) Same as [`read_link/1`](#read_link/1) but returns the target directly, or raises a [`File.Error`](file.error) exception if an error is returned. [regular?(path, opts \\ [])](#regular?/2) Returns `true` if the path is a regular file. [rename(source, destination)](#rename/2) Renames the `source` file to `destination` file. It can be used to move files (and directories) between directories. If moving a file, you must fully specify the `destination` filename; it is not sufficient to simply specify its directory. [rename!(source, destination)](#rename!/2) The same as [`rename/2`](#rename/2) but raises a [`File.RenameError`](file.renameerror) exception if it fails. Returns `:ok` otherwise. [rm(path)](#rm/1) Tries to delete the file `path`. [rm!(path)](#rm!/1) Same as [`rm/1`](#rm/1), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. [rm\_rf(path)](#rm_rf/1) Removes files and directories recursively at the given `path`. Symlinks are not followed but simply removed, and non-existing files are simply ignored (i.e., they do not make this function fail). [rm\_rf!(path)](#rm_rf!/1) Same as [`rm_rf/1`](#rm_rf/1) but raises a [`File.Error`](file.error) exception in case of failure; otherwise returns the list of files or directories removed. [rmdir(path)](#rmdir/1) Tries to delete the directory at `path`. [rmdir!(path)](#rmdir!/1) Same as [`rmdir/1`](#rmdir/1), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. [stat(path, opts \\ [])](#stat/2) Returns information about the `path`. If it exists, it returns a `{:ok, info}` tuple, where info is a [`File.Stat`](file.stat) struct. Returns `{:error, reason}` with the same reasons as [`read/1`](#read/1) if a failure occurs. [stat!(path, opts \\ [])](#stat!/2) Same as [`stat/2`](#stat/2) but returns the [`File.Stat`](file.stat) directly, or raises a [`File.Error`](file.error) exception if an error is returned. [stream!(path, modes \\ [], line\_or\_bytes \\ :line)](#stream!/3) Returns a [`File.Stream`](file.stream) for the given `path` with the given `modes`. [touch(path, time \\ System.os\_time(:second))](#touch/2) Updates modification time (mtime) and access time (atime) of the given file.
[touch!(path, time \\ System.os\_time(:second))](#touch!/2) Same as [`touch/2`](#touch/2) but raises a [`File.Error`](file.error) exception if it fails. Returns `:ok` otherwise. [write(path, content, modes \\ [])](#write/3) Writes `content` to the file `path`. [write!(path, content, modes \\ [])](#write!/3) Same as [`write/3`](#write/3) but raises a [`File.Error`](file.error) exception if it fails. Returns `:ok` otherwise. [write\_stat(path, stat, opts \\ [])](#write_stat/3) Writes the given [`File.Stat`](file.stat) back to the file system at the given path. Returns `:ok` or `{:error, reason}`. [write\_stat!(path, stat, opts \\ [])](#write_stat!/3) Same as [`write_stat/3`](#write_stat/3) but raises a [`File.Error`](file.error) exception if it fails. Returns `:ok` otherwise. Types ====== ### encoding\_mode() #### Specs ``` encoding_mode() :: :utf8 | {:encoding, :latin1 | :unicode | :utf8 | :utf16 | :utf32 | {:utf16, :big | :little} | {:utf32, :big | :little}} ``` ### erlang\_time() #### Specs ``` erlang_time() :: {{year :: non_neg_integer(), month :: 1..12, day :: 1..31}, {hour :: 0..23, minute :: 0..59, second :: 0..59}} ``` ### io\_device() #### Specs ``` io_device() :: :file.io_device() ``` ### mode() #### Specs ``` mode() :: :append | :binary | :charlist | :compressed | :delayed_write | :exclusive | :raw | :read | :read_ahead | :sync | :write | {:read_ahead, pos_integer()} | {:delayed_write, non_neg_integer(), non_neg_integer()} | encoding_mode() ``` ### posix() #### Specs ``` posix() :: :file.posix() ``` ### posix\_time() #### Specs ``` posix_time() :: integer() ``` ### stat\_options() #### Specs ``` stat_options() :: [{:time, :local | :universal | :posix}] ``` ### stream\_mode() #### Specs ``` stream_mode() :: encoding_mode() | :trim_bom | {:read_ahead, pos_integer() | false} | {:delayed_write, non_neg_integer(), non_neg_integer()} ``` Functions ========== ### cd(path) #### Specs ``` cd(Path.t()) :: :ok | {:error, posix()} ``` Sets the current working directory. Returns `:ok` if successful, `{:error, reason}` otherwise. ### cd!(path) #### Specs ``` cd!(Path.t()) :: :ok ``` The same as [`cd/1`](#cd/1), but raises a [`File.Error`](file.error) exception if it fails. ### cd!(path, function) #### Specs ``` cd!(Path.t(), (() -> res)) :: res when res: var ``` Changes the current directory to the given `path`, executes the given function and then reverts back to the previous path regardless of whether there is an exception. Raises an error if retrieving or changing the current directory fails. ### chgrp(path, gid) #### Specs ``` chgrp(Path.t(), non_neg_integer()) :: :ok | {:error, posix()} ``` Changes the group given by the group ID `gid` for a given `file`. Returns `:ok` on success, or `{:error, reason}` on failure. ### chgrp!(path, gid) #### Specs ``` chgrp!(Path.t(), non_neg_integer()) :: :ok ``` Same as [`chgrp/2`](#chgrp/2), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. ### chmod(path, mode) #### Specs ``` chmod(Path.t(), non_neg_integer()) :: :ok | {:error, posix()} ``` Changes the `mode` for a given `file`. Returns `:ok` on success, or `{:error, reason}` on failure. 
#### Permissions File permissions are specified by adding together the following octal modes: * `0o400` - read permission: owner * `0o200` - write permission: owner * `0o100` - execute permission: owner * `0o040` - read permission: group * `0o020` - write permission: group * `0o010` - execute permission: group * `0o004` - read permission: other * `0o002` - write permission: other * `0o001` - execute permission: other For example, setting the mode `0o755` gives it write, read and execute permission to the owner and both read and execute permission to group and others. ### chmod!(path, mode) #### Specs ``` chmod!(Path.t(), non_neg_integer()) :: :ok ``` Same as [`chmod/2`](#chmod/2), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. ### chown(path, uid) #### Specs ``` chown(Path.t(), non_neg_integer()) :: :ok | {:error, posix()} ``` Changes the owner given by the user ID `uid` for a given `file`. Returns `:ok` on success, or `{:error, reason}` on failure. ### chown!(path, uid) #### Specs ``` chown!(Path.t(), non_neg_integer()) :: :ok ``` Same as [`chown/2`](#chown/2), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. ### close(io\_device) #### Specs ``` close(io_device()) :: :ok | {:error, posix() | :badarg | :terminated} ``` Closes the file referenced by `io_device`. It mostly returns `:ok`, except for some severe errors such as out of memory. Note that if the option `:delayed_write` was used when opening the file, [`close/1`](#close/1) might return an old write error and not even try to close the file. See [`open/2`](#open/2) for more information. ### copy(source, destination, bytes\_count \\ :infinity) #### Specs ``` copy(Path.t() | io_device(), Path.t() | io_device(), pos_integer() | :infinity) :: {:ok, non_neg_integer()} | {:error, posix()} ``` Copies the contents of `source` to `destination`. Both parameters can be a filename or an IO device opened with [`open/2`](#open/2). `bytes_count` specifies the number of bytes to copy, the default being `:infinity`. If file `destination` already exists, it is overwritten by the contents in `source`. Returns `{:ok, bytes_copied}` if successful, `{:error, reason}` otherwise. Compared to the [`cp/3`](#cp/3), this function is more low-level, allowing a copy from device to device limited by a number of bytes. On the other hand, [`cp/3`](#cp/3) performs more extensive checks on both source and destination and it also preserves the file mode after copy. Typical error reasons are the same as in [`open/2`](#open/2), [`read/1`](#read/1) and [`write/3`](#write/3). ### copy!(source, destination, bytes\_count \\ :infinity) #### Specs ``` copy!(Path.t() | io_device(), Path.t() | io_device(), pos_integer() | :infinity) :: non_neg_integer() ``` The same as [`copy/3`](#copy/3) but raises a [`File.CopyError`](file.copyerror) exception if it fails. Returns the `bytes_copied` otherwise. ### cp(source\_file, destination\_file, callback \\ fn \_, \_ -> true end) #### Specs ``` cp(Path.t(), Path.t(), (Path.t(), Path.t() -> boolean())) :: :ok | {:error, posix()} ``` Copies the contents in `source_file` to `destination_file` preserving its modes. `source_file` and `destination_file` must be a file or a symbolic link to one, or in the case of destination, a path to a non-existent file. If either one of them is a directory, `{:error, :eisdir}` will be returned. 
If a file already exists in the destination, it invokes a callback which should return `true` if the existing file should be overwritten, `false` otherwise. The callback defaults to returning `true`. The function returns `:ok` in case of success. Otherwise, it returns `{:error, reason}`. If you want to copy contents from an IO device to another device or do a straight copy from a source to a destination without preserving modes, check [`copy/3`](#copy/3) instead. Note: The command `cp` in Unix systems behaves differently depending on whether the destination is an existing directory or not. We have chosen to explicitly disallow copying to a destination which is a directory, and an error will be returned if this is attempted. ### cp!(source\_file, destination\_file, callback \\ fn \_, \_ -> true end) #### Specs ``` cp!(Path.t(), Path.t(), (Path.t(), Path.t() -> boolean())) :: :ok ``` The same as [`cp/3`](#cp/3), but raises a [`File.CopyError`](file.copyerror) exception if it fails. Returns `:ok` otherwise. ### cp\_r(source, destination, callback \\ fn \_, \_ -> true end) #### Specs ``` cp_r(Path.t(), Path.t(), (Path.t(), Path.t() -> boolean())) :: {:ok, [binary()]} | {:error, posix(), binary()} ``` Copies the contents in `source` to `destination` recursively, maintaining the source directory structure and modes. If `source` is a file or a symbolic link to it, `destination` must be a path to an existent file, a symbolic link to one, or a path to a non-existent file. If `source` is a directory, or a symbolic link to it, then `destination` must be an existent `directory` or a symbolic link to one, or a path to a non-existent directory. If the source is a file, it copies `source` to `destination`. If the `source` is a directory, it copies the contents inside source into the `destination` directory. If a file already exists in the destination, it invokes `callback`. `callback` must be a function that takes two arguments: `source` and `destination`. The callback should return `true` if the existing file should be overwritten and `false` otherwise. This function may fail while copying files; in such cases, it will leave the destination directory in a dirty state, where files which have already been copied won't be removed. The function returns `{:ok, files_and_directories}` in case of success, where `files_and_directories` lists all files and directories copied in no specific order. It returns `{:error, reason, file}` otherwise. Note: The command `cp` in Unix systems behaves differently depending on whether `destination` is an existing directory or not. We have chosen to explicitly disallow this behaviour. If `source` is a `file` and `destination` is a directory, `{:error, :eisdir}` will be returned. #### Examples ``` # Copies file "a.txt" to "b.txt" File.cp_r("a.txt", "b.txt") # Copies all files in "samples" to "tmp" File.cp_r("samples", "tmp") # Same as before, but asks the user how to proceed in case of conflicts File.cp_r("samples", "tmp", fn source, destination -> IO.gets("Overwriting #{destination} by #{source}. Type y to confirm. ") == "y\n" end) ``` ### cp\_r!(source, destination, callback \\ fn \_, \_ -> true end) #### Specs ``` cp_r!(Path.t(), Path.t(), (Path.t(), Path.t() -> boolean())) :: [binary()] ``` The same as [`cp_r/3`](#cp_r/3), but raises a [`File.CopyError`](file.copyerror) exception if it fails. Returns the list of copied files otherwise. ### cwd() #### Specs ``` cwd() :: {:ok, binary()} | {:error, posix()} ``` Gets the current working directory.
In rare circumstances, this function can fail on Unix. It may happen if read permissions do not exist for the parent directories of the current directory. For this reason, returns `{:ok, cwd}` in case of success, `{:error, reason}` otherwise. ### cwd!() #### Specs ``` cwd!() :: binary() ``` The same as [`cwd/0`](#cwd/0), but raises a [`File.Error`](file.error) exception if it fails. ### dir?(path, opts \\ []) #### Specs ``` dir?(Path.t(), [dir_option]) :: boolean() when dir_option: :raw ``` Returns `true` if the given path is a directory. This function follows symbolic links, so if a symbolic link points to a directory, `true` is returned. #### Options The supported options are: * `:raw` - a single atom to bypass the file server and only check for the file locally #### Examples ``` File.dir?("./test") #=> true File.dir?("test") #=> true File.dir?("/usr/bin") #=> true File.dir?("~/Downloads") #=> false "~/Downloads" |> Path.expand() |> File.dir?() #=> true ``` ### exists?(path, opts \\ []) #### Specs ``` exists?(Path.t(), [exists_option]) :: boolean() when exists_option: :raw ``` Returns `true` if the given path exists. It can be a regular file, directory, socket, symbolic link, named pipe, or device file. Returns `false` for symbolic links pointing to non-existing targets. #### Options The supported options are: * `:raw` - a single atom to bypass the file server and only check for the file locally #### Examples ``` File.exists?("test/") #=> true File.exists?("missing.txt") #=> false File.exists?("/dev/null") #=> true ``` ### ln(existing, new) #### Specs ``` ln(Path.t(), Path.t()) :: :ok | {:error, posix()} ``` Creates a hard link `new` to the file `existing`. Returns `:ok` if successful, `{:error, reason}` otherwise. If the operating system does not support hard links, returns `{:error, :enotsup}`. ### ln!(existing, new) #### Specs ``` ln!(Path.t(), Path.t()) :: :ok ``` Same as [`ln/2`](#ln/2) but raises a [`File.LinkError`](file.linkerror) exception if it fails. Returns `:ok` otherwise. ### ln\_s(existing, new) #### Specs ``` ln_s(Path.t(), Path.t()) :: :ok | {:error, posix()} ``` Creates a symbolic link `new` to the file or directory `existing`. Returns `:ok` if successful, `{:error, reason}` otherwise. If the operating system does not support symlinks, returns `{:error, :enotsup}`. ### ln\_s!(existing, new) #### Specs ``` ln_s!(Path.t(), Path.t()) :: :ok ``` Same as [`ln_s/2`](#ln_s/2) but raises a [`File.LinkError`](file.linkerror) exception if it fails. Returns `:ok` otherwise. ### ls(path \\ ".") #### Specs ``` ls(Path.t()) :: {:ok, [binary()]} | {:error, posix()} ``` Returns the list of files in the given directory. Returns `{:ok, files}` in case of success, `{:error, reason}` otherwise. ### ls!(path \\ ".") #### Specs ``` ls!(Path.t()) :: [binary()] ``` The same as [`ls/1`](#ls/1) but raises a [`File.Error`](file.error) exception in case of an error. ### lstat(path, opts \\ []) #### Specs ``` lstat(Path.t(), stat_options()) :: {:ok, File.Stat.t()} | {:error, posix()} ``` Returns information about the `path`. If the file is a symlink, sets the `type` to `:symlink` and returns a [`File.Stat`](file.stat) struct for the link. For any other file, returns exactly the same values as [`stat/2`](#stat/2). For more details, see [`:file.read_link_info/2`](http://www.erlang.org/doc/man/file.html#read_link_info-2). 
#### Options The accepted options are: * `:time` - configures how the file timestamps are returned The values for `:time` can be: * `:universal` - returns a `{date, time}` tuple in UTC (default) * `:local` - returns a `{date, time}` tuple using the machine time * `:posix` - returns the time as integer seconds since epoch Note: Since file times are stored in POSIX time format on most operating systems, it is faster to retrieve file information with the `time: :posix` option. ### lstat!(path, opts \\ []) #### Specs ``` lstat!(Path.t(), stat_options()) :: File.Stat.t() ``` Same as [`lstat/2`](#lstat/2) but returns the [`File.Stat`](file.stat) struct directly, or raises a [`File.Error`](file.error) exception if an error is returned. ### mkdir(path) #### Specs ``` mkdir(Path.t()) :: :ok | {:error, posix()} ``` Tries to create the directory `path`. Missing parent directories are not created. Returns `:ok` if successful, or `{:error, reason}` if an error occurs. Typical error reasons are: * `:eacces` - missing search or write permissions for the parent directories of `path` * `:eexist` - there is already a file or directory named `path` * `:enoent` - a component of `path` does not exist * `:enospc` - there is no space left on the device * `:enotdir` - a component of `path` is not a directory; on some platforms, `:enoent` is returned instead ### mkdir!(path) #### Specs ``` mkdir!(Path.t()) :: :ok ``` Same as [`mkdir/1`](#mkdir/1), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. ### mkdir\_p(path) #### Specs ``` mkdir_p(Path.t()) :: :ok | {:error, posix()} ``` Tries to create the directory `path`. Missing parent directories are created. Returns `:ok` if successful, or `{:error, reason}` if an error occurs. Typical error reasons are: * `:eacces` - missing search or write permissions for the parent directories of `path` * `:enospc` - there is no space left on the device * `:enotdir` - a component of `path` is not a directory ### mkdir\_p!(path) #### Specs ``` mkdir_p!(Path.t()) :: :ok ``` Same as [`mkdir_p/1`](#mkdir_p/1), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. ### open(path, modes\_or\_function \\ []) #### Specs ``` open(Path.t(), [mode() | :ram]) :: {:ok, io_device()} | {:error, posix()} ``` ``` open(Path.t(), (io_device() -> res)) :: {:ok, res} | {:error, posix()} when res: var ``` Opens the given `path`. In order to write and read files, one must use the functions in the [`IO`](io) module. By default, a file is opened in `:binary` mode, which requires the functions [`IO.binread/2`](io#binread/2) and [`IO.binwrite/2`](io#binwrite/2) to interact with the file. A developer may pass `:utf8` as an option when opening the file and then all other functions from [`IO`](io) are available, since they work directly with Unicode data. `modes_or_function` can either be a list of modes or a function. If it's a list, it's considered to be a list of modes (that are documented below). If it's a function, then it's equivalent to calling `open(path, [], modes_or_function)`. See the documentation for [`open/3`](#open/3) for more information on this function. The allowed modes: * `:binary` - opens the file in binary mode, disabling special handling of unicode sequences (default mode). * `:read` - the file, which must exist, is opened for reading. * `:write` - the file is opened for writing. It is created if it does not exist. If the file does exist, and if write is not combined with read, the file will be truncated.
* `:append` - the file will be opened for writing, and it will be created if it does not exist. Every write operation to a file opened with append will take place at the end of the file. * `:exclusive` - the file, when opened for writing, is created if it does not exist. If the file exists, open will return `{:error, :eexist}`. * `:charlist` - when this term is given, read operations on the file will return charlists rather than binaries. * `:compressed` - makes it possible to read or write gzip compressed files. The compressed option must be combined with either read or write, but not both. Note that the file size obtained with [`stat/1`](#stat/1) will most probably not match the number of bytes that can be read from a compressed file. * `:utf8` - this option denotes how data is actually stored in the disk file and makes the file perform automatic translation of characters to and from UTF-8. If data is sent to a file in a format that cannot be converted to UTF-8, or if data is read by a function that returns data in a format that cannot cope with the character range of the data, an error occurs and the file will be closed. * `:delayed_write`, `:raw`, `:ram`, `:read_ahead`, `:sync`, `{:encoding, ...}`, `{:read_ahead, pos_integer}`, `{:delayed_write, non_neg_integer, non_neg_integer}` - for more information about these options see [`:file.open/2`](http://www.erlang.org/doc/man/file.html#open-2). This function returns: * `{:ok, io_device}` - the file has been opened in the requested mode. `io_device` is actually the PID of the process which handles the file. This process is linked to the process which originally opened the file. If any process to which the `io_device` is linked terminates, the file will be closed and the process itself will be terminated. An `io_device` returned from this call can be used as an argument to the [`IO`](io) module functions. * `{:error, reason}` - the file could not be opened. #### Examples ``` {:ok, file} = File.open("foo.tar.gz", [:read, :compressed]) IO.read(file, :line) File.close(file) ``` ### open(path, modes, function) #### Specs ``` open(Path.t(), [mode() | :ram], (io_device() -> res)) :: {:ok, res} | {:error, posix()} when res: var ``` Similar to [`open/2`](#open/2) but expects a function as its last argument. The file is opened, given to the function as an argument and automatically closed after the function returns, regardless of whether there was an error when executing the function. Returns `{:ok, function_result}` in case of success, `{:error, reason}` otherwise. This function expects the file to be closed with success, which is usually the case unless the `:delayed_write` option is given. For this reason, we do not recommend passing `:delayed_write` to this function. #### Examples ``` File.open("file.txt", [:read, :write], fn file -> IO.read(file, :line) end) ``` See [`open/2`](#open/2) for the list of available `modes`. ### open!(path, modes\_or\_function \\ []) #### Specs ``` open!(Path.t(), [mode() | :ram]) :: io_device() ``` ``` open!(Path.t(), (io_device() -> res)) :: res when res: var ``` Similar to [`open/2`](#open/2) but raises a [`File.Error`](file.error) exception if the file could not be opened. Returns the IO device otherwise. See [`open/2`](#open/2) for the list of available modes. ### open!(path, modes, function) #### Specs ``` open!(Path.t(), [mode() | :ram], (io_device() -> res)) :: res when res: var ``` Similar to [`open/3`](#open/3) but raises a [`File.Error`](file.error) exception if the file could not be opened.
If it succeeds in opening the file, it returns the `function` result on the IO device. See [`open/2`](#open/2) for the list of available `modes`. ### read(path) #### Specs ``` read(Path.t()) :: {:ok, binary()} | {:error, posix()} ``` Returns `{:ok, binary}`, where `binary` is a binary data object that contains the contents of `path`, or `{:error, reason}` if an error occurs. Typical error reasons: * `:enoent` - the file does not exist * `:eacces` - missing permission for reading the file, or for searching one of the parent directories * `:eisdir` - the named file is a directory * `:enotdir` - a component of the file name is not a directory; on some platforms, `:enoent` is returned instead * `:enomem` - there is not enough memory for the contents of the file You can use [`:file.format_error/1`](http://www.erlang.org/doc/man/file.html#format_error-1) to get a descriptive string of the error. ### read!(path) #### Specs ``` read!(Path.t()) :: binary() ``` Returns a binary with the contents of the given filename, or raises a [`File.Error`](file.error) exception if an error occurs. ### read\_link(path) #### Specs ``` read_link(Path.t()) :: {:ok, binary()} | {:error, posix()} ``` Reads the symbolic link at `path`. If `path` exists and is a symlink, returns `{:ok, target}`, otherwise returns `{:error, reason}`. For more details, see [`:file.read_link/1`](http://www.erlang.org/doc/man/file.html#read_link-1). Typical error reasons are: * `:einval` - path is not a symbolic link * `:enoent` - path does not exist * `:enotsup` - symbolic links are not supported on the current platform ### read\_link!(path) #### Specs ``` read_link!(Path.t()) :: binary() ``` Same as [`read_link/1`](#read_link/1) but returns the target directly, or raises a [`File.Error`](file.error) exception if an error is returned. ### regular?(path, opts \\ []) #### Specs ``` regular?(Path.t(), [regular_option]) :: boolean() when regular_option: :raw ``` Returns `true` if the path is a regular file. This function follows symbolic links, so if a symbolic link points to a regular file, `true` is returned. #### Options The supported options are: * `:raw` - a single atom to bypass the file server and only check for the file locally #### Examples ``` File.regular?(__ENV__.file) #=> true ``` ### rename(source, destination) #### Specs ``` rename(Path.t(), Path.t()) :: :ok | {:error, posix()} ``` Renames the `source` file to `destination` file. It can be used to move files (and directories) between directories. If moving a file, you must fully specify the `destination` filename; it is not sufficient to simply specify its directory. Returns `:ok` in case of success, `{:error, reason}` otherwise. Note: The command `mv` in Unix systems behaves differently depending on whether `source` is a file and the `destination` is an existing directory. We have chosen to explicitly disallow this behaviour. #### Examples ``` # Rename file "a.txt" to "b.txt" File.rename("a.txt", "b.txt") # Rename directory "samples" to "tmp" File.rename("samples", "tmp") ``` ### rename!(source, destination) #### Specs ``` rename!(Path.t(), Path.t()) :: :ok ``` The same as [`rename/2`](#rename/2) but raises a [`File.RenameError`](file.renameerror) exception if it fails. Returns `:ok` otherwise. ### rm(path) #### Specs ``` rm(Path.t()) :: :ok | {:error, posix()} ``` Tries to delete the file `path`. Returns `:ok` if successful, or `{:error, reason}` if an error occurs. Note the file is deleted even if in read-only mode.
Typical error reasons are: * `:enoent` - the file does not exist * `:eacces` - missing permission for the file or one of its parents * `:eperm` - the file is a directory and the user is not super-user * `:enotdir` - a component of the file name is not a directory; on some platforms, `:enoent` is returned instead * `:einval` - filename had an improper type, such as a tuple #### Examples ``` File.rm("file.txt") #=> :ok File.rm("tmp_dir/") #=> {:error, :eperm} ``` ### rm!(path) #### Specs ``` rm!(Path.t()) :: :ok ``` Same as [`rm/1`](#rm/1), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. ### rm\_rf(path) #### Specs ``` rm_rf(Path.t()) :: {:ok, [binary()]} | {:error, posix(), binary()} ``` Removes files and directories recursively at the given `path`. Symlinks are not followed but simply removed, and non-existing files are simply ignored (i.e., they do not make this function fail). Returns `{:ok, files_and_directories}` with all files and directories removed in no specific order, `{:error, reason, file}` otherwise. #### Examples ``` File.rm_rf("samples") #=> {:ok, ["samples", "samples/1.txt"]} File.rm_rf("unknown") #=> {:ok, []} ``` ### rm\_rf!(path) #### Specs ``` rm_rf!(Path.t()) :: [binary()] ``` Same as [`rm_rf/1`](#rm_rf/1) but raises a [`File.Error`](file.error) exception in case of failure; otherwise returns the list of files or directories removed. ### rmdir(path) #### Specs ``` rmdir(Path.t()) :: :ok | {:error, posix()} ``` Tries to delete the directory at `path`. Returns `:ok` if successful, or `{:error, reason}` if an error occurs. It returns `{:error, :eexist}` if the directory is not empty. #### Examples ``` File.rmdir("tmp_dir") #=> :ok File.rmdir("non_empty_dir") #=> {:error, :eexist} File.rmdir("file.txt") #=> {:error, :enotdir} ``` ### rmdir!(path) #### Specs ``` rmdir!(Path.t()) :: :ok ``` Same as [`rmdir/1`](#rmdir/1), but raises a [`File.Error`](file.error) exception in case of failure. Otherwise `:ok`. ### stat(path, opts \\ []) #### Specs ``` stat(Path.t(), stat_options()) :: {:ok, File.Stat.t()} | {:error, posix()} ``` Returns information about the `path`. If it exists, it returns a `{:ok, info}` tuple, where info is a [`File.Stat`](file.stat) struct. Returns `{:error, reason}` with the same reasons as [`read/1`](#read/1) if a failure occurs. #### Options The accepted options are: * `:time` - configures how the file timestamps are returned The values for `:time` can be: * `:universal` - returns a `{date, time}` tuple in UTC (default) * `:local` - returns a `{date, time}` tuple using the same time zone as the machine * `:posix` - returns the time as integer seconds since epoch Note: Since file times are stored in POSIX time format on most operating systems, it is faster to retrieve file information with the `time: :posix` option. ### stat!(path, opts \\ []) #### Specs ``` stat!(Path.t(), stat_options()) :: File.Stat.t() ``` Same as [`stat/2`](#stat/2) but returns the [`File.Stat`](file.stat) directly, or raises a [`File.Error`](file.error) exception if an error is returned. ### stream!(path, modes \\ [], line\_or\_bytes \\ :line) #### Specs ``` stream!(Path.t(), stream_mode(), :line | pos_integer()) :: File.Stream.t() ``` Returns a [`File.Stream`](file.stream) for the given `path` with the given `modes`. The stream implements both [`Enumerable`](enumerable) and [`Collectable`](collectable) protocols, which means it can be used both for read and write.
The `line_or_bytes` argument configures how the file is read when streaming, by `:line` (default) or by a given number of bytes. Operating the stream can fail on open for the same reasons as [`File.open!/2`](file#open!/2). Note that the file is automatically opened each time streaming begins. There is no need to pass `:read` and `:write` modes, as those are automatically set by Elixir. #### Raw files Since Elixir controls when the streamed file is opened, the underlying device cannot be shared and as such it is convenient to open the file in raw mode for performance reasons. Therefore, Elixir **will** open streams in `:raw` mode with the `:read_ahead` option unless an encoding is specified. This means any data streamed into the file must be converted to [`iodata/0`](typespecs#built-in-types) type. If you pass e.g. `[encoding: :utf8]` or `[encoding: {:utf16, :little}]` in the modes parameter, the underlying stream will use [`IO.write/2`](io#write/2) and the [`String.Chars`](string.chars) protocol to convert the data. See [`IO.binwrite/2`](io#binwrite/2) and [`IO.write/2`](io#write/2). One may also consider passing the `:delayed_write` option if the stream is meant to be written to under a tight loop. #### Byte order marks If you pass `:trim_bom` in the modes parameter, the stream will trim UTF-8, UTF-16 and UTF-32 byte order marks when reading from the file. Note that this function does not try to discover the file encoding based on its BOM. #### Examples ``` # Read in 2048 byte chunks rather than lines File.stream!("./test/test.data", [], 2048) #=> %File.Stream{line_or_bytes: 2048, modes: [:raw, :read_ahead, :binary], #=> path: "./test/test.data", raw: true} ``` See [`Stream.run/1`](stream#run/1) for an example of streaming into a file. ### touch(path, time \\ System.os\_time(:second)) #### Specs ``` touch(Path.t(), erlang_time() | posix_time()) :: :ok | {:error, posix()} ``` Updates modification time (mtime) and access time (atime) of the given file. The file is created if it doesn't exist. Requires datetime in UTC (as returned by `:erlang.universaltime()`) or an integer representing the POSIX timestamp (as returned by `System.os_time(:second)`). In Unix-like systems, changing the modification time may require you to be either `root` or the owner of the file. Having write access may not be enough. In those cases, touching the file the first time (to create it) will succeed, but touching an existing file will fail with `{:error, :eperm}`. #### Examples ``` File.touch("/tmp/a.txt", {{2018, 1, 30}, {13, 59, 59}}) #=> :ok File.touch("/fakedir/b.txt", {{2018, 1, 30}, {13, 59, 59}}) #=> {:error, :enoent} File.touch("/tmp/a.txt", 1544519753) #=> :ok ``` ### touch!(path, time \\ System.os\_time(:second)) #### Specs ``` touch!(Path.t(), erlang_time() | posix_time()) :: :ok ``` Same as [`touch/2`](#touch/2) but raises a [`File.Error`](file.error) exception if it fails. Returns `:ok` otherwise. The file is created if it doesn't exist. Requires datetime in UTC (as returned by `:erlang.universaltime()`) or an integer representing the POSIX timestamp (as returned by `System.os_time(:second)`). #### Examples ``` File.touch!("/tmp/a.txt", {{2018, 1, 30}, {13, 59, 59}}) #=> :ok File.touch!("/fakedir/b.txt", {{2018, 1, 30}, {13, 59, 59}}) #=> ** (File.Error) could not touch "/fakedir/b.txt": no such file or directory File.touch!("/tmp/a.txt", 1544519753) ``` ### write(path, content, modes \\ []) #### Specs ``` write(Path.t(), iodata(), [mode()]) :: :ok | {:error, posix()} ``` Writes `content` to the file `path`.
The file is created if it does not exist. If it exists, the previous contents are overwritten. Returns `:ok` if successful, or `{:error, reason}` if an error occurs. `content` must be `iodata` (a list of bytes or a binary). Setting the encoding for this function has no effect. **Warning:** Every time this function is invoked, a file descriptor is opened and a new process is spawned to write to the file. For this reason, if you are doing multiple writes in a loop, opening the file via [`File.open/2`](file#open/2) and using the functions in [`IO`](io) to write to the file will yield much better performance than calling this function multiple times. Typical error reasons are: * `:enoent` - a component of the file name does not exist * `:enotdir` - a component of the file name is not a directory; on some platforms, `:enoent` is returned instead * `:enospc` - there is no space left on the device * `:eacces` - missing permission for writing the file or searching one of the parent directories * `:eisdir` - the named file is a directory Check [`File.open/2`](file#open/2) for other available options. ### write!(path, content, modes \\ []) #### Specs ``` write!(Path.t(), iodata(), [mode()]) :: :ok ``` Same as [`write/3`](#write/3) but raises a [`File.Error`](file.error) exception if it fails. Returns `:ok` otherwise. ### write\_stat(path, stat, opts \\ []) #### Specs ``` write_stat(Path.t(), File.Stat.t(), stat_options()) :: :ok | {:error, posix()} ``` Writes the given [`File.Stat`](file.stat) back to the file system at the given path. Returns `:ok` or `{:error, reason}`. ### write\_stat!(path, stat, opts \\ []) #### Specs ``` write_stat!(Path.t(), File.Stat.t(), stat_options()) :: :ok ``` Same as [`write_stat/3`](#write_stat/3) but raises a [`File.Error`](file.error) exception if it fails. Returns `:ok` otherwise.
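As an illustrative sketch of how `stat/2` and `write_stat/3` combine (the `"config.txt"` path is just a placeholder), one can capture a file's metadata before rewriting its contents and then write the old stat back, restoring attributes such as its timestamps:

```
# Capture the current File.Stat, rewrite the file, then restore the stat.
{:ok, stat} = File.stat("config.txt")
:ok = File.write("config.txt", "updated contents")
:ok = File.write_stat("config.txt", stat)
```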
elixir Pattern matching Getting Started Pattern matching ================ In this chapter, we will show how the `=` operator in Elixir is actually a match operator and how to use it to pattern match inside data structures. Finally, we will learn about the pin operator `^` used to access previously bound values. The match operator ------------------ We have used the `=` operator a couple times to assign variables in Elixir: ``` iex> x = 1 1 iex> x 1 ``` In Elixir, the `=` operator is actually called *the match operator*. Let’s see why: ``` iex> x = 1 1 iex> 1 = x 1 iex> 2 = x ** (MatchError) no match of right hand side value: 1 ``` Notice that `1 = x` is a valid expression, and it matched because both the left and right side are equal to 1. When the sides do not match, a `MatchError` is raised. A variable can only be assigned on the left side of `=`: ``` iex> 1 = unknown ** (CompileError) iex:1: undefined function unknown/0 ``` Since there is no variable `unknown` previously defined, Elixir assumed you were trying to call a function named `unknown/0`, but such a function does not exist. Pattern matching ---------------- The match operator is not only used to match against simple values, but it is also useful for destructuring more complex data types. For example, we can pattern match on tuples: ``` iex> {a, b, c} = {:hello, "world", 42} {:hello, "world", 42} iex> a :hello iex> b "world" ``` A pattern match error will occur if the sides can’t be matched, for example if the tuples have different sizes: ``` iex> {a, b, c} = {:hello, "world"} ** (MatchError) no match of right hand side value: {:hello, "world"} ``` And also when comparing different types: ``` iex> {a, b, c} = [:hello, "world", 42] ** (MatchError) no match of right hand side value: [:hello, "world", 42] ``` More interestingly, we can match on specific values. The example below asserts that the left side will only match the right side when the right side is a tuple that starts with the atom `:ok`: ``` iex> {:ok, result} = {:ok, 13} {:ok, 13} iex> result 13 iex> {:ok, result} = {:error, :oops} ** (MatchError) no match of right hand side value: {:error, :oops} ``` We can pattern match on lists: ``` iex> [a, b, c] = [1, 2, 3] [1, 2, 3] iex> a 1 ``` A list also supports matching on its own head and tail: ``` iex> [head | tail] = [1, 2, 3] [1, 2, 3] iex> head 1 iex> tail [2, 3] ``` Similar to the `hd/1` and `tl/1` functions, we can’t match an empty list with a head and tail pattern: ``` iex> [head | tail] = [] ** (MatchError) no match of right hand side value: [] ``` The `[head | tail]` format is not only used on pattern matching but also for prepending items to a list: ``` iex> list = [1, 2, 3] [1, 2, 3] iex> [0 | list] [0, 1, 2, 3] ``` Pattern matching allows developers to easily destructure data types such as tuples and lists. As we will see in the following chapters, it is one of the foundations of recursion in Elixir and applies to other types as well, like maps and binaries. The pin operator ---------------- Variables in Elixir can be rebound: ``` iex> x = 1 1 iex> x = 2 2 ``` However, there are times when we don’t want variables to be rebound. Use the pin operator `^` when you want to pattern match against a variable’s *existing value* rather than rebinding the variable. 
``` iex> x = 1 1 iex> ^x = 2 ** (MatchError) no match of right hand side value: 2 ``` Because we have pinned `x` when it was bound to the value of `1`, it is equivalent to the following: ``` iex> 1 = 2 ** (MatchError) no match of right hand side value: 2 ``` Notice that we even see the exact same error message. We can use the pin operator inside other pattern matches, such as tuples or lists: ``` iex> x = 1 1 iex> [^x, 2, 3] = [1, 2, 3] [1, 2, 3] iex> {y, ^x} = {2, 1} {2, 1} iex> y 2 iex> {y, ^x} = {2, 2} ** (MatchError) no match of right hand side value: {2, 2} ``` Because `x` was bound to the value of `1` when it was pinned, this last example could have been written as: ``` iex> {y, 1} = {2, 2} ** (MatchError) no match of right hand side value: {2, 2} ``` If a variable is mentioned more than once in a pattern, all references should bind to the same value: ``` iex> {x, x} = {1, 1} {1, 1} iex> {x, x} = {1, 2} ** (MatchError) no match of right hand side value: {1, 2} ``` In some cases, you don’t care about a particular value in a pattern. It is a common practice to bind those values to the underscore, `_`. For example, if only the head of the list matters to us, we can assign the tail to underscore: ``` iex> [head | _] = [1, 2, 3] [1, 2, 3] iex> head 1 ``` The variable `_` is special in that it can never be read from. Trying to read from it gives a compile error: ``` iex> _ ** (CompileError) iex:1: invalid use of _. "_" represents a value to be ignored in a pattern and cannot be used in expressions ``` Although pattern matching allows us to build powerful constructs, its usage is limited. For instance, you cannot make function calls on the left side of a match. The following example is invalid: ``` iex> length([1, [2], 3]) = 3 ** (CompileError) iex:1: cannot invoke remote function :erlang.length/1 inside match ``` This finishes our introduction to pattern matching. As we will see in the next chapter, pattern matching is very common in many language constructs. elixir Module attributes Getting Started Module attributes ================= Module attributes in Elixir serve three purposes: 1. They serve to annotate the module, often with information to be used by the user or the VM. 2. They work as constants. 3. They work as a temporary module storage to be used during compilation. Let’s check each case, one by one. As annotations -------------- Elixir brings the concept of module attributes from Erlang. For example: ``` defmodule MyServer do @vsn 2 end ``` In the example above, we are explicitly setting the version attribute for that module. `@vsn` is used by the code reloading mechanism in the Erlang VM to check if a module has been updated or not. If no version is specified, the version is set to the MD5 checksum of the module functions. Elixir has a handful of reserved attributes. Here are a few of them, the most commonly used ones: * `@moduledoc` - provides documentation for the current module. * `@doc` - provides documentation for the function or macro that follows the attribute. * `@behaviour` - (notice the British spelling) used for specifying an OTP or user-defined behaviour. * `@before_compile` - provides a hook that will be invoked before the module is compiled. This makes it possible to inject functions inside the module exactly before compilation. `@moduledoc` and `@doc` are by far the most used attributes, and we expect you to use them a lot. Elixir treats documentation as first-class and provides many functions to access documentation. 
You can read more about [writing documentation in Elixir in our official documentation](https://hexdocs.pm/elixir/writing-documentation.html). Let’s go back to the `Math` module defined in the previous chapters, add some documentation and save it to the `math.ex` file: ``` defmodule Math do @moduledoc """ Provides math-related functions. ## Examples iex> Math.sum(1, 2) 3 """ @doc """ Calculates the sum of two numbers. """ def sum(a, b), do: a + b end ``` Elixir promotes the use of Markdown with heredocs to write readable documentation. Heredocs are multi-line strings that start and end with triple double-quotes, keeping the formatting of the inner text. We can access the documentation of any compiled module directly from IEx: ``` $ elixirc math.ex $ iex ``` ``` iex> h Math # Access the docs for the module Math ... iex> h Math.sum # Access the docs for the sum function ... ``` We also provide a tool called [ExDoc](https://github.com/elixir-lang/ex_doc) which is used to generate HTML pages from the documentation. You can take a look at the docs for [Module](https://hexdocs.pm/elixir/Module.html) for a complete list of supported attributes. Elixir also uses attributes to define [typespecs](typespecs-and-behaviours). This section covers built-in attributes. However, attributes can also be used by developers or extended by libraries to support custom behaviour. As “constants” -------------- Elixir developers often use module attributes when they wish to make a value more visible or reusable: ``` defmodule MyServer do @initial_state %{host: "127.0.0.1", port: 3456} IO.inspect @initial_state end ``` > Note: Unlike Erlang, user-defined attributes are not stored in the module by default. The value exists only during compilation time. A developer can configure an attribute to behave closer to Erlang by calling [`Module.register_attribute/3`](https://hexdocs.pm/elixir/Module.html#register_attribute/3). > > Trying to access an attribute that was not defined will print a warning: ``` defmodule MyServer do @unknown end warning: undefined module attribute @unknown, please remove access to @unknown or explicitly set it before access ``` Attributes can also be read inside functions: ``` defmodule MyServer do @my_data 14 def first_data, do: @my_data @my_data 13 def second_data, do: @my_data end MyServer.first_data #=> 14 MyServer.second_data #=> 13 ``` Every time an attribute is read inside a function, a snapshot of its current value is taken. In other words, the value is read at compilation time and not at runtime. As we are going to see, this also makes attributes useful as storage during module compilation. Normally, repeating a module attribute will cause its value to be reassigned, but there are circumstances where you may want to [configure the module attribute](https://hexdocs.pm/elixir/Module.html#register_attribute/3) so that its values are accumulated: ``` defmodule Foo do Module.register_attribute __MODULE__, :param, accumulate: true @param :foo @param :bar # here @param == [:bar, :foo] end ``` Functions may be called when defining a module attribute, for example: ``` defmodule MyApp.Status do @service URI.parse("https://example.com") def status(email), do: SomeHttpClient.get(@service) end ``` Be careful, however: *functions defined in the same module as the attribute itself cannot be called* because they have not yet been compiled when the attribute is being defined. When defining an attribute, do not leave a line break between the attribute name and its value, as shown in the sketch below.
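As a minimal illustration of that pitfall (the module names here are hypothetical), a line break makes Elixir read the attribute instead of setting it:

```
defmodule Bad do
  # @answer on its own line *reads* the (still unset) attribute,
  # emitting an "undefined module attribute" warning; 42 is then
  # just a standalone expression, not the attribute's value.
  @answer
  42
end

defmodule Good do
  @answer 42

  def answer, do: @answer
end
```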
As temporary storage
--------------------

To see an example of using module attributes as storage, look no further than Elixir’s unit test framework called [ExUnit](https://hexdocs.pm/ex_unit/). ExUnit uses module attributes for multiple different purposes:

```
defmodule MyTest do
  use ExUnit.Case, async: true

  @tag :external
  @tag os: :unix
  test "contacts external service" do
    # ...
  end
end
```

In the example above, `ExUnit` stores the value of `async: true` in a module attribute to change how the module is compiled. Tags are also defined as `accumulate: true` attributes, and they store tags that can be used to set up and filter tests. For example, you can avoid running external tests on your machine because they are slow and dependent on other services, while they can still be enabled in your build system.

In order to understand the underlying code, we’d need macros, so we will revisit this pattern in the meta-programming guide and learn how to use module attributes as storage to allow developers to create DSLs.

In the next chapters, we’ll explore structs and protocols before moving to exception handling and other constructs like sigils and comprehensions.

elixir Inspect.Opts Inspect.Opts =============

Defines the options used by the [`Inspect`](inspect) protocol.

The following fields are available:

* `:structs` - when `false`, structs are not formatted by the inspect protocol, they are instead printed as maps, defaults to `true`.
* `:binaries` - when `:as_strings` all binaries will be printed as strings, non-printable bytes will be escaped. When `:as_binaries` all binaries will be printed in bit syntax. When the default `:infer`, the binary will be printed as a string if it is printable, otherwise in bit syntax. See [`String.printable?/1`](string#printable?/1) to learn when a string is printable.
* `:charlists` - when `:as_charlists` all lists will be printed as charlists, non-printable elements will be escaped. When `:as_lists` all lists will be printed as lists. When the default `:infer`, the list will be printed as a charlist if it is printable, otherwise as list. See [`List.ascii_printable?/1`](list#ascii_printable?/1) to learn when a charlist is printable.
* `:limit` - limits the number of items that are inspected for tuples, bitstrings, maps, lists and any other collection of items. It does not apply to printable strings nor printable charlists and defaults to 50. If you don't want to limit the number of items to a particular number, use `:infinity`.
* `:printable_limit` - limits the number of characters that are inspected on printable strings and printable charlists. You can use [`String.printable?/1`](string#printable?/1) and [`List.ascii_printable?/1`](list#ascii_printable?/1) to check if a given string or charlist is printable. Defaults to 4096. If you don't want to limit the number of characters to a particular number, use `:infinity`.
* `:pretty` - if set to `true` enables pretty printing, defaults to `false`.
* `:width` - defaults to 80 characters, used when pretty is `true` or when printing to IO devices. Set to 0 to force each item to be printed on its own line. If you don't want to limit the width to a particular number of characters, use `:infinity`.
* `:base` - prints integers as `:binary`, `:octal`, `:decimal`, or `:hex`, defaults to `:decimal`. When inspecting binaries any `:base` other than `:decimal` implies `binaries: :as_binaries`.
* `:safe` - when `false`, failures while inspecting structs will be raised as errors instead of being wrapped in the [`Inspect.Error`](inspect.error) exception. This is useful when debugging failures and crashes for custom inspect implementations.
* `:syntax_colors` - when set to a keyword list of colors, the output is colorized. The keys are types and the values are the colors to use for each type (for example, `[number: :red, atom: :blue]`). Types can include `:number`, `:atom`, `:regex`, `:tuple`, `:map`, `:list`, and `:reset`. Colors can be any [`IO.ANSI.ansidata/0`](io.ansi#t:ansidata/0) as accepted by [`IO.ANSI.format/1`](io.ansi#format/1).
* `:inspect_fun` (since v1.9.0) - a function to build algebra documents, defaults to [`Inspect.inspect/2`](inspect#inspect/2).
* `:custom_options` (since v1.9.0) - a keyword list storing custom user-defined options. Useful when implementing the [`Inspect`](inspect) protocol for nested structs to pass the custom options through.

Summary
========

Types
------

[color\_key()](#t:color_key/0)

[t()](#t:t/0)

Types
======

### color\_key()

#### Specs

```
color_key() :: atom()
```

### t()

#### Specs

```
t() :: %Inspect.Opts{
  base: :decimal | :binary | :hex | :octal,
  binaries: :infer | :as_binaries | :as_strings,
  char_lists: :infer | :as_lists | :as_char_lists,
  charlists: :infer | :as_lists | :as_charlists,
  custom_options: keyword(),
  inspect_fun: (any(), t() -> Inspect.Algebra.t()),
  limit: pos_integer() | :infinity,
  pretty: boolean(),
  printable_limit: pos_integer() | :infinity,
  safe: boolean(),
  structs: boolean(),
  syntax_colors: [{color_key(), IO.ANSI.ansidata()}],
  width: pos_integer() | :infinity
}
```

elixir mix compile.erlang mix compile.erlang ===================

Compiles Erlang source files.

When this task runs, it will first check the modification times of all files to be compiled and if they haven't been changed since the last compilation, it will not compile them. If any of them have changed, it compiles everything. For this reason, the task touches your `:compile_path` directory and sets the modification time to the current time and date at the end of each compilation. You can force compilation regardless of modification times by passing the `--force` option.

Command line options
---------------------

* `--force` - forces compilation regardless of modification times
* `--all-warnings` - prints warnings even from files that do not need to be recompiled

Configuration
--------------

* `ERL_COMPILER_OPTIONS` - can be used to give default compile options. The value must be a valid Erlang term. If the value is a list, it will be used as is. If it is not a list, it will be put into a list.
* `:erlc_paths` - directories to find source files. Defaults to `["src"]`.
* `:erlc_include_path` - directory for adding include files. Defaults to `"include"`.
* `:erlc_options` - compilation options that apply to Erlang's compiler. Defaults to `[:debug_info]`. For a complete list of options, see [`:compile.file/2`](http://www.erlang.org/doc/man/compile.html#file-2).

For example, to configure the `erlc_options` for your Erlang project, you can add the following to your project configuration:

```
erlc_options: [:debug_info, {:i, 'path/to/include'}]
```

elixir Enum Enum =====

Provides a set of algorithms to work with enumerables.

In Elixir, an enumerable is any data type that implements the [`Enumerable`](enumerable) protocol.
[`List`](list)s (`[1, 2, 3]`), [`Map`](map)s (`%{foo: 1, bar: 2}`) and [`Range`](range)s (`1..3`) are common data types used as enumerables:

```
iex> Enum.map([1, 2, 3], fn x -> x * 2 end)
[2, 4, 6]

iex> Enum.sum([1, 2, 3])
6

iex> Enum.map(1..3, fn x -> x * 2 end)
[2, 4, 6]

iex> Enum.sum(1..3)
6

iex> map = %{"a" => 1, "b" => 2}
iex> Enum.map(map, fn {k, v} -> {k, v * 2} end)
[{"a", 2}, {"b", 4}]
```

However, many other enumerables exist in the language, such as [`MapSet`](mapset)s and the data type returned by [`File.stream!/3`](file#stream!/3) which allows a file to be traversed as if it were an enumerable.

The functions in this module work in linear time. This means that the time it takes to perform an operation grows at the same rate as the length of the enumerable. This is expected on operations such as [`Enum.map/2`](enum#map/2). After all, if we want to traverse every element in a list, the longer the list, the more elements we need to traverse, and the longer it will take.

This linear behaviour should also be expected on operations like [`count/1`](#count/1), [`member?/2`](#member?/2), [`at/2`](#at/2) and similar. While Elixir does allow data types to provide performant variants for such operations, you should not expect them to always be available, since the [`Enum`](#content) module is meant to work with a large variety of data types and not all data types can provide optimized behaviour.

Finally, note that the functions in the [`Enum`](#content) module are eager: they will traverse the enumerable as soon as they are invoked. This is particularly dangerous when working with infinite enumerables. In such cases, you should use the [`Stream`](stream) module, which allows you to lazily express computations, without traversing collections, and work with possibly infinite collections. See the [`Stream`](stream) module for examples and documentation.

Summary
========

Types
------

[acc()](#t:acc/0)

[default()](#t:default/0)

[element()](#t:element/0)

[index()](#t:index/0) Zero-based index. It can also be a negative integer.

[t()](#t:t/0)

Functions
----------

[all?(enumerable, fun \\ fn x -> x end)](#all?/2) Returns `true` if `fun.(element)` is truthy for all elements in `enumerable`.

[any?(enumerable, fun \\ fn x -> x end)](#any?/2) Returns `true` if `fun.(element)` is truthy for at least one element in `enumerable`.

[at(enumerable, index, default \\ nil)](#at/3) Finds the element at the given `index` (zero-based).

[chunk\_by(enumerable, fun)](#chunk_by/2) Splits enumerable on every element for which `fun` returns a new value.

[chunk\_every(enumerable, count)](#chunk_every/2) Shortcut to `chunk_every(enumerable, count, count)`.

[chunk\_every(enumerable, count, step, leftover \\ [])](#chunk_every/4) Returns list of lists containing `count` elements each, where each new chunk starts `step` elements into the `enumerable`.

[chunk\_while(enumerable, acc, chunk\_fun, after\_fun)](#chunk_while/4) Chunks the `enumerable` with fine-grained control when every chunk is emitted.

[concat(enumerables)](#concat/1) Given an enumerable of enumerables, concatenates the `enumerables` into a single list.

[concat(left, right)](#concat/2) Concatenates the enumerable on the `right` with the enumerable on the `left`.

[count(enumerable)](#count/1) Returns the size of the `enumerable`.

[count(enumerable, fun)](#count/2) Returns the count of elements in the `enumerable` for which `fun` returns a truthy value.
[dedup(enumerable)](#dedup/1) Enumerates the `enumerable`, returning a list where all consecutive duplicated elements are collapsed to a single element. [dedup\_by(enumerable, fun)](#dedup_by/2) Enumerates the `enumerable`, returning a list where all consecutive duplicated elements are collapsed to a single element. [drop(enumerable, amount)](#drop/2) Drops the `amount` of elements from the `enumerable`. [drop\_every(enumerable, nth)](#drop_every/2) Returns a list of every `nth` element in the `enumerable` dropped, starting with the first element. [drop\_while(enumerable, fun)](#drop_while/2) Drops elements at the beginning of the `enumerable` while `fun` returns a truthy value. [each(enumerable, fun)](#each/2) Invokes the given `fun` for each element in the `enumerable`. [empty?(enumerable)](#empty?/1) Determines if the `enumerable` is empty. [fetch(enumerable, index)](#fetch/2) Finds the element at the given `index` (zero-based). [fetch!(enumerable, index)](#fetch!/2) Finds the element at the given `index` (zero-based). [filter(enumerable, fun)](#filter/2) Filters the `enumerable`, i.e. returns only those elements for which `fun` returns a truthy value. [find(enumerable, default \\ nil, fun)](#find/3) Returns the first element for which `fun` returns a truthy value. If no such element is found, returns `default`. [find\_index(enumerable, fun)](#find_index/2) Similar to [`find/3`](#find/3), but returns the index (zero-based) of the element instead of the element itself. [find\_value(enumerable, default \\ nil, fun)](#find_value/3) Similar to [`find/3`](#find/3), but returns the value of the function invocation instead of the element itself. [flat\_map(enumerable, fun)](#flat_map/2) Maps the given `fun` over `enumerable` and flattens the result. [flat\_map\_reduce(enumerable, acc, fun)](#flat_map_reduce/3) Maps and reduces an `enumerable`, flattening the given results (only one level deep). [group\_by(enumerable, key\_fun, value\_fun \\ fn x -> x end)](#group_by/3) Splits the `enumerable` into groups based on `key_fun`. [intersperse(enumerable, element)](#intersperse/2) Intersperses `element` between each element of the enumeration. [into(enumerable, collectable)](#into/2) Inserts the given `enumerable` into a `collectable`. [into(enumerable, collectable, transform)](#into/3) Inserts the given `enumerable` into a `collectable` according to the transformation function. [join(enumerable, joiner \\ "")](#join/2) Joins the given `enumerable` into a binary using `joiner` as a separator. [map(enumerable, fun)](#map/2) Returns a list where each element is the result of invoking `fun` on each corresponding element of `enumerable`. [map\_every(enumerable, nth, fun)](#map_every/3) Returns a list of results of invoking `fun` on every `nth` element of `enumerable`, starting with the first element. [map\_join(enumerable, joiner \\ "", mapper)](#map_join/3) Maps and joins the given `enumerable` in one pass. [map\_reduce(enumerable, acc, fun)](#map_reduce/3) Invokes the given function to each element in the `enumerable` to reduce it to a single element, while keeping an accumulator. [max(enumerable, empty\_fallback \\ fn -> raise(Enum.EmptyError) end)](#max/2) Returns the maximal element in the `enumerable` according to Erlang's term ordering. [max\_by(enumerable, fun, empty\_fallback \\ fn -> raise(Enum.EmptyError) end)](#max_by/3) Returns the maximal element in the `enumerable` as calculated by the given function. 
[member?(enumerable, element)](#member?/2) Checks if `element` exists within the `enumerable`. [min(enumerable, empty\_fallback \\ fn -> raise(Enum.EmptyError) end)](#min/2) Returns the minimal element in the `enumerable` according to Erlang's term ordering. [min\_by(enumerable, fun, empty\_fallback \\ fn -> raise(Enum.EmptyError) end)](#min_by/3) Returns the minimal element in the `enumerable` as calculated by the given function. [min\_max(enumerable, empty\_fallback \\ fn -> raise(Enum.EmptyError) end)](#min_max/2) Returns a tuple with the minimal and the maximal elements in the enumerable according to Erlang's term ordering. [min\_max\_by(enumerable, fun, empty\_fallback \\ fn -> raise(Enum.EmptyError) end)](#min_max_by/3) Returns a tuple with the minimal and the maximal elements in the enumerable as calculated by the given function. [random(enumerable)](#random/1) Returns a random element of an `enumerable`. [reduce(enumerable, fun)](#reduce/2) Invokes `fun` for each element in the `enumerable` with the accumulator. [reduce(enumerable, acc, fun)](#reduce/3) Invokes `fun` for each element in the `enumerable` with the accumulator. [reduce\_while(enumerable, acc, fun)](#reduce_while/3) Reduces `enumerable` until `fun` returns `{:halt, term}`. [reject(enumerable, fun)](#reject/2) Returns a list of elements in `enumerable` excluding those for which the function `fun` returns a truthy value. [reverse(enumerable)](#reverse/1) Returns a list of elements in `enumerable` in reverse order. [reverse(enumerable, tail)](#reverse/2) Reverses the elements in `enumerable`, appends the `tail`, and returns it as a list. [reverse\_slice(enumerable, start\_index, count)](#reverse_slice/3) Reverses the `enumerable` in the range from initial `start_index` through `count` elements. [scan(enumerable, fun)](#scan/2) Applies the given function to each element in the `enumerable`, storing the result in a list and passing it as the accumulator for the next computation. Uses the first element in the `enumerable` as the starting value. [scan(enumerable, acc, fun)](#scan/3) Applies the given function to each element in the `enumerable`, storing the result in a list and passing it as the accumulator for the next computation. Uses the given `acc` as the starting value. [shuffle(enumerable)](#shuffle/1) Returns a list with the elements of `enumerable` shuffled. [slice(enumerable, index\_range)](#slice/2) Returns a subset list of the given `enumerable` by `index_range`. [slice(enumerable, start\_index, amount)](#slice/3) Returns a subset list of the given `enumerable`, from `start_index` (zero-based) with `amount` number of elements if available. [sort(enumerable)](#sort/1) Sorts the `enumerable` according to Erlang's term ordering. [sort(enumerable, fun)](#sort/2) Sorts the `enumerable` by the given function. [sort\_by(enumerable, mapper, sorter \\ &<=/2)](#sort_by/3) Sorts the mapped results of the `enumerable` according to the provided `sorter` function. [split(enumerable, count)](#split/2) Splits the `enumerable` into two enumerables, leaving `count` elements in the first one. [split\_while(enumerable, fun)](#split_while/2) Splits enumerable in two at the position of the element for which `fun` returns a falsy value (`false` or `nil`) for the first time. [split\_with(enumerable, fun)](#split_with/2) Splits the `enumerable` in two lists according to the given function `fun`. [sum(enumerable)](#sum/1) Returns the sum of all elements. 
[take(enumerable, amount)](#take/2) Takes an `amount` of elements from the beginning or the end of the `enumerable`. [take\_every(enumerable, nth)](#take_every/2) Returns a list of every `nth` element in the `enumerable`, starting with the first element. [take\_random(enumerable, count)](#take_random/2) Takes `count` random elements from `enumerable`. [take\_while(enumerable, fun)](#take_while/2) Takes the elements from the beginning of the `enumerable` while `fun` returns a truthy value. [to\_list(enumerable)](#to_list/1) Converts `enumerable` to a list. [uniq(enumerable)](#uniq/1) Enumerates the `enumerable`, removing all duplicated elements. [uniq\_by(enumerable, fun)](#uniq_by/2) Enumerates the `enumerable`, by removing the elements for which function `fun` returned duplicate elements. [unzip(enumerable)](#unzip/1) Opposite of [`zip/2`](#zip/2). Extracts two-element tuples from the given `enumerable` and groups them together. [with\_index(enumerable, offset \\ 0)](#with_index/2) Returns the `enumerable` with each element wrapped in a tuple alongside its index. [zip(enumerables)](#zip/1) Zips corresponding elements from a finite collection of enumerables into one list of tuples. [zip(enumerable1, enumerable2)](#zip/2) Zips corresponding elements from two enumerables into one list of tuples. Types ====== ### acc() #### Specs ``` acc() :: any() ``` ### default() #### Specs ``` default() :: any() ``` ### element() #### Specs ``` element() :: any() ``` ### index() #### Specs ``` index() :: integer() ``` Zero-based index. It can also be a negative integer. ### t() #### Specs ``` t() :: Enumerable.t() ``` Functions ========== ### all?(enumerable, fun \\ fn x -> x end) #### Specs ``` all?(t(), (element() -> as_boolean(term()))) :: boolean() ``` Returns `true` if `fun.(element)` is truthy for all elements in `enumerable`. Iterates over the `enumerable` and invokes `fun` on each element. When an invocation of `fun` returns a falsy value (`false` or `nil`) iteration stops immediately and `false` is returned. In all other cases `true` is returned. #### Examples ``` iex> Enum.all?([2, 4, 6], fn x -> rem(x, 2) == 0 end) true iex> Enum.all?([2, 3, 4], fn x -> rem(x, 2) == 0 end) false iex> Enum.all?([], fn x -> x > 0 end) true ``` If no function is given, the truthiness of each element is checked during iteration. When an element has a falsy value (`false` or `nil`) iteration stops immediately and `false` is returned. In all other cases `true` is returned. ``` iex> Enum.all?([1, 2, 3]) true iex> Enum.all?([1, nil, 3]) false iex> Enum.all?([]) true ``` ### any?(enumerable, fun \\ fn x -> x end) #### Specs ``` any?(t(), (element() -> as_boolean(term()))) :: boolean() ``` Returns `true` if `fun.(element)` is truthy for at least one element in `enumerable`. Iterates over the `enumerable` and invokes `fun` on each element. When an invocation of `fun` returns a truthy value (neither `false` nor `nil`) iteration stops immediately and `true` is returned. In all other cases `false` is returned. #### Examples ``` iex> Enum.any?([2, 4, 6], fn x -> rem(x, 2) == 1 end) false iex> Enum.any?([2, 3, 4], fn x -> rem(x, 2) == 1 end) true iex> Enum.any?([], fn x -> x > 0 end) false ``` If no function is given, the truthiness of each element is checked during iteration. When an element has a truthy value (neither `false` nor `nil`) iteration stops immediately and `true` is returned. In all other cases `false` is returned. 
```
iex> Enum.any?([false, false, false])
false

iex> Enum.any?([false, true, false])
true

iex> Enum.any?([])
false
```

### at(enumerable, index, default \\ nil)

#### Specs

```
at(t(), index(), default()) :: element() | default()
```

Finds the element at the given `index` (zero-based).

Returns `default` if `index` is out of bounds.

A negative `index` can be passed, which means the `enumerable` is enumerated once and the `index` is counted from the end (for example, `-1` finds the last element).

#### Examples

```
iex> Enum.at([2, 4, 6], 0)
2

iex> Enum.at([2, 4, 6], 2)
6

iex> Enum.at([2, 4, 6], 4)
nil

iex> Enum.at([2, 4, 6], 4, :none)
:none
```

### chunk\_by(enumerable, fun)

#### Specs

```
chunk_by(t(), (element() -> any())) :: [list()]
```

Splits enumerable on every element for which `fun` returns a new value.

Returns a list of lists.

#### Examples

```
iex> Enum.chunk_by([1, 2, 2, 3, 4, 4, 6, 7, 7], &(rem(&1, 2) == 1))
[[1], [2, 2], [3], [4, 4, 6], [7, 7]]
```

### chunk\_every(enumerable, count)

#### Specs

```
chunk_every(t(), pos_integer()) :: [list()]
```

Shortcut to `chunk_every(enumerable, count, count)`.

### chunk\_every(enumerable, count, step, leftover \\ [])

#### Specs

```
chunk_every(t(), pos_integer(), pos_integer(), t() | :discard) :: [list()]
```

Returns list of lists containing `count` elements each, where each new chunk starts `step` elements into the `enumerable`.

`step` is optional and, if not passed, defaults to `count`, i.e. chunks do not overlap.

If the last chunk does not have `count` elements to fill the chunk, elements are taken from `leftover` to fill in the chunk. If `leftover` does not have enough elements to fill the chunk, then a partial chunk is returned with less than `count` elements.

If `:discard` is given as `leftover`, the last chunk is discarded unless it has exactly `count` elements.

#### Examples

```
iex> Enum.chunk_every([1, 2, 3, 4, 5, 6], 2)
[[1, 2], [3, 4], [5, 6]]

iex> Enum.chunk_every([1, 2, 3, 4, 5, 6], 3, 2, :discard)
[[1, 2, 3], [3, 4, 5]]

iex> Enum.chunk_every([1, 2, 3, 4, 5, 6], 3, 2, [7])
[[1, 2, 3], [3, 4, 5], [5, 6, 7]]

iex> Enum.chunk_every([1, 2, 3, 4], 3, 3, [])
[[1, 2, 3], [4]]

iex> Enum.chunk_every([1, 2, 3, 4], 10)
[[1, 2, 3, 4]]

iex> Enum.chunk_every([1, 2, 3, 4, 5], 2, 3, [])
[[1, 2], [4, 5]]
```

### chunk\_while(enumerable, acc, chunk\_fun, after\_fun)

#### Specs

```
chunk_while(
  t(),
  acc(),
  (element(), acc() -> {:cont, chunk, acc()} | {:cont, acc()} | {:halt, acc()}),
  (acc() -> {:cont, chunk, acc()} | {:cont, acc()})
) :: Enumerable.t()
when chunk: any()
```

Chunks the `enumerable` with fine-grained control when every chunk is emitted.

`chunk_fun` receives the current element and the accumulator and must return `{:cont, chunk, acc}` to emit the given chunk and continue with the accumulator or `{:cont, acc}` to not emit any chunk and continue with the returned accumulator.

`after_fun` is invoked when iteration is done and must also return `{:cont, chunk, acc}` or `{:cont, acc}`.

Returns a list of lists.

#### Examples

```
iex> chunk_fun = fn element, acc ->
...>   if rem(element, 2) == 0 do
...>     {:cont, Enum.reverse([element | acc]), []}
...>   else
...>     {:cont, [element | acc]}
...>   end
...> end
iex> after_fun = fn
...>   [] -> {:cont, []}
...>   acc -> {:cont, Enum.reverse(acc), []}
...> end
iex> Enum.chunk_while(1..10, [], chunk_fun, after_fun)
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
```

### concat(enumerables)

#### Specs

```
concat(t()) :: t()
```

Given an enumerable of enumerables, concatenates the `enumerables` into a single list.
#### Examples ``` iex> Enum.concat([1..3, 4..6, 7..9]) [1, 2, 3, 4, 5, 6, 7, 8, 9] iex> Enum.concat([[1, [2], 3], [4], [5, 6]]) [1, [2], 3, 4, 5, 6] ``` ### concat(left, right) #### Specs ``` concat(t(), t()) :: t() ``` Concatenates the enumerable on the `right` with the enumerable on the `left`. This function produces the same result as the [`Kernel.++/2`](kernel#++/2) operator for lists. #### Examples ``` iex> Enum.concat(1..3, 4..6) [1, 2, 3, 4, 5, 6] iex> Enum.concat([1, 2, 3], [4, 5, 6]) [1, 2, 3, 4, 5, 6] ``` ### count(enumerable) #### Specs ``` count(t()) :: non_neg_integer() ``` Returns the size of the `enumerable`. #### Examples ``` iex> Enum.count([1, 2, 3]) 3 ``` ### count(enumerable, fun) #### Specs ``` count(t(), (element() -> as_boolean(term()))) :: non_neg_integer() ``` Returns the count of elements in the `enumerable` for which `fun` returns a truthy value. #### Examples ``` iex> Enum.count([1, 2, 3, 4, 5], fn x -> rem(x, 2) == 0 end) 2 ``` ### dedup(enumerable) #### Specs ``` dedup(t()) :: list() ``` Enumerates the `enumerable`, returning a list where all consecutive duplicated elements are collapsed to a single element. Elements are compared using [`===/2`](kernel#===/2). If you want to remove all duplicated elements, regardless of order, see [`uniq/1`](#uniq/1). #### Examples ``` iex> Enum.dedup([1, 2, 3, 3, 2, 1]) [1, 2, 3, 2, 1] iex> Enum.dedup([1, 1, 2, 2.0, :three, :three]) [1, 2, 2.0, :three] ``` ### dedup\_by(enumerable, fun) #### Specs ``` dedup_by(t(), (element() -> term())) :: list() ``` Enumerates the `enumerable`, returning a list where all consecutive duplicated elements are collapsed to a single element. The function `fun` maps every element to a term which is used to determine if two elements are duplicates. #### Examples ``` iex> Enum.dedup_by([{1, :a}, {2, :b}, {2, :c}, {1, :a}], fn {x, _} -> x end) [{1, :a}, {2, :b}, {1, :a}] iex> Enum.dedup_by([5, 1, 2, 3, 2, 1], fn x -> x > 2 end) [5, 1, 3, 2] ``` ### drop(enumerable, amount) #### Specs ``` drop(t(), integer()) :: list() ``` Drops the `amount` of elements from the `enumerable`. If a negative `amount` is given, the `amount` of last values will be dropped. The `enumerable` will be enumerated once to retrieve the proper index and the remaining calculation is performed from the end. #### Examples ``` iex> Enum.drop([1, 2, 3], 2) [3] iex> Enum.drop([1, 2, 3], 10) [] iex> Enum.drop([1, 2, 3], 0) [1, 2, 3] iex> Enum.drop([1, 2, 3], -1) [1, 2] ``` ### drop\_every(enumerable, nth) #### Specs ``` drop_every(t(), non_neg_integer()) :: list() ``` Returns a list of every `nth` element in the `enumerable` dropped, starting with the first element. The first element is always dropped, unless `nth` is 0. The second argument specifying every `nth` element must be a non-negative integer. #### Examples ``` iex> Enum.drop_every(1..10, 2) [2, 4, 6, 8, 10] iex> Enum.drop_every(1..10, 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] iex> Enum.drop_every([1, 2, 3], 1) [] ``` ### drop\_while(enumerable, fun) #### Specs ``` drop_while(t(), (element() -> as_boolean(term()))) :: list() ``` Drops elements at the beginning of the `enumerable` while `fun` returns a truthy value. #### Examples ``` iex> Enum.drop_while([1, 2, 3, 2, 1], fn x -> x < 3 end) [3, 2, 1] ``` ### each(enumerable, fun) #### Specs ``` each(t(), (element() -> any())) :: :ok ``` Invokes the given `fun` for each element in the `enumerable`. Returns `:ok`. 
#### Examples

```
Enum.each(["some", "example"], fn x -> IO.puts(x) end)
some
example
#=> :ok
```

### empty?(enumerable)

#### Specs

```
empty?(t()) :: boolean()
```

Determines if the `enumerable` is empty.

Returns `true` if `enumerable` is empty, otherwise `false`.

#### Examples

```
iex> Enum.empty?([])
true

iex> Enum.empty?([1, 2, 3])
false
```

### fetch(enumerable, index)

#### Specs

```
fetch(t(), index()) :: {:ok, element()} | :error
```

Finds the element at the given `index` (zero-based).

Returns `{:ok, element}` if found, otherwise `:error`.

A negative `index` can be passed, which means the `enumerable` is enumerated once and the `index` is counted from the end (for example, `-1` fetches the last element).

#### Examples

```
iex> Enum.fetch([2, 4, 6], 0)
{:ok, 2}

iex> Enum.fetch([2, 4, 6], -3)
{:ok, 2}

iex> Enum.fetch([2, 4, 6], 2)
{:ok, 6}

iex> Enum.fetch([2, 4, 6], 4)
:error
```

### fetch!(enumerable, index)

#### Specs

```
fetch!(t(), index()) :: element()
```

Finds the element at the given `index` (zero-based).

Raises [`Enum.OutOfBoundsError`](enum.outofboundserror) if the given `index` is outside the range of the `enumerable`.

#### Examples

```
iex> Enum.fetch!([2, 4, 6], 0)
2

iex> Enum.fetch!([2, 4, 6], 2)
6

iex> Enum.fetch!([2, 4, 6], 4)
** (Enum.OutOfBoundsError) out of bounds error
```

### filter(enumerable, fun)

#### Specs

```
filter(t(), (element() -> as_boolean(term()))) :: list()
```

Filters the `enumerable`, i.e. returns only those elements for which `fun` returns a truthy value.

See also [`reject/2`](#reject/2) which discards all elements where the function returns a truthy value.

#### Examples

```
iex> Enum.filter([1, 2, 3], fn x -> rem(x, 2) == 0 end)
[2]
```

Keep in mind that `filter` is not capable of filtering and transforming an element at the same time. If you would like to do so, consider using [`flat_map/2`](#flat_map/2). For example, if you want to convert all strings that represent an integer and discard the invalid ones in one pass:

```
strings = ["1234", "abc", "12ab"]

Enum.flat_map(strings, fn string ->
  case Integer.parse(string) do
    # transform to integer
    {int, _rest} -> [int]
    # skip the value
    :error -> []
  end
end)
```

### find(enumerable, default \\ nil, fun)

#### Specs

```
find(t(), default(), (element() -> any())) :: element() | default()
```

Returns the first element for which `fun` returns a truthy value. If no such element is found, returns `default`.

#### Examples

```
iex> Enum.find([2, 4, 6], fn x -> rem(x, 2) == 1 end)
nil

iex> Enum.find([2, 4, 6], 0, fn x -> rem(x, 2) == 1 end)
0

iex> Enum.find([2, 3, 4], fn x -> rem(x, 2) == 1 end)
3
```

### find\_index(enumerable, fun)

#### Specs

```
find_index(t(), (element() -> any())) :: non_neg_integer() | nil
```

Similar to [`find/3`](#find/3), but returns the index (zero-based) of the element instead of the element itself.

#### Examples

```
iex> Enum.find_index([2, 4, 6], fn x -> rem(x, 2) == 1 end)
nil

iex> Enum.find_index([2, 3, 4], fn x -> rem(x, 2) == 1 end)
1
```

### find\_value(enumerable, default \\ nil, fun)

#### Specs

```
find_value(t(), any(), (element() -> any())) :: any() | nil
```

Similar to [`find/3`](#find/3), but returns the value of the function invocation instead of the element itself.

#### Examples

```
iex> Enum.find_value([2, 4, 6], fn x -> rem(x, 2) == 1 end)
nil

iex> Enum.find_value([2, 3, 4], fn x -> rem(x, 2) == 1 end)
true

iex> Enum.find_value([1, 2, 3], "no bools!", &is_boolean/1)
"no bools!"
``` ### flat\_map(enumerable, fun) #### Specs ``` flat_map(t(), (element() -> t())) :: list() ``` Maps the given `fun` over `enumerable` and flattens the result. This function returns a new enumerable built by appending the result of invoking `fun` on each element of `enumerable` together; conceptually, this is similar to a combination of [`map/2`](#map/2) and [`concat/1`](#concat/1). #### Examples ``` iex> Enum.flat_map([:a, :b, :c], fn x -> [x, x] end) [:a, :a, :b, :b, :c, :c] iex> Enum.flat_map([{1, 3}, {4, 6}], fn {x, y} -> x..y end) [1, 2, 3, 4, 5, 6] iex> Enum.flat_map([:a, :b, :c], fn x -> [[x]] end) [[:a], [:b], [:c]] ``` ### flat\_map\_reduce(enumerable, acc, fun) #### Specs ``` flat_map_reduce(t(), acc(), fun) :: {[any()], acc()} when fun: (element(), acc() -> {t(), acc()} | {:halt, acc()}) ``` Maps and reduces an `enumerable`, flattening the given results (only one level deep). It expects an accumulator and a function that receives each enumerable element, and must return a tuple containing a new enumerable (often a list) with the new accumulator or a tuple with `:halt` as first element and the accumulator as second. #### Examples ``` iex> enumerable = 1..100 iex> n = 3 iex> Enum.flat_map_reduce(enumerable, 0, fn x, acc -> ...> if acc < n, do: {[x], acc + 1}, else: {:halt, acc} ...> end) {[1, 2, 3], 3} iex> Enum.flat_map_reduce(1..5, 0, fn x, acc -> {[[x]], acc + x} end) {[[1], [2], [3], [4], [5]], 15} ``` ### group\_by(enumerable, key\_fun, value\_fun \\ fn x -> x end) #### Specs ``` group_by(t(), (element() -> any()), (element() -> any())) :: map() ``` Splits the `enumerable` into groups based on `key_fun`. The result is a map where each key is given by `key_fun` and each value is a list of elements given by `value_fun`. The order of elements within each list is preserved from the `enumerable`. However, like all maps, the resulting map is unordered. #### Examples ``` iex> Enum.group_by(~w{ant buffalo cat dingo}, &String.length/1) %{3 => ["ant", "cat"], 5 => ["dingo"], 7 => ["buffalo"]} iex> Enum.group_by(~w{ant buffalo cat dingo}, &String.length/1, &String.first/1) %{3 => ["a", "c"], 5 => ["d"], 7 => ["b"]} ``` ### intersperse(enumerable, element) #### Specs ``` intersperse(t(), element()) :: list() ``` Intersperses `element` between each element of the enumeration. #### Examples ``` iex> Enum.intersperse([1, 2, 3], 0) [1, 0, 2, 0, 3] iex> Enum.intersperse([1], 0) [1] iex> Enum.intersperse([], 0) [] ``` ### into(enumerable, collectable) #### Specs ``` into(Enumerable.t(), Collectable.t()) :: Collectable.t() ``` Inserts the given `enumerable` into a `collectable`. Note that passing a non-empty list as the `collectable` is deprecated. If you're collecting into a non-empty keyword list, consider using [`Keyword.merge/2`](keyword#merge/2). If you're collecting into a non-empty list, consider something like `to_list(enumerable) ++ collectable`. #### Examples ``` iex> Enum.into([1, 2], []) [1, 2] iex> Enum.into([a: 1, b: 2], %{}) %{a: 1, b: 2} iex> Enum.into(%{a: 1}, %{b: 2}) %{a: 1, b: 2} iex> Enum.into([a: 1, a: 2], %{}) %{a: 2} ``` ### into(enumerable, collectable, transform) #### Specs ``` into(Enumerable.t(), Collectable.t(), (term() -> term())) :: Collectable.t() ``` Inserts the given `enumerable` into a `collectable` according to the transformation function. 
#### Examples

```
iex> Enum.into([2, 3], [3], fn x -> x * 3 end)
[3, 6, 9]

iex> Enum.into(%{a: 1, b: 2}, %{c: 3}, fn {k, v} -> {k, v * 2} end)
%{a: 2, b: 4, c: 3}
```

### join(enumerable, joiner \\ "")

#### Specs

```
join(t(), String.t()) :: String.t()
```

Joins the given `enumerable` into a binary using `joiner` as a separator.

If `joiner` is not passed at all, it defaults to an empty binary.

All elements in the `enumerable` must be convertible to a binary, otherwise an error is raised.

#### Examples

```
iex> Enum.join([1, 2, 3])
"123"

iex> Enum.join([1, 2, 3], " = ")
"1 = 2 = 3"
```

### map(enumerable, fun)

#### Specs

```
map(t(), (element() -> any())) :: list()
```

Returns a list where each element is the result of invoking `fun` on each corresponding element of `enumerable`.

For maps, the function expects a key-value tuple.

#### Examples

```
iex> Enum.map([1, 2, 3], fn x -> x * 2 end)
[2, 4, 6]

iex> Enum.map([a: 1, b: 2], fn {k, v} -> {k, -v} end)
[a: -1, b: -2]
```

### map\_every(enumerable, nth, fun)

#### Specs

```
map_every(t(), non_neg_integer(), (element() -> any())) :: list()
```

Returns a list of results of invoking `fun` on every `nth` element of `enumerable`, starting with the first element.

The first element is always passed to the given function, unless `nth` is `0`.

The second argument specifying every `nth` element must be a non-negative integer.

If `nth` is `0`, then `enumerable` is directly converted to a list, without `fun` ever being applied.

#### Examples

```
iex> Enum.map_every(1..10, 2, fn x -> x + 1000 end)
[1001, 2, 1003, 4, 1005, 6, 1007, 8, 1009, 10]

iex> Enum.map_every(1..10, 3, fn x -> x + 1000 end)
[1001, 2, 3, 1004, 5, 6, 1007, 8, 9, 1010]

iex> Enum.map_every(1..5, 0, fn x -> x + 1000 end)
[1, 2, 3, 4, 5]

iex> Enum.map_every([1, 2, 3], 1, fn x -> x + 1000 end)
[1001, 1002, 1003]
```

### map\_join(enumerable, joiner \\ "", mapper)

#### Specs

```
map_join(t(), String.t(), (element() -> String.Chars.t())) :: String.t()
```

Maps and joins the given `enumerable` in one pass.

As the spec above indicates, `joiner` must be a binary and the result will be a binary as well. If `joiner` is not passed at all, it defaults to an empty binary.

All elements returned from invoking the `mapper` must be convertible to a binary, otherwise an error is raised.

#### Examples

```
iex> Enum.map_join([1, 2, 3], &(&1 * 2))
"246"

iex> Enum.map_join([1, 2, 3], " = ", &(&1 * 2))
"2 = 4 = 6"
```

### map\_reduce(enumerable, acc, fun)

#### Specs

```
map_reduce(t(), acc(), (element(), acc() -> {element(), acc()})) :: {list(), acc()}
```

Invokes the given function on each element in the `enumerable` to reduce it to a single element, while keeping an accumulator.

Returns a tuple where the first element is the mapped enumerable and the second one is the final accumulator.

The function, `fun`, receives two arguments: the first one is the element, and the second one is the accumulator. `fun` must return a tuple with two elements in the form of `{result, accumulator}`.

For maps, the first tuple element must be a `{key, value}` tuple.

#### Examples

```
iex> Enum.map_reduce([1, 2, 3], 0, fn x, acc -> {x * 2, x + acc} end)
{[2, 4, 6], 6}
```

### max(enumerable, empty\_fallback \\ fn -> raise(Enum.EmptyError) end)

#### Specs

```
max(t(), (() -> empty_result)) :: element() | empty_result when empty_result: any()
```

Returns the maximal element in the `enumerable` according to Erlang's term ordering.

If multiple elements are considered maximal, the first one that was found is returned.
Calls the provided `empty_fallback` function and returns its value if `enumerable` is empty. The default `empty_fallback` raises [`Enum.EmptyError`](enum.emptyerror). #### Examples ``` iex> Enum.max([1, 2, 3]) 3 iex> Enum.max([], fn -> 0 end) 0 ``` The fact this function uses Erlang's term ordering means that the comparison is structural and not semantic. For example: ``` iex> Enum.max([~D[2017-03-31], ~D[2017-04-01]]) ~D[2017-03-31] ``` In the example above, [`max/1`](#max/1) returned March 31st instead of April 1st because the structural comparison compares the day before the year. This can be addressed by using [`max_by/3`](#max_by/3) and by relying on structures where the most significant digits come first. In this particular case, we can use [`Date.to_erl/1`](date#to_erl/1) to get a tuple representation with year, month and day fields: ``` iex> Enum.max_by([~D[2017-03-31], ~D[2017-04-01]], &Date.to_erl/1) ~D[2017-04-01] ``` For selecting a maximum value out of two consider using [`Kernel.max/2`](kernel#max/2). ### max\_by(enumerable, fun, empty\_fallback \\ fn -> raise(Enum.EmptyError) end) #### Specs ``` max_by(t(), (element() -> any()), (() -> empty_result)) :: element() | empty_result when empty_result: any() ``` Returns the maximal element in the `enumerable` as calculated by the given function. If multiple elements are considered maximal, the first one that was found is returned. Calls the provided `empty_fallback` function and returns its value if `enumerable` is empty. The default `empty_fallback` raises [`Enum.EmptyError`](enum.emptyerror). #### Examples ``` iex> Enum.max_by(["a", "aa", "aaa"], fn x -> String.length(x) end) "aaa" iex> Enum.max_by(["a", "aa", "aaa", "b", "bbb"], &String.length/1) "aaa" iex> Enum.max_by([], &String.length/1, fn -> nil end) nil ``` ### member?(enumerable, element) #### Specs ``` member?(t(), element()) :: boolean() ``` Checks if `element` exists within the `enumerable`. Membership is tested with the match ([`===/2`](kernel#===/2)) operator. #### Examples ``` iex> Enum.member?(1..10, 5) true iex> Enum.member?(1..10, 5.0) false iex> Enum.member?([1.0, 2.0, 3.0], 2) false iex> Enum.member?([1.0, 2.0, 3.0], 2.000) true iex> Enum.member?([:a, :b, :c], :d) false ``` ### min(enumerable, empty\_fallback \\ fn -> raise(Enum.EmptyError) end) #### Specs ``` min(t(), (() -> empty_result)) :: element() | empty_result when empty_result: any() ``` Returns the minimal element in the `enumerable` according to Erlang's term ordering. If multiple elements are considered minimal, the first one that was found is returned. Calls the provided `empty_fallback` function and returns its value if `enumerable` is empty. The default `empty_fallback` raises [`Enum.EmptyError`](enum.emptyerror). #### Examples ``` iex> Enum.min([1, 2, 3]) 1 iex> Enum.min([], fn -> 0 end) 0 ``` The fact this function uses Erlang's term ordering means that the comparison is structural and not semantic. For example: ``` iex> Enum.min([~D[2017-03-31], ~D[2017-04-01]]) ~D[2017-04-01] ``` In the example above, [`min/1`](#min/1) returned April 1st instead of March 31st because the structural comparison compares the day before the year. This can be addressed by using [`min_by/3`](#min_by/3) and by relying on structures where the most significant digits come first. 
In this particular case, we can use [`Date.to_erl/1`](date#to_erl/1) to get a tuple representation with year, month and day fields: ``` iex> Enum.min_by([~D[2017-03-31], ~D[2017-04-01]], &Date.to_erl/1) ~D[2017-03-31] ``` For selecting a minimal value out of two consider using [`Kernel.min/2`](kernel#min/2). ### min\_by(enumerable, fun, empty\_fallback \\ fn -> raise(Enum.EmptyError) end) #### Specs ``` min_by(t(), (element() -> any()), (() -> empty_result)) :: element() | empty_result when empty_result: any() ``` Returns the minimal element in the `enumerable` as calculated by the given function. If multiple elements are considered minimal, the first one that was found is returned. Calls the provided `empty_fallback` function and returns its value if `enumerable` is empty. The default `empty_fallback` raises [`Enum.EmptyError`](enum.emptyerror). #### Examples ``` iex> Enum.min_by(["a", "aa", "aaa"], fn x -> String.length(x) end) "a" iex> Enum.min_by(["a", "aa", "aaa", "b", "bbb"], &String.length/1) "a" iex> Enum.min_by([], &String.length/1, fn -> nil end) nil ``` ### min\_max(enumerable, empty\_fallback \\ fn -> raise(Enum.EmptyError) end) #### Specs ``` min_max(t(), (() -> empty_result)) :: {element(), element()} | empty_result when empty_result: any() ``` Returns a tuple with the minimal and the maximal elements in the enumerable according to Erlang's term ordering. If multiple elements are considered maximal or minimal, the first one that was found is returned. Calls the provided `empty_fallback` function and returns its value if `enumerable` is empty. The default `empty_fallback` raises [`Enum.EmptyError`](enum.emptyerror). #### Examples ``` iex> Enum.min_max([2, 3, 1]) {1, 3} iex> Enum.min_max([], fn -> {nil, nil} end) {nil, nil} ``` ### min\_max\_by(enumerable, fun, empty\_fallback \\ fn -> raise(Enum.EmptyError) end) #### Specs ``` min_max_by(t(), (element() -> any()), (() -> empty_result)) :: {element(), element()} | empty_result when empty_result: any() ``` Returns a tuple with the minimal and the maximal elements in the enumerable as calculated by the given function. If multiple elements are considered maximal or minimal, the first one that was found is returned. Calls the provided `empty_fallback` function and returns its value if `enumerable` is empty. The default `empty_fallback` raises [`Enum.EmptyError`](enum.emptyerror). #### Examples ``` iex> Enum.min_max_by(["aaa", "bb", "c"], fn x -> String.length(x) end) {"c", "aaa"} iex> Enum.min_max_by(["aaa", "a", "bb", "c", "ccc"], &String.length/1) {"a", "aaa"} iex> Enum.min_max_by([], &String.length/1, fn -> {nil, nil} end) {nil, nil} ``` ### random(enumerable) #### Specs ``` random(t()) :: element() ``` Returns a random element of an `enumerable`. Raises [`Enum.EmptyError`](enum.emptyerror) if `enumerable` is empty. This function uses Erlang's [`:rand` module](http://www.erlang.org/doc/man/rand.html) to calculate the random value. Check its documentation for setting a different random algorithm or a different seed. The implementation is based on the [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling#Relation_to_Fisher-Yates_shuffle) algorithm. It assumes that the sample being returned can fit into memory; the input `enumerable` doesn't have to, as it is traversed just once. If a range is passed into the function, this function will pick a random value between the range limits, without traversing the whole range (thus executing in constant time and constant memory). 
#### Examples

```
# Although not necessary, let's seed the random algorithm
iex> :rand.seed(:exsplus, {101, 102, 103})
iex> Enum.random([1, 2, 3])
2
iex> Enum.random([1, 2, 3])
1
iex> Enum.random(1..1_000)
776
```

### reduce(enumerable, fun)

#### Specs

```
reduce(t(), (element(), acc() -> acc())) :: acc()
```

Invokes `fun` for each element in the `enumerable` with the accumulator.

Raises [`Enum.EmptyError`](enum.emptyerror) if `enumerable` is empty.

The first element of the `enumerable` is used as the initial value of the accumulator. Then the function is invoked with the next element and the accumulator. The result returned by the function is used as the accumulator for the next iteration, recursively. When the `enumerable` is done, the last accumulator is returned.

Since the first element of the enumerable is used as the initial value of the accumulator, `fun` will only be executed `n - 1` times where `n` is the length of the enumerable. This function won't call the specified function for enumerables that are one-element long.

If you wish to use another value for the accumulator, use [`Enum.reduce/3`](enum#reduce/3).

#### Examples

```
iex> Enum.reduce([1, 2, 3, 4], fn x, acc -> x * acc end)
24
```

### reduce(enumerable, acc, fun)

#### Specs

```
reduce(t(), any(), (element(), any() -> any())) :: any()
```

Invokes `fun` for each element in the `enumerable` with the accumulator.

The initial value of the accumulator is `acc`. The function is invoked for each element in the enumerable with the accumulator. The result returned by the function is used as the accumulator for the next iteration. The function returns the last accumulator.

#### Examples

```
iex> Enum.reduce([1, 2, 3], 0, fn x, acc -> x + acc end)
6
```

#### Reduce as a building block

Reduce (sometimes called `fold`) is a basic building block in functional programming. Almost all of the functions in the [`Enum`](#content) module can be implemented on top of reduce. Those functions often rely on other operations, such as [`Enum.reverse/1`](enum#reverse/1), which are optimized by the runtime.

For example, we could implement [`map/2`](#map/2) in terms of [`reduce/3`](#reduce/3) as follows:

```
def my_map(enumerable, fun) do
  enumerable
  |> Enum.reduce([], fn x, acc -> [fun.(x) | acc] end)
  |> Enum.reverse()
end
```

In the example above, [`Enum.reduce/3`](enum#reduce/3) accumulates the result of each call to `fun` into a list in reverse order, which is correctly ordered at the end by calling [`Enum.reverse/1`](enum#reverse/1).

Implementing functions like [`map/2`](#map/2), [`filter/2`](#filter/2) and others is a good exercise for understanding the power behind [`Enum.reduce/3`](enum#reduce/3). When an operation cannot be expressed by any of the functions in the [`Enum`](#content) module, developers will most likely resort to [`reduce/3`](#reduce/3).

### reduce\_while(enumerable, acc, fun)

#### Specs

```
reduce_while(t(), any(), (element(), any() -> {:cont, any()} | {:halt, any()})) :: any()
```

Reduces `enumerable` until `fun` returns `{:halt, term}`.

The return value for `fun` is expected to be

* `{:cont, acc}` to continue the reduction with `acc` as the new accumulator or
* `{:halt, acc}` to halt the reduction

If `fun` returns `{:halt, acc}` the reduction is halted and the function returns `acc`. Otherwise, if the enumerable is exhausted, the function returns the accumulator of the last `{:cont, acc}`.
#### Examples

```
iex> Enum.reduce_while(1..100, 0, fn x, acc ->
...>   if x < 5, do: {:cont, acc + x}, else: {:halt, acc}
...> end)
10
iex> Enum.reduce_while(1..100, 0, fn x, acc ->
...>   if x > 0, do: {:cont, acc + x}, else: {:halt, acc}
...> end)
5050
```

### reject(enumerable, fun)

#### Specs

```
reject(t(), (element() -> as_boolean(term()))) :: list()
```

Returns a list of elements in `enumerable` excluding those for which the function `fun` returns a truthy value.

See also [`filter/2`](#filter/2).

#### Examples

```
iex> Enum.reject([1, 2, 3], fn x -> rem(x, 2) == 0 end)
[1, 3]
```

### reverse(enumerable)

#### Specs

```
reverse(t()) :: list()
```

Returns a list of elements in `enumerable` in reverse order.

#### Examples

```
iex> Enum.reverse([1, 2, 3])
[3, 2, 1]
```

### reverse(enumerable, tail)

#### Specs

```
reverse(t(), t()) :: list()
```

Reverses the elements in `enumerable`, appends the `tail`, and returns it as a list.

This is an optimization for `enumerable |> Enum.reverse() |> Enum.concat(tail)`.

#### Examples

```
iex> Enum.reverse([1, 2, 3], [4, 5, 6])
[3, 2, 1, 4, 5, 6]
```

### reverse\_slice(enumerable, start\_index, count)

#### Specs

```
reverse_slice(t(), non_neg_integer(), non_neg_integer()) :: list()
```

Reverses the `enumerable` in the range from initial `start_index` through `count` elements.

If `count` is greater than the size of the rest of the `enumerable`, then this function will reverse the rest of the enumerable.

#### Examples

```
iex> Enum.reverse_slice([1, 2, 3, 4, 5, 6], 2, 4)
[1, 2, 6, 5, 4, 3]
```

### scan(enumerable, fun)

#### Specs

```
scan(t(), (element(), any() -> any())) :: list()
```

Applies the given function to each element in the `enumerable`, storing the result in a list and passing it as the accumulator for the next computation. Uses the first element in the `enumerable` as the starting value.

#### Examples

```
iex> Enum.scan(1..5, &(&1 + &2))
[1, 3, 6, 10, 15]
```

### scan(enumerable, acc, fun)

#### Specs

```
scan(t(), any(), (element(), any() -> any())) :: list()
```

Applies the given function to each element in the `enumerable`, storing the result in a list and passing it as the accumulator for the next computation. Uses the given `acc` as the starting value.

#### Examples

```
iex> Enum.scan(1..5, 0, &(&1 + &2))
[1, 3, 6, 10, 15]
```

### shuffle(enumerable)

#### Specs

```
shuffle(t()) :: list()
```

Returns a list with the elements of `enumerable` shuffled.

This function uses Erlang's [`:rand` module](http://www.erlang.org/doc/man/rand.html) to calculate the random value. Check its documentation for setting a different random algorithm or a different seed.

#### Examples

```
# Although not necessary, let's seed the random algorithm
iex> :rand.seed(:exsplus, {1, 2, 3})
iex> Enum.shuffle([1, 2, 3])
[2, 1, 3]
iex> Enum.shuffle([1, 2, 3])
[2, 3, 1]
```

### slice(enumerable, index\_range)

#### Specs

```
slice(t(), Range.t()) :: list()
```

Returns a subset list of the given `enumerable` by `index_range`.

`index_range` must be a [`Range`](range). Given an `enumerable`, it drops elements before `index_range.first` (zero-based), then takes elements until element `index_range.last` (inclusively).

Indexes are normalized, meaning that negative indexes will be counted from the end (for example, `-1` means the last element of the `enumerable`). If `index_range.last` is out of bounds, then it is assigned as the index of the last element.
If the normalized `index_range.first` is out of bounds of the given `enumerable`, or if it is greater than the normalized `index_range.last`, then `[]` is returned.

#### Examples

```
iex> Enum.slice(1..100, 5..10)
[6, 7, 8, 9, 10, 11]

iex> Enum.slice(1..10, 5..20)
[6, 7, 8, 9, 10]

# last five elements (negative indexes)
iex> Enum.slice(1..30, -5..-1)
[26, 27, 28, 29, 30]

# last five elements (mixed positive and negative indexes)
iex> Enum.slice(1..30, 25..-1)
[26, 27, 28, 29, 30]

# out of bounds
iex> Enum.slice(1..10, 11..20)
[]

# index_range.first is greater than index_range.last
iex> Enum.slice(1..10, 6..5)
[]
```

### slice(enumerable, start\_index, amount)

#### Specs

```
slice(t(), index(), non_neg_integer()) :: list()
```

Returns a subset list of the given `enumerable`, from `start_index` (zero-based) with `amount` number of elements if available.

Given an `enumerable`, it drops elements right before element `start_index`, then takes `amount` of elements, returning as many elements as possible if there are not enough elements.

A negative `start_index` can be passed, which means the `enumerable` is enumerated once and the index is counted from the end (for example, `-1` starts slicing from the last element).

It returns `[]` if `amount` is `0` or if `start_index` is out of bounds.

#### Examples

```
iex> Enum.slice(1..100, 5, 10)
[6, 7, 8, 9, 10, 11, 12, 13, 14, 15]

# amount to take is greater than the number of elements
iex> Enum.slice(1..10, 5, 100)
[6, 7, 8, 9, 10]

iex> Enum.slice(1..10, 5, 0)
[]

# using a negative start index
iex> Enum.slice(1..10, -6, 3)
[5, 6, 7]

# out of bound start index (positive)
iex> Enum.slice(1..10, 10, 5)
[]

# out of bound start index (negative)
iex> Enum.slice(1..10, -11, 5)
[]
```

### sort(enumerable)

#### Specs

```
sort(t()) :: list()
```

Sorts the `enumerable` according to Erlang's term ordering.

Uses the merge sort algorithm.

#### Examples

```
iex> Enum.sort([3, 2, 1])
[1, 2, 3]
```

### sort(enumerable, fun)

#### Specs

```
sort(t(), (element(), element() -> boolean())) :: list()
```

Sorts the `enumerable` by the given function.

This function uses the merge sort algorithm. The given function should compare two arguments, and return `true` if the first argument precedes the second one.

#### Examples

```
iex> Enum.sort([1, 2, 3], &(&1 >= &2))
[3, 2, 1]
```

The sorting algorithm will be stable as long as the given function returns `true` for values considered equal:

```
iex> Enum.sort(["some", "kind", "of", "monster"], &(byte_size(&1) <= byte_size(&2)))
["of", "some", "kind", "monster"]
```

If the function does not return `true` for equal values, the sorting is not stable and the order of equal terms may be shuffled. For example:

```
iex> Enum.sort(["some", "kind", "of", "monster"], &(byte_size(&1) < byte_size(&2)))
["of", "kind", "some", "monster"]
```

### sort\_by(enumerable, mapper, sorter \\ &<=/2)

#### Specs

```
sort_by(
  t(),
  (element() -> mapped_element),
  (mapped_element, mapped_element -> boolean())
) :: list()
when mapped_element: element()
```

Sorts the mapped results of the `enumerable` according to the provided `sorter` function.

This function maps each element of the `enumerable` using the provided `mapper` function. The enumerable is then sorted by the mapped elements using the `sorter` function, which defaults to [`Kernel.<=/2`](kernel#%3C=/2).

[`sort_by/3`](#sort_by/3) differs from [`sort/2`](#sort/2) in that it only calculates the comparison value for each element in the enumerable once instead of once for each element in each comparison.
If the same function is being called on both elements, it's also more compact to use [`sort_by/3`](#sort_by/3). #### Examples Using the default `sorter` of [`<=/2`](kernel#%3C=/2): ``` iex> Enum.sort_by(["some", "kind", "of", "monster"], &byte_size/1) ["of", "some", "kind", "monster"] ``` Using a custom `sorter` to override the order: ``` iex> Enum.sort_by(["some", "kind", "of", "monster"], &byte_size/1, &>=/2) ["monster", "some", "kind", "of"] ``` Sorting by multiple properties - first by size, then by first letter (this takes advantage of the fact that tuples are compared element-by-element): ``` iex> Enum.sort_by(["some", "kind", "of", "monster"], &{byte_size(&1), String.first(&1)}) ["of", "kind", "some", "monster"] ``` ### split(enumerable, count) #### Specs ``` split(t(), integer()) :: {list(), list()} ``` Splits the `enumerable` into two enumerables, leaving `count` elements in the first one. If `count` is a negative number, it starts counting from the back to the beginning of the `enumerable`. Be aware that a negative `count` implies the `enumerable` will be enumerated twice: once to calculate the position, and a second time to do the actual splitting. #### Examples ``` iex> Enum.split([1, 2, 3], 2) {[1, 2], [3]} iex> Enum.split([1, 2, 3], 10) {[1, 2, 3], []} iex> Enum.split([1, 2, 3], 0) {[], [1, 2, 3]} iex> Enum.split([1, 2, 3], -1) {[1, 2], [3]} iex> Enum.split([1, 2, 3], -5) {[], [1, 2, 3]} ``` ### split\_while(enumerable, fun) #### Specs ``` split_while(t(), (element() -> as_boolean(term()))) :: {list(), list()} ``` Splits enumerable in two at the position of the element for which `fun` returns a falsy value (`false` or `nil`) for the first time. It returns a two-element tuple with two lists of elements. The element that triggered the split is part of the second list. #### Examples ``` iex> Enum.split_while([1, 2, 3, 4], fn x -> x < 3 end) {[1, 2], [3, 4]} iex> Enum.split_while([1, 2, 3, 4], fn x -> x < 0 end) {[], [1, 2, 3, 4]} iex> Enum.split_while([1, 2, 3, 4], fn x -> x > 0 end) {[1, 2, 3, 4], []} ``` ### split\_with(enumerable, fun) #### Specs ``` split_with(t(), (element() -> as_boolean(term()))) :: {list(), list()} ``` Splits the `enumerable` in two lists according to the given function `fun`. Splits the given `enumerable` in two lists by calling `fun` with each element in the `enumerable` as its only argument. Returns a tuple with the first list containing all the elements in `enumerable` for which applying `fun` returned a truthy value, and a second list with all the elements for which applying `fun` returned a falsy value (`false` or `nil`). The elements in both the returned lists are in the same relative order as they were in the original enumerable (if such enumerable was ordered, like a list). See the examples below. #### Examples ``` iex> Enum.split_with([5, 4, 3, 2, 1, 0], fn x -> rem(x, 2) == 0 end) {[4, 2, 0], [5, 3, 1]} iex> Enum.split_with(%{a: 1, b: -2, c: 1, d: -3}, fn {_k, v} -> v < 0 end) {[b: -2, d: -3], [a: 1, c: 1]} iex> Enum.split_with(%{a: 1, b: -2, c: 1, d: -3}, fn {_k, v} -> v > 50 end) {[], [a: 1, b: -2, c: 1, d: -3]} iex> Enum.split_with(%{}, fn {_k, v} -> v > 50 end) {[], []} ``` ### sum(enumerable) #### Specs ``` sum(t()) :: number() ``` Returns the sum of all elements. Raises [`ArithmeticError`](arithmeticerror) if `enumerable` contains a non-numeric value. 
#### Examples

```
iex> Enum.sum([1, 2, 3])
6
```

### take(enumerable, amount)

#### Specs

```
take(t(), integer()) :: list()
```

Takes `amount` elements from the beginning or the end of the `enumerable`.

If a positive `amount` is given, it takes up to `amount` elements from the beginning of the `enumerable`.

If a negative `amount` is given, the elements will be taken from the end. The `enumerable` will be enumerated once to retrieve the proper index and the remaining calculation is performed from the end.

If `amount` is `0`, it returns `[]`.

#### Examples

```
iex> Enum.take([1, 2, 3], 2)
[1, 2]

iex> Enum.take([1, 2, 3], 10)
[1, 2, 3]

iex> Enum.take([1, 2, 3], 0)
[]

iex> Enum.take([1, 2, 3], -1)
[3]
```

### take\_every(enumerable, nth)

#### Specs

```
take_every(t(), non_neg_integer()) :: list()
```

Returns a list of every `nth` element in the `enumerable`, starting with the first element.

The first element is always included, unless `nth` is `0`. The second argument specifying every `nth` element must be a non-negative integer.

#### Examples

```
iex> Enum.take_every(1..10, 2)
[1, 3, 5, 7, 9]

iex> Enum.take_every(1..10, 0)
[]

iex> Enum.take_every([1, 2, 3], 1)
[1, 2, 3]
```

### take\_random(enumerable, count)

#### Specs

```
take_random(t(), non_neg_integer()) :: list()
```

Takes `count` random elements from `enumerable`.

Note that this function will traverse the whole `enumerable` to get the random sublist.

See [`random/1`](#random/1) for notes on implementation and random seed.

#### Examples

```
# Although not necessary, let's seed the random algorithm
iex> :rand.seed(:exsplus, {1, 2, 3})
iex> Enum.take_random(1..10, 2)
[5, 4]

iex> Enum.take_random(?a..?z, 5)
'ipybz'
```

### take\_while(enumerable, fun)

#### Specs

```
take_while(t(), (element() -> as_boolean(term()))) :: list()
```

Takes the elements from the beginning of the `enumerable` while `fun` returns a truthy value.

#### Examples

```
iex> Enum.take_while([1, 2, 3], fn x -> x < 3 end)
[1, 2]
```

### to\_list(enumerable)

#### Specs

```
to_list(t()) :: [element()]
```

Converts `enumerable` to a list.

#### Examples

```
iex> Enum.to_list(1..3)
[1, 2, 3]
```

### uniq(enumerable)

#### Specs

```
uniq(t()) :: list()
```

Enumerates the `enumerable`, removing all duplicated elements.

#### Examples

```
iex> Enum.uniq([1, 2, 3, 3, 2, 1])
[1, 2, 3]
```

### uniq\_by(enumerable, fun)

#### Specs

```
uniq_by(t(), (element() -> term())) :: list()
```

Enumerates the `enumerable`, removing the elements for which the function `fun` returns duplicate terms.

The function `fun` maps every element to a term. Two elements are considered duplicates if the return value of `fun` is equal for both of them.

The first occurrence of each element is kept.

#### Examples

```
iex> Enum.uniq_by([{1, :x}, {2, :y}, {1, :z}], fn {x, _} -> x end)
[{1, :x}, {2, :y}]

iex> Enum.uniq_by([a: {:tea, 2}, b: {:tea, 2}, c: {:coffee, 1}], fn {_, y} -> y end)
[a: {:tea, 2}, c: {:coffee, 1}]
```

### unzip(enumerable)

#### Specs

```
unzip(t()) :: {[element()], [element()]}
```

Opposite of [`zip/2`](#zip/2). Extracts two-element tuples from the given `enumerable` and groups them together.

It takes an `enumerable` with elements being two-element tuples and returns a tuple with two lists, each of which is formed by the first and second element of each tuple, respectively.

This function fails unless `enumerable` is or can be converted into a list of tuples with *exactly* two elements in each tuple.
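For instance (an illustrative sketch, not part of the original examples), `unzip/1` reverses `zip/2` for a list of pairs:

```
# zipping two lists and unzipping recovers the originals
iex> [1, 2, 3] |> Enum.zip([:a, :b, :c]) |> Enum.unzip()
{[1, 2, 3], [:a, :b, :c]}
```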
#### Examples

```
iex> Enum.unzip([{:a, 1}, {:b, 2}, {:c, 3}])
{[:a, :b, :c], [1, 2, 3]}

iex> Enum.unzip(%{a: 1, b: 2})
{[:a, :b], [1, 2]}
```

### with\_index(enumerable, offset \\ 0)

#### Specs

```
with_index(t(), integer()) :: [{element(), index()}]
```

Returns the `enumerable` with each element wrapped in a tuple alongside its index.

If an `offset` is given, indexing starts from the given offset instead of from zero.

#### Examples

```
iex> Enum.with_index([:a, :b, :c])
[a: 0, b: 1, c: 2]

iex> Enum.with_index([:a, :b, :c], 3)
[a: 3, b: 4, c: 5]
```

### zip(enumerables)

#### Specs

```
zip(enumerables) :: [tuple()] when enumerables: [t()] | t()
```

Zips corresponding elements from a finite collection of enumerables into one list of tuples.

The zipping finishes as soon as any enumerable in the given collection completes.

#### Examples

```
iex> Enum.zip([[1, 2, 3], [:a, :b, :c], ["foo", "bar", "baz"]])
[{1, :a, "foo"}, {2, :b, "bar"}, {3, :c, "baz"}]

iex> Enum.zip([[1, 2, 3, 4, 5], [:a, :b, :c]])
[{1, :a}, {2, :b}, {3, :c}]
```

### zip(enumerable1, enumerable2)

#### Specs

```
zip(t(), t()) :: [{any(), any()}]
```

Zips corresponding elements from two enumerables into one list of tuples.

The zipping finishes as soon as either enumerable completes.

#### Examples

```
iex> Enum.zip([1, 2, 3], [:a, :b, :c])
[{1, :a}, {2, :b}, {3, :c}]

iex> Enum.zip([1, 2, 3, 4, 5], [:a, :b, :c])
[{1, :a}, {2, :b}, {3, :c}]
```
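Because the zipping stops at the shorter enumerable, an oversized range can safely supply indices, producing output similar to [`with_index/2`](#with_index/2) (a sketch, not part of the original examples):

```
# the range is only consumed as far as the list reaches
iex> Enum.zip([:a, :b, :c], 0..1000)
[a: 0, b: 1, c: 2]
```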