response | instruction
---|---|
Returns a Residual Shuffle Exchange Network model. | def ResidualShuffleExchange(vocab_size,
d_model,
input_dropout,
dropout,
mode='train',
n_blocks=2):
"""Returns a Residual Shuffle Exchange Network model."""
benes_blocks = [BenesBlock(d_model, dropout, mode) for _ in range(n_blocks)]
return tl.Serial(
tl.Embedding(vocab_size, d_model),
tl.Dropout(rate=input_dropout, mode=mode),
# Apply Benes Block n_blocks times.
*benes_blocks,
ResidualSwitchUnit(d_model, dropout, mode),
# Produce probabilities.
tl.Dense(vocab_size),
tl.LogSoftmax(),
) |
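A minimal usage sketch for the model above, assuming the module's helper layers (BenesBlock, ResidualSwitchUnit) are in scope and that `trax` is installed; Shuffle-Exchange models generally expect sequence lengths that are powers of two, so the length 16 below is an illustrative assumption.

import numpy as np
from trax import shapes

model = ResidualShuffleExchange(vocab_size=16, d_model=32, input_dropout=0.0,
                                dropout=0.0, mode='eval', n_blocks=1)
tokens = np.zeros((2, 16), dtype=np.int32)   # (batch, length)
model.init(shapes.signature(tokens))
log_probs = model(tokens)                    # (2, 16, 16) log-probabilities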
Returns np.array of size (1, length, dim) where x[0, a, b] = a. | def _input_with_indice_as_values(length, dim):
"""Returns np.array of size (1, length, dim) where x[0, a, b] = a."""
positions = []
for i in range(length):
positions.append([i] * dim)
positions_input = np.array(positions)
positions_input = np.expand_dims(positions_input, axis=0)
return positions_input |
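A quick worked example of the helper above:

x = _input_with_indice_as_values(3, 2)
# x == [[[0, 0], [1, 1], [2, 2]]], shape (1, 3, 2), and x[0, a, b] == a.
assert x.shape == (1, 3, 2) and x[0, 2, 1] == 2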
Returns a highly configurable Terraformer encoder-decoder model.
This model maps paired text sequences (source and target) to float-valued
losses. If ``input_vocab_size`` is not ``None``, the layer takes
two input sequences:
- inputs (2):
- source: 2-D int array representing a batch of text strings via token
IDs plus padding markers; shape is `(batch_size, sequence_length)`,
where sequence_length <= ``max_len``. Array elements are in
``range(input_vocab_size)``, and 0 values mark padding positions.
- target: 2-D int array representing a batch of text strings via token
IDs plus padding markers; shape is `(batch_size, sequence_length)`,
where sequence_length <= ``max_len``. Array elements are in
``range(output_vocab_size)``, and 0 values mark padding positions.
- output: 1-D float array of losses; shape is `(batch_size)`.
If ``input_vocab_size`` is ``None``, the layer takes three input sequences:
- inputs (3):
- source: 3-D float array representing a batch of already-embedded text
strings; shape is `(batch_size, sequence_length, d_model)`, where
sequence_length <= ``max_len``.
- mask: 2-D int array representing active versus masked positions; 0
values mark masked (padding) positions.
- target: 2-D int array representing a batch of text strings via token
IDs plus padding markers; shape is `(batch_size, sequence_length)`,
where sequence_length <= ``max_len``. Array elements are in
``range(output_vocab_size)``, and 0 values mark padding positions.
- output: 1-D float array of losses; shape is `(batch_size)`.
Args:
input_vocab_size: Input vocabulary size -- each element of the input tensor
should be an integer in ``range(vocab_size)``. These integers typically
represent token IDs from a vocabulary-based tokenizer.
output_vocab_size: If specified, gives the vocabulary size for the targets;
if ``None``, then input and target integers (token IDs) are assumed to
come from the same vocabulary.
d_model: Last/innermost dimension of activation arrays at most points in
the model, including the initial embedding output.
d_ff: Last/innermost dimension of special (typically wider)
:py:class:`Dense` layer in the feedforward part of each encoder block.
d_attention_key: Depth of key vectors in each attention head.
d_attention_value: Depth of value vectors in each attention head.
n_encoder_layers: Number of encoder blocks.
n_decoder_layers: Number of decoder blocks.
n_heads: Number of attention heads.
dropout: Stochastic rate (probability) for dropping an activation value
when applying dropout within encoder/decoder blocks. The same rate is
also used for attention dropout in encoder/decoder blocks.
max_len: Maximum symbol length for positional encoding.
encoder_attention_type: Type of attention to use in the encoder; must be
an attention-type subclass of :py:class:`trax.layers.Layer`.
encoder_decoder_attention_type: Type of attention to use in the decoder;
must be an attention-type subclass of :py:class:`trax.layers.Layer`.
pos_type: String indicating the type of positional embeddings to use.
pos_axial_shape: Shape (tuple of ints) to use for the axial position
encoding. If unset, axial position encoding is disabled.
pos_d_axial_embs: Tuple of ints specifying the depth of position embedding
for each axis. Tuple length must match ``pos_axial_shape``, and values
must sum to ``d_model``.
pos_start_from_zero_prob: Stochastic rate (probability) for starting
positional encoding at position 0 during training. If 1.0, always start
from position 0; if < 1.0, the non-zero starts will be uniformly
distributed up to ``pos_max_offset_to_add``.
pos_max_offset_to_add: Maximum offset to add to positions during training
when randomizing. This offset plus input length must be less than
``max_len`` for all training examples.
ff_activation: Type of activation function at the end of each block; must
be an activation-type subclass of :py:class:`trax.layers.Layer`.
ff_use_sru: If > 0, use this number of SRU layers in place of feedforward
layers.
ff_chunk_size: If > 0, chunk each feedforward layer into chunks of this
size.
ff_dropout: Stochastic rate (probability) for dropping an activation value
at feedforward nonlinearities.
ff_sparsity: If > 0, use sparse feedforward blocks with this level of
sparsity.
loss_sparsity_type: String indicating the type of sparsity to use in the loss
layer; see :py:class:`SparseDenseWithOptions` for options. If ``None``,
use no sparsity.
loss_sparsity: If > 0, use this level of sparsity in the loss layer.
loss_d_lowrank: If > 0, use a (low-rank) intermediate layer, with this
dimension, in the loss.
loss_sparsity_prob: Stochastic rate (probability) for using the sparse
version of the loss. If ``None``, use the sparse version exclusively.
attention_chunk_size: If > 0, compute attention using chunks of this size.
n_layers_forget: How often to have a forgetting block between layers.
forget_dense: If True, use :py:class:`Dense` instances as forget layers;
else use no-ops.
n_decoder_attention_layers: Number of attention layers in a decoder block.
use_bfloat16: If True, use bfloat16 for weights; else use float32.
reversible_encoder: If True, make the encoder be reversible.
use_two_swaps_per_encoder_block: If True, ensure that there is an even
number of swaps across the encoder.
center_layernorm: If True, use centering in :py:class:`LayerNorm` (the
default); else omit centering (which is known as RMS normalization).
half_before_layer: If not None, specifies an n'th layer such that all
layers before the n'th use half the normal values for ``d_model`` and
``d_ff``.
double_after_layer: If not None, specifies an n'th layer such that all
layers after the n'th use double the normal values for ``d_model`` and
``d_ff``.
mode: If ``'train'``, include dropout in each encoder/decoder block; else
dropout layers have no effect.
Returns:
A Terraformer encoder-decoder as a layer that maps from target and source
text sequences to a scalar loss. | def ConfigurableTerraformer(input_vocab_size,
output_vocab_size=None,
d_model=512,
d_ff=2048,
d_attention_key=None,
d_attention_value=None,
n_encoder_layers=6,
n_decoder_layers=6,
n_heads=8,
dropout=0.1,
max_len=2048,
encoder_attention_type=tl.SelfAttention,
encoder_decoder_attention_type=tl.SelfAttention,
pos_type='fixed-base',
pos_axial_shape=(),
pos_d_axial_embs=None,
pos_start_from_zero_prob=1.0,
pos_max_offset_to_add=0,
ff_activation=tl.Relu,
ff_use_sru=0,
ff_chunk_size=0,
ff_dropout=None,
ff_sparsity=0,
loss_sparsity_type='mult',
loss_sparsity=0,
loss_d_lowrank=0,
loss_sparsity_prob=None,
attention_chunk_size=0,
n_layers_forget=0,
forget_dense=True,
n_decoder_attention_layers=2,
use_bfloat16=False,
reversible_encoder=False,
use_two_swaps_per_encoder_block=True,
center_layernorm=True,
half_before_layer=None,
double_after_layer=None,
mode='train'):
"""Returns a highly configurable Terraformer encoder-decoder model.
This model maps paired text sequences (source and target) to float-valued
losses. If ``input_vocab_size`` is not ``None``, the layer takes
two input sequences:
- inputs (2):
- source: 2-D int array representing a batch of text strings via token
IDs plus padding markers; shape is `(batch_size, sequence_length)`,
where sequence_length <= ``max_len``. Array elements are in
``range(input_vocab_size)``, and 0 values mark padding positions.
- target: 2-D int array representing a batch of text strings via token
IDs plus padding markers; shape is `(batch_size, sequence_length)`,
where sequence_length <= ``max_len``. Array elements are in
``range(output_vocab_size)``, and 0 values mark padding positions.
- output: 1-D float array of losses; shape is `(batch_size)`.
If ``input_vocab_size`` is ``None``, the layer takes three input sequences:
- inputs (3):
- source: 3-D float array representing a batch of already-embedded text
strings; shape is `(batch_size, sequence_length, d_model)`, where
sequence_length <= ``max_len``.
- mask: 2-D int array representing active versus masked positions; 0
values mark masked (padding) positions.
- target: 2-D int array representing a batch of text strings via token
IDs plus padding markers; shape is `(batch_size, sequence_length)`,
where sequence_length <= ``max_len``. Array elements are in
``range(output_vocab_size)``, and 0 values mark padding positions.
- output: 1-D float array of losses; shape is `(batch_size)`.
Args:
input_vocab_size: Input vocabulary size -- each element of the input tensor
should be an integer in ``range(vocab_size)``. These integers typically
represent token IDs from a vocabulary-based tokenizer.
output_vocab_size: If specified, gives the vocabulary size for the targets;
if ``None``, then input and target integers (token IDs) are assumed to
come from the same vocabulary.
d_model: Last/innermost dimension of activation arrays at most points in
the model, including the initial embedding output.
d_ff: Last/innermost dimension of special (typically wider)
:py:class:`Dense` layer in the feedforward part of each encoder block.
d_attention_key: Depth of key vectors in each attention head.
d_attention_value: Depth of value vectors in each attention head.
n_encoder_layers: Number of encoder blocks.
n_decoder_layers: Number of decoder blocks.
n_heads: Number of attention heads.
dropout: Stochastic rate (probability) for dropping an activation value
when applying dropout within encoder/decoder blocks. The same rate is
also used for attention dropout in encoder/decoder blocks.
max_len: Maximum symbol length for positional encoding.
encoder_attention_type: Type of attention to use in the encoder; must be
an attention-type subclass of :py:class:`trax.layers.Layer`.
encoder_decoder_attention_type: Type of attention to use in the decoder;
must be an attention-type subclass of :py:class:`trax.layers.Layer`.
pos_type: String indicating the type of positional embeddings to use.
pos_axial_shape: Shape (tuple of ints) to use for the axial position
encoding. If unset, axial position encoding is disabled.
pos_d_axial_embs: Tuple of ints specifying the depth of position embedding
for each axis. Tuple length must match ``pos_axial_shape``, and values
must sum to ``d_model``.
pos_start_from_zero_prob: Stochastic rate (probability) for starting
positional encoding at position 0 during training. If 1.0, always start
from position 0; if < 1.0, the non-zero starts will be uniformly
distributed up to ``pos_max_offset_to_add``.
pos_max_offset_to_add: Maximum offset to add to positions during training
when randomizing. This offset plus input length must be less than
``max_len`` for all training examples.
ff_activation: Type of activation function at the end of each block; must
be an activation-type subclass of :py:class:`trax.layers.Layer`.
ff_use_sru: If > 0, use this number of SRU layers in place of feedforward
layers.
ff_chunk_size: If > 0, chunk each feedforward layer into chunks of this
size.
ff_dropout: Stochastic rate (probability) for dropping an activation value
at feedforward nonlinearities.
ff_sparsity: If > 0, use sparse feedforward blocks with this level of
sparsity.
loss_sparsity_type: String indicating the type of sparsity to use in the loss
layer; see :py:class:`SparseDenseWithOptions` for options. If ``None``,
use no sparsity.
loss_sparsity: If > 0, use this level of sparsity in the loss layer.
loss_d_lowrank: If > 0, use a (low-rank) intermediate layer, with this
dimension, in the loss.
loss_sparsity_prob: Stochastic rate (probability) for using the sparse
version of the loss. If ``None``, use the sparse version exclusively.
attention_chunk_size: If > 0, compute attention using chunks of this size.
n_layers_forget: How often to have a forgetting block between layers.
forget_dense: If True, use :py:class:`Dense` instances as forget layers;
else use no-ops.
n_decoder_attention_layers: Number of attention layers in a decoder block.
use_bfloat16: If True, use bfloat16 for weights; else use float32.
reversible_encoder: If True, make the encoder be reversible.
use_two_swaps_per_encoder_block: If True, ensure that there is an even
number of swaps across the encoder.
center_layernorm: If True, use centering in :py:class:`LayerNorm` (the
default); else omit centering (which is known as RMS normalization).
half_before_layer: If not None, specifies an n'th layer such that all
layers before the n'th use half the normal values for ``d_model`` and
``d_ff``.
double_after_layer: If not None, specifies an n'th layer such that all
layers after the n'th use double the normal values for ``d_model`` and
``d_ff``.
mode: If ``'train'``, include dropout in each encoder/decoder block; else
dropout layers have no effect.
Returns:
A Terraformer encoder-decoder as a layer that maps from target and source
text sequences to a scalar loss.
"""
if mode == 'predict':
portal_mask = _PortalInput()
else:
portal_mask = None
# Set default dimensions for attention head key and value sizes.
if (d_model / 2) % n_heads != 0:
raise ValueError(f'n_heads ({n_heads}) must divide d_model/2 ({d_model/2})')
if d_attention_key is None:
d_attention_key = d_model // n_heads
if d_attention_value is None:
d_attention_value = d_model // n_heads
# Set values of d_model, d_ff and d_qkv for the first stage.
d_model1, d_ff1 = d_model, d_ff
d_attention_key1, d_attention_value1 = d_attention_key, d_attention_value
if half_before_layer:
d_model1, d_ff1 = d_model // 2, d_ff // 2  # integer division: sizes stay ints
d_attention_key1 = d_attention_key // 2
d_attention_value1 = d_attention_value // 2
# Set values of d_model, d_ff and d_qkv for the final stage.
d_model2, d_ff2 = d_model, d_ff
d_attention_key2, d_attention_value2 = d_attention_key, d_attention_value
if double_after_layer:
d_model2, d_ff2 = d_model * 2, d_ff * 2
d_attention_key2 = d_attention_key * 2
d_attention_value2 = d_attention_value * 2
# Vector embeddings.
in_encoder, out_encoder, output_vocab_size = (
ct.EmbeddingAndPositionalEncodings(
input_vocab_size,
d_model1,
mode,
dropout,
[-2], # dropout_shared_axes
max_len,
output_vocab_size=output_vocab_size,
pos_type=pos_type,
pos_axial_shape=pos_axial_shape,
pos_d_axial_embs=pos_d_axial_embs,
pos_start_from_zero_prob=pos_start_from_zero_prob,
pos_max_offset_to_add=pos_max_offset_to_add,
use_bfloat16=use_bfloat16)
)
def _EncoderBlock():
return reformer.EncoderBlock(
d_model1,
d_ff1,
n_heads,
encoder_attention_type,
dropout=dropout,
ff_activation=ff_activation,
ff_dropout=ff_dropout,
ff_use_sru=ff_use_sru,
ff_chunk_size=ff_chunk_size,
ff_sparsity=ff_sparsity,
attention_chunk_size=attention_chunk_size,
center_layernorm=center_layernorm,
use_bfloat16=use_bfloat16,
use_two_swaps_per_block=use_two_swaps_per_encoder_block,
mode=mode)
def _Encoder(): # vec_e mask_e tok_e tok_d tok_d
layers = [
tl.ReversibleSelect([0, 0]),
_ReversibleSerialForget(
[_EncoderBlock() for _ in range(n_encoder_layers)],
d_model1,
n_layers_forget,
forget_dense)
]
if not reversible_encoder:
layers += [
_XYAvg(),
tl.Dense(d_model1, use_bfloat16=use_bfloat16),
tl.LayerNorm(),
]
if mode == 'predict':
return tl.Cache(tl.Serial(layers))
else:
return tl.Serial(layers)
if mode == 'predict':
# TODO(jaszczur): Remove temporary fix of Terraformer padding in predict.
# In predict mode Terraformer needs masking for the merged encoder-decoder
# sequence. This monkey-patches a mask into the necessary places.
# This shouldn't be a permanent solution - the mask should be passed through
# the stack to all the layers.
tl.attention.DotProductCausalAttention.monkey_patched_mask = (
lambda x: portal_mask)
tl.research.sparsity._RememberPad.monkey_patched_mask = ( # pylint: disable=protected-access
lambda x: portal_mask)
originalScanSRUCell = tl.rnn.ScanSRUCell
tl.rnn.ScanSRUCell = functools.partial(tl.rnn.ScanSRUCell,
monkey_patched_mask=portal_mask)
decoder_blocks = []
if isinstance(encoder_decoder_attention_type, (tuple, list)):
assert n_decoder_layers % len(encoder_decoder_attention_type) == 0
else:
encoder_decoder_attention_type = [encoder_decoder_attention_type]
for layer_idx in range(n_decoder_layers):
layer_attention_type = encoder_decoder_attention_type[
layer_idx % len(encoder_decoder_attention_type)]
# Grow d_model, d_ff, and d_qkv if requested.
d_m, d_f, d_k, d_v = d_model1, d_ff1, d_attention_key1, d_attention_value1
if half_before_layer and layer_idx >= half_before_layer:
d_m, d_f, d_k, d_v = d_model, d_ff, d_attention_key, d_attention_value
if double_after_layer and layer_idx > double_after_layer:
d_m, d_f, d_k, d_v = d_model2, d_ff2, d_attention_key2, d_attention_value2
decoder_block = reformer.DecoderBlock(
d_m, d_f, d_k, d_v, n_heads,
attention_type=layer_attention_type,
dropout=dropout,
ff_activation=ff_activation,
ff_dropout=ff_dropout,
ff_use_sru=ff_use_sru,
ff_chunk_size=ff_chunk_size,
ff_sparsity=ff_sparsity,
attention_chunk_size=attention_chunk_size,
n_attention_layers=n_decoder_attention_layers,
center_layernorm=center_layernorm,
use_bfloat16=use_bfloat16,
mode=mode)
decoder_blocks.append(decoder_block)
if half_before_layer and layer_idx == half_before_layer - 1:
decoder_blocks.append(tl.ReversibleConcatenatePair())
if double_after_layer and layer_idx == double_after_layer:
decoder_blocks.append(tl.ReversibleConcatenatePair())
if mode == 'predict':
# After initializing the decoder we can revert to original state of
# previously monkey-patched classes/functions.
tl.attention.DotProductCausalAttention.monkey_patched_mask = (
lambda x: None)
tl.research.sparsity._RememberPad.monkey_patched_mask = (lambda x: None) # pylint: disable=protected-access
tl.rnn.ScanSRUCell = originalScanSRUCell
def _Loss():
return tl.SparseDenseWithOptions(
output_vocab_size,
d_input=d_model2,
sparsity_type=loss_sparsity_type,
sparsity=loss_sparsity,
d_lowrank=loss_d_lowrank,
prob_sparse=loss_sparsity_prob,
use_bfloat16=use_bfloat16,
mode=mode)
def _enc_dec_concat():
"""Layers to merge encoder and decoder."""
if reversible_encoder:
return [
tl.ReversibleSelect([0, 1, 4, 2, 3]), # v_e v_d mask_e tok_e tok_d
t2.ConcatWithPadding2(mode=mode), # v_ed v_ed tok_e tok_d
]
else:
return [
tl.ReversibleSelect([0, 3, 1, 2]), # v_e v_d mask_e tok_e tok_d
t2.ConcatWithPadding(mode=mode), # v_ed tok_e tok_d
tl.ReversibleSelect([0, 0]), # v_ed v_ed tok_e tok_d
]
def _inp_layers():
if input_vocab_size is not None:
return tl.AssertFunction(
'bl,br->bld,bl,bl,br', # b: batch, l/r: enc/dec length, d: vec depth
tl.Serial( # tok_e tok_d
tl.Select([0, 0, 0, 1]),
tl.Parallel(in_encoder, [tl.PaddingMask(),
_RemoveAxes12()])
)) # vec_e mask_e tok_e tok_d
else:
# Input in this case is vec_e, mask_e, tok_d. Where downstream operations
# expect tok_e, we pass mask_e instead, on the expectation that downstream
# ops only check for padding / non-padding.
return tl.AssertFunction(
'blf,bl,br->bld,bl,bl,br', # f: in-feature depth, d: out-vector depth
tl.Serial( # vec_e mask_e tok_d
tl.Select([0, 1, 1, 2]),
tl.Parallel(in_encoder, [], _AsTokenIDs())
)) # vec_e mask_e tok_e tok_d
# Assemble and return the model.
return tl.Serial(
_inp_layers(), # vec_e mask_e tok_e tok_d
tl.Parallel([], portal_mask),
tl.Select([0, 1, 2, 3, 3]), # Copy decoder tokens for use in loss.
# Embed in and out tokens; done together as weights may be shared.
tl.Parallel([], [], [], [tl.ShiftRight(mode=mode),
out_encoder]), # vec_e mask_e tok_e vec_d tok_d
# Encode; then concat encoder and decoder, given encoder mask.
_Encoder(), # vec_e mask_e tok_e vec_d tok_d
_enc_dec_concat(),
# Run decoder blocks.
_ReversibleSerialForget(decoder_blocks, d_model2, n_layers_forget,
forget_dense), # vec_ed1 vec_ed2 tok_e tok_d
_XYAvg(), # vec_ed tok_e tok_d
tl.LayerNorm(), # vec_ed tok_e tok_d
# Separate out the encoder part from the concatenated vector,
# then compute loss.
tl.Select([0, 1, 2, 2]), # vec_ed tok_e tok_d tok_d
t2.StripFromConcatenateWithPadding(mode=mode), # vec_d tok_d
_Loss(), # vec_d tok_d
) |
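A hedged instantiation sketch for ConfigurableTerraformer; the small dimensions below are illustrative only, not recommended settings.

model = ConfigurableTerraformer(input_vocab_size=256,
                                d_model=128, d_ff=512,
                                n_encoder_layers=2, n_decoder_layers=2,
                                n_heads=4, max_len=512, mode='eval')
# Per the docstring, the layer maps (source_tokens, target_tokens) batches,
# each of shape (batch_size, sequence_length), to per-sequence losses.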
Returns a layer that inserts two internal size-1 axes into an array. | def _InsertAxes12():
"""Returns a layer that inserts two internal size-1 axes into an array."""
return tl.Fn('InsertAxes12',
lambda x: jnp.reshape(x, (x.shape[0], 1, 1, x.shape[1]))) |
Returns a layer that removes two internal size-1 axes from an array. | def _RemoveAxes12():
"""Returns a layer that removes two internal size-1 axes from an array."""
return tl.Fn('RemoveAxes12', lambda x: jnp.squeeze(x, (1, 2))) |
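A small sketch showing the two axis helpers above are inverses on a padding mask (`tl.Fn` layers carry no weights, so they can be applied directly to arrays):

mask = jnp.ones((2, 5))
expanded = _InsertAxes12()(mask)      # shape (2, 1, 1, 5)
restored = _RemoveAxes12()(expanded)  # shape (2, 5)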
Returns a layer that makes mask values look like token ID ints. | def _AsTokenIDs():
"""Returns a layer that makes mask values look like token ID ints."""
return tl.Fn('AsTokenIDs', lambda x: x.astype(jnp.int32)) |
Returns a layer that computes the element-wise average of two arrays. | def _XYAvg():
"""Returns a layer that computes the element-wise average of two arrays."""
return tl.Fn('XYAvg', lambda x, y: (x + y) / 2.0) |
ReversibleSerial but with a forgetting block every n_layers. | def _ReversibleSerialForget(layers, d_model, n_layers, forget_dense=True):
"""ReversibleSerial but with a forgetting block every n_layers."""
if not n_layers or len(layers) <= n_layers + 1:
return tl.ReversibleSerial(layers)
layers1, layers2 = layers[:n_layers], layers[n_layers:]
if forget_dense:
forgetting_layer = tl.Serial(
_XYAvg(),
tl.Dense(d_model),
tl.Dup(),
)
else:
forgetting_layer = tl.Select([0, 1])
return tl.Serial(
tl.ReversibleSerial(layers1),
forgetting_layer,
_ReversibleSerialForget(layers2, d_model, n_layers, forget_dense)
) |
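To illustrate the recursion above, a comment sketch with hypothetical layers l1..l5:

# _ReversibleSerialForget([l1, l2, l3, l4, l5], d_model, n_layers=2) builds
#   Serial(ReversibleSerial([l1, l2]), forgetting_layer,
#          ReversibleSerial([l3, l4, l5]))
# since the trailing group of 3 layers satisfies len(layers) <= n_layers + 1.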
Returns a Transformer model.
This model expects an input pair: target, source.
Args:
input_vocab_size: int: vocab size of the source.
output_vocab_size: int (optional): vocab size of the target. If None, the
source and target are assumed to have the same vocab.
d_model: int: depth of embedding
d_ff: int: depth of feed-forward layer
n_encoder_layers: int: number of encoder layers
n_decoder_layers: int: number of decoder layers
n_heads: int: number of attention heads
dropout: float: dropout rate (how much to drop out)
dropout_shared_axes: axes on which to share dropout mask
max_len: int: maximum symbol length for positional encoding
mode: str: 'train' or 'eval'
ff_activation: the non-linearity in feed-forward layer
ff_dropout: Stochastic rate (probability) for dropping an activation value
when applying dropout after the FF dense layer.
ff_chunk_size: int; if > 0, chunk feed-forward into this-sized chunks
ff_use_sru: int; if > 0, we use this many SRU layers instead of feed-forward
ff_sparsity: int, if > 0 use sparse feed-forward block with this sparsity
ff_sparsity_type: string, if ff_sparsity >0,
use SparseFF if ff_sparsity_type=`'1inN'` and
use BlockSparseFF if ff_sparsity_type=`'Block'`
attention_chunk_size: int, if > 0 run attention chunked at this size
encoder_attention_type: The attention layer to use for the encoder part.
n_encoder_attention_layers: int, within each encoder block, how many
attention layers to have.
decoder_attention_type: The attention layer to use for the
encoder-decoder attention.
n_decoder_attention_layers: int, within each decoder block, how many
attention layers to have.
pos_type: string, the type of positional embeddings to use.
pos_axial_shape: tuple of ints: input shape to use for the axial position
encoding. If unset, axial position encoding is disabled.
pos_d_axial_embs: tuple of ints: depth of position embedding for each axis.
Tuple length must match pos_axial_shape, and values must sum to d_model.
Returns:
A Transformer model as a layer that maps from a target, source pair to
activations over a vocab set. | def Transformer2(input_vocab_size,
output_vocab_size=None,
d_model=512,
d_ff=2048,
n_encoder_layers=6,
n_decoder_layers=6,
n_heads=8,
dropout=0.1,
dropout_shared_axes=None,
max_len=2048,
mode='train',
ff_activation=tl.Relu,
ff_dropout=0.1,
ff_chunk_size=0,
ff_use_sru=0,
ff_sparsity=0,
ff_sparsity_type='1inN',
attention_chunk_size=0,
encoder_attention_type=tl.Attention,
n_encoder_attention_layers=1,
decoder_attention_type=tl.CausalAttention,
n_decoder_attention_layers=2,
pos_type=None,
pos_axial_shape=None,
pos_d_axial_embs=None):
"""Returns a Transformer model.
This model expects an input pair: target, source.
Args:
input_vocab_size: int: vocab size of the source.
output_vocab_size: int (optional): vocab size of the target. If None, the
source and target are assumed to have the same vocab.
d_model: int: depth of embedding
d_ff: int: depth of feed-forward layer
n_encoder_layers: int: number of encoder layers
n_decoder_layers: int: number of decoder layers
n_heads: int: number of attention heads
dropout: float: dropout rate (how much to drop out)
dropout_shared_axes: axes on which to share dropout mask
max_len: int: maximum symbol length for positional encoding
mode: str: 'train' or 'eval'
ff_activation: the non-linearity in feed-forward layer
ff_dropout: Stochastic rate (probability) for dropping an activation value
when applying dropout after the FF dense layer.
ff_chunk_size: int; if > 0, chunk feed-forward into this-sized chunks
ff_use_sru: int; if > 0, we use this many SRU layers instead of feed-forward
ff_sparsity: int, if > 0 use sparse feed-forward block with this sparsity
ff_sparsity_type: string, if ff_sparsity >0,
use SparseFF if ff_sparsity_type=`'1inN'` and
use BlockSparseFF if ff_sparsity_type=`'Block'`
attention_chunk_size: int, if > 0 run attention chunked at this size
encoder_attention_type: The attention layer to use for the encoder part.
n_encoder_attention_layers: int, within each encoder block, how many
attention layers to have.
decoder_attention_type: The attention layer to use for the
encoder-decoder attention.
n_decoder_attention_layers: int, within each decoder block, how many
attention layers to have.
pos_type: string, the type of positional embeddings to use.
pos_axial_shape: tuple of ints: input shape to use for the axial position
encoding. If unset, axial position encoding is disabled.
pos_d_axial_embs: tuple of ints: depth of position embedding for each axis.
Tuple length must match pos_axial_shape, and values must sum to d_model.
Returns:
A Transformer model as a layer that maps from a target, source pair to
activations over a vocab set.
"""
in_encoder, out_encoder, output_vocab_size = (
ct.EmbeddingAndPositionalEncodings(
input_vocab_size,
d_model,
mode,
dropout,
dropout_shared_axes,
max_len,
output_vocab_size=output_vocab_size,
pos_type=pos_type,
pos_axial_shape=pos_axial_shape,
pos_d_axial_embs=pos_d_axial_embs)
)
# pylint: disable=g-complex-comprehension
encoder_blocks = [
ct.EncoderBlock(d_model, d_ff, n_heads, dropout, dropout_shared_axes,
mode, ff_activation, ff_dropout, ff_chunk_size,
ff_use_sru, ff_sparsity, ff_sparsity_type,
attention_chunk_size, encoder_attention_type,
n_encoder_attention_layers)
for i in range(n_encoder_layers)]
# pylint: enable=g-complex-comprehension
encoder = tl.Serial(
in_encoder,
encoder_blocks,
tl.LayerNorm()
)
if mode == 'predict':
encoder = tl.Cache(encoder)
# pylint: disable=g-complex-comprehension
decoder_blocks = [
ct.DecoderBlock(d_model, d_ff, n_heads, dropout, dropout_shared_axes,
mode, ff_activation, ff_dropout, ff_chunk_size,
ff_use_sru, ff_sparsity, ff_sparsity_type,
attention_chunk_size, decoder_attention_type,
n_decoder_attention_layers)
for i in range(n_decoder_layers)]
# pylint: enable=g-complex-comprehension
# Assemble and return the model.
return tl.Serial(
# Input: encoder_side_tokens, decoder_side_tokens
# Copy decoder tokens for use in loss.
tl.Select([0, 0, 1, 1]), # tok_e tok_e tok_d tok_d
# Encode.
tl.Branch([], tl.PaddingMask()), # tok_e mask_e tok_e tok_d tok_d
encoder, # vec_e mask_e tok_e tok_d tok_d
# Simple encoder mask, doesn't contain extra dims.
tl.Select([2, 0, 2], n_in=3), # tok_e vec_e tok_e tok_d tok_d
tl.Fn('EncoderMask', # mask_e vec_e tok_e tok_d tok_d
lambda x: x != 0, n_out=1),
# Decode.
tl.Select([3, 1, 0, 2]), # tok_d vec_e mask_e tok_e tok_d
tl.ShiftRight(mode=mode), # stok_d vec_e mask_e tok_e tok_d
out_encoder, # svec_d vec_e mask_e tok_e tok_d
# Concat encoder and decoder.
tl.Select([1, 0]), # vec_e svec_d mask_e tok_e tok_d
ConcatWithPadding(mode=mode), # vec_ed tok_e tok_d
# Decoder blocks with causal attention
decoder_blocks, # vec_ed tok_e tok_d
tl.LayerNorm(), # vec_ed tok_e tok_d
# Separate out the encoder part from the concatenated vector.
tl.Select([0, 1, 2, 2]), # vec_ed tok_e tok_d tok_d
StripFromConcatenateWithPadding(mode=mode), # vec_d tok_d
# Map to output vocab.
tl.Dense(output_vocab_size), # vec_d tok_d
) |
Concatenate with padding: see the ConcatWithPadding layer for details. | def _ConcatWithPadding(vec_e, vec_d, mask_e):
"""Concatenate with padding: see the ConcatWithPadding layer for details."""
# pylint: disable=invalid-name
B, L1, H = vec_e.shape
L2 = vec_d.shape[1]
# pylint: enable=invalid-name
if vec_d.shape != (B, L2, H):
raise ValueError(f'Shape of decoder vector, {vec_d.shape}, does not'
f' equal {(B, L2, H)}.')
if mask_e.shape != (B, L1):
raise ValueError(f'Shape of encoder mask, {mask_e.shape}, does not'
f' equal {(B, L1)}.')
def _UpdateRow(x):
# row_e - (L1, H), row_d - (L2, H), row_mask_e - (L1,)
row_e, row_d, row_mask_e = x
# final_row - (L1+L2, H)
final_row = jnp.concatenate([row_e, jnp.zeros_like(row_d)], axis=0)
# Find the last real token/vector of the encoder.
e_idx = jnp.sum(row_mask_e, dtype=jnp.int32)
# Starting after that index, update with the decoder row.
zero = jnp.array(0, dtype=e_idx.dtype) # avoid int32/int64 mismatch
return fastmath.dynamic_update_slice(final_row, row_d, (e_idx, zero))
return fastmath.map(_UpdateRow, [vec_e, vec_d, mask_e]) |
Strip concatenate with padding: see the layer below for details. | def _StripFromConcatenateWithPadding(vec_ed, tok_e, tok_d):
"""Strip concatenate with padding: see the layer below for details."""
# pylint: disable=invalid-name
B, L, H = vec_ed.shape
L1 = tok_e.shape[1]
L2 = tok_d.shape[1]
# pylint: enable=invalid-name
if L != L1 + L2:
raise ValueError(f'Length from encoder-decoder vectors ({L}) does not'
f' equal sum of lengths from encoder ({L1}) and decoder'
f' ({L2}).')
if tok_e.shape != (B, L1):
raise ValueError(f'Shape of encoder tokens, {tok_e.shape}, does not'
f' equal {(B, L1)}.')
if tok_d.shape != (B, L2):
raise ValueError(f'Shape of decoder tokens, {tok_d.shape}, does not'
f' equal {(B, L2)}.')
def _UpdateRow(x):
# (L, H), (L1, H) & (L2, H)
row_ed, row_e, _ = x
mask_e = row_e != 0
len_e = jnp.sum(mask_e, dtype=jnp.int32)
# In `row_ed` start where encoder tokens/vecs end, i.e. are index `len_e`
# and pick up (L2, H) tensor slice from there.
zero = jnp.array(0, dtype=len_e.dtype) # avoid int32/int64 mismatch
return fastmath.dynamic_slice(row_ed, (len_e, zero), (L2, H))
return fastmath.map(_UpdateRow, [vec_ed, tok_e, tok_d]) |
Returns an L2 norm computed over all elements of all tensors in `tree`.
Args:
tree: Tree-structured collection of tensors, e.g., model weights matching
the model's layer structure.
Returns:
A scalar value computed as if all the tensors in `tree` were combined
and flattened into a single vector, and then the L2 norm of that vector
was calculated. | def l2_norm(tree):
"""Returns an L2 norm computed over all elements of all tensors in `tree`.
Args:
tree: Tree-structured collection of tensors, e.g., model weights matching
the model's layer structure.
Returns:
A scalar value computed as if all the tensors in `tree` were combined
and flattened into a single vector, and then the L2 norm of that vector
was calculated.
"""
leaves = fastmath.tree_flatten(tree)
return jnp.sqrt(sum(jnp.vdot(x, x) for x in leaves)) |
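A worked check of l2_norm on a tiny tree, assuming the module's `jnp` (trax.fastmath.numpy) import:

tree = [jnp.array([3.0, 0.0]), jnp.array([4.0])]
assert float(l2_norm(tree)) == 5.0  # sqrt(3**2 + 0**2 + 4**2)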
Proportionally reduces each gradient value to respect an aggregate limit.
Args:
grad_tree: Gradient values structured as a tree of tensors matching the
model's layer structure.
max_norm: The aggregate limit on gradient values. All gradient elements in
`grad_tree` are treated as if they belonged to a single vector and
that vector is shortened if needed so that its L2 norm does not exceed
`max_norm`.
Returns:
A new tree of tensors matching the structure of `grad_tree`, but with
element values proportionally rescaled as needed to respect the `max_norm`
limit. | def clip_grads(grad_tree, max_norm):
"""Proportionally reduces each gradient value to respect an aggregate limit.
Args:
grad_tree: Gradient values structured as a tree of tensors matching the
model's layer structure.
max_norm: The aggregate limit on gradient values. All gradient elements in
`grad_tree` are treated as if they belonged to a single vector and
that vector is shortened if needed so that its L2 norm does not exceed
`max_norm`.
Returns:
A new tree of tensors matching the structure of `grad_tree`, but with
element values proportionally rescaled as needed to respect the `max_norm`
limit.
"""
norm = l2_norm(grad_tree)
normalize = lambda g: jnp.where(norm < max_norm, g, g * (max_norm / norm))
return fastmath.nested_map(normalize, grad_tree) |
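A worked sketch: a gradient tree of norm 5 clipped to max_norm 1.0 is scaled by 0.2, while a tree already within the limit passes through unchanged.

grads = [jnp.array([3.0, 0.0]), jnp.array([4.0])]
clipped = clip_grads(grads, max_norm=1.0)
# clipped ~= [[0.6, 0.0], [0.8]]; clip_grads(grads, 10.0) returns grads as-is.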
Adasum gradient composition, see https://arxiv.org/pdf/2006.02924.pdf. | def _adasum_merge(g1, g2):
"""Adasum gradient composition, see https://arxiv.org/pdf/2006.02924.pdf."""
frac1 = jnp.vdot(g1, g2) / (2 * jnp.vdot(g1, g1) + 1e-30)
frac2 = jnp.vdot(g1, g2) / (2 * jnp.vdot(g2, g2) + 1e-30)
return (1 - frac1) * g1 + (1 - frac2) * g2 |
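Two limiting cases of the Adasum merge above: orthogonal gradients add up, while identical gradients are kept rather than doubled.

g1, g2 = jnp.array([1.0, 0.0]), jnp.array([0.0, 1.0])
# vdot(g1, g2) == 0, so _adasum_merge(g1, g2) ~= g1 + g2 == [1.0, 1.0];
# for g1 == g2 == g both fractions equal 1/2, so the merge returns g itself.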
Averages gradients over all the devices across different hosts. | def _average_multidevice_gradients(gradients, adasum=False):
"""Averages gradients over all the devices across different hosts."""
n = fastmath.global_device_count() // base.N_WEIGHTS_SHARDS
if adasum:
# This implements a version of the Adasum algorithm from the following
# paper: https://arxiv.org/pdf/2006.02924.pdf
lg = max([i for i in range(20) if 2**i <= n])
for lg_i in range(lg):
shift = 2**lg_i
perm = []
for i in range(n):
block_i = i % (2*shift) # we do blocks of 2*shift size
if block_i < shift:
perm.append((i, i+shift))
else:
perm.append((i, i-shift))
perm_grad = jax.lax.ppermute(gradients, perm=perm, axis_name='batch')
gradients = fastmath.nested_map_multiarg(
_adasum_merge, gradients, perm_grad)
if base.N_WEIGHTS_SHARDS > 1: # only sum gradients from matching shards
groups = [[base.N_WEIGHTS_SHARDS * i + d for i in range(int(n))]
for d in range(base.N_WEIGHTS_SHARDS)]
gradients_psum = fastmath.psum(gradients, 'batch',
axis_index_groups=groups)
else:
gradients_psum = fastmath.psum(gradients, 'batch') # sum all gradients
n = jnp.array(n, dtype=jnp.float32)
return fastmath.nested_map(lambda g: g / n, gradients_psum) |
Accelerates the given forward_and_backward_fn function. | def _accelerate_update_fn(forward_and_backward_fn,
optimizer,
n_devices,
accelerate=True,
adasum=False):
"""Accelerates the given forward_and_backward_fn function."""
if n_devices == 1:
def single_device_update_fn(
weights_and_slots, step, opt_params, batch, state, rng):
step = jnp.array(step, dtype=jnp.int32) # Needed in TFNP backend.
weights, slots = weights_and_slots
(loss, state), gradients = forward_and_backward_fn(
batch, weights, state, rng)
weights, slots, stats = optimizer.tree_update(
step, gradients, weights, slots, opt_params, store_slots=False)
stats['loss'] = loss
return (weights, slots), state, stats
if accelerate:
# TODO(afrozm): Find out the status of buffer donation on GPUs, then do
# donate_argnums=(0,).
single_device_update_fn = fastmath.jit(single_device_update_fn)
return single_device_update_fn
# More than one device (core), i.e. all of TPU configurations etc.
assert n_devices > 1, f'{n_devices} should be greater than 1.'
@functools.partial(fastmath.pmap, axis_name='batch', donate_argnums=(0,))
def _multi_device_update_fn(
weights_and_slots, step, opt_params, batch, state, rng):
# All tensors should have the first dimension = n_devices.
weights, slots = weights_and_slots
(loss, state), gradients = (
forward_and_backward_fn(batch, weights, state, rng))
gradients = _average_multidevice_gradients(gradients, adasum=adasum)
weights, slots, stats = optimizer.tree_update(
step, gradients, weights, slots, opt_params, store_slots=False)
stats['loss'] = loss
return (weights, slots), state, stats
def multi_device_update_fn(
weights_and_slots, step, opt_params, batch, state, rng):
# Need to replicate step to n_devices leading dimension.
return _multi_device_update_fn(weights_and_slots,
jnp.repeat(step, n_devices), opt_params,
batch, state, rng)
return multi_device_update_fn |
Create the fbo function for a given layer and optimizer. | def _fbo_with_layer_and_opt(layer, optimizer, n_devices,
stats_name=None, adasum=False):
"""Create the fbo function for a given layer and optimizer."""
def fbo(inputs, weights, grads, state, slots, opt_params, rng, step):
"""FBO of the layer."""
# We need a layer pure_fn but only for inputs and weights.
def pure_fn_without_state_and_rng(x, w):
return layer.pure_fn(x, w, state, rng)
# Calculate the vector-Jacobian product of the reduced pure fn.
activations, vjp_fn, new_state = fastmath.vjp(
pure_fn_without_state_and_rng, inputs, weights, has_aux=True)
# In the loss layer, set gradients to 1 with the dtype of activations=loss.
if grads is None and stats_name is not None:
grads = jnp.ones((), dtype=activations.dtype)
# The vjp function returns gradients with respect to inputs and weights.
grads_inputs, grads_weights = vjp_fn(grads)
# For non-trainable layers, return the calculated arguments.
if _is_empty_tuple(weights):
stats = {}
if stats_name is not None:
stats[stats_name] = activations
return weights, new_state, slots, grads_inputs, stats
# In multi-device setting, average gradients from multiple devices.
if n_devices > 1:
grads_weights = _average_multidevice_gradients(
grads_weights, adasum=adasum)
# Run the optimizer.
new_weights, new_slots, stats = optimizer.tree_update(
step, grads_weights, weights, slots, opt_params, store_slots=False)
if stats_name is not None:
stats[stats_name] = activations
return new_weights, new_state, new_slots, grads_inputs, stats
return fbo |
Create the reverse_and_fbo function for a given layer and optimizer. | def _reverse_and_fbo_with_layer_and_opt(layer, optimizer, n_devices, adasum):
"""Create the reverse_and_fbo function for a given layer and optimizer."""
def reverse_and_fbo(output, weights, grads, state, new_state,
slots, opt_params, rng, step):
"""Reverse and FBO of the layer."""
# Call the reverse_and_grad method of the layer.
inputs, (grads_inputs, grads_weights) = layer.reverse_and_grad(
output, grads, weights, state, new_state, rng=rng)
# For non-trainable layers, return the calculated arguments.
if _is_empty_tuple(weights):
return weights, slots, inputs, grads_inputs, {}
# In multi-device setting, average gradients from multiple devices.
if n_devices > 1:
grads_weights = _average_multidevice_gradients(
grads_weights, adasum=adasum)
# Run the optimizer.
new_weights, new_slots, stats = optimizer.tree_update(
step, grads_weights, weights, slots, opt_params, store_slots=False)
return new_weights, new_slots, inputs, grads_inputs, stats
return reverse_and_fbo |
Check if x is either empty or a tuple of (tuples of) empty things. | def _is_empty_tuple(x):
"""Check if x is either empty or a tuple of (tuples of) empty things."""
if not isinstance(x, (list, tuple)):
return False
for y in x:
if not _is_empty_tuple(y):
return False
return True |
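Quick checks of the emptiness predicate above:

assert _is_empty_tuple(()) and _is_empty_tuple(((), []))
assert not _is_empty_tuple((1,)) and not _is_empty_tuple(3)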
Extracts blocks and loss layer for use with ReversibleSerialTrainer.
Args:
layers: a list of layers or a single layer to extract blocks from;
should end with a loss, e.g., [model, loss] or tl.Serial(model, loss).
loss_chunk_size: int, if > 0 creates a chunked loss layer to save memory
in models with a large vocabulary; requires the last sublayers of the loss
to be [Dense, LogSoftmax, _CrossEntropy, _WeightedMean] in that order.
Returns:
a pair (blocks, loss_layer) to use with ReversibleSerialTrainer. | def extract_reversible_blocks(layers, loss_chunk_size=0):
"""Extracts blocks and loss layer for use with ReversibleSerialTrainer.
Args:
layers: a list of layers or a single layer to extract blocks from;
should end with a loss, e.g., [model, loss] or tl.Serial(model, loss).
loss_chunk_size: int, if > 0 creates a chunked loss layer to save memory
in models with a large vocabulary; requires the last sublayers of the loss
to be [Dense, LogSoftmax, _CrossEntropy, _WeightedMean] in that order.
Returns:
a pair (blocks, loss_layer) to use with ReversibleSerialTrainer.
"""
def _flatten(l):
"""Flatten all Serial layers and sub(sub-...) layers into a list."""
if isinstance(l, (list, tuple)):
return [x for layer in l for x in _flatten(layer)] # pylint: disable=g-complex-comprehension
elif isinstance(l, tl.Serial):
return _flatten(l.sublayers)
else:
return [l]
# Extract standard and reversible layer blocks.
blocks, std_layers, rev_layers = [], [], []
for layer in _flatten(layers):
if isinstance(layer, tl.ReversibleLayer):
rev_layers.append(layer)
elif not rev_layers:
std_layers.append(layer)
else:
blocks.append((std_layers, rev_layers))
std_layers, rev_layers = [], []
std_layers.append(layer)
if rev_layers:
raise ValueError('The final layer must be a standard loss, not reversible.')
if loss_chunk_size > 0:
# For now we only do chunking of [Dense, LogSoftmax, CrossEntropy, Mean]
# Let's check that these are the last 4 layers.
border_layers = ['StripFromConcatenateWithPadding', 'Select']
loss_start = None
for index, layer in enumerate(std_layers):
if layer.name in border_layers:
loss_start = index + 1
if loss_start is None:
raise ValueError('Loss layer should be preceded by one of {}; got {}'
.format(border_layers, [l.name for l in std_layers]))
if len(std_layers) - loss_start < 4:
raise ValueError('Loss layer too short for chunking')
last_3_names = ' '.join([l.name for l in std_layers[-3:]])
if last_3_names != 'LogSoftmax _CrossEntropy _WeightedMean':
raise ValueError('Loss chunking only works with last layers being "'
'LogSoftmax, _CrossEntropy, _WeightedMean" but got: ' +
last_3_names)
# Create chunked dense+logsoftmax+cross-entropy-loss.
chunked_xent = tl.Chunk(tl.Serial(std_layers[loss_start:-1]),
loss_chunk_size)
# The chunked loss should operate on a merged batch dimension, e.g.,
# including both length and batch size. Need to merge and un-merge later.
def _reshape_to_batch_and_copy_targets(preds, targets):
batched_preds = jnp.reshape(preds, [-1, preds.shape[-1]])
batched_targets = jnp.reshape(targets, [-1])
return batched_preds, batched_targets, targets
def _reshape_xent_back(xent, targets):
return jnp.reshape(xent, targets.shape)
batched_xent = tl.Serial(
tl.Fn('pre_xent_rebatch', _reshape_to_batch_and_copy_targets, n_out=3),
chunked_xent,
tl.Fn('after_xent_rebatch', _reshape_xent_back)
)
loss_layer = tl.Serial(std_layers[:loss_start] + [batched_xent],
std_layers[-1])
else:
loss_layer = tl.Serial(std_layers)
return blocks, loss_layer |
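A toy sketch of the extraction, assuming standard trax combinators; the stack below is illustrative only, not a real model.

model_with_loss = tl.Serial(
    tl.Embedding(32, 8), tl.Dup(),   # standard prefix
    tl.ReversibleSwap(),             # one reversible layer
    tl.Concatenate(), tl.Dense(32),  # standard tail treated as the loss
)
blocks, loss_layer = extract_reversible_blocks([model_with_loss])
# blocks ~= [([Embedding, Dup], [ReversibleSwap])];
# loss_layer wraps [Concatenate, Dense].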
Initialize reversible blocks and the loss layer and place weights on CPU.
Args:
blocks: List of reversible blocks (pairs of layer lists).
loss_layer: The final loss layer to initialize.
input_signature: The signature of the input to the blocks.
rng: Random key used to initialize the layers. | def init_reversible_blocks(blocks, loss_layer, input_signature, rng):
"""Initialize reversible blocks and the loss layer and place weights on CPU.
Args:
blocks: List of reversible blocks (pairs of layer lists).
loss_layer: The final loss layer to initialize.
input_signature: The signature of the input to the blocks.
rng: Random key used to initialize the layers.
"""
sig_stack = input_signature
process = psutil.Process(os.getpid())
mem_use = process.memory_info().rss
for (std_layers, rev_layers) in blocks:
rngs = fastmath.random.split(rng, len(std_layers) + len(rev_layers) + 1)
rng = rngs[0]
for layer, layer_rng in zip(std_layers + rev_layers, rngs[1:]):
sig = cb.inputs_from_stack(sig_stack, layer.n_in)
layer.init(sig, rng=layer_rng)
layer.weights = tl.on_cpu(layer.weights) # store weights in cpu memory
layer.state = tl.on_cpu(layer.state) # store state in cpu memory
logging.info('init: layer %s\nadded cpu memory (MB): %.2f', str(layer),
(process.memory_info().rss - mem_use) / float(1024 * 1024))
mem_use = process.memory_info().rss
logging.info('init: cpu memory use (MB): %.2f',
mem_use / float(1024 * 1024))
out_sig = layer.output_signature(sig)
sig_stack = cb.outputs_onto_stack(out_sig, sig_stack, layer.n_in)
loss_layer.init(cb.inputs_from_stack(sig_stack, loss_layer.n_in), rng=rng)
loss_layer.weights = tl.on_cpu(loss_layer.weights)
loss_layer.state = tl.on_cpu(loss_layer.state) |
Copy model weights[start:end] from from_trainer to to_trainer. | def _copy_model_weights_and_state( # pylint: disable=invalid-name
start, end, from_trainer, to_trainer, copy_optimizer_slots=False
):
"""Copy model weights[start:end] from from_trainer to to_trainer."""
from_weights = from_trainer.model_weights
to_weights = list(to_trainer.model_weights)
shared_weights = from_weights[start:end]
to_weights[start:end] = shared_weights
to_trainer.model_weights = to_weights
from_state = from_trainer.model_state
to_state = list(to_trainer.model_state)
shared_state = from_state[start:end]
to_state[start:end] = shared_state
to_trainer.model_state = to_state
if copy_optimizer_slots:
# TODO(lukaszkaiser): make a nicer API in Trainer to support this.
# Currently we use the hack below. Note [0] since that's the model w/o loss.
# pylint: disable=protected-access
from_slots = from_trainer._opt_state.slots[0][start:end]
to_slots = to_trainer._opt_state.slots[0]
# The lines below do to_slots[start:end] = from_slots, but on tuples.
new_slots = to_slots[:start] + from_slots[start:end] + to_slots[end:]
new_slots = tuple([new_slots] + list(to_trainer._opt_state.slots[1:]))
to_trainer._opt_state = to_trainer._opt_state._replace(slots=new_slots) |
Returns True every n_steps, for use as *_at functions in various places. | def every(n_steps):
"""Returns True every n_steps, for use as *_at functions in various places."""
return lambda step: step % n_steps == 0 |
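For example:

is_eval_step = every(100)
assert is_eval_step(200) and not is_eval_step(201)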
Calculate weights for x by percentile-and-weights given in thresholds.
Thresholds is a list of triples (p, weight, minimum). For each threshold,
all elements of x that are above the p-th percentile *and* above the minimum
get the given weight, and all others get weight 0.
The result is the sum over all thresholds.
Args:
x: tensor to calculate the weights for
thresholds: list of triples (percentile, weight, minimum) used to
calculate the weights (see above how)
Returns:
weights, a tensor of the same shape as x | def _weighted_percentiles(x, thresholds):
"""Calculate weights for x by percentile-and-weights given in thresholds.
Thresholds is a list of triples (p, weight, minimum). For each threshold,
all elements of x that are above the p-th percentile *and* above the minimum
get the given weight, and all others get weight 0.
The result is the sum over all thresholds.
Args:
x: tensor to calculate the weights for
thresholds: list of triples (percentile, weight, minimum) used to
calculate the weights (see above how)
Returns:
weights, a tensor of the same shape as x
"""
res = []
for (percentile, weight, minimum) in thresholds:
threshold = jnp.percentile(x, percentile)
if minimum is not None:
threshold = jnp.maximum(minimum, threshold)
zero_ones = jnp.where(x < threshold, jnp.zeros_like(x), jnp.ones_like(x))
res.append(weight * zero_ones)
return sum(res) |
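A worked example: with a single threshold at the 50th percentile, weight 1.0, and no minimum, elements above the median get weight 1 and the rest get 0.

x = jnp.array([1.0, 2.0, 3.0, 4.0])
w = _weighted_percentiles(x, [(50, 1.0, None)])
# the median is 2.5, so w == [0.0, 0.0, 1.0, 1.0]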
Definition of the Advantage Weighted Regression (AWR) loss. | def AWRLoss(beta, w_max, thresholds): # pylint: disable=invalid-name
"""Definition of the Advantage Weighted Regression (AWR) loss."""
def f(log_probs, advantages, old_log_probs, mask):
del old_log_probs # Not used in AWR.
weights = jnp.minimum(awr_weights(advantages, beta, thresholds), w_max)
return -jnp.sum(log_probs * weights * mask) / jnp.sum(mask)
return tl.Fn('AWRLoss', f) |
Definition of the sampling-based Advantage Weighted Regression (AWR) loss. | def SamplingAWRLoss(beta, w_max, thresholds, # pylint: disable=invalid-name
reweight=False, sampled_all_discrete=False):
"""Definition of the Advantage Weighted Regression (AWR) loss."""
def f(log_probs, advantages, old_log_probs, mask):
if reweight: # Use new policy weights for sampled actions instead.
mask *= jnp.exp(fastmath.stop_gradient(log_probs) - old_log_probs)
if sampled_all_discrete: # Actions were sampled uniformly; weight them.
mask *= jnp.exp(old_log_probs)
weights = jnp.minimum(awr_weights(advantages, beta, thresholds), w_max)
return -jnp.sum(log_probs * weights * mask) / jnp.sum(mask)
return tl.Fn('SamplingAWRLoss', f) |
Computes a discount to apply at a given timestep, based on the mask. | def mask_discount(discount, discount_mask):
"""Computes a discount to apply at a given timestep, based on the mask."""
return fastmath.numpy.where(discount_mask, discount, 1.0) |
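For example, the discount applies only where the mask is set:

gammas = mask_discount(0.99, jnp.array([True, False, True]))
# gammas == [0.99, 1.0, 0.99]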
Computes discounted returns for a trajectory or a batch of them. | def discounted_returns(rewards, gammas):
"""Computes discounted returns for a trajectory or a batch of them."""
returns = np.zeros_like(rewards)
ret = 0.0
for i in reversed(range(rewards.shape[-1])):
ret = rewards[..., i] + gammas[..., i] * ret
returns[..., i] = ret
return returns |
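A worked example with a constant discount of 0.5:

r = np.array([[1.0, 1.0, 1.0]])
g = np.full_like(r, 0.5)
# Backward pass: [1 + 0.5 * 1.5, 1 + 0.5 * 1.0, 1.0] == [1.75, 1.5, 1.0]
assert (discounted_returns(r, g) == np.array([[1.75, 1.5, 1.0]])).all()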
Calculate Monte Carlo advantage.
We assume the values are a tensor of shape [batch_size, length] and this
is the same shape as rewards and returns.
Args:
gamma: float, gamma parameter for TD from the underlying task
margin: number of extra steps in the sequence
Returns:
Function (rewards, returns, values, dones, discount_mask) -> advantages, where
advantages is an array of shape [batch_size, length - margin]. | def monte_carlo(gamma, margin):
"""Calculate Monte Carlo advantage.
We assume the values are a tensor of shape [batch_size, length] and this
is the same shape as rewards and returns.
Args:
gamma: float, gamma parameter for TD from the underlying task
margin: number of extra steps in the sequence
Returns:
Function (rewards, returns, values, dones, discount_mask) -> advantages, where
advantages is an array of shape [batch_size, length - margin].
"""
del gamma
def estimator(rewards, returns, values, dones, discount_mask):
del discount_mask
(_, length) = returns.shape
# Make sure that the future returns and values at "done" states are zero.
returns[dones] = rewards[dones]
values[dones] = 0
return (returns - values)[:, :(length - margin)]
return estimator |
Calculate TD-k advantage.
The k parameter is assumed to be the same as margin.
We calculate advantage(s_i) as:
gamma^n_steps * value(s_{i + n_steps}) - value(s_i) + discounted_rewards
where discounted_rewards is the sum of rewards in these steps with
discounting by powers of gamma.
Args:
gamma: float, gamma parameter for TD from the underlying task
margin: number of extra steps in the sequence
Returns:
Function (rewards, returns, values, dones, discount_mask) -> advantages, where
advantages is an array of shape [batch_size, length - margin]. | def td_k(gamma, margin):
"""Calculate TD-k advantage.
The k parameter is assumed to be the same as margin.
We calculate advantage(s_i) as:
gamma^n_steps * value(s_{i + n_steps}) - value(s_i) + discounted_rewards
where discounted_rewards is the sum of rewards in these steps with
discounting by powers of gamma.
Args:
gamma: float, gamma parameter for TD from the underlying task
margin: number of extra steps in the sequence
Returns:
Function (rewards, returns, values, dones, discount_mask) -> advantages, where
advantages is an array of shape [batch_size, length - margin].
"""
def estimator(rewards, returns, values, dones, discount_mask):
del returns
gammas = mask_discount(gamma, discount_mask)
# Here we calculate advantage with TD-k, where k=margin.
k = margin
assert k > 0
advantages = np.zeros_like(values[:, k:])
discount = 1.0
for i in range(margin):
advantages += discount * rewards[:, i:-(margin - i)]
discount *= gammas[:, i:-(margin - i)]
advantages += discount * values[:, k:]
# Zero out the future returns at "done" states.
dones = dones[:, :-k]
# TPU friendly version of the formula
# advantages[dones] = rewards[:, :-k][dones]
advantages = fastmath.index_update(advantages,
dones,
rewards[:, :-k][dones])
# Subtract the baseline (value).
advantages -= values[:, :-k]
return advantages
return estimator |
Calculate TD-lambda advantage.
The estimated return is an exponentially-weighted average of different TD-k
returns.
Args:
gamma: float, gamma parameter for TD from the underlying task
margin: number of extra steps in the sequence
lambda_: float, the lambda parameter of TD-lambda
Returns:
Function (rewards, returns, values, dones, discount_mask) -> advantages, where
advantages is an array of shape [batch_size, length - margin]. | def td_lambda(gamma, margin, lambda_=0.95):
"""Calculate TD-lambda advantage.
The estimated return is an exponentially-weighted average of different TD-k
returns.
Args:
gamma: float, gamma parameter for TD from the underlying task
margin: number of extra steps in the sequence
lambda_: float, the lambda parameter of TD-lambda
Returns:
Function (rewards, returns, values, dones, discount_mask) -> advantages, where
advantages is an array of shape [batch_size, length - margin].
"""
def estimator(rewards, returns, values, dones, discount_mask):
gammas = mask_discount(gamma, discount_mask)
lambdas = mask_discount(lambda_, discount_mask)
td_returns = np.zeros_like(returns)
(_, length) = returns.shape
td_returns[:, -1] = values[:, -1]
for i in reversed(range(length - 1)):
lambda_i = lambdas[:, i]
td_returns[:, i] = rewards[:, i] + (1 - dones[:, i]) * gammas[:, i] * (
(1 - lambda_i) * values[:, i + 1] + lambda_i * td_returns[:, i + 1]
)
return (td_returns - values)[:, :(returns.shape[1] - margin)]
return estimator |
Calculate Generalized Advantage Estimation.
Calculate state values bootstrapping off the following state values -
Generalized Advantage Estimation https://arxiv.org/abs/1506.02438
Args:
gamma: float, gamma parameter for TD from the underlying task
margin: number of extra steps in the sequence
lambda_: float, the lambda parameter of GAE
Returns:
Function (rewards, returns, values, dones, discount_mask) -> advantages, where
advantages is an array of shape [batch_size, length - margin]. | def gae(gamma, margin, lambda_=0.95):
"""Calculate Generalized Advantage Estimation.
Calculate state values bootstrapping off the following state values -
Generalized Advantage Estimation https://arxiv.org/abs/1506.02438
Args:
gamma: float, gamma parameter for TD from the underlying task
margin: number of extra steps in the sequence
lambda_: float, the lambda parameter of GAE
Returns:
Function (rewards, returns, values, dones, discount_mask) -> advantages, where
advantages is an array of shape [batch_size, length - margin].
"""
def estimator(rewards, returns, values, dones, discount_mask):
del returns
gammas = mask_discount(gamma, discount_mask)
lambdas = mask_discount(lambda_, discount_mask)
advantages = np.zeros_like(rewards)
(_, length) = rewards.shape
for i in reversed(range(length - 1)):
bellman_delta = rewards[:, i] - values[:, i] + (1 - dones[:, i]) * (
gammas[:, i] * values[:, i + 1]
)
advantages[:, i] = bellman_delta + (1 - dones[:, i]) * (
gammas[:, i] * lambdas[:, i] * advantages[:, i + 1]
)
return advantages[:, :(rewards.shape[1] - margin)]
return estimator |
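All four estimators above (monte_carlo, td_k, td_lambda, gae) are interchangeable: each factory returns a function with the same signature, e.g.:

estimator = gae(gamma=0.99, margin=1, lambda_=0.95)
# advantages = estimator(rewards, returns, values, dones, discount_mask)
# with advantages.shape == (batch_size, length - margin)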
Creates a Distribution for the given Gym space. | def create_distribution(space):
"""Creates a Distribution for the given Gym space."""
if isinstance(space, gym.spaces.Discrete):
return Categorical(shape=(), n_categories=space.n)
elif isinstance(space, gym.spaces.MultiDiscrete):
assert space.nvec.size
assert min(space.nvec) == max(space.nvec), (
'Every dimension must have the same number of categories, got '
'{}.'.format(space.nvec)
)
return Categorical(shape=(len(space.nvec),), n_categories=space.nvec[0])
elif isinstance(space, gym.spaces.Box):
return Gaussian(shape=space.shape)
else:
raise TypeError('Space {} unavailable as a distribution support.'.format(space)) |
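A usage sketch, assuming `gym` is importable and `Categorical` / `Gaussian` come from the surrounding module:

import gym

dist = create_distribution(gym.spaces.Discrete(6))  # 6-way Categorical
box = create_distribution(gym.spaces.Box(low=-1.0, high=1.0, shape=(3,)))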
Builds a log loss layer for a Distribution. | def LogLoss(distribution, **unused_kwargs): # pylint: disable=invalid-name
"""Builds a log loss layer for a Distribution."""
return tl.Serial(
distribution.LogProb(),
tl.Negate(),
tl.WeightedSum()
) |
Dense-LayerNorm-Tanh normalizer inspired by ACME. | def LayerNormSquash(mode, width=128): # pylint: disable=invalid-name
"""Dense-LayerNorm-Tanh normalizer inspired by ACME."""
# https://github.com/deepmind/acme/blob/master/acme/jax/networks/continuous.py#L34
del mode
return tl.Serial([
tl.Dense(width),
tl.LayerNorm(),
tl.Tanh(),
]) |
Definition of the loss of the value function. | def ValueLoss(values, returns, value_loss_coeff):
"""Definition of the loss of the value function."""
advantages = returns - values
l2_value_loss = jnp.mean(advantages**2) * value_loss_coeff
return l2_value_loss |
Definition of explained variance - an approach from OpenAI baselines. | def ExplainedVariance(values, returns):
"""Definition of explained variance - an approach from OpenAI baselines."""
assert returns.shape == values.shape, (
f'returns.shape was {returns.shape} and values.shape was {values.shape}')
# TODO(henrykm): it would be good to explain the relation with the time dim.
returns_variance = jnp.var(returns)
explained_variance = 1 - jnp.var(returns-values)/returns_variance
return explained_variance |
Definition of the preferred move. | def PreferredMove(dist_inputs, sample):
"""Definition of the preferred move."""
preferred_moves = sample(dist_inputs, temperature=0.0)
return jnp.mean(preferred_moves) |
Given distribution parameters and actions, calculate log probs.
  """Given distribution parameters and actions, calculate log probs."""
new_log_probs = log_prob_fun(dist_inputs,
actions)
return new_log_probs |
Definition of the entropy loss.
  """Definition of the entropy loss."""
entropy_loss = distribution.entropy(dist_inputs) * coeff
return jnp.mean(entropy_loss) |
Probability Ratio from the PPO algorithm. | def ProbsRatio(dist_inputs, actions, old_log_probs, log_prob_fun):
"""Probability Ratio from the PPO algorithm."""
# dist_inputs of the shape float32[128,1,18]
# actions of the shape int32[128,1]
# and old_log_probs of the shape float32[128,1]
new_log_probs = NewLogProbs(dist_inputs, actions, log_prob_fun)
assert new_log_probs.shape == old_log_probs.shape, (
      f'new_log_probs.shape was {new_log_probs.shape} and '
f'old_log_probs.shape was {old_log_probs.shape}')
# The ratio between new_probs and old_probs expressed
# using log_probs and exponentiation
probs_ratio = jnp.exp(new_log_probs - old_log_probs)
return probs_ratio |
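The log-space identity used above, checked on plain numbers: exponentiating a difference of log-probabilities recovers the probability ratio.
import numpy as np

p_new, p_old = 0.6, 0.3
assert np.isclose(np.exp(np.log(p_new) - np.log(p_old)), p_new / p_old)  # 2.0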
Approximate KL divergence between the new and the old policy, used in PPO.
  """Approximate KL divergence between the new and the old policy, used in PPO."""
new_log_probs = NewLogProbs(dist_inputs, actions, log_prob_fun)
assert new_log_probs.shape == old_log_probs.shape, (
      f'new_log_probs.shape was {new_log_probs.shape} and '
f'old_log_probs.shape was {old_log_probs.shape}')
  # Estimate KL as half the mean of squared log-prob differences.
  approximate_kl_divergence = 0.5 * jnp.mean(
      (new_log_probs - old_log_probs) ** 2)
return approximate_kl_divergence |
Unclipped Objective from the PPO algorithm. | def UnclippedObjective(probs_ratio, advantages):
"""Unclipped Objective from the PPO algorithm."""
assert probs_ratio.shape == advantages.shape, (
      f'probs_ratio.shape was {probs_ratio.shape} and '
f'advantages.shape was {advantages.shape}')
unclipped_objective = probs_ratio * advantages
return unclipped_objective |
Clipped Objective from the PPO algorithm. | def ClippedObjective(probs_ratio, advantages, epsilon):
"""Clipped Objective from the PPO algorithm."""
assert probs_ratio.shape == advantages.shape, (
      f'probs_ratio.shape was {probs_ratio.shape} and '
f'advantages.shape was {advantages.shape}')
clipped_objective = jnp.clip(probs_ratio, 1 - epsilon,
1 + epsilon) * advantages
assert probs_ratio.shape == clipped_objective.shape, (
      f'probs_ratio.shape was {probs_ratio.shape} and '
f'clipped_objective.shape was {clipped_objective.shape}')
return clipped_objective |
PPO Objective. | def PPOObjective(dist_inputs, values, returns, dones, rewards,
actions, old_log_probs, log_prob_fun, epsilon,
normalize_advantages):
"""PPO Objective."""
# dist_inputs of the shape float32[128,1,18]
# values of the shape float32[128,1,1]
# returns of the shape float32[128,1,1]
# dones of the shape float32[128,1,1]
# rewards of the shape int32[128,1,1]
# actions of the shape int32[128,1]
# and old_log_probs of the shape float32[128,1]
returns = returns.squeeze(axis=2)
values = values.squeeze(axis=2)
dones = dones.squeeze(axis=2)
rewards = rewards.squeeze(axis=2)
assert rewards.shape == dones.shape, (
f'rewards.shape was {rewards.shape} and dones.shape was {dones.shape}')
assert dones.shape == values.shape, (
f'dones.shape was {dones.shape} and values.shape was {values.shape}')
assert returns.shape == values.shape, (
f'returns.shape was {returns.shape} and values.shape was {values.shape}')
assert returns.shape == old_log_probs.shape, (
      f'returns.shape was {returns.shape} and '
f'old_log_probs.shape was {old_log_probs.shape}')
probs_ratio = ProbsRatio(dist_inputs, actions, old_log_probs, log_prob_fun)
assert probs_ratio.shape == old_log_probs.shape, (
      f'probs_ratio.shape was {probs_ratio.shape} and '
f'old_log_probs.shape was {old_log_probs.shape}')
# jaxified versions of
# returns[dones] = rewards[dones]
# values[dones] = 0
returns = jnp.where(dones, rewards, returns)
values = jnp.where(dones, jnp.zeros_like(values), values)
advantages = returns - values
if normalize_advantages:
advantages = advantages - jnp.mean(advantages)
advantages /= jnp.std(advantages) + 1e-8
assert old_log_probs.shape == advantages.shape, (
f'old_log_probs.shape was {old_log_probs.shape} and advantages.shape was '
f'{advantages.shape}')
unclipped_objective = UnclippedObjective(probs_ratio, advantages)
assert unclipped_objective.shape == advantages.shape, (
      f'advantages.shape was {advantages.shape} and '
f'unclipped_objective.shape was {unclipped_objective.shape}')
clipped_objective = ClippedObjective(probs_ratio, advantages, epsilon)
assert clipped_objective.shape == advantages.shape, (
      f'clipped_objective.shape was {clipped_objective.shape} and '
f'advantages.shape was {advantages.shape}')
ppo_objective = jnp.minimum(unclipped_objective, clipped_objective)
assert ppo_objective.shape == advantages.shape, (
      f'ppo_objective.shape was {ppo_objective.shape} and '
f'advantages.shape was {advantages.shape}')
return ppo_objective |
Definition of the Advantage Actor Critic (A2C) loss. | def A2CObjective(dist_inputs, values, returns, dones, rewards,
actions, mask, log_prob_fun, normalize_advantages):
"""Definition of the Advantage Actor Critic (A2C) loss."""
# dist_inputs of the shape float32[128,1,18]
# values of the shape float32[128,1,1]
# returns of the shape float32[128,1,1]
# dones of the shape int32[128,1,1]
# actions of the shape int32[128,1]
# and mask of the shape float32[128,1]
# We have to squeeze values and returns, because we
# are planning to compute (return - values) * new_log_probs * mask
# and all of them should be of the same dimension
values = values.squeeze(axis=2)
returns = returns.squeeze(axis=2)
dones = dones.squeeze(axis=2)
rewards = rewards.squeeze(axis=2)
assert rewards.shape == dones.shape, (
f'rewards.shape was {rewards.shape} and dones.shape was {dones.shape}')
assert dones.shape == values.shape, (
f'dones.shape was {dones.shape} and values.shape was {values.shape}')
assert returns.shape == values.shape, (
f'returns.shape was {returns.shape} and values.shape was {values.shape}')
assert values.shape == mask.shape, (
f'values.shape was {values.shape} and mask.shape was {mask.shape}')
assert returns.shape[0] == dist_inputs.shape[0], (
f'returns.shape[0] was {returns.shape[0]} and dist_inputs.shape[0] was '
f'{dist_inputs.shape[0]}')
new_log_probs = NewLogProbs(dist_inputs, actions, log_prob_fun)
assert new_log_probs.shape == mask.shape, (
f'new_log_probs.shape was {new_log_probs.shape} and mask.shape was '
f'{mask.shape}')
# jaxified versions of
# returns[dones] = rewards[dones]
# values[dones] = 0
returns = jnp.where(dones, rewards, returns)
values = jnp.where(dones, jnp.zeros_like(values), values)
advantages = returns - values
if normalize_advantages:
advantages = advantages - jnp.mean(advantages)
advantages /= jnp.std(advantages) + 1e-8
assert new_log_probs.shape == advantages.shape, (
f'new_log_probs.shape was {new_log_probs.shape} and advantages.shape was '
f'{advantages.shape}')
  # One motivation for the squeezes and assertions above is to
  # avoid [128,1] * [128,1,1] * [128] multiplications in the definition
  # of the a2c objective - we insist on matching shapes.
a2c_objective = -jnp.sum(new_log_probs * advantages * mask) / jnp.sum(mask)
return a2c_objective |
Layer that serializes a given array. | def Serialize(serializer):
"""Layer that serializes a given array."""
def serialize(x):
(batch_size, length) = x.shape[:2]
shape_suffix = x.shape[2:]
x = jnp.reshape(x, (batch_size * length,) + shape_suffix)
x = serializer.serialize(x)
return jnp.reshape(x, (batch_size, -1, serializer.representation_length,))
return tl.Fn('Serialize', serialize) |
Layer that interleaves and flattens two serialized sequences.
The first sequence can be longer by 1 than the second one. This is so we can
interleave sequences of observations and actions, when there's 1 extra
observation at the end.
For serialized sequences [[x_1_1, ..., x_1_R1], ..., [x_L1_1, ..., x_L1_R1]]
and [[y_1_1, ..., y_1_R2], ..., [y_L2_1, ..., y_L2_R2]], where L1 = L2 + 1,
the result is [x_1_1, ..., x_1_R1, y_1_1, ..., y_1_R2, ..., x_L2_1, ...,
x_L2_R1, y_L2_1, ..., y_L2_R2, x_L1_1, ..., x_L1_R1] (batch dimension omitted
for clarity).
The layer inputs are a sequence pair of shapes (B, L1, R1) and (B, L2, R2),
where B is batch size, L* is the length of the sequence and R* is the
representation length of each element in the sequence.
Returns:
  Layer that outputs the interleaved sequence of shape (B, L1 * R1 + L2 * R2).
"""Layer that interleaves and flattens two serialized sequences.
The first sequence can be longer by 1 than the second one. This is so we can
interleave sequences of observations and actions, when there's 1 extra
observation at the end.
For serialized sequences [[x_1_1, ..., x_1_R1], ..., [x_L1_1, ..., x_L1_R1]]
and [[y_1_1, ..., y_1_R2], ..., [y_L2_1, ..., y_L2_R2]], where L1 = L2 + 1,
the result is [x_1_1, ..., x_1_R1, y_1_1, ..., y_1_R2, ..., x_L2_1, ...,
x_L2_R1, y_L2_1, ..., y_L2_R2, x_L1_1, ..., x_L1_R1] (batch dimension omitted
for clarity).
The layer inputs are a sequence pair of shapes (B, L1, R1) and (B, L2, R2),
where B is batch size, L* is the length of the sequence and R* is the
representation length of each element in the sequence.
  Returns:
    Layer that outputs the interleaved sequence of shape (B, L1 * R1 + L2 * R2).
"""
def interleave(x, y):
(batch_size, _, _) = x.shape
(_, length, _) = y.shape
assert x.shape[1] in (length, length + 1)
reprs = jnp.concatenate((x[:, :length], y), axis=2)
reprs = jnp.reshape(reprs, (batch_size, -1))
remainder = jnp.reshape(x[:, length:], (batch_size, -1))
return jnp.concatenate((reprs, remainder), axis=1)
return tl.Fn('Interleave', interleave) |
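A toy numeric check of the layout described above, with R1 = 2, R2 = 1 and one extra observation at the end (L1 = 3, L2 = 2); the values are illustrative:
import numpy as np

x = np.array([[[1, 2], [3, 4], [5, 6]]])  # observations, shape (1, 3, 2)
y = np.array([[[7], [8]]])                # actions, shape (1, 2, 1)
out = Interleave()((x, y))
print(out)  # [[1 2 7 3 4 8 5 6]], shape (1, 3*2 + 2*1)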
Layer that does the inverse of Interleave. | def Deinterleave(x_size, y_size):
"""Layer that does the inverse of Interleave."""
def deinterleave(inputs):
reprs = inputs
(batch_size, length) = reprs.shape[:2]
shape_suffix = reprs.shape[2:]
remainder_length = length % (x_size + y_size)
if remainder_length > 0:
remainder = reprs[:, None, -remainder_length:]
reprs = reprs[:, :-remainder_length]
reprs = jnp.reshape(reprs, (batch_size, -1, x_size + y_size) + shape_suffix)
x_reprs = reprs[:, :, :x_size]
y_reprs = reprs[:, :, x_size:]
if remainder_length > 0:
x_reprs = jnp.concatenate((x_reprs, remainder), axis=1)
return (x_reprs, y_reprs)
return tl.Fn('Deinterleave', deinterleave, n_out=2) |
Upsamples a mask to cover the serialized representation. | def RepresentationMask(serializer):
"""Upsamples a mask to cover the serialized representation."""
# Trax enforces the mask to be of the same size as the target. Get rid of the
# extra dimensions.
def representation_mask(mask):
    # mask shape: (batch_size, length) plus possible extra dims to reduce.
mask = jnp.amax(mask, axis=tuple(range(2, mask.ndim)))
    # mask shape: (batch_size, length)
mask = jnp.repeat(
mask[..., jnp.newaxis],
repeats=serializer.representation_length,
axis=2)
    # mask shape: (batch_size, length, representation_length)
return mask
return tl.Fn('RepresentationMask', representation_mask) |
Multiplies a binary mask with a symbol significance mask. | def SignificanceWeights(serializer, decay):
"""Multiplies a binary mask with a symbol significance mask."""
def significance_weights(mask):
    # significance: (repr,) array of per-symbol significance levels,
    # e.g. [0, 1, 2]; below we expand it to mask's shape (batch, length, repr).
significance = serializer.significance_map
assert significance.shape[0] == mask.shape[2]
    # Expand to shape (batch_size, repr).
significance = jnp.repeat(
significance[np.newaxis, ...], repeats=mask.shape[0], axis=0)
    # Expand to shape (batch_size, repr, length).
significance = jnp.repeat(
significance[..., jnp.newaxis], repeats=mask.shape[1], axis=2)
    # Transpose to mask's shape (batch_size, length, repr).
significance = jnp.swapaxes(significance, 1, 2)
assert significance.shape == mask.shape
sig_weights = mask * decay ** significance
return sig_weights
return tl.Fn('SignificanceWeights', significance_weights) |
Simplified constructor for SerializedModel, for time series prediction. | def TimeSeriesModel(
seq_model,
low=0.0,
high=1.0,
precision=2,
vocab_size=64,
significance_decay=0.7,
mode='train',
):
"""Simplified constructor for SerializedModel, for time series prediction."""
# Model scalar time series.
obs_srl = space_serializer.BoxSpaceSerializer(
space=gym.spaces.Box(shape=(), low=low, high=high),
vocab_size=vocab_size,
precision=precision,
)
# Artifact of the fact that we must provide some actions.
# TODO(pkozakowski): Remove this requirement.
act_srl = space_serializer.DiscreteSpaceSerializer(
space=gym.spaces.Discrete(n=1), vocab_size=1
)
seq_model = functools.partial(seq_model, vocab_size=vocab_size)
return SerializedModel(seq_model, obs_srl, act_srl, significance_decay, mode) |
Wraps a sequence model in a policy interface.
The resulting model takes as input observation and action sequences, but only
uses the observations. Adds output heads for action logits and value
predictions.
Args:
seq_model: Trax sequence model taking as input and outputting a sequence of
continuous vectors.
n_controls: Number of controls.
n_actions: Number of action categories in each control.
Returns:
A model of signature (obs, act) -> (act_logits, values), with shapes:
obs: (batch_size, length + 1, obs_depth)
act: (batch_size, length, n_controls)
act_logits: (batch_size, length, n_controls, n_actions)
values: (batch_size, length) | def RawPolicy(seq_model, n_controls, n_actions):
"""Wraps a sequence model in a policy interface.
  The resulting model takes as input observation and action sequences, but only
uses the observations. Adds output heads for action logits and value
predictions.
Args:
seq_model: Trax sequence model taking as input and outputting a sequence of
continuous vectors.
n_controls: Number of controls.
n_actions: Number of action categories in each control.
Returns:
A model of signature (obs, act) -> (act_logits, values), with shapes:
obs: (batch_size, length + 1, obs_depth)
act: (batch_size, length, n_controls)
act_logits: (batch_size, length, n_controls, n_actions)
values: (batch_size, length)
"""
def SplitControls(): # pylint: disable=invalid-name
"""Splits logits for actions in different controls."""
def f(x):
return jnp.reshape(x, x.shape[:2] + (n_controls, n_actions))
return tl.Fn('SplitControls', f)
action_head = [
# Predict all action logits at the same time.
tl.Dense(n_controls * n_actions),
# Then group them into separate controls, adding a new dimension.
SplitControls(),
tl.LogSoftmax(),
]
return tl.Serial( # (obs, act)
tl.Select([0], n_in=2), # (obs,)
seq_model, # (obs_hidden,)
tl.Dup(), # (obs_hidden, obs_hidden)
tl.Parallel(action_head, [tl.Dense(1),
tl.Flatten()]) # (act_logits, values)
) |
Substitutes the weights/state of the inner model in a RawPolicy. | def substitute_inner_policy_raw(raw_policy, inner_policy): # pylint: disable=invalid-name
"""Substitutes the weights/state of the inner model in a RawPolicy."""
return raw_policy[:1] + [inner_policy] + raw_policy[2:] |
Wraps a policy in serialization machinery for training.
The resulting model takes as input observation and action sequences, and
serializes them into one sequence similar to SerializedModel, before passing
to the given sequence model. Adds output heads for action logits and value
predictions.
Args:
seq_model: Trax sequence model taking as input a sequence of symbols and
outputting a sequence of continuous vectors.
n_controls: Number of controls.
n_actions: Number of action categories in each control.
observation_serializer: Serializer to use for observations.
action_serializer: Serializer to use for actions.
Returns:
A model of signature (obs, act) -> (act_logits, values), same as in
RawPolicy. | def SerializedPolicy(
seq_model, n_controls, n_actions, observation_serializer, action_serializer
):
"""Wraps a policy in serialization machinery for training.
The resulting model takes as input observation and action sequences, and
serializes them into one sequence similar to SerializedModel, before passing
to the given sequence model. Adds output heads for action logits and value
predictions.
Args:
seq_model: Trax sequence model taking as input a sequence of symbols and
outputting a sequence of continuous vectors.
n_controls: Number of controls.
n_actions: Number of action categories in each control.
observation_serializer: Serializer to use for observations.
action_serializer: Serializer to use for actions.
Returns:
A model of signature (obs, act) -> (act_logits, values), same as in
RawPolicy.
"""
if action_serializer.representation_length != n_controls:
raise ValueError(
'Action symbols should correspond 1-1 to controls, but got {} '
'controls and {} symbols.'.format(
n_controls, action_serializer.representation_length
)
)
def FirstSymbol():
return tl.Fn('FirstSymbol', lambda x: x[:, :, 0])
def PadRight(n_to_pad):
def pad_right(x):
pad_widths = [(0, 0), (0, n_to_pad)] + [(0, 0)] * (x.ndim - 2)
return jnp.pad(
x, pad_widths, mode='constant', constant_values=x.dtype.type(0))
return tl.Fn(f'PadRight({n_to_pad})', pad_right)
action_head = [
tl.Dense(n_actions),
tl.LogSoftmax(),
]
value_head = [
# Take just the vectors corresponding to the first action symbol.
FirstSymbol(),
# Predict values.
tl.Dense(1),
# Get rid of the singleton dimension.
tl.Flatten(),
]
return tl.Serial(
# (obs, act)
tl.Parallel(Serialize(observation_serializer),
Serialize(action_serializer)),
# (obs_repr, act_repr)
Interleave(),
# (obs_act_repr,)
# Add one dummy action to the right - we'll use the output at its first
# symbol to predict the value for the last observation.
PadRight(action_serializer.representation_length),
# Shift one symbol to the right, so we predict the n-th action symbol
# based on action symbols 1..n-1 instead of 1..n.
tl.ShiftRight(),
seq_model,
# (obs_act_hidden,)
Deinterleave(observation_serializer.representation_length,
action_serializer.representation_length),
# (obs_hidden, act_hidden)
tl.Select([1, 1]),
# (act_hidden, act_hidden)
tl.Parallel(action_head, value_head),
# (act_logits, values)
) |
Substitutes the weights/state of the inner model in a SerializedPolicy. | def substitute_inner_policy_serialized(serialized_policy, inner_policy): # pylint: disable=invalid-name
"""Substitutes the weights/state of the inner model in a SerializedPolicy."""
return serialized_policy[:4] + [inner_policy] + serialized_policy[5:] |
Returns the number of controls and actions for an action space. | def analyze_action_space(action_space): # pylint: disable=invalid-name
"""Returns the number of controls and actions for an action space."""
assert isinstance(
action_space, (gym.spaces.Discrete, gym.spaces.MultiDiscrete)
  ), 'Action space expected to be Discrete or MultiDiscrete, got {}.'.format(
type(action_space)
)
if isinstance(action_space, gym.spaces.Discrete):
n_actions = action_space.n
n_controls = 1
else:
(n_controls,) = action_space.nvec.shape
assert n_controls > 0
assert np.min(action_space.nvec) == np.max(action_space.nvec), (
'Every control must have the same number of actions.'
)
n_actions = action_space.nvec[0]
return (n_controls, n_actions) |
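For instance, on hypothetical spaces:
import gym

assert analyze_action_space(gym.spaces.Discrete(6)) == (1, 6)
assert analyze_action_space(gym.spaces.MultiDiscrete([5, 5, 5])) == (3, 5)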
Wraps a sequence model in either RawPolicy or SerializedPolicy.
Args:
seq_model: Trax sequence model.
observation_space: Gym observation space.
action_space: Gym action space.
vocab_size: Either the number of symbols for a serialized policy, or None.
Returns:
RawPolicy if vocab_size is None, else SerializedPolicy. | def wrap_policy(seq_model, observation_space, action_space, vocab_size): # pylint: disable=invalid-name
"""Wraps a sequence model in either RawPolicy or SerializedPolicy.
Args:
seq_model: Trax sequence model.
observation_space: Gym observation space.
action_space: Gym action space.
vocab_size: Either the number of symbols for a serialized policy, or None.
Returns:
RawPolicy if vocab_size is None, else SerializedPolicy.
"""
(n_controls, n_actions) = analyze_action_space(action_space)
if vocab_size is None:
policy_wrapper = RawPolicy
else:
obs_serializer = space_serializer.create(observation_space, vocab_size)
act_serializer = space_serializer.create(action_space, vocab_size)
policy_wrapper = functools.partial(SerializedPolicy,
observation_serializer=obs_serializer,
action_serializer=act_serializer)
return policy_wrapper(seq_model, n_controls, n_actions) |
Substitutes the inner weights/state in a {Raw,Serialized}Policy.
Args:
wrapped_policy (pytree): Weights or state of a wrapped policy.
inner_policy (pytree): Weights or state of an inner policy.
vocab_size (int or None): Vocabulary size of a serialized policy, or None
in case of a raw policy.
Returns:
New weights or state of wrapped_policy, with the inner weights/state
copied from inner_policy. | def substitute_inner_policy(wrapped_policy, inner_policy, vocab_size): # pylint: disable=invalid-name
"""Substitutes the inner weights/state in a {Raw,Serialized}Policy.
Args:
wrapped_policy (pytree): Weights or state of a wrapped policy.
inner_policy (pytree): Weights or state of an inner policy.
vocab_size (int or None): Vocabulary size of a serialized policy, or None
in case of a raw policy.
Returns:
New weights or state of wrapped_policy, with the inner weights/state
copied from inner_policy.
"""
if vocab_size is None:
substitute_fn = substitute_inner_policy_raw
else:
substitute_fn = substitute_inner_policy_serialized
return substitute_fn(wrapped_policy, inner_policy) |
Dummy sequence model for testing. | def TestModel(extra_dim, mode='train'):
"""Dummy sequence model for testing."""
del mode
def f(inputs):
# Cast the input to float32 - this is for simulating discrete-input models.
inputs = inputs.astype(np.float32)
# Add an extra dimension if requested, e.g. the logit dimension for output
# symbols.
if extra_dim is not None:
return jnp.broadcast_to(inputs[:, :, None], inputs.shape + (extra_dim,))
else:
return inputs
return layers_base.Fn('TestModel', f) |
Creates a SpaceSerializer for the given Gym space. | def create(space, vocab_size):
"""Creates a SpaceSerializer for the given Gym space."""
return {
gym.spaces.Box: BoxSpaceSerializer,
gym.spaces.Discrete: DiscreteSpaceSerializer,
gym.spaces.MultiDiscrete: MultiDiscreteSpaceSerializer,
}[type(space)](space, vocab_size) |
Play an episode in env taking actions according to the given policy.
The environment is first reset and from then on, the game proceeds. At each
step, the policy is asked to choose an action and the environment moves
forward. A Trajectory is built along the way and returned when the episode
finishes, which is either when env returns `done` or when max_steps is reached.
Args:
env: the environment to play in, conforming to gym.Env or
DeepMind suite interfaces.
policy: a function taking a Trajectory and returning a pair consisting
of an action (int or float) and the confidence in that action (float,
defined as the log of the probability of taking that action).
dm_suite: whether we are using the DeepMind suite or the gym interface
max_steps: for how many steps to play.
last_observation: last observation from a previous trajectory slice, used to
begin a new one. Controls whether we reset the environment at the
beginning - if `None`, resets the env and starts the slice from the
observation returned by reset().
Returns:
a completed trajectory slice that was just played. | def play(env, policy, dm_suite=False, max_steps=None, last_observation=None):
"""Play an episode in env taking actions according to the given policy.
  The environment is first reset and from then on, the game proceeds. At each
  step, the policy is asked to choose an action and the environment moves
  forward. A Trajectory is built along the way and returned when the episode
  finishes, which is either when env returns `done` or when max_steps is reached.
Args:
env: the environment to play in, conforming to gym.Env or
DeepMind suite interfaces.
policy: a function taking a Trajectory and returning a pair consisting
of an action (int or float) and the confidence in that action (float,
defined as the log of the probability of taking that action).
dm_suite: whether we are using the DeepMind suite or the gym interface
max_steps: for how many steps to play.
last_observation: last observation from a previous trajectory slice, used to
begin a new one. Controls whether we reset the environment at the
beginning - if `None`, resets the env and starts the slice from the
      observation returned by reset().
Returns:
a completed trajectory slice that was just played.
"""
done = False
cur_step = 0
if last_observation is None:
# TODO(pkozakowski): Make a Gym wrapper over DM envs to get rid of branches
# like that.
last_observation = env.reset().observation if dm_suite else env.reset()
cur_trajectory = Trajectory(last_observation)
while not done and (max_steps is None or cur_step < max_steps):
action, dist_inputs = policy(cur_trajectory)
action = np.asarray(action)
step = env.step(action)
if dm_suite:
(observation, reward, done) = (
step.observation, step.reward, step.step_type.last()
)
info = {}
else:
(observation, reward, done, info) = step
# Make an EnvInfo out of the supported keys in the info dict.
env_info = EnvInfo(**{
key: value for (key, value) in info.items()
if key in EnvInfo._fields
})
cur_trajectory.extend(
action=action,
dist_inputs=dist_inputs,
reward=reward,
done=done,
new_observation=observation,
env_info=env_info,
)
cur_step += 1
return cur_trajectory |
Helper for np.pad with 0s for single-axis case. | def _zero_pad(x, pad, axis):
"""Helper for np.pad with 0s for single-axis case."""
pad_widths = [(0, 0)] * len(x.shape)
pad_widths[axis] = pad # Padding on axis.
return np.pad(x, pad_widths, mode='constant',
constant_values=x.dtype.type(0)) |
Sample an element from the inputs list proportionally to weights.
Args:
inputs: a list, we will return one element of this list.
weights: a sequence of numbers of the same length as inputs; we will sample
the k-th input with probability weights[k] / sum(weights).
Returns:
an element from inputs. | def _sample_proportionally(inputs, weights):
"""Sample an element from the inputs list proportionally to weights.
Args:
inputs: a list, we will return one element of this list.
weights: a sequence of numbers of the same length as inputs; we will sample
the k-th input with probability weights[k] / sum(weights).
Returns:
an element from inputs.
"""
l = len(inputs)
weights = np.array(weights)
if l != len(weights):
raise ValueError(f'Inputs and weights must have the same length, but do not'
f': {l} != {len(weights)}')
norm_weights = weights / np.sum(weights)
# TODO(pkozakowski): Currently this is O(n). It can be sped up to O(log n) by
# storing CDF and binsearching on it.
idx = np.random.choice(l, p=norm_weights)
return inputs[int(idx)] |
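A quick sanity check of the sampling proportions (illustrative; exact counts vary with the seed):
import numpy as np

np.random.seed(0)
draws = [_sample_proportionally(['a', 'b'], [1, 3]) for _ in range(4000)]
print(draws.count('b') / draws.count('a'))  # roughly 3.0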
How many slices of length up to max_slice_length in a trajectory.
  """How many slices of length up to max_slice_length in a trajectory."""
# TODO(lukaszkaiser): add option to sample from n last trajectories.
if not max_slice_length:
return 1
# A trajectory [a, b, c, end_state] will have 2 slices of length 2:
# the slice [a, b] and the one [b, c], with margin=0; 3 with margin=1.
return max(1, len(trajectory) + margin - max_slice_length) |
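The docstring example, spelled out with a hypothetical mirror of the return expression (assuming len(trajectory) counts all four states, including the end state):
def _n_slices_formula(traj_len, max_slice_length, margin):
  # Mirrors the return expression above, for sanity checking only.
  return max(1, traj_len + margin - max_slice_length)

assert _n_slices_formula(4, 2, margin=0) == 2  # slices [a, b] and [b, c]
assert _n_slices_formula(4, 2, margin=1) == 3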
Helper function to calculate remaining evaluations for a trainer.
Args:
cur_step: current step of the supervised trainer
epoch: current epoch of the RL trainer
train_steps_per_epoch: supervised trainer steps per RL epoch
evals_per_epoch: supervised trainer evals per RL epoch
Returns:
number of remaining evals to do this epoch
Raises:
ValueError if the provided numbers indicate a step mismatch | def remaining_evals(cur_step, epoch, train_steps_per_epoch, evals_per_epoch):
"""Helper function to calculate remaining evaluations for a trainer.
Args:
cur_step: current step of the supervised trainer
epoch: current epoch of the RL trainer
train_steps_per_epoch: supervised trainer steps per RL epoch
evals_per_epoch: supervised trainer evals per RL epoch
Returns:
number of remaining evals to do this epoch
Raises:
ValueError if the provided numbers indicate a step mismatch
"""
if epoch < 1:
raise ValueError('Epoch must be at least 1, got %d' % epoch)
prev_steps = (epoch - 1) * train_steps_per_epoch
done_steps_this_epoch = cur_step - prev_steps
if done_steps_this_epoch < 0:
raise ValueError('Current step (%d) < previously done steps (%d).'
% (cur_step, prev_steps))
train_steps_per_eval = train_steps_per_epoch // evals_per_epoch
if done_steps_this_epoch % train_steps_per_eval != 0:
    raise ValueError('Train steps per eval (%d) must divide done steps (%d).'
                     % (train_steps_per_eval, done_steps_this_epoch))
return evals_per_epoch - (done_steps_this_epoch // train_steps_per_eval) |
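A worked example: with 100 train steps and 4 evals per RL epoch, halfway through epoch 2 (cur_step=150) two of the four evals remain.
assert remaining_evals(cur_step=150, epoch=2,
                       train_steps_per_epoch=100, evals_per_epoch=4) == 2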
Expert function that runs a policy network with lower temperature.
Args:
temperature: Temperature passed from the Agent.
temperature_multiplier: Multiplier to apply to the temperature to "sharpen"
the policy distribution. Should be <= 1, but this is not a requirement.
**kwargs: Keyword arguments passed to network_policy.
Returns:
Pair (action, dist_inputs) where action is the action taken and dist_inputs
is the parameters of the policy distribution, that will later be used for
training. | def sharpened_network_policy(
temperature,
temperature_multiplier=1.0,
**kwargs
):
"""Expert function that runs a policy network with lower temperature.
Args:
temperature: Temperature passed from the Agent.
temperature_multiplier: Multiplier to apply to the temperature to "sharpen"
the policy distribution. Should be <= 1, but this is not a requirement.
**kwargs: Keyword arguments passed to network_policy.
Returns:
Pair (action, dist_inputs) where action is the action taken and dist_inputs
is the parameters of the policy distribution, that will later be used for
training.
"""
return network_policy(
temperature=(temperature_multiplier * temperature),
**kwargs
) |
Policy function powered by a neural network.
Used to implement Agent.policy() in policy-based agents.
Args:
collect_model: the model used for collecting trajectories
policy_distribution: an instance of trax.rl.distributions.Distribution
loop: trax.supervised.training.Loop used to train the policy network
trajectory_np: an instance of trax.rl.task.TimeStepBatch
head_index: index of the policy head in a multihead model.
temperature: temperature used to sample from the policy (default=1.0)
Returns:
a pair (action, dist_inputs) where action is the action taken and
dist_inputs is the parameters of the policy distribution, that will later
be used for training. | def network_policy(
collect_model,
policy_distribution,
loop,
trajectory_np,
head_index=0,
temperature=1.0,
):
"""Policy function powered by a neural network.
Used to implement Agent.policy() in policy-based agents.
Args:
collect_model: the model used for collecting trajectories
policy_distribution: an instance of trax.rl.distributions.Distribution
loop: trax.supervised.training.Loop used to train the policy network
trajectory_np: an instance of trax.rl.task.TimeStepBatch
    head_index: index of the policy head in a multihead model.
temperature: temperature used to sample from the policy (default=1.0)
Returns:
a pair (action, dist_inputs) where action is the action taken and
dist_inputs is the parameters of the policy distribution, that will later
be used for training.
"""
if temperature == 1.0:
model = collect_model
else:
# When evaluating (t != 1.0), use the evaluation model instead of the
# collection model - some models accumulate normalization statistics
# during data collection, and we don't want to do it in eval to avoid data
# leakage.
model = loop.eval_model
model.state = collect_model.state
# Copying weights from loop.model should work, because the raw model's
# weights should be updated automatically during training, but it doesn't.
# TODO(pkozakowski): Debug.
acc = loop._trainer_per_task[0].accelerated_model_with_loss # pylint: disable=protected-access
model.weights = acc._unreplicate(acc.weights[0]) # pylint: disable=protected-access
# Add batch dimension to trajectory_np and run the model.
pred = model(trajectory_np.observation[None, ...])
if isinstance(pred, (tuple, list)):
# For multihead models, extract the policy head output.
pred = pred[head_index]
assert pred.shape == (
1, trajectory_np.observation.shape[0], policy_distribution.n_inputs
)
# Pick element 0 from the batch (the only one), last (current) timestep.
pred = pred[0, -1, :]
sample = policy_distribution.sample(pred, temperature=temperature)
result = (sample, pred)
if fastmath.is_backend(fastmath.Backend.JAX):
# The result is composed of mutable numpy arrays. We copy them to avoid
# accidental modification.
result = fastmath.nested_map(lambda x: x.copy(), result)
return result |
Generate `n` random sequences of length `length` and yield with copies. | def copy_stream(length, low=2, high=15, n=1):
"""Generate `n` random sequences of length `length` and yield with copies."""
while True:
res = []
for _ in range(n):
seq = np.random.randint(low, high, size=(length,), dtype=np.int32)
res.extend([seq, seq])
yield res |
Token-level accuracy. | def _accuracy(seq1, seq2):
"""Token-level accuracy."""
seq1, seq2 = np.array(seq1), np.array(seq2)
max_length = max(seq1.shape[-1], seq2.shape[-1])
min_length = min(seq1.shape[-1], seq2.shape[-1])
seq1s, seq2s = seq1[..., :min_length], seq2[..., :min_length]
return np.sum(np.equal(seq1s, seq2s)) / max_length |
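Note that length mismatches are penalized: only the overlapping prefix is compared, but the denominator is the longer length.
import numpy as np

assert _accuracy(np.array([1, 2, 3]), np.array([1, 2])) == 2 / 3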
Creates a function that generates the Multibonacci sequence modulo n. | def make_multibonacci_modulo(history_length, limit):
"""Creates a function that generates the Multibonacci sequence modulo n."""
def sequence_fn(seq):
return np.sum(seq[-history_length:]) % limit
return sequence_fn |
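With history_length=2 and limit=10 this is just Fibonacci modulo 10:
fib_mod10 = make_multibonacci_modulo(history_length=2, limit=10)
seq = [1, 1]
for _ in range(5):
  seq.append(fib_mod10(seq))
print(seq)  # [1, 1, 2, 3, 5, 8, 3]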
Generates random actions and observations that follow sequence_fn. | def generate_trajectory(sequence_fn, space, n_steps):
"""Generates random actions and observations that follow sequence_fn."""
act = [space.sample() for _ in range(n_steps)]
obs = [space.sample()]
for (o, a) in zip(
obs,
act[:-1], # Don't generate the last observation.
):
context = list(np.array([o, a]).flatten())
symbols = []
for _ in range(np.array(o).size):
symbol = sequence_fn(context + symbols)
symbols.append(symbol)
obs.append(np.reshape(symbols, space.shape))
obs = np.array([obs])
act = np.array([act])
return (obs, act) |
Creates an EvalTask with just one example. | def make_singleton_eval_task(observations, actions):
"""Creates an EvalTask with just one example."""
mask = np.ones(observations.shape[:2])
def data():
while True:
yield (observations, actions, observations, mask)
return training.EvalTask(
labeled_data=data(),
metrics=[],
) |
Yields samples from `model`, in autoregressive language model fashion.
This function uses `model` to generate outputs one position at a time, with
access to inputs for the current position and all preceding positions. The
new output becomes the next position's input, and further calls to
`autoregressive_sample_stream` repeat the process for successive positions
indefinitely.
Inputs and outputs always come in batches, even if size 1. If `inputs` is
present, it must have shape (`batch_size`, inputs_sequence_length), and each
output in the stream has shape (`batch_size`, 1).
Args:
model: A layer object (subclass of `trax.layers.Layer`) created in
`'predict'` mode and initialized from trained weights. The model
must have a structure that allows it to run as an autoregressive
one-sample-at-a-time predictor (e.g., `trax.models.TransformerLM`),
except if `eval_mode` is set -- any model can be sampled then,
but the sampling process may be much slower.
inputs: Sequence of symbols the model sees as input the first time it
generates an output. If None, the model generates the first output
based on just the start symbol.
batch_size: Number of sequences to generate in parallel as a batch.
temperature: Parameter that controls the sharpness of the softmax that
feeds the sampling process. Values range from 0.0 (all probability mass
goes to one candidate; like an argmax) to positive infinity (all
candidates have equal probability).
start_id: Integer representing the start symbol for the autoregressive
process, or array of shape (`batch_size`, 1) of such integers.
accelerate: If True, create an accelerated version of `model` and use it
for generating outputs.
eval_mode: If True, assume the model is created in `eval` mode and sample
by collecting all previous outputs and passing the whole tensor.
eval_min_length: If set, the minimum length to pad to in eval mode.
Yields:
Tensor of integers with shape (`batch_size`, 1), representing the batch of
outputs for the next position in the stream. | def autoregressive_sample_stream(model, inputs=None,
batch_size=1, temperature=1.0,
start_id=0, accelerate=True,
eval_mode=False, eval_min_length=1):
"""Yields samples from `model`, in autoregressive language model fashion.
This function uses `model` to generate outputs one position at a time, with
access to inputs for the current position and all preceding positions. The
new output becomes the next position's input, and further calls to
`autoregressive_sample_stream` repeat the process for successive positions
indefinitely.
Inputs and outputs always come in batches, even if size 1. If `inputs` is
present, it must have shape (`batch_size`, inputs_sequence_length), and each
output in the stream has shape (`batch_size`, 1).
Args:
model: A layer object (subclass of `trax.layers.Layer`) created in
`'predict'` mode and initialized from trained weights. The model
must have a structure that allows it to run as an autoregressive
one-sample-at-a-time predictor (e.g., `trax.models.TransformerLM`),
except if `eval_mode` is set -- any model can be sampled then,
but the sampling process may be much slower.
inputs: Sequence of symbols the model sees as input the first time it
generates an output. If None, the model generates the first output
based on just the start symbol.
batch_size: Number of sequences to generate in parallel as a batch.
temperature: Parameter that controls the sharpness of the softmax that
feeds the sampling process. Values range from 0.0 (all probability mass
goes to one candidate; like an argmax) to positive infinity (all
candidates have equal probability).
start_id: Integer representing the start symbol for the autoregressive
process, or array of shape (`batch_size`, 1) of such integers.
accelerate: If True, create an accelerated version of `model` and use it
for generating outputs.
eval_mode: If True, assume the model is created in `eval` mode and sample
by collecting all previous outputs and passing the whole tensor.
eval_min_length: If set, the minimum length to pad to in eval mode.
Yields:
Tensor of integers with shape (`batch_size`, 1), representing the batch of
outputs for the next position in the stream.
"""
if inputs is not None and inputs.shape[0] != batch_size:
raise ValueError(f'Inputs batch size ({inputs.shape[0]}) does not match '
                     f'batch_size arg ({batch_size}).')
fast_model = tl.Accelerate(model) if accelerate else model
if np.isscalar(start_id):
start_symbol = np.full((batch_size, 1), start_id, dtype=np.int32)
else:
start_symbol = start_id
if model.n_in == 1 and inputs is not None:
current_symbols = np.concatenate([start_symbol, inputs], axis=1)
else:
current_symbols = start_symbol
if eval_mode:
# no start symbol needed in eval mode
current_symbols = current_symbols[:, 1:]
while True:
# Pad inputs to power-of-2 length if needed.
if eval_mode:
# one extra symbol as an initial one will be added
l = max(eval_min_length, current_symbols.shape[1] + 1)
pad_len = int(2**np.ceil(np.log2(l))) - current_symbols.shape[1]
unpadded_symbols = current_symbols
current_symbols = np.pad(
current_symbols, [[0, 0], [0, pad_len]], mode='constant')
last_index = -pad_len # no -1 as the starting one will be added
else:
last_index = -1
# Run the model.
if model.n_in > 1 and inputs is not None:
logits = fast_model((inputs, current_symbols))[0]
else:
logits = fast_model(current_symbols)
logits = tl.log_softmax(logits[:, last_index, :])
sample = tl.logsoftmax_sample(logits, temperature=temperature)
yield sample
if eval_mode:
current_symbols = np.concatenate(
[unpadded_symbols, sample[:, None]], axis=1)
else:
# NOTE: Because the model is autoregressive and in 'predict' mode, its
# history is cached in the model state and the next input is the single
# symbol just sampled.
current_symbols = sample[:, None] |
Returns a batch of sequences created by autoregressive sampling.
This function uses `model` to generate outputs one position at a time, with
access to inputs for the current position and all preceding positions. The
new output becomes the next position's input, and this loop repeats until
either the model outputs the `eos_id` value or the output sequence reaches
`max_length` items.
Args:
model: A layer object (subclass of `trax.layers.Layer`) created in
`'predict'` mode and initialized from trained weights. The model
must have a structure that allows it to run as autoregressive
one-sample-at-a-time predictor (e.g., `trax.models.TransformerLM`),
except if `eval_mode` is set -- any model can be sampled then,
but the sampling process may be much slower.
inputs: Sequence of symbols the model sees as input the first time it
generates an output. If None, the model must generate the first output
with no input to guide it.
batch_size: Number of sequences to generate in parallel as a batch.
temperature: Parameter that controls the sharpness of the softmax that
feeds the sampling process. Values range from 0.0 (all probability mass
goes to one candidate; like an argmax) to positive infinity (all
candidates have equal probability).
start_id: The start symbol (ID/integer) for the autoregressive process,
or array of shape (`batch_size`, 1) of such integers.
eos_id: The end-of-sequence symbol (ID/integer) for the autoregressive
process.
max_length: Maximum length for generated sequences.
accelerate: If True, create an accelerated version of `model` and use it
for generating outputs.
eval_mode: If True, assume the model is created in `eval` mode and sample
by collecting all previous outputs and passing the whole tensor.
eval_min_length: If set, the minimum length to pad to in eval mode.
Returns:
Tensor of integers with shape (`batch_size`, output_length) representing
a batch of output sequences. output_length is the maximum length of the
output sequences, where each sequence can be no longer than `max_length`. | def autoregressive_sample(model, inputs=None,
batch_size=1, temperature=1.0,
start_id=0, eos_id=1, max_length=100,
accelerate=True, eval_mode=False, eval_min_length=1):
"""Returns a batch of sequences created by autoregressive sampling.
This function uses `model` to generate outputs one position at a time, with
access to inputs for the current position and all preceding positions. The
new output becomes the next position's input, and this loop repeats until
either the model outputs the `eos_id` value or the output sequence reaches
`max_length` items.
Args:
model: A layer object (subclass of `trax.layers.Layer`) created in
`'predict'` mode and initialized from trained weights. The model
must have a structure that allows it to run as autoregressive
one-sample-at-a-time predictor (e.g., `trax.models.TransformerLM`),
except if `eval_mode` is set -- any model can be sampled then,
but the sampling process may be much slower.
inputs: Sequence of symbols the model sees as input the first time it
generates an output. If None, the model must generate the first output
with no input to guide it.
batch_size: Number of sequences to generate in parallel as a batch.
temperature: Parameter that controls the sharpness of the softmax that
feeds the sampling process. Values range from 0.0 (all probability mass
goes to one candidate; like an argmax) to positive infinity (all
candidates have equal probability).
start_id: The start symbol (ID/integer) for the autoregressive process,
or array of shape (`batch_size`, 1) of such integers.
eos_id: The end-of-sequence symbol (ID/integer) for the autoregressive
process.
max_length: Maximum length for generated sequences.
accelerate: If True, create an accelerated version of `model` and use it
for generating outputs.
eval_mode: If True, assume the model is created in `eval` mode and sample
by collecting all previous outputs and passing the whole tensor.
eval_min_length: If set, the minimum length to pad to in eval mode.
Returns:
Tensor of integers with shape (`batch_size`, output_length) representing
a batch of output sequences. output_length is the maximum length of the
output sequences, where each sequence can be no longer than `max_length`.
"""
result = []
eos_seen = []
counter = 0
for sample in autoregressive_sample_stream(
model, inputs, batch_size=batch_size, temperature=temperature,
start_id=start_id, accelerate=accelerate, eval_mode=eval_mode,
eval_min_length=eval_min_length):
sample = sample[:, None]
result.append(sample)
counter += 1
if counter >= max_length:
return np.concatenate(result, axis=1)
# Check at which batch positions have we already encountered EOS.
for j in range(batch_size):
if int(sample[j, 0]) == eos_id:
eos_seen.append(j)
# If EOS has been seen on all positions, stop.
    if all(j in eos_seen for j in range(batch_size)):
return np.concatenate(result, axis=1)
return np.concatenate(result, axis=1) |
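Typical usage, sketched: build a model in 'predict' mode, restore trained weights, and sample. The checkpoint path here is hypothetical.
import trax

model = trax.models.TransformerLM(vocab_size=256, mode='predict')
model.init_from_file('model.pkl.gz', weights_only=True)  # illustrative path
tokens = autoregressive_sample(model, batch_size=1, temperature=0.8,
                               max_length=50)
print(tokens.shape)  # (1, n) with n <= 50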
Returns a batch of n_beams-sequences created by beam search.
This function uses `model` to generate outputs one position at a time, with
access to inputs for the current position and all preceding positions. The
new output becomes the next position's input, and this loop repeats until
either the model outputs the `eos_id` value or the output sequence reaches
`max_length` items -- but keeping n_beams top beams.
Args:
model: A layer object (subclass of `trax.layers.Layer`) created in
`'predict'` mode and initialized from trained weights. The model
must have a structure that allows it to run as autoregressive
one-sample-at-a-time predictor (e.g., `trax.models.TransformerLM`).
inputs: Sequence of symbols the model sees as input the first time it
generates an output. If None, the model must generate the first output
with no input to guide it.
batch_size: Number of sequences to generate in parallel as a batch.
n_beams: How many beams to consider at the same time.
start_id: The start symbol (ID/integer) for the autoregressive process,
or array of shape (`batch_size`, 1) of such integers.
eos_id: The end-of-sequence symbol (ID/integer) for the autoregressive
process.
max_length: Maximum length for generated sequences.
length_penalty: Factor alpha in calculating the length penalty for beams.
accelerate: If True, create an accelerated version of `model` and use it
for generating outputs.
Returns:
Tensor of integers with shape (`batch_size`, n_beams, output_length) with
a batch of output sequences. output_length is the maximum length of the
output sequences, where each sequence can be no longer than `max_length`. | def beam_search(model, inputs=None, batch_size=1, n_beams=2, start_id=0,
eos_id=1, max_length=100, length_penalty=1.0, accelerate=True):
"""Returns a batch of n_beams-sequences created by beam search.
This function uses `model` to generate outputs one position at a time, with
access to inputs for the current position and all preceding positions. The
new output becomes the next position's input, and this loop repeats until
either the model outputs the `eos_id` value or the output sequence reaches
`max_length` items -- but keeping n_beams top beams.
Args:
model: A layer object (subclass of `trax.layers.Layer`) created in
`'predict'` mode and initialized from trained weights. The model
must have a structure that allows it to run as autoregressive
one-sample-at-a-time predictor (e.g., `trax.models.TransformerLM`).
inputs: Sequence of symbols the model sees as input the first time it
generates an output. If None, the model must generate the first output
with no input to guide it.
batch_size: Number of sequences to generate in parallel as a batch.
n_beams: How many beams to consider at the same time.
start_id: The start symbol (ID/integer) for the autoregressive process,
or array of shape (`batch_size`, 1) of such integers.
eos_id: The end-of-sequence symbol (ID/integer) for the autoregressive
process.
max_length: Maximum length for generated sequences.
length_penalty: Factor alpha in calculating the length penalty for beams.
accelerate: If True, create an accelerated version of `model` and use it
for generating outputs.
Returns:
Tensor of integers with shape (`batch_size`, n_beams, output_length) with
a batch of output sequences. output_length is the maximum length of the
output sequences, where each sequence can be no longer than `max_length`.
"""
del eos_id, length_penalty # TODO(lukaszkaiser): add length penalty, eos
assert batch_size == 1, 'Batch size > 1 not supported yet'
if inputs is not None and inputs.shape[0] != batch_size:
raise ValueError(f'Inputs batch size ({inputs.shape[0]}) does not match '
                     f'batch_size arg ({batch_size}).')
fast_model = tl.Accelerate(model) if accelerate else model
if np.isscalar(start_id):
start_symbol = np.full((batch_size, 1), start_id, dtype=np.int32)
else:
start_symbol = start_id
if model.n_in == 1 and inputs is not None:
current_symbols = np.concatenate([start_symbol, inputs], axis=1)
else:
current_symbols = start_symbol
beams = [current_symbols for _ in range(n_beams)]
results = [([], 0.0) for _ in range(n_beams)]
states = [fast_model.state for _ in range(n_beams)]
top_k = [None] * n_beams
counter = 0
while counter < max_length:
counter += 1
# Run the model on all beams, collect states and top_k for each beam.
for beam_id in range(n_beams if counter > 1 else 1):
fast_model.state = states[beam_id]
if model.n_in > 1 and inputs is not None:
logits = fast_model((inputs, beams[beam_id]))[0]
else:
logits = fast_model(beams[beam_id])
logits = tl.log_softmax(logits[:, -1, :])
states[beam_id] = fast_model.state
top_k[beam_id] = fastmath.top_k(logits, k=n_beams)
# Select new beams.
cur_values = [] # will hold triples (sum-of-logprobs, beam-id, symbol)
for beam_id in range(n_beams if counter > 1 else 1):
for k in range(n_beams):
values, symbols = top_k[beam_id]
value, symbol = values[:, k], symbols[:, k]
cur_values.append((results[beam_id][1] + value, beam_id, symbol))
cur_values.sort(key=lambda x: -x[0][0]) # x[0][0] as batch_size=1
# Collect top beams to the new states and results.
new_results, new_states, new_beams = [], [], []
for (value, beam_id, symbol) in cur_values[:n_beams]:
new_results.append((results[beam_id][0] + [symbol], value))
new_states.append(states[beam_id]) # copy?
new_beams.append(symbol[:, None])
results, states, beams = new_results, new_states, new_beams
return [(np.stack(r, axis=-1), v) for (r, v) in results] |
Returns an LR schedule that is constant from time (step) 1 to infinity. | def constant(value):
"""Returns an LR schedule that is constant from time (step) 1 to infinity."""
return _BodyAndTail(value, body_start=1) |
Returns an LR schedule with linear warm-up followed by constant value.
Args:
n_warmup_steps: Number of steps during which the learning rate rises on
a line connecting (0, 0) and (n_warmup_steps, max_value).
max_value: Value for learning rate after warm-up has finished. | def warmup(n_warmup_steps, max_value):
"""Returns an LR schedule with linear warm-up followed by constant value.
Args:
n_warmup_steps: Number of steps during which the learning rate rises on
a line connecting (0, 0) and (n_warmup_steps, max_value).
max_value: Value for learning rate after warm-up has finished.
"""
return _BodyAndTail(max_value, body_start=n_warmup_steps + 1) |
Returns an LR schedule with warm-up + reciprocal square root decay. | def warmup_and_rsqrt_decay(n_warmup_steps, max_value):
"""Returns an LR schedule with warm-up + reciprocal square root decay."""
return _BodyAndTail(max_value, tail_start=n_warmup_steps + 1, tail_fn=_rsqrt) |
Factor-based learning rate schedule.
Interprets factors in the factors string which can consist of:
* constant: interpreted as the constant value,
* linear_warmup: interpreted as linear warmup until warmup_steps,
* rsqrt_decay: divide by square root of max(step, warmup_steps)
* decay_every: Every k steps decay the learning rate by decay_factor.
* cosine_decay: Cyclic cosine decay, uses steps_per_cycle parameter.
* two_constants: constant until second_constant_step, then switch to
second_constant.
Args:
factors: a string with factors separated by '*' that defines the schedule.
constant: float, the starting constant for the learning rate schedule.
warmup_steps: how many steps to warm up for in the warmup schedule.
decay_factor: The amount to decay the learning rate by.
steps_per_decay: How often to decay the learning rate.
steps_per_cycle: Steps per cycle when using cosine decay.
second_constant: float, the second constant for the learning rate schedule.
second_constant_step: the step when the second_constant is triggered.
minimum: if the computed rate is below the minimum, then return the minimum.
Returns:
a function learning_rate(step): float -> {'learning_rate': float}, the
step-dependent lr. | def multifactor(factors='constant * linear_warmup * rsqrt_decay',
constant=0.1, # pylint: disable=redefined-outer-name
warmup_steps=400,
decay_factor=0.5,
steps_per_decay=20000,
steps_per_cycle=100000,
second_constant=0.01,
second_constant_step=10000,
minimum=0):
"""Factor-based learning rate schedule.
Interprets factors in the factors string which can consist of:
* constant: interpreted as the constant value,
* linear_warmup: interpreted as linear warmup until warmup_steps,
* rsqrt_decay: divide by square root of max(step, warmup_steps)
* decay_every: Every k steps decay the learning rate by decay_factor.
  * cosine_decay: Cyclic cosine decay, uses steps_per_cycle parameter.
* two_constants: constant until second_constant_step, then switch to
second_constant.
Args:
factors: a string with factors separated by '*' that defines the schedule.
constant: float, the starting constant for the learning rate schedule.
warmup_steps: how many steps to warm up for in the warmup schedule.
decay_factor: The amount to decay the learning rate by.
steps_per_decay: How often to decay the learning rate.
steps_per_cycle: Steps per cycle when using cosine decay.
second_constant: float, the second constant for the learning rate schedule.
second_constant_step: the step when the second_constant is triggered.
minimum: if the computed rate is below the minimum, then return the minimum.
Returns:
a function learning_rate(step): float -> {'learning_rate': float}, the
step-dependent lr.
"""
factors = [n.strip() for n in factors.split('*')]
def learning_rate(step):
"""Step to learning rate function."""
ret = 1.0
for name in factors:
if name == 'constant':
ret *= constant
elif name == 'two_constants':
if step < second_constant_step:
ret *= constant
else:
ret *= second_constant
elif name == 'linear_warmup':
ret *= jnp.minimum(1.0, step / warmup_steps)
elif name == 'rsqrt_decay':
ret /= jnp.sqrt(jnp.maximum(step, warmup_steps))
elif name == 'rsqrt_normalized_decay':
ret *= jnp.sqrt(warmup_steps)
ret /= jnp.sqrt(jnp.maximum(step, warmup_steps))
elif name == 'decay_every':
ret *= (decay_factor ** (step//steps_per_decay))
elif name == 'cosine_decay':
progress = jnp.maximum(
0.0, (step - warmup_steps) / float(steps_per_cycle))
ret *= (0.5 * (1.0 + jnp.cos(jnp.pi * (progress % 1.0))))
else:
raise ValueError('Unknown factor %s.' % name)
# TODO(henrykm): return float(jnp.max(minimum, ret)) would be
# better but causes TypeError: 'numpy.float64' object cannot
# be interpreted as an integer
if ret <= minimum:
return minimum
return ret
return learning_rate |
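Evaluated at a few steps, the default factor string rises linearly during warm-up and then decays as the reciprocal square root of the step:
lr = multifactor(constant=0.1, warmup_steps=400)
for step in [1, 200, 400, 1600]:
  print(step, lr(step))
# At step 400 the rate is 0.1 * 1.0 / sqrt(400) = 0.005; by step 1600 it halves.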
Computes a tail using a scaled reciprocal square root of step number.
Args:
step_number: Absolute step number from the start of training.
tail_start: Step number at which the tail of the curve starts.
body_value: Value relative to which the tail should be computed.
Returns:
A learning rate value that falls as the reciprocal square root of the step
number, scaled so that it joins smoothly with the body of a BodyAndTail
instance. | def _rsqrt(step_number, tail_start, body_value):
"""Computes a tail using a scaled reciprocal square root of step number.
Args:
step_number: Absolute step number from the start of training.
tail_start: Step number at which the tail of the curve starts.
body_value: Value relative to which the tail should be computed.
Returns:
A learning rate value that falls as the reciprocal square root of the step
number, scaled so that it joins smoothly with the body of a BodyAndTail
instance.
"""
return body_value * (math.sqrt(tail_start) / math.sqrt(step_number)) |
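A quick worked example: with the body at 0.01 and the tail starting at step 1000, quadrupling the step count halves the rate.

_rsqrt(1000, tail_start=1000, body_value=0.01)  # 0.01, joins the body exactly
_rsqrt(4000, tail_start=1000, body_value=0.01)  # 0.01 * sqrt(1000/4000) = 0.005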
Loads (and caches) the standard MNIST data set. | def _mnist_dataset():
"""Loads (and caches) the standard MNIST data set."""
streams = tf_inputs.data_streams('mnist')
return inputs.batcher(streams, variable_shapes=False,
batch_size_per_device=256,
eval_batch_size=256) |
Loads (and caches) a MNIST mean brightness data set. | def _mnist_brightness_dataset():
"""Loads (and caches) a MNIST mean brightness data set."""
def preprocess_stream(stream):
def new_stream():
for (image, _) in stream():
yield (image, (image / 255).mean()[None])
return new_stream
streams = tuple(map(preprocess_stream, tf_inputs.data_streams('mnist')))
return inputs.batcher(streams, variable_shapes=False,
batch_size_per_device=256,
eval_batch_size=256) |
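The stream-wrapping pattern above is easy to check in isolation. A self-contained sketch with a fake one-example stream (preprocess_stream is local to the function, so it is restated here under a hypothetical name):

import numpy as np

def _to_brightness(stream):
  def new_stream():
    for (image, _) in stream():
      yield (image, (image / 255).mean()[None])
  return new_stream

fake_stream = lambda: iter([(np.full((28, 28), 51.0), 3)])
for _, target in _to_brightness(fake_stream)():
  print(target)  # [0.2] -- mean brightness replaces the class label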
Creates MNIST training and evaluation tasks.
Args:
head: Adaptor layer to put before loss and accuracy layers in the tasks.
Returns:
A pair (train_task, eval_task) consisting of the MNIST training task and the
MNIST evaluation task using cross-entropy as loss and accuracy as metric. | def _mnist_tasks(head=None):
"""Creates MNIST training and evaluation tasks.
Args:
head: Adaptor layer to put before loss and accuracy layers in the tasks.
Returns:
A pair (train_task, eval_task) consisting of the MNIST training task and the
MNIST evaluation task using cross-entropy as loss and accuracy as metric.
"""
loss = tl.WeightedCategoryCrossEntropy()
accuracy = tl.WeightedCategoryAccuracy()
if head is not None:
loss = tl.Serial(head, loss)
accuracy = tl.Serial(head, accuracy)
task = training.TrainTask(
itertools.cycle(_mnist_dataset().train_stream(1)),
loss,
adam.Adam(0.001),
)
eval_task = training.EvalTask(
itertools.cycle(_mnist_dataset().eval_stream(1)),
[loss, accuracy],
n_eval_batches=10,
metric_names=['CrossEntropy', 'WeightedCategoryAccuracy'],
)
return (task, eval_task) |
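A sketch of how these tasks plug into a training loop; the MLP stack and output directory below are purely illustrative:

train_task, eval_task = _mnist_tasks()
model = tl.Serial(tl.Flatten(), tl.Dense(128), tl.Relu(), tl.Dense(10))
loop = training.Loop(model, [train_task], eval_tasks=[eval_task],
                     output_dir='/tmp/mnist_mlp')  # hypothetical path
loop.run(n_steps=100)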
Streams batches of examples from tfds, with pure-python preprocessing. | def _tfds_stream(n_devices,
dataset_name,
split,
batch_size,
data_dir,
shuffle_files,
shuffle_buffer_size,
batch_shuffle_size,
preprocess_fun,
repeat=True):
"""Streams batches of examples from tfds, with pure-python preprocessing."""
# TODO(piotrekp1): delete if switched to data_streams
if batch_size % n_devices != 0:
    raise ValueError(f'Batch size ({batch_size}) not divisible '
                     f'by number of devices ({n_devices}).')
ds = tfds.load(
name=dataset_name,
split=split,
data_dir=data_dir,
shuffle_files=shuffle_files)
if repeat:
ds = ds.repeat()
if shuffle_buffer_size is not None:
ds = ds.shuffle(shuffle_buffer_size)
ds = ds.batch(batch_size)
if batch_shuffle_size is not None:
ds = ds.shuffle(batch_shuffle_size)
for batch in tfds.as_numpy(ds):
if preprocess_fun is not None:
yield preprocess_fun(batch)
else:
yield batch |
TensorFlow Datasets input pipeline, with pure-python preprocessing. | def tfds_inputs(
dataset_name,
preprocess_fun,
batch_size,
eval_batch_size=None,
data_dir=None,
train_split=tfds.Split.TRAIN,
eval_split=tfds.Split.VALIDATION,
shuffle_buffer_size=1024,
batch_shuffle_size=128,
):
"""Tensorflow Datasets input pipeline, with pure-python preprocessing."""
if eval_batch_size is None:
eval_batch_size = batch_size
return Inputs(
train_stream=functools.partial(
_tfds_stream,
dataset_name=dataset_name,
split=train_split,
batch_size=batch_size,
data_dir=data_dir,
shuffle_files=True,
shuffle_buffer_size=shuffle_buffer_size,
batch_shuffle_size=batch_shuffle_size,
preprocess_fun=preprocess_fun,
),
eval_stream=functools.partial(
_tfds_stream,
dataset_name=dataset_name,
split=eval_split,
batch_size=eval_batch_size,
data_dir=data_dir,
shuffle_files=False,
shuffle_buffer_size=None,
batch_shuffle_size=None,
preprocess_fun=preprocess_fun,
),
) |
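A hedged usage sketch, wiring tfds_inputs to a toy preprocessing function (field names follow the standard tfds 'mnist' schema; note mnist has no VALIDATION split, hence eval_split=tfds.Split.TEST):

def _flatten_images(batch):
  # tfds 'mnist' batches are dicts with 'image' and 'label' arrays.
  return batch['image'].reshape(len(batch['label']), -1), batch['label']

mnist_inputs = tfds_inputs('mnist', preprocess_fun=_flatten_images,
                           batch_size=32, eval_split=tfds.Split.TEST)
images, labels = next(mnist_inputs.train_stream(1))  # images: (32, 784)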
Constructs a BERT tokenizer. | def bert_tokenizer(vocab_path=None):
"""Constructs a BERT tokenizer."""
# This import is from https://github.com/google-research/bert which is not
# listed as a dependency in trax.
# TODO(piotrekp1): using SubwordTextEncoder instead after fixing the
# differences
from bert.tokenization.bert_tokenization import FullTokenizer # pylint: disable=g-import-not-at-top
if vocab_path is None:
raise ValueError('vocab_path is required to construct the BERT tokenizer.')
tokenizer = FullTokenizer(vocab_path, do_lower_case=True)
return tokenizer |
Tokenize and convert text to model inputs in a BERT format. | def bert_preprocess(batch, tokenizer, key_a, key_b=None, max_len=128):
"""Tokenize and convert text to model inputs in a BERT format."""
batch_size = batch['idx'].shape[0]
input_ids = onp.zeros((batch_size, max_len), dtype=onp.int32)
type_ids = onp.zeros((batch_size, max_len), dtype=onp.int32)
for i in range(batch_size):
sentence_a = batch[key_a][i]
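    # 101 and 102 are the [CLS] and [SEP] token ids in the standard BERT vocab.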
tokens_a = [101] + tokenizer.convert_tokens_to_ids(
tokenizer.tokenize(sentence_a)) + [102]
if key_b is not None:
sentence_b = batch[key_b][i]
tokens_b = tokenizer.convert_tokens_to_ids(
tokenizer.tokenize(sentence_b)) + [102]
else:
tokens_b = []
ex_input_ids = (tokens_a + tokens_b)[:max_len]
ex_type_ids = ([0] * len(tokens_a) + [1] * len(tokens_b))[:max_len]
input_ids[i, :len(ex_input_ids)] = ex_input_ids
type_ids[i, :len(ex_type_ids)] = ex_type_ids
return input_ids, type_ids, input_ids > 0, batch['label'], onp.ones(
batch_size) |
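To see the produced format without the bert dependency, a stub tokenizer is enough; everything below is made up except the [CLS]=101 / [SEP]=102 framing:

class _StubTokenizer:
  def tokenize(self, text):
    return text.split()
  def convert_tokens_to_ids(self, tokens):
    return [1000 + len(t) for t in tokens]  # deterministic fake ids

batch = {'idx': onp.arange(1), 'label': onp.array([1]),
         'sent': onp.array(['a tiny example'])}
ids, types, mask, labels, weights = bert_preprocess(
    batch, _StubTokenizer(), key_a='sent', max_len=8)
# ids[0] -> [101, 1001, 1004, 1007, 102, 0, 0, 0]; mask[0] is True for the
# first five positions; types[0] is all zeros since there is no second text.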
Input pipeline for fine-tuning BERT on GLUE tasks. | def glue_inputs(dataset_name=gin.REQUIRED,
batch_size=16,
eval_batch_size=None,
data_dir=None,
max_len=128,
tokenizer=bert_tokenizer):
"""Input pipeline for fine-tuning BERT on GLUE tasks."""
if callable(tokenizer): # If we pass a function, e.g., through gin, call it.
tokenizer = bert_tokenizer()
eval_split = tfds.Split.VALIDATION
if dataset_name == 'glue/mnli':
eval_split = 'validation_matched'
# TODO(kitaev): Support diagnostic dataset (AX)
keys_lookup = {
'glue/cola': ('sentence', None),
'glue/sst2': ('sentence', None),
'glue/mrpc': ('sentence1', 'sentence2'),
'glue/qqp': ('question1', 'question2'),
'glue/stsb': ('sentence1', 'sentence2'),
'glue/mnli': ('premise', 'hypothesis'), # TODO(kitaev): swap the two?
'glue/qnli': ('question', 'sentence'), # TODO(kitaev) swap the two?
'glue/rte': ('sentence1', 'sentence2'),
'glue/wnli': ('sentence1', 'sentence2'),
}
key_a, key_b = keys_lookup[dataset_name]
preprocess_fn = functools.partial(
bert_preprocess,
tokenizer=tokenizer,
key_a=key_a,
key_b=key_b,
max_len=max_len)
return tfds_inputs( # TODO(piotrekp1): use data_streams instead
dataset_name=dataset_name,
preprocess_fun=preprocess_fn,
batch_size=batch_size,
eval_batch_size=eval_batch_size,
data_dir=data_dir,
train_split=tfds.Split.TRAIN,
eval_split=eval_split) |
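A usage sketch; the vocab path is hypothetical, and in practice dataset_name and the tokenizer's vocab are usually bound through gin:

mrpc_inputs = glue_inputs('glue/mrpc', batch_size=32,
                          tokenizer=bert_tokenizer('/path/to/bert_vocab.txt'))
ids, types, mask, labels, weights = next(mrpc_inputs.train_stream(1))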
Train the model on the inputs.
Args:
output_dir: Directory where to put the logs and checkpoints.
model: The model to train as a callable returning 2 callables, an init_fn
and apply_fn.
loss_fn: callable with signature: weights, trax.inputs.Inputs, model, state,
rng -> loss.
inputs: callable returning trax.inputs.Inputs.
optimizer: The optimizer (see optimizers/base.py for signature).
    lr_schedule_fn: A learning rate schedule function that, when called, returns
      a function from step to learning rate (a float).
trainer_class: The trainer class to use.
steps: int, total number of training steps.
checkpoints_at: list of integers. Save a checkpoint for each training step
in the list.
permanent_checkpoints_at: list of integers. Save a permanent checkpoint for
each training step in the list.
eval_steps: int, num of steps per evaluation. If None or 0, eval disabled.
eval_frequency: int, how often to run evaluation (every eval_frequency
steps). If None or 0, eval disabled.
permanent_checkpoint_frequency: int, how often to save permanent checkpoints
(every permanent_checkpoint_frequency steps).
random_seed: the random seed to use; time/os dependent if None (default).
save_graphs: bool, if True, save computation graph to file.
metrics: optionally override the default metrics dictionary.
    checkpoint_highest: save a checkpoint whenever this metric reaches its
      highest value so far.
    checkpoint_lowest: save a checkpoint whenever this metric reaches its
      lowest value so far.
use_loop: whether to use training.Loop instead of Trainer.
loss_chunk_size: int, if > 0 chunk loss into these sizes to save memory.
use_memory_efficient_trainer: whether to use memory-efficient trainer.
adasum: if True, use adaptive summation for multi-device gradients.
init_checkpoint: a checkpoint for fine tuning.
callbacks: a list of callbacks to call during training.
n_weights_shards: shard weights into this many devices.
additional_train_tasks: additional tasks which should be performed during
training.
additional_eval_tasks: additional tasks which should be performed during
evaluation.
additional_eval_streams: List[NamedStream], additional data streams that
should be used during evaluation. Can be provided independently of
additional_eval_tasks.
Returns:
trax.TrainerState or training.Loop if use_loop is True | def train(output_dir,
model=gin.REQUIRED,
loss_fn=tl.WeightedCategoryCrossEntropy(),
inputs=trax_inputs.batcher,
optimizer=trax_opt.Adafactor,
lr_schedule_fn=lr.multifactor,
trainer_class=Trainer,
steps=1000,
checkpoints_at=None,
permanent_checkpoints_at=None,
eval_steps=10,
eval_frequency=100,
permanent_checkpoint_frequency=None,
random_seed=None,
save_graphs=True,
metrics=None,
checkpoint_highest=None,
checkpoint_lowest=None,
use_loop=True,
loss_chunk_size=0,
use_memory_efficient_trainer=False,
adasum=False,
init_checkpoint=None,
callbacks=None,
n_weights_shards=1,
additional_train_tasks=None,
additional_eval_tasks=None,
additional_eval_streams=None):
"""Train the model on the inputs.
Args:
output_dir: Directory where to put the logs and checkpoints.
model: The model to train as a callable returning 2 callables, an init_fn
and apply_fn.
loss_fn: callable with signature: weights, trax.inputs.Inputs, model, state,
rng -> loss.
inputs: callable returning trax.inputs.Inputs.
optimizer: The optimizer (see optimizers/base.py for signature).
    lr_schedule_fn: A learning rate schedule function that, when called, returns
      a function from step to learning rate (a float).
trainer_class: The trainer class to use.
steps: int, total number of training steps.
checkpoints_at: list of integers. Save a checkpoint for each training step
in the list.
permanent_checkpoints_at: list of integers. Save a permanent checkpoint for
each training step in the list.
eval_steps: int, num of steps per evaluation. If None or 0, eval disabled.
eval_frequency: int, how often to run evaluation (every eval_frequency
steps). If None or 0, eval disabled.
permanent_checkpoint_frequency: int, how often to save permanent checkpoints
(every permanent_checkpoint_frequency steps).
random_seed: the random seed to use; time/os dependent if None (default).
save_graphs: bool, if True, save computation graph to file.
metrics: optionally override the default metrics dictionary.
    checkpoint_highest: save a checkpoint whenever this metric reaches its
      highest value so far.
    checkpoint_lowest: save a checkpoint whenever this metric reaches its
      lowest value so far.
use_loop: whether to use training.Loop instead of Trainer.
loss_chunk_size: int, if > 0 chunk loss into these sizes to save memory.
use_memory_efficient_trainer: whether to use memory-efficient trainer.
adasum: if True, use adaptive summation for multi-device gradients.
init_checkpoint: a checkpoint for fine tuning.
callbacks: a list of callbacks to call during training.
n_weights_shards: shard weights into this many devices.
additional_train_tasks: additional tasks which should be performed during
training.
additional_eval_tasks: additional tasks which should be performed during
evaluation.
additional_eval_streams: List[NamedStream], additional data streams that
should be used during evaluation. Can be provided independently of
additional_eval_tasks.
Returns:
trax.TrainerState or training.Loop if use_loop is True
"""
base.N_WEIGHTS_SHARDS = n_weights_shards
if (permanent_checkpoint_frequency is not None
and permanent_checkpoints_at is not None):
raise ValueError('Only one of ["permanent_checkpoint_frequency", '
'"permanent_checkpoints_at"] should be set.')
if use_loop:
n_devices = num_devices() or fastmath.local_device_count()
# Prepare the training task.
# Inputs is either an Inputs instance or a function that returns it.
if callable(inputs): # If we pass a function, e.g., through gin, call it.
inputs = inputs()
opt = optimizer if use_memory_efficient_trainer else optimizer()
train_task = training.TrainTask(
inputs.train_stream(n_devices),
loss_layer=loss_fn,
optimizer=opt,
lr_schedule=lr_schedule_fn(),
n_steps_per_checkpoint=eval_frequency,
n_steps_per_permanent_checkpoint=permanent_checkpoint_frequency)
if additional_train_tasks is None:
additional_train_tasks = []
# Prepare the evaluation.
metrics_dict = metrics if metrics is not None else _DEFAULT_METRICS
names, metrics = zip(*metrics_dict.items())
eval_task = training.EvalTask(inputs.eval_stream(n_devices),
metrics,
metric_names=names,
n_eval_batches=eval_steps)
if additional_eval_tasks is None:
additional_eval_tasks = []
additional_eval_tasks_from_streams = []
if additional_eval_streams is not None:
for stream in additional_eval_streams:
additional_eval_tasks_from_streams.append(
training.EvalTask(stream.stream,
metrics,
metric_names=names,
n_eval_batches=eval_steps,
export_prefix=stream.name))
# Prepare the training loop.
checkpoint_at = None
if checkpoints_at is not None:
checkpoint_at = lambda step: step in checkpoints_at
permanent_checkpoint_at = None
if permanent_checkpoints_at is not None:
permanent_checkpoint_at = (lambda step: step in permanent_checkpoints_at)
# Setup the model.
model_train = model(mode='train')
model_predict_eval = model(mode='eval')
if init_checkpoint:
model_train.init_from_file(init_checkpoint, weights_only=True)
model_predict_eval.init_from_file(init_checkpoint, weights_only=True)
loop = training.Loop(
model_train, [train_task] + additional_train_tasks,
eval_model=model_predict_eval,
eval_tasks=[eval_task] +
additional_eval_tasks + additional_eval_tasks_from_streams,
output_dir=output_dir,
checkpoint_at=checkpoint_at,
checkpoint_low_metric=checkpoint_lowest,
checkpoint_high_metric=checkpoint_highest,
permanent_checkpoint_at=permanent_checkpoint_at,
n_devices=n_devices,
loss_chunk_size=loss_chunk_size,
use_memory_efficient_trainer=use_memory_efficient_trainer,
adasum=adasum,
random_seed=random_seed,
callbacks=callbacks,
)
steps_to_go = steps - loop.step
if steps_to_go <= 0:
      log('Not training: already reached the total of %d training steps.' % steps)
return loop
# Train and return the loop.
loop.run(steps_to_go)
return loop
n_devices = num_devices()
trainer = trainer_class(model, loss_fn, optimizer, lr_schedule_fn(), inputs,
output_dir,
random_seed=random_seed,
n_devices=n_devices,
checkpoints_at=checkpoints_at,
metrics=metrics,
checkpoint_lowest=checkpoint_lowest,
checkpoint_highest=checkpoint_highest,
init_checkpoint=init_checkpoint)
epoch_steps = [steps] # Only training if eval_frequency is 0 or None
if eval_frequency and eval_steps > 0:
epoch_steps = itertools.chain([1, # first epoch only 1 step
eval_frequency - 1],
itertools.repeat(eval_frequency))
trainer.log_step('Starting training using %d devices' % trainer.n_devices)
trainer.print_n_weights()
try:
for epoch_steps in epochs(steps, trainer.step, epoch_steps):
trainer.train_epoch(epoch_steps, eval_steps)
# Bookkeeping we do at the first step
if trainer.step == 1:
# Save computation graph (single-device only for now)
if (save_graphs and fastmath.is_backend(fastmath.Backend.JAX)):
trainer.save_computation_graphs()
# Save Gin config
trainer.save_gin()
trainer.log_step('Training done')
finally:
trainer.close()
return trainer.state |
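A hedged end-to-end sketch of calling train directly; the model, directory, and step counts are illustrative, and in practice most of these arguments are bound through gin:

from trax import models

loop = train(output_dir='/tmp/mlp_mnist',  # hypothetical directory
             model=functools.partial(models.MLP, layer_widths=(128, 10)),
             inputs=_mnist_dataset,
             steps=200, eval_steps=2, eval_frequency=100)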
Returns how many devices to use (if None, default, use all available). | def num_devices(value=None):
"""Returns how many devices to use (if None, default, use all available)."""
return value |
Returns a (JIT-compiled) function that computes updates for one step. | def _jit_update_fn(predict_fn, loss_fn, optimizer, n_devices, jit=True):
"""Returns a (JIT-compiled) function that computes updates for one step."""
model_and_loss = tl.Serial(predict_fn, loss_fn)
# Gradients are always wrt. the first argument, so putting weights first.
def model_and_loss_call(weights, batch, state, rng):
res = model_and_loss(batch, weights=weights, state=state, rng=rng)
return res, model_and_loss.state
if n_devices == 1: # TODO(lukaszkaiser): remove branch when not needed.
def single_update(weights_and_slots, i, opt_params, batch, state, rng):
weights, slots = weights_and_slots
rng, subrng = jax_random.split(rng[0])
grad_fn = fastmath.grad(model_and_loss_call, has_aux=True)
grads, state = grad_fn(weights, batch, state, rng)
new_weights, new_slots, stats = optimizer.tree_update(
i, grads, weights, slots, opt_params)
return (new_weights, new_slots), stats, state, [subrng]
if jit:
# TODO(lukaszkaiser): donate_argnums=(0,) when XLA supports it on GPU
return fastmath.jit(single_update)
else:
return single_update
# Else, for n_devices > 1:
@functools.partial(fastmath.pmap, axis_name='batch') # donate_argnums=(0,))
def mapped_update(weights_and_slots, i, opt_params, batch, state, rng):
"""This is a multi-device version of the update function above."""
# We assume all tensors have the first dimension = n_devices.
weights, slots = weights_and_slots
rng, subrng = jax_random.split(rng)
grad_fn = fastmath.grad(model_and_loss_call, has_aux=True)
grads, state = grad_fn(weights, batch, state, rng)
# We do a psum(1.0) here instead of `n_devices` since `n_devices` is just
# the number of devices on this host machine, however psum goes over all
# devices of all hosts (ex: a TPU pod) and we need to be averaging over all
# of them.
#
# Collect all gradients.
grads = fastmath.psum(grads, 'batch')
n_devices_total = fastmath.psum(np.array(1.0), 'batch')
# Average across hosts.
grads = jax.tree_util.tree_map(lambda g: g / n_devices_total, grads)
new_weights, new_slots, stats = optimizer.tree_update(
i, grads, weights, slots, opt_params)
return (new_weights, new_slots), stats, state, subrng
def update(weights_and_slots, i, opt_params, batch, state, rng):
return mapped_update(weights_and_slots, np.repeat(i, n_devices),
opt_params, batch, state, rng)
return update |
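The psum(1.0) device-counting trick above is easy to demonstrate in plain JAX; a self-contained sketch (on a CPU-only machine, fake several devices with XLA_FLAGS=--xla_force_host_platform_device_count=4):

import jax
import jax.numpy as jnp

def _mean_over_devices(g):
  total = jax.lax.psum(g, 'batch')             # sums over all devices, all hosts
  n = jax.lax.psum(jnp.float32(1.0), 'batch')  # global device count
  return total / n

mean_fn = jax.pmap(_mean_over_devices, axis_name='batch')
n_dev = jax.local_device_count()
print(mean_fn(jnp.arange(n_dev, dtype=jnp.float32)))  # every entry is the mean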
Returns a JIT-compiled predict function (unless jit=False). | def _jit_predict_fn(model_predict, metric_fn, n_devices, jit=True):
"""Returns a JIT-compiled predict function (unless jit=False)."""
model = tl.Serial(model_predict, metric_fn)
if not jit:
return model.pure_fn
return tl.jit_forward(model.pure_fn, n_devices) |
Returns a (JIT-compiled) function that computes the loss for one step. | def _jit_compute_loss_fn(predict_fn, loss_fn, n_devices, jit=True):
"""Returns a (JIT-compiled) function that computes the loss for one step."""
if n_devices == 1: # TODO(lukaszkaiser): remove branch when not needed.
def single_compute_loss(opt_state, batch, state, rng):
rng, subrng = jax_random.split(rng[0])
loss_val, state = loss_fn(opt_state[0], batch, predict_fn, state, rng)
return loss_val, state, [subrng]
return fastmath.jit(single_compute_loss) if jit else single_compute_loss
# Else, for n_devices > 1:
@functools.partial(fastmath.pmap, axis_name='batch')
def mapped_compute_loss(opt_state, batch, state, rng):
"""This is a multi-device version of the update function above."""
# We assume all tensors have the first dimension = n_devices.
rng, subrng = jax_random.split(rng)
loss_val, state = loss_fn(opt_state[0], batch, predict_fn, state, rng)
return loss_val, state, subrng
def compute_loss(opt_state, batch, state, rng):
return mapped_compute_loss(
opt_state, _reshape_by_device(batch, n_devices), state, rng)
return compute_loss |
Generates the number of steps in each epoch before reaching total_steps.
Args:
total_steps: int, total number of steps.
steps_to_skip: int, number of steps to skip because of a restart.
epoch_steps: iterable of int, numbers of steps in each epoch.
Yields:
epoch_steps: int, number of steps in this epoch | def epochs(total_steps, steps_to_skip, epoch_steps):
"""Generates the number of steps in each epoch before reaching total_steps.
Args:
total_steps: int, total number of steps.
steps_to_skip: int, number of steps to skip because of a restart.
epoch_steps: iterable of int, numbers of steps in each epoch.
Yields:
epoch_steps: int, number of steps in this epoch
"""
steps_to_go = total_steps - steps_to_skip
epoch_steps = iter(epoch_steps)
# Remove the desired number of steps from the stream.
for steps_this_epoch in epoch_steps:
if steps_this_epoch > steps_to_skip:
# Put back the number of steps left in the unfinished epoch.
epoch_steps = itertools.chain(
[steps_this_epoch - steps_to_skip], epoch_steps)
if steps_this_epoch >= steps_to_skip:
break
steps_to_skip -= steps_this_epoch
# Yield the remaining steps per epoch up to total_steps.
for steps_this_epoch in epoch_steps:
steps_this_epoch = min(steps_this_epoch, steps_to_go)
yield steps_this_epoch
steps_to_go -= steps_this_epoch
if steps_to_go == 0:
break |
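A worked example: restarting at step 7 with the 1-then-(k-1)-then-k epoch pattern used by train above (k = 10 here):

list(epochs(total_steps=100, steps_to_skip=7,
            epoch_steps=itertools.chain([1, 9], itertools.repeat(10))))
# -> [3, 10, 10, 10, 10, 10, 10, 10, 10, 10]; the 3 finishes the interrupted
#    epoch, and the yielded steps sum to 100 - 7 = 93.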
Creates a trainer state dictionary to save to disk.
Args:
step: int, a step number
opt_state: OptState namedtuple
history: `trax.history.History`, the history object.
model_state: A nested structure of the model state.
input_signature: signature of model inputs.
Returns:
A dictionary with the fields of TrainerState and OptState flattened. | def make_trainer_state_dict(step,
opt_state,
history,
model_state,
input_signature):
"""Creates a trainer state dictionary to save to disk.
Args:
step: int, a step number
opt_state: OptState namedtuple
history: `trax.history.History`, the history object.
model_state: A nested structure of the model state.
input_signature: signature of model inputs.
Returns:
A dictionary with the fields of TrainerState and OptState flattened.
"""
flat_weights, flat_state = tl.flatten_weights_and_state(
opt_state.weights, model_state)
return {
'step': step,
'flat_weights': flat_weights,
'slots': opt_state.slots,
'opt_params': opt_state.opt_params,
'history': history,
'flat_state': flat_state,
'input_signature': input_signature,
'version_timestamp': 'Jun-18-2020' # To update in the future if needed.
} |
Given the trainer state dictionary, returns `TrainerState`. | def trainer_state_from_dict(trainer_state_dict, model):
"""Given the trainer state dictionary, returns `TrainerState`."""
# TODO(afrozm): This becomes simpler if OptState is flattened into
# TrainerState.
step = trainer_state_dict['step']
history = trainer_state_dict['history']
input_signature = trainer_state_dict['input_signature']
weights_and_state_sig = model.weights_and_state_signature(input_signature)
weights, model_state = tl.unflatten_weights_and_state(
trainer_state_dict['flat_weights'], trainer_state_dict['flat_state'],
weights_and_state_sig)
opt_state = OptState(
weights=weights,
slots=trainer_state_dict['slots'],
opt_params=trainer_state_dict['opt_params'])
return TrainerState(step=step, opt_state=OptState(*opt_state),
history=history, model_state=model_state) |
Returns a TrainerState instance loaded from the given `output_dir`. | def load_trainer_state(output_dir, model, weights_file=None):
"""Returns a TrainerState instance loaded from the given `output_dir`."""
if weights_file is None:
weights_file = os.path.join(output_dir, 'model.pkl.gz')
if not tf.io.gfile.exists(weights_file):
return TrainerState(step=None, opt_state=None,
history=trax_history.History(), model_state=None)
elif not tf.io.gfile.exists(weights_file):
raise ValueError('File not found: %s' % weights_file)
trainer_state_dict = training.unpickle_from_file(weights_file, gzip=True)
trainer_state = trainer_state_from_dict(trainer_state_dict, model)
log('Model loaded from %s at step %d' % (weights_file, trainer_state.step))
logging.debug('From loaded model : history = %s', trainer_state.history)
return trainer_state |
Reshapes possibly nested x into a shape (n_devices, ...). | def _reshape_by_device(x, n_devices):
"""Reshapes possibly nested x into a shape (n_devices, ...)."""
return tl.reshape_by_device(x, n_devices) |
Fold the function f to the nested structure x (dicts, tuples, lists). | def _nested_reduce(f, x):
"""Fold the function f to the nested structure x (dicts, tuples, lists)."""
if isinstance(x, list):
return f([_nested_reduce(f, y) for y in x])
if isinstance(x, tuple):
return f([_nested_reduce(f, y) for y in x])
if isinstance(x, dict):
return f([_nested_reduce(f, v) for (_, v) in x.items()])
return x |
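A one-line worked example of the fold:

_nested_reduce(sum, {'a': [1, 2], 'b': (3, 4)})  # sum([3, 7]) == 10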