Model Overview
⚠️ T5 is currently only available via the `keras-hub-nightly` package. Use `pip install keras-hub-nightly` to try this model.
T5 encoder-decoder backbone model.

T5 is an LLM pretrained on a mix of unsupervised and supervised tasks, where each task is converted to a sequence-to-sequence format. T5 works well on a variety of tasks out-of-the-box by prepending various prefixes to the input sequence, e.g., for translation: `"translate English to German: ..."`, or for summarization: `"summarize: ..."`.

T5 was introduced in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683).
The default constructor gives a fully customizable, randomly initialized T5 model with any number of layers, heads, and embedding dimensions. To load preset architectures and weights, use the `from_preset` constructor.
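As a minimal sketch (assuming the `keras-hub-nightly` install noted above, which exposes `T5Backbone` under `keras_hub.models`), loading a preset looks like:

```python
import keras_hub

# Load a preset architecture together with its pretrained weights.
# "t5_small_multi" is one of the preset names from the table below.
backbone = keras_hub.models.T5Backbone.from_preset("t5_small_multi")
```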
Disclaimer: Pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind.
Links
- [T5 Quickstart Notebook](coming soon)
- T5 API Documentation
- T5 Model Card
- KerasHub Beginner Guide
- KerasHub Model Publishing Guide
Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started page](https://keras.io/getting_started/).
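Outside of Kaggle, you can pick which backend Keras runs on via the `KERAS_BACKEND` environment variable, which must be set before Keras is first imported (the standard Keras 3 mechanism):

```python
import os

# Select the backend before the first Keras import.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"

import keras
import keras_hub
```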
Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---|---|---|
| t5_small_multi | 0 | 8-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). |
| t5_base_multi | 0 | 12-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). |
| t5_large_multi | 0 | 24-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). |
| flan_small_multi | 0 | 8-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). |
| flan_base_multi | 0 | 12-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). |
| flan_large_multi | 0 | 24-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). |
| t5_1.1_small | 60.51M | |
| t5_1.1_base | 247.58M | |
| t5_1.1_large | 750.25M | |
| t5_1.1_xl | 2.85B | |
| t5_1.1_xxl | 11.14B | |
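A hedged end-to-end sketch of exercising one of these presets, assuming a `T5Tokenizer` is published alongside them (check the API documentation linked above):

```python
import keras_hub

# The downstream task is selected purely by the text prefix.
tokenizer = keras_hub.models.T5Tokenizer.from_preset("t5_small_multi")
token_ids = tokenizer("translate English to German: The house is wonderful.")
```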
Arguments
- `vocabulary_size`: int. The size of the token vocabulary.
- `num_layers`: int. The number of Transformer layers.
- `num_heads`: int. The number of attention heads for each Transformer layer. The hidden size must be divisible by the number of attention heads.
- `hidden_dim`: int. The hidden size of the Transformer layers.
- `intermediate_dim`: int. The output dimension of the first Dense layer in a two-layer feedforward network for each Transformer layer.
- `key_value_dim`: int. The dimension of each head of the key/value projections in the multi-head attention layers. Defaults to `hidden_dim / num_heads`.
- `dropout`: float. Dropout probability for the Transformer layers.
- `activation`: activation function (or activation string name). The activation to be used in the inner dense blocks of the Transformer layers. Defaults to `"relu"`.
- `use_gated_activation`: boolean. Whether to use activation gating in the inner dense blocks of the Transformer layers. The original T5 architecture didn't use gating, but more recent versions do. Defaults to `True`.
- `layer_norm_epsilon`: float. Epsilon factor to be used in the layer normalization layers in the Transformer layers.
- `tie_embedding_weights`: boolean. If `True`, the weights of the token embedding and the weights projecting language model outputs from `hidden_dim` are tied. A usage sketch follows this list.
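Putting these arguments together, here is a hedged sketch that builds a small, randomly initialized backbone and runs dummy inputs through it. The input dictionary keys follow the usual KerasHub encoder-decoder convention and are an assumption to verify against the API documentation:

```python
import numpy as np
import keras_hub

# A small, randomly initialized T5 backbone; argument names
# follow the list above.
backbone = keras_hub.models.T5Backbone(
    vocabulary_size=32128,
    num_layers=4,
    num_heads=4,
    hidden_dim=256,
    intermediate_dim=512,
)

# Dummy batch of token ids. The key names are assumed from the
# KerasHub seq2seq backbone convention (encoder/decoder token ids
# plus padding masks).
input_data = {
    "encoder_token_ids": np.ones((1, 12), dtype="int32"),
    "encoder_padding_mask": np.ones((1, 12), dtype="int32"),
    "decoder_token_ids": np.ones((1, 12), dtype="int32"),
    "decoder_padding_mask": np.ones((1, 12), dtype="int32"),
}
outputs = backbone(input_data)  # encoder/decoder hidden states
```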