<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# T5v1.1

## Overview
T5v1.1 was released in the [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511)
repository by Colin Raffel et al. It's an improved version of the original T5 model.

One can directly plug in the weights of T5v1.1 into a T5 model, like so:
```python
>>> from transformers import T5ForConditionalGeneration

>>> model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
```
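
As a quick sanity check, a minimal sketch of running the pre-trained checkpoint as-is is shown below. Since this checkpoint was only trained on the span-corruption objective, the input uses sentinel tokens such as `<extra_id_0>` to mark spans to fill in; the raw output is illustrative only and will vary.

```python
>>> from transformers import AutoTokenizer, T5ForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
>>> model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

>>> # Sentinel tokens (<extra_id_0>, <extra_id_1>, ...) mark the corrupted spans,
>>> # matching the only objective this checkpoint was pre-trained on.
>>> input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids, max_new_tokens=20)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```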
T5 Version 1.1 includes the following improvements compared to the original T5 model (a short sketch after this list shows how several of them surface in the model config):

- GEGLU activation in the feed-forward hidden layer, rather than ReLU. See [this paper](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only, without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different: larger `d_model` and smaller `num_heads` and `d_ff`.
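
Several of these differences are visible directly in the model configuration. A minimal sketch comparing the v1.1 base checkpoint with the original `t5-base` (the printed values depend on the exact checkpoints):

```python
>>> from transformers import T5Config

>>> v1_1 = T5Config.from_pretrained("google/t5-v1_1-base")
>>> original = T5Config.from_pretrained("t5-base")

>>> # GEGLU ("gated-gelu") vs. ReLU in the feed-forward layer
>>> print(v1_1.feed_forward_proj, original.feed_forward_proj)

>>> # v1.1 does not share weights between the embeddings and the classifier
>>> print(v1_1.tie_word_embeddings, original.tie_word_embeddings)

>>> # Shape trade-offs: d_model / num_heads / d_ff
>>> print(v1_1.d_model, v1_1.num_heads, v1_1.d_ff)
>>> print(original.d_model, original.num_heads, original.d_ff)
```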
Note: T5 Version 1.1 was only pre-trained on [C4](https://huggingface.co/datasets/c4), excluding any supervised
training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5
model. Since T5v1.1 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
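
For illustration, a minimal single-task fine-tuning step might look like the sketch below; the sentence pair is made up and the checkpoint is only a placeholder. Note that, unlike with the original T5, the input carries no task prefix:

```python
>>> from transformers import AutoTokenizer, T5ForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
>>> model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

>>> # No "translate English to German: " style prefix for single-task fine-tuning.
>>> inputs = tokenizer("A herd of elephants is crossing the river.", return_tensors="pt")
>>> labels = tokenizer("Eine Elefantenherde überquert den Fluss.", return_tensors="pt").input_ids

>>> # The model returns the cross-entropy loss when labels are provided.
>>> loss = model(**inputs, labels=labels).loss
>>> loss.backward()
```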
Google has released the following variants:

- [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small)
- [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base)
- [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large)
- [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl)
- [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl)
One can refer to [T5's documentation page](t5) for all tips, code examples and notebooks.

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be
found [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511).