---
datasets:
- abertsch/booksum-fullbooks
pipeline_tag: text2text-generation
inference: false
---
|
Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625). |
|
|
|
This model was finetuned from BART-base on BookSum (full-book setting) using the random-encoded training strategy described in section 3.2 of the paper.
|
|
|
*The inference demo is disabled because you must add the Unlimiformer files to your repo before this model can handle unlimited-length input!* See the [Unlimiformer GitHub](https://github.com/abertsch72/unlimiformer) for setup instructions.
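
For context, here is a minimal loading sketch using only the standard `transformers` API. The repo id below is a placeholder for this model's actual id, and the generation settings are illustrative; without the Unlimiformer wrapper from the GitHub repo, the checkpoint behaves as a plain BART-base model limited to a 1024-token input window.

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

model_id = "abertsch/unlimiformer-bart-booksum-random-encoding"  # placeholder: use this model's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

book_text = "..."  # full-book input
# Without the Unlimiformer wrapper, the input must be truncated to BART's 1024-token window.
inputs = tokenizer(book_text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=512)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

To summarize inputs longer than 1024 tokens, wrap the loaded model with the Unlimiformer code following the setup instructions in the GitHub repo linked above.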