---
datasets:
- abertsch/booksum-fullbooks
pipeline_tag: text2text-generation
---

Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).

This model was fine-tuned from a BART-base model using the retrieval-augmented training strategy described in Section 3.2 of the paper. It was fine-tuned on the BookSum dataset in the full-book setting.
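A minimal usage sketch with the standard `transformers` API. The model ID below is a placeholder, not confirmed by this card; substitute the actual repository ID. Note that this loads the checkpoint as a plain BART model, so inputs are truncated to BART's context window; unlimited-length inference additionally requires the Unlimiformer code released with the paper.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder ID (assumption): replace with this repository's actual model ID.
model_id = "abertsch/unlimiformer-bart-booksum"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

book_text = "Full text of the book to summarize goes here."

# Standard BART-style generation; without the Unlimiformer wrapper,
# the input is truncated to the base model's 1024-token window.
inputs = tokenizer(book_text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```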