arxiv:2104.08247

What to Pre-Train on? Efficient Intermediate Task Selection

Published on Apr 16, 2021
Authors: Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych

Abstract

Intermediate task fine-tuning has been shown to culminate in large transfer gains across many NLP tasks. With an abundance of candidate datasets as well as pre-trained language models, it has become infeasible to run the cross-product of all combinations to find the best transfer setting. In this work we first establish that similar sequential fine-tuning gains can be achieved in adapter settings, and subsequently consolidate previously proposed methods that efficiently identify beneficial tasks for intermediate transfer learning. We experiment with a diverse set of 42 intermediate and 11 target English classification, multiple choice, question answering, and sequence tagging tasks. Our results show that efficient embedding-based methods that rely solely on the respective datasets outperform computationally expensive few-shot fine-tuning approaches. Our best methods achieve an average Regret@3 of less than 1% across all target tasks, demonstrating that we are able to efficiently identify the best datasets for intermediate training.
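As a rough illustration of the selection setup and the Regret@k metric mentioned above, the following minimal Python sketch ranks candidate intermediate tasks by the similarity of dataset-level embeddings and then scores that ranking against observed transfer results. All function names, the cosine-similarity ranking, and the exact regret formulation (relative regret against the single best intermediate task) are assumptions for illustration, not the paper's released code.

    # Minimal sketch (not the paper's code): embedding-based task ranking
    # plus a Regret@k-style evaluation of that ranking.
    from typing import Dict, List

    import numpy as np


    def rank_by_embedding_similarity(target_emb: np.ndarray,
                                     source_embs: Dict[str, np.ndarray]) -> List[str]:
        """Rank candidate intermediate tasks by cosine similarity between
        dataset embeddings (e.g., averaged sentence embeddings), highest first."""
        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        return sorted(source_embs,
                      key=lambda name: cosine(target_emb, source_embs[name]),
                      reverse=True)


    def regret_at_k(transfer_scores: Dict[str, float],
                    ranking: List[str],
                    k: int = 3) -> float:
        """Relative regret (in %) of taking the best task among the top-k
        ranked candidates instead of the overall best intermediate task."""
        best_overall = max(transfer_scores.values())
        best_in_top_k = max(transfer_scores[t] for t in ranking[:k])
        return 100.0 * (best_overall - best_in_top_k) / best_overall

Under this formulation, a Regret@3 below 1% means the best task among the three top-ranked candidates transfers nearly as well as the best possible intermediate task found by exhaustive search.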


Models citing this paper: 138

Datasets citing this paper: 0

Spaces citing this paper: 1

Collections including this paper: 1