MuLD: The Multitask Long Document Benchmark
MuLD (Multitask Long Document Benchmark) is a set of 6 NLP tasks whose inputs consist of at least 10,000 words. The benchmark covers a wide variety of task types including translation, summarization, question answering, and classification. Additionally, there is a range of output lengths, from a single-word classification label all the way up to an output longer than the input text.
- Repository: https://github.com/ghomasHudson/muld
- Paper: https://arxiv.org/abs/2202.07362
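For quick experimentation, the tasks can be loaded with the Hugging Face `datasets` library. The sketch below assumes the data is published under the `ghomasHudson/muld` namespace with one configuration per task (for example `"NarrativeQA"`); check the repository above for the exact identifiers.

```python
from datasets import load_dataset

# Load a single MuLD task; the repository and configuration names here are
# assumptions based on the GitHub repository linked above.
dataset = load_dataset("ghomasHudson/muld", "NarrativeQA")
print(dataset)  # shows the available splits and the number of examples per split
```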
Supported Tasks and Leaderboards
The 6 MuLD tasks consist of:
Dataset Structure
The data is presented in a text-to-text format where each instance contains an input string, an output string, and (optionally) JSON-encoded metadata.
{'input': 'Who was wearing the blue shirt? The beginning...', 'output': ['John'], 'metadata': ''}
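As a rough sketch of working with this format, assuming the dataset has been loaded as above (the split name is an assumption and may differ per task):

```python
import json

# Read one instance in the text-to-text format described above.
example = dataset["validation"][0]

long_input = example["input"]    # the full document plus the task prompt/question
answers = example["output"]      # a list of acceptable output strings
metadata = json.loads(example["metadata"]) if example["metadata"] else {}

print(long_input[:200], answers, metadata)
```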
Data Fields
- `input`: a string which has a differing structure per task but is presented in a unified format
- `output`: a list of strings, each of which is a possible answer. Most instances have only a single answer, but some, such as NarrativeQA and VLSP, may have multiple.
- `metadata`: additional metadata which may be helpful for evaluation. In this version, only the OpenSubtitles task contains metadata (for the ContraPro annotations).
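Because `output` is a list of acceptable answers, a simple evaluation strategy is to score a prediction against every reference and keep the best match. The helper below is a hypothetical sketch using exact match; a metric appropriate to the task (e.g. ROUGE for summarization, BLEU for translation) would normally be substituted.

```python
def exact_match(prediction: str, reference: str) -> float:
    # Hypothetical scoring helper: 1.0 when the normalized strings match exactly.
    return float(prediction.strip().lower() == reference.strip().lower())

def score(prediction: str, references: list[str]) -> float:
    # Take the best score over all acceptable answers in `output`.
    return max(exact_match(prediction, ref) for ref in references)

print(score("John", ["John", "John Smith"]))  # 1.0
```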
Data Splits
Each task contains different splits.
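Since the splits differ per task, it can help to inspect them before training or evaluation. A minimal sketch, again assuming the `ghomasHudson/muld` namespace with one configuration per task:

```python
from datasets import get_dataset_config_names, load_dataset

# Print the available splits for every task configuration.
for config in get_dataset_config_names("ghomasHudson/muld"):
    ds = load_dataset("ghomasHudson/muld", config)
    print(config, list(ds.keys()))
```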