id | private | tags | description | downloads | likes |
---|---|---|---|---|---|
quora | false | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown"
] | null | 1,713 | 5 |
quoref | false | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"coreference-resolution"
] | Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this
span-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard
coreferences before selecting the appropriate span(s) in the paragraphs for answering questions. | 12,187 | 0 |
race | false | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1704.04683"
] | Race is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
dataset is collected from English examinations in China, which are designed for middle school and high school students.
The dataset can serve as training and test sets for machine comprehension. | 46,623 | 14 |
re_dial | false | [
"task_categories:other",
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"dialogue-sentiment-classification"
] | ReDial (Recommendation Dialogues) is an annotated dataset of dialogues, where users
recommend movies to each other. The dataset was collected by a team of researchers working at
Polytechnique Montréal, MILA – Quebec AI Institute, Microsoft Research Montréal, HEC Montreal, and Element AI.
The dataset allows research at the intersection of goal-directed dialogue systems
(such as restaurant recommendation) and free-form (also called “chit-chat”) dialogue systems. | 300 | 0 |
reasoning_bg | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:bg",
"license:apache-2.0",
"arxiv:1908.01519"
] | This dataset is designed for reading comprehension in the Bulgarian language. | 794 | 0 |
recipe_nlg | false | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_categories:summarization",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:explanation-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:unknown"
] | The dataset contains 2,231,142 cooking recipes (over 2 million). It is processed more carefully and provides more samples than any other dataset in the area. | 398 | 7 |
reclor | false | [] | Logical reasoning is an important ability to examine, analyze, and critically evaluate arguments as they occur in ordinary
language, as defined by the LSAC. ReClor is a dataset extracted from logical reasoning questions of standardized graduate
admission examinations. Empirical results show that state-of-the-art models struggle on ReClor with poor performance,
indicating that more research is needed to substantially enhance the logical reasoning ability of current models. We hope this
dataset can help push Machine Reading Comprehension (MRC) towards more complicated reasoning. | 269 | 1 |
red_caps | false | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2111.11431"
] | RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.
Images and captions from Reddit depict and describe a wide variety of objects and scenes.
The data is collected from a manually curated set of subreddits (350 total),
which give coarse image labels and allow steering of the dataset composition
without labeling individual instances. | 229,734 | 26 |
reddit | false | [
"task_categories:summarization",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"reddit-posts-summarization"
] | This corpus contains preprocessed posts from the Reddit dataset.
The dataset consists of 3,848,330 posts with an average length of 270 words for content,
and 28 words for the summary.
Features include the strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id.
Content is used as the document and summary as the summary. | 1,271 | 15 |
reddit_tifu | false | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:mit",
"reddit-posts-summarization",
"arxiv:1811.00783"
] | Reddit dataset, where TIFU denotes the name of the subreddit /r/tifu.
As defined in the publication, style "short" uses the title as the summary and
"long" uses the tldr as the summary.
Features include:
- document: post text without tldr.
- tldr: tldr line.
- title: trimmed title without tldr.
- ups: upvotes.
- score: score.
- num_comments: number of comments.
- upvote_ratio: upvote ratio. | 663 | 4 |
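A minimal loading sketch with the Hugging Face `datasets` library; the field names follow the hosted schema (which exposes the post body under the plural name `documents`), so treat them as assumptions to be checked against the dataset card:

```python
# Hedged sketch: the "long" config uses the tldr line as the summary,
# the "short" config uses the title. Field names are assumptions taken
# from the hosted schema.
from datasets import load_dataset

tifu = load_dataset("reddit_tifu", "long", split="train")
example = tifu[0]
print(example["documents"][:200])  # post text without the tldr
print(example["tldr"])             # tldr line, used as the summary
```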
refresd | false | [
"task_categories:text-classification",
"task_categories:translation",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:extended|other-wikimatrix",
"language:en",
"language:fr",
"license:mit",
"arxiv:1907.05791"
] | The Rationalized English-French Semantic Divergences (REFreSD) dataset consists of 1,039
English-French sentence-pairs annotated with sentence-level divergence judgments and token-level
rationales. For any questions, write to [email protected]. | 264 | 0 |
reuters21578 | false | [
"language:en"
] | The Reuters-21578 dataset is one of the most widely used data collections for text
categorization research. It was collected from the Reuters financial newswire service in 1987. | 2,105 | 2 |
riddle_sense | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other"
] | Answering such a riddle-style question is a challenging cognitive process, in that it requires
complex commonsense reasoning abilities, an understanding of figurative language, and counterfactual reasoning
skills, which are all important abilities for advanced natural language understanding (NLU). However,
there are currently no dedicated datasets aiming to test these abilities. Herein, we present RiddleSense,
a new multiple-choice question answering task, which comes with the first large dataset (5.7k examples) for answering
riddle-style commonsense questions. We systematically evaluate a wide range of models over the challenge,
and point out that there is a large gap between the best supervised model and human performance, suggesting
intriguing future research in the direction of higher-order commonsense reasoning and linguistic creativity towards
building advanced NLU systems. | 1,139 | 6 |
ro_sent | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ro",
"license:unknown",
"arxiv:2009.08712"
] | This dataset is a Romanian Sentiment Analysis dataset.
It is present in a processed form, as used by the authors of `Romanian Transformers`
in their examples and based on the original data present in
`https://github.com/katakonst/sentiment-analysis-tensorflow`. The original dataset is collected
from product and movie reviews in Romanian. | 302 | 0 |
ro_sts | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-sts-b",
"language:ro",
"license:cc-by-4.0"
] | The RO-STS (Romanian Semantic Textual Similarity) dataset contains 8628 pairs of sentences with their similarity score. It is a high-quality translation of the STS benchmark dataset. | 265 | 0 |
ro_sts_parallel | false | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-sts-b",
"language:en",
"language:ro",
"license:cc-by-4.0"
] | The RO-STS-Parallel dataset (a parallel Romanian-English translation of the Semantic Textual Similarity benchmark) contains 17,256 sentences in Romanian and English. It is a high-quality translation of the English STS benchmark dataset into Romanian. | 265 | 0 |
roman_urdu | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ur",
"license:unknown"
] | This is an extensive compilation of Roman Urdu data (Urdu written in the Latin/Roman script), tagged for sentiment analysis. | 265 | 1 |
ronec | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ro",
"license:mit",
"arxiv:1909.01247"
] | RONEC - the Romanian Named Entity Corpus, at version 2.0, holds 12,330 sentences with over 0.5M tokens, annotated with 15 classes, for a total of 80,283 distinctly annotated entities. It is used for named entity recognition and represents the largest Romanian NER corpus to date. | 482 | 0 |
ropes | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:1908.05852"
] | ROPES (Reasoning Over Paragraph Effects in Situations) is a QA dataset
which tests a system's ability to apply knowledge from a passage
of text to a new situation. A system is presented with a background
passage containing one or more causal or qualitative relations (e.g.,
"animal pollinators increase efficiency of fertilization in flowers"),
a novel situation that uses this background, and questions that require
reasoning about the effects of the relationships in the background
passage in the context of the situation. | 16,519 | 7 |
rotten_tomatoes | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown"
] | Movie Review Dataset.
This is a dataset containing 5,331 positive and 5,331 negative processed
sentences from Rotten Tomatoes movie reviews. This data was first used in Bo
Pang and Lillian Lee, "Seeing stars: Exploiting class relationships for
sentiment categorization with respect to rating scales.", Proceedings of the
ACL, 2005. | 34,669 | 15 |
russian_super_glue | false | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text-generation",
"task_ids:natural-language-inference",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"size_categories:10M<n<100M",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:ru",
"license:mit",
"glue",
"qa",
"superGLUE",
"NLI",
"reasoning",
"arxiv:2202.07791"
] | Recent advances in the field of universal language models and transformers require the development of a methodology for
their broad diagnostics and testing for general intellectual skills - detection of natural language inference,
commonsense reasoning, ability to perform simple logical operations regardless of text subject or lexicon. For the first
time, a benchmark of nine tasks, collected and organized analogously to the SuperGLUE methodology, was developed from
scratch for the Russian language. We provide baselines, human-level evaluation, an open-source framework for evaluating
models and an overall leaderboard of transformer models for the Russian language. | 7,007 | 8 |
allenai/s2orc | false | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:cc-by-2.0",
"citation-recommendation",
"arxiv:1911.02782"
] | A large corpus of 81.1M English-language academic papers spanning many academic disciplines.
It provides rich metadata, paper abstracts, resolved bibliographic references, as well as structured full
text for 8.1M open-access papers. Full text is annotated with automatically detected inline mentions of
citations, figures, and tables, each linked to their corresponding paper objects. The corpus aggregates papers
from hundreds of academic publishers and digital archives into a unified source, and constitutes the largest
publicly available collection of machine-readable academic text to date. | 382 | 9 |
samsum | false | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-nd-4.0",
"conversations-summarization",
"arxiv:1911.12237"
] | SAMSum Corpus contains over 16k chat dialogues with manually annotated
summaries.
There are three features:
- dialogue: text of the dialogue.
- summary: human-written summary of the dialogue.
- id: id of an example. | 18,507 | 53 |
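A minimal loading sketch with the Hugging Face `datasets` library (assuming the hosted `samsum` dataset, which also needs the `py7zr` package to unpack its archives):

```python
# Hedged sketch: load SAMSum and read its two main text fields.
# Assumes `pip install datasets py7zr`.
from datasets import load_dataset

samsum = load_dataset("samsum", split="train")
example = samsum[0]
print(example["dialogue"])  # text of the chat dialogue
print(example["summary"])   # human-written summary of the dialogue
```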
sanskrit_classic | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:sa",
"license:other"
] | This dataset combines some of the classical Sanskrit texts. | 263 | 1 |
saudinewsnet | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"license:unknown"
] | The dataset contains a set of 31,030 Arabic newspaper articles along with metadata, extracted from various online Saudi newspapers and written in MSA. | 264 | 1 |
sberquad | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ru",
"license:unknown",
"arxiv:1912.09723"
] | Sber Question Answering Dataset (SberQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. This Russian analogue of SQuAD was presented at the Sberbank Data Science Journey 2017. | 859 | 6 |
scan | false | [
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:bsd",
"multi-turn",
"arxiv:1711.00350"
] | SCAN tasks with various splits.
SCAN is a set of simple language-driven navigation tasks for studying
compositional learning and zero-shot generalization.
See https://github.com/brendenlake/SCAN for a description of the splits.
Example usage:
data = datasets.load_dataset('scan', 'length') | 7,085 | 2 |
scb_mt_enth_2020 | false | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"language:th",
"license:cc-by-sa-4.0",
"arxiv:2007.03541",
"arxiv:1909.05858"
] | scb-mt-en-th-2020: A Large English-Thai Parallel Corpus
The primary objective of our work is to build a large-scale English-Thai dataset for machine translation.
We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources,
namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents.
The methodology for gathering data, building parallel texts and removing noisy sentence pairs is presented in a reproducible manner.
We train machine translation models based on this dataset. Our models' performance is comparable to that of
Google Translation API (as of May 2020) for Thai-English and outperform Google when the Open Parallel Corpus (OPUS) is
included in the training data for both Thai-English and English-Thai translation.
The dataset, pre-trained models, and source code to reproduce our work are available for public use. | 400 | 1 |
scene_parse_150 | false | [
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|ade20k",
"language:en",
"license:bsd-3-clause",
"scene-parsing",
"arxiv:1608.05442"
] | Scene parsing is the task of segmenting and parsing an image into different image regions associated with semantic categories, such as sky, road, person, and bed.
MIT Scene Parsing Benchmark (SceneParse150) provides a standard training and evaluation platform for the algorithms of scene parsing.
The data for this benchmark comes from ADE20K Dataset which contains more than 20K scene-centric images exhaustively annotated with objects and object parts.
Specifically, the benchmark is divided into 20K images for training, 2K images for validation, and another batch of held-out images for testing.
In total, 150 semantic categories are included for evaluation, covering stuff such as sky, road, and grass, and discrete objects such as person, car, and bed.
Note that the distribution of objects occurring in the images is non-uniform, mimicking a more natural object occurrence in daily scenes. | 2,623 | 7 |
schema_guided_dstc8 | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:dialogue-modeling",
"task_ids:multi-class-classification",
"task_ids:parsing",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1909.05855",
"arxiv:2002.01359"
] | The Schema-Guided Dialogue dataset (SGD) was developed for the Dialogue State Tracking task of the Eighth Dialogue System Technology Challenge (DSTC8).
The SGD dataset consists of over 18k annotated multi-domain, task-oriented conversations between a human and a virtual assistant.
These conversations involve interactions with services and APIs spanning 17 domains, ranging from banks and events to media, calendar, travel, and weather.
For most of these domains, the SGD dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces,
which reflects common real-world scenarios. | 703 | 4 |
allenai/scicite | false | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1904.01608"
] | This is a dataset for classifying citation intents in academic papers.
The main citation intent label for each JSON object is specified with the label
key, while the citation context is specified with a context key. Example:
{
'string': 'In chacma baboons, male-infant relationships can be linked to both
formation of friendships and paternity success [30,31].',
'sectionName': 'Introduction',
'label': 'background',
'citingPaperId': '7a6b2d4b405439',
'citedPaperId': '9d1abadc55b5e0',
...
}
You may obtain the full information about the paper using the provided paper ids
with the Semantic Scholar API (https://api.semanticscholar.org/).
The labels are:
Method, Background, Result | 411 | 0 |
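A minimal sketch for inspecting those labels with the Hugging Face `datasets` library (assuming the Hub ID `allenai/scicite` and a ClassLabel feature named `label`):

```python
# Hedged sketch: load SciCite and map integer labels back to their names.
from datasets import load_dataset

scicite = load_dataset("allenai/scicite", split="train")
label_names = scicite.features["label"].names  # e.g. method/background/result
example = scicite[0]
print(example["string"])
print("intent:", label_names[example["label"]], "| section:", example["sectionName"])
```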
scielo | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:es",
"language:pt",
"license:unknown",
"arxiv:1905.01852"
] | A parallel corpus of full-text scientific articles collected from the SciELO database in the following languages: English, Portuguese and Spanish. The corpus is sentence-aligned for all language pairs, as well as trilingually aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm. | 556 | 1 |
scientific_papers | false | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"abstractive-summarization",
"arxiv:1804.05685"
] | The scientific papers dataset contains two sets of long and structured documents.
The datasets are obtained from the ArXiv and PubMed OpenAccess repositories.
Both "arxiv" and "pubmed" have the following features:
- article: the body of the document, paragraphs separated by "\n".
- abstract: the abstract of the document, paragraphs separated by "\n".
- section_names: titles of sections, separated by "\n". | 4,273 | 42 |
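A minimal sketch showing how the newline-separated fields can be split back into paragraphs and section titles (assuming the Hugging Face `datasets` library and the `arxiv` config):

```python
# Hedged sketch: both configs share the article/abstract/section_names schema.
from datasets import load_dataset

papers = load_dataset("scientific_papers", "arxiv", split="train")
example = papers[0]
paragraphs = example["article"].split("\n")      # body paragraphs
sections = example["section_names"].split("\n")  # section titles
print(len(paragraphs), "paragraphs; first sections:", sections[:3])
```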
allenai/scifact | false | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-2.0"
] | SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales. | 441 | 2 |
sciq | false | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-3.0"
] | The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided. | 397,602 | 18 |
scitail | false | [
"language:en"
] | The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question
and the correct answer choice are converted into an assertive statement to form the hypothesis. We use information
retrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We
crowdsource the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create
the SciTail dataset. The dataset contains 27,026 examples: 10,101 examples with the entails label and 16,925 examples
with the neutral label. | 2,533 | 2 |
allenai/scitldr | false | [
"task_categories:summarization",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"scientific-documents-summarization",
"arxiv:2004.15011"
] | A new multi-target dataset of 5.4K TLDRs over 3.2K papers.
SCITLDR contains both author-written and expert-derived TLDRs,
where the latter are collected using a novel annotation protocol
that produces high-quality summaries while minimizing annotation burden. | 767 | 10 |
search_qa | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1704.05179"
] | We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind
CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article
and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google.
Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context
tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation
as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human
and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering. | 503 | 4 |
sede | false | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2106.05006",
"arxiv:2005.02539"
] | SEDE (Stack Exchange Data Explorer) is a new dataset for Text-to-SQL tasks with more than 12,000 SQL queries and their
natural language descriptions. It is based on real usage by users of the Stack Exchange Data Explorer platform,
which brings complexities and challenges never seen before in any other semantic parsing dataset,
including complex nesting, date manipulation, numeric and text manipulation, parameters, and most
importantly: under-specification and hidden assumptions.
Paper (NLP4Prog workshop at ACL2021): https://arxiv.org/abs/2106.05006 | 313 | 2 |
selqa | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1606.00851"
] | The SelQA dataset provides crowdsourced annotation for two selection-based question answer tasks,
answer sentence selection and answer triggering. | 1,891 | 0 |
sem_eval_2010_task_8 | false | [
"language:en"
] | The SemEval-2010 Task 8 focuses on Multi-way classification of semantic relations between pairs of nominals.
The task was designed to compare different approaches to semantic relation classification
and to provide a standard testbed for future research. | 1,152 | 4 |
sem_eval_2014_task_1 | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-ImageFlickr and SemEval-2012 STS MSR-Video Descriptions",
"language:en",
"license:cc-by-4.0"
] | The SemEval-2014 Task 1 focuses on Evaluation of Compositional Distributional Semantic Models
on Full Sentences through Semantic Relatedness and Entailment. The task was designed to
predict the degree of relatedness between two sentences and to detect the entailment
relation holding between them. | 1,135 | 1 |
sem_eval_2018_task_1 | false | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"language:en",
"language:es",
"license:unknown",
"emotion-classification"
] | SemEval-2018 Task 1: Affect in Tweets: SubTask 5: Emotion Classification.
This is a dataset for multilabel emotion classification for tweets.
'Given a tweet, classify it as 'neutral or no emotion' or as one, or more, of eleven given emotions that best represent the mental state of the tweeter.'
It contains 22467 tweets in three languages manually annotated by crowdworkers using Best–Worst Scaling. | 1,628 | 8 |
sem_eval_2020_task_11 | false | [
"task_categories:text-classification",
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:unknown",
"propaganda-span-identification",
"propaganda-technique-classification",
"arxiv:2009.02696"
] | Propagandistic news articles use specific techniques to convey their message,
such as whataboutism, red herring, and name calling, among many others.
The Propaganda Techniques Corpus (PTC) enables the study of automatic algorithms to
detect them. We provide a permanent leaderboard to allow researchers both to
advertise their progress and to be up-to-speed with the state of the art on the
tasks offered (see below for a definition). | 297 | 5 |
sent_comp | false | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"sentence-compression"
] | Large corpus of uncompressed and compressed sentences from news articles. | 832 | 0 |
senti_lex | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:af",
"language:an",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lt",
"language:lv",
"language:mk",
"language:mr",
"language:ms",
"language:mt",
"language:nl",
"language:nn",
"language:no",
"language:pl",
"language:pt",
"language:rm",
"language:ro",
"language:ru",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vo",
"language:wa",
"language:yi",
"language:zh",
"language:zhw",
"license:gpl-3.0"
] | This dataset adds sentiment lexicons for 81 languages generated via graph propagation based on a knowledge graph, i.e. a graphical representation of real-world entities and the links between them. | 11,030 | 3 |
senti_ws | false | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:sentiment-scoring",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-sa-3.0"
] | SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis and POS tagging. The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity-bearing words are weighted within the interval of [-1, 1]. | 397 | 1 |
sentiment140 | false | [
"language:en"
] | Sentiment140 consists of Twitter messages with emoticons, which are used as noisy labels for
sentiment classification. For more detailed information please refer to the paper. | 604 | 6 |
sepedi_ner | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:nso",
"license:other"
] | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | 266 | 1 |
sesotho_ner_corpus | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:st",
"license:other"
] | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | 266 | 0 |
setimes | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:bg",
"language:bs",
"language:el",
"language:en",
"language:hr",
"language:mk",
"language:ro",
"language:sq",
"language:sr",
"language:tr",
"license:cc-by-sa-4.0"
] | SETimes – A Parallel Corpus of English and South-East European Languages
The corpus is based on the content published on the SETimes.com news portal. The news portal publishes “news and views from Southeast Europe” in ten languages: Bulgarian, Bosnian, Greek, English, Croatian, Macedonian, Romanian, Albanian, Serbian and Turkish. This version of the corpus tries to solve the issues present in an older version of the corpus (published inside OPUS, described in the LREC 2010 paper by Francis M. Tyers and Murat Serdar Alperen). The following procedures were applied to resolve existing issues:
- stricter extraction process – no HTML residues present
- language identification on every non-English document – non-English online documents contain English material in case the article was not translated into that language
- resolving encoding issues in Croatian and Serbian – diacritics were partially lost due to encoding errors – text was rediacritized. | 6,030 | 0 |
setswana_ner_corpus | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:tn",
"license:other"
] | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | 267 | 0 |
sharc | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"conversational-qa",
"arxiv:1809.01494"
] | ShARC is a Conversational Question Answering dataset focusing on question answering from texts containing rules. The goal is to answer questions by possibly asking follow-up questions first. It is assumed that the question is often underspecified, in the sense that the question does not provide enough information to be answered directly. However, an agent can use the supporting rule text to infer what needs to be asked in order to determine the final answer. | 267 | 1 |
sharc_modified | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|sharc",
"language:en",
"license:unknown",
"conversational-qa",
"arxiv:1909.03759",
"arxiv:2009.06354"
] | ShARC, a conversational QA task, requires a system to answer user questions based on rules expressed in natural language text. However, it was found that the ShARC dataset contains multiple spurious patterns that could be exploited by neural models. SharcModified is a new dataset which reduces the patterns identified in the original dataset. To reduce the sensitivity of neural models, for each occurrence of an instance conforming to any of the patterns, we automatically construct alternatives where we choose to either replace the current instance with an alternative instance which does not exhibit the pattern, or retain the original instance. The modified ShARC has two versions: sharc-mod and history-shuffled. For more details refer to Appendix A.3. | 665 | 0 |
sick | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|image-flickr-8k",
"source_datasets:extended|semeval2012-sts-msr-video",
"language:en",
"license:cc-by-nc-sa-3.0"
] | Shared and internationally recognized benchmarks are fundamental for the development of any computational system.
We aim to help the research community working on compositional distributional semantic models (CDSMs) by providing SICK (Sentences Involving Compositional Knowledge), a large English benchmark tailored for them.
SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic and semantic phenomena that CDSMs are expected to account for, but do not require dealing with other aspects of existing sentential data sets (idiomatic multiword expressions, named entities, telegraphic language) that are not within the scope of CDSMs.
By means of crowdsourcing techniques, each pair was annotated for two crucial semantic tasks: relatedness in meaning (with a 5-point rating scale as gold score) and entailment relation between the two elements (with three possible gold labels: entailment, contradiction, and neutral).
The SICK data set was used in SemEval-2014 Task 1, and it is freely available for research purposes. | 2,878 | 3 |
silicone | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"emotion-classification",
"dialogue-act-classification",
"arxiv:2009.11152"
] | The Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (SILICONE) benchmark is a collection
of resources for training, evaluating, and analyzing natural language understanding systems
specifically designed for spoken language. All datasets are in the English language and cover a
variety of domains including daily life, scripted scenarios, joint task completion, phone call
conversations, and television dialogue. Some datasets additionally include emotion and/or sentiment
labels. | 3,320 | 5 |
simple_questions_v2 | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-3.0"
] | SimpleQuestions is a dataset for simple QA, which consists
of a total of 108,442 questions written in natural language by human
English-speaking annotators, each paired with a corresponding fact,
formatted as (subject, relationship, object), that provides the answer
but also a complete explanation. Facts have been extracted from the
knowledge base Freebase (freebase.com). We randomly shuffle these
questions and use 70% of them (75910) as training set, 10% as
validation set (10845), and the remaining 20% as test set. | 566 | 1 |
siswati_ner_corpus | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ss",
"license:other"
] | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | 265 | 0 |
smartdata | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0"
] | DFKI SmartData Corpus is a dataset of 2598 German-language documents
which has been annotated with fine-grained geo-entities, such as streets,
stops and routes, as well as standard named entity types. It has also
been annotated with a set of 15 traffic- and industry-related n-ary
relations and events, such as Accidents, Traffic jams, Acquisitions,
and Strikes. The corpus consists of newswire texts, Twitter messages,
and traffic reports from radio stations, police and railway companies.
It allows for training and evaluating both named entity recognition
algorithms that aim for fine-grained typing of geo-entities, as well
as n-ary relation extraction systems. | 279 | 1 |
sms_spam | false | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-nus-sms-corpus",
"language:en",
"license:unknown"
] | The SMS Spam Collection v.1 is a public set of SMS labeled messages that have been collected for mobile phone spam research.
It comprises one collection of 5,574 real, non-encoded English messages, tagged as either legitimate (ham) or spam. | 1,972 | 5 |
snips_built_in_intents | false | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"arxiv:1805.10190"
] | Snips' built-in intents dataset was initially used to compare different voice assistants and released as a public dataset hosted at
https://github.com/sonos/nlu-benchmark (2016-12-built-in-intents). The dataset contains 328 utterances over 10 intent classes. The
related paper mentioned on the github page is https://arxiv.org/abs/1805.10190 and a related Medium post is
https://medium.com/snips-ai/benchmarking-natural-language-understanding-systems-d35be6ce568d . | 2,798 | 3 |
snli | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other-flicker-30k",
"source_datasets:extended|other-visual-genome",
"language:en",
"license:cc-by-4.0",
"arxiv:1909.02209"
] | The SNLI corpus (version 1.0) is a collection of 570k human-written English
sentence pairs manually labeled for balanced classification with the labels
entailment, contradiction, and neutral, supporting the task of natural language
inference (NLI), also known as recognizing textual entailment (RTE). | 10,970 | 18 |
snow_simplified_japanese_corpus | false | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"annotations_creators:other",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:ja",
"license:cc-by-4.0"
] | About SNOW T15: The simplified corpus for the Japanese language. The corpus has 50,000 manually simplified and aligned sentences. This corpus contains the original sentences, simplified sentences and English translation of the original sentences. It can be used for automatic text simplification as well as translating simple Japanese into English and vice-versa. The core vocabulary is restricted to 2,000 words where it is selected by accounting for several factors such as meaning preservation, variation, simplicity and the UniDic word segmentation criterion.
For details, refer to the explanation page of Japanese simplification (http://www.jnlp.org/research/Japanese_simplification). The original texts are from "small_parallel_enja: 50k En/Ja Parallel Corpus for Testing SMT Methods", which is a bilingual corpus for machine translation. About SNOW T23: An expansion corpus of 35,000 sentences rewritten in easy Japanese (simple Japanese vocabulary) based on SNOW T15. The original texts are from "Tanaka Corpus" (http://www.edrdg.org/wiki/index.php/Tanaka_Corpus). | 418 | 5 |
so_stacksample | false | [
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:open-domain-abstractive-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0"
] | Dataset with the text of 10% of questions and answers from the Stack Overflow programming Q&A website.
This is organized as three tables:
Questions contains the title, body, creation date, closed date (if applicable), score, and owner ID for all non-deleted Stack Overflow questions whose Id is a multiple of 10.
Answers contains the body, creation date, score, and owner ID for each of the answers to these questions. The ParentId column links back to the Questions table.
Tags contains the tags on each of these questions. | 627 | 3 |
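A minimal sketch of the ParentId join between the Answers and Questions tables (assuming the Hub configs are named `Questions` and `Answers`, and that the columns keep their original StackSample names `Id`, `Title`, `ParentId`, and `Score`):

```python
# Hedged sketch: link one answer back to the title of its parent question.
# Config and column names follow the original StackSample CSVs and are
# assumptions to verify against the dataset card.
from datasets import load_dataset

questions = load_dataset("so_stacksample", "Questions", split="train")
answers = load_dataset("so_stacksample", "Answers", split="train")

# Index question titles by Id (memory-heavy for the full table; fine as a demo).
title_by_id = {q["Id"]: q["Title"] for q in questions}
answer = answers[0]
print(title_by_id.get(answer["ParentId"]), "| answer score:", answer["Score"])
```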
social_bias_frames | false | [
"task_categories:text2text-generation",
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"explanation-generation"
] | Social Bias Frames is a new way of representing the biases and offensiveness that are implied in language.
For example, these frames are meant to distill the implication that "women (candidates) are less qualified"
behind the statement "we shouldn’t lower our standards to hire more women." | 277 | 5 |
social_i_qa | false | [
"language:en"
] | We introduce Social IQa: Social Interaction QA, a new question-answering benchmark for testing social commonsense intelligence. Contrary to many prior benchmarks that focus on physical or taxonomic knowledge, Social IQa focuses on reasoning about people’s actions and their social implications. For example, given an action like "Jesse saw a concert" and a question like "Why did Jesse do this?", humans can easily infer that Jesse wanted "to see their favorite performer" or "to enjoy the music", and not "to see what's happening inside" or "to see if it works". The actions in Social IQa span a wide variety of social situations, and answer candidates contain both human-curated answers and adversarially-filtered machine-generated candidates. Social IQa contains over 37,000 QA pairs for evaluating models’ abilities to reason about the social implications of everyday events and situations. | 10,077 | 1 |
sofc_materials_articles | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:named-entity-recognition",
"task_ids:slot-filling",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2006.03039"
] | The SOFC-Exp corpus consists of 45 open-access scholarly articles annotated by domain experts.
An inter-annotator agreement study demonstrating the complexity of the suggested
named entity recognition and slot-filling tasks, as well as the high annotation
quality, is presented in the accompanying paper. | 408 | 2 |
sogou_news | false | [
"arxiv:1509.01626"
] | The Sogou News dataset is a mixture of 2,909,551 news articles from the SogouCA and SogouCS news corpora, in 5 categories.
The number of training samples selected for each class is 90,000, and the number of testing samples is 12,000. Note that the Chinese characters have been converted to Pinyin.
The classification labels of the news are determined by their domain names in the URL. For example, news with the
URL http://sports.sohu.com is categorized under the sports class. | 275 | 0 |
spanish_billion_words | false | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0"
] | An unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
These resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project corpora and the Europarl,
the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.
This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus. | 396 | 7 |
spc | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:af",
"language:el",
"language:en",
"language:zh",
"license:unknown"
] | This is a collection of parallel corpora collected by Hercules Dalianis and his research group for bilingual dictionary construction.
More information in: Hercules Dalianis, Hao-chun Xing, Xin Zhang: Creating a Reusable English-Chinese Parallel Corpus for Bilingual Dictionary Construction, In Proceedings of LREC2010 (source: http://people.dsv.su.se/~hercules/SEC/) and Konstantinos Charitakis (2007): Using Parallel Corpora to Create a Greek-English Dictionary with UPLUG, In Proceedings of NODALIDA 2007. Afrikaans-English: Aldin Draghoender and Mattias Kanhov: Creating a reusable English – Afrikaans parallel corpora for bilingual dictionary construction
4 languages, 3 bitexts
total number of files: 6
total number of tokens: 1.32M
total number of sentence fragments: 0.15M | 528 | 0 |
species_800 | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown"
] | We have developed an efficient algorithm and implementation of a dictionary-based approach to named entity recognition,
which we here use to identify names of species and other taxa in text. The tool, SPECIES, is more than an order of
magnitude faster than, and as accurate as, existing tools. The precision and recall were assessed both on an existing gold-standard
corpus and on a new corpus of 800 abstracts, which were manually annotated after the development of the tool. The corpus
comprises abstracts from journals selected to represent many taxonomic groups, which gives insights into which types of
organism names are hard to detect and which are easy. Finally, we have tagged organism names in the entire Medline database
and developed a web resource, ORGANISMS, that makes the results accessible to the broad community of biologists. | 1,116 | 1 |
speech_commands | false | [
"task_categories:audio-classification",
"task_ids:keyword-spotting",
"annotations_creators:other",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:1804.03209"
] | This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands, and are spoken by a
variety of different speakers. This data set is designed to help train simple
machine learning models. This dataset is covered in more detail at
[https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).
Version 0.01 of the data set (configuration `"v0.01"`) was released on August 3rd 2017 and contains
64,727 audio files.
In version 0.01 thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
In both versions, ten of the words are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". The other words are considered auxiliary (in the current implementation
this is marked by a `True` value of the `"is_unknown"` feature); their function is to teach a model to distinguish core words
from unrecognized ones.
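A minimal sketch of separating the core commands from the auxiliary words, assuming the dataset id and the `"is_unknown"` feature described above:
```python
from datasets import load_dataset

# Load version 0.01; the "v0.02" config should work the same way.
ds = load_dataset("speech_commands", "v0.01", split="train")

# Auxiliary words are marked with is_unknown=True, so this keeps the ten core commands.
core_commands = ds.filter(lambda example: not example["is_unknown"])
```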
The `_silence_` class contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise. | 926 | 4 |
spider | false | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"text-to-sql"
] | Spider is a large-scale, complex, and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 college students. | 1,906 | 21 |
squad | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:1606.05250"
] | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | 121,615 | 75 |
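Since every answer is a span of its passage, this can be checked directly; a minimal sketch assuming the usual `context`/`answers` schema with character-level `answer_start` offsets:
```python
from datasets import load_dataset

ds = load_dataset("squad", split="validation")

ex = ds[0]
answer = ex["answers"]["text"][0]
start = ex["answers"]["answer_start"][0]
# The answer is a literal span of the reading passage.
assert ex["context"][start:start + len(answer)] == answer
```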
squad_adversarial | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|squad",
"language:en",
"license:mit"
] | Here are two different adversaries, each of which uses a different procedure to pick the sentence it adds to the paragraph:
AddSent: Generates up to five candidate adversarial sentences that don't answer the question, but have a lot of words in common with the question. Picks the one that most confuses the model.
AddOneSent: Similar to AddSent, but it picks one of the candidate sentences at random. This adversary does not query the model in any way. | 1,567 | 4 |
squad_es | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|squad",
"language:es",
"license:cc-by-4.0",
"arxiv:1912.05200"
] | Automatic translation of the Stanford Question Answering Dataset (SQuAD) v2 into Spanish. | 520 | 1 |
squad_it | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|squad",
"language:it",
"license:unknown"
] | SQuAD-it is derived from SQuAD through semi-automatic translation of the dataset
into Italian. It represents a large-scale dataset for open question answering on factoid questions in Italian.
The dataset contains more than 60,000 question/answer pairs derived from the original English dataset, and it is
split into training and test sets to support replicable benchmarking of QA systems. | 292 | 2 |
squad_kor_v1 | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-nd-4.0",
"arxiv:1909.07005"
] | KorQuAD 1.0 is a large-scale Korean dataset for the machine reading comprehension task, consisting of human-generated questions for Wikipedia articles. Following the data collection process of SQuAD v1.0, 70,000+ question-answer pairs were crowdsourced: 1,637 articles and 70,079 question-answer pairs in total. 1,420 articles are used for the training set, 140 for the dev set, and 77 for the test set; 60,407 question-answer pairs are in the training set, 5,774 in the dev set, and 3,898 in the test set. | 1,355 | 3 |
squad_kor_v2 | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|squad_kor_v1",
"source_datasets:original",
"language:ko",
"license:cc-by-nd-4.0"
] | KorQuAD 2.0 is a Korean question answering dataset consisting of 100,000+ pairs in total. There are three major differences from KorQuAD 1.0, the standard Korean Q&A dataset. First, a given document is a whole Wikipedia page, not just one or two paragraphs. Second, because documents also contain tables and lists, it is necessary to understand documents structured with HTML tags. Finally, an answer can be long text covering not only word or phrase units but also paragraphs, tables, and lists. As a baseline, Google's open-source BERT Multilingual model shows an F1 score of 46.0%, very low compared to the human F1 score of 85.7%, which indicates that this dataset poses a challenging task. Additionally, performance was improved through no-answer data augmentation. By distributing this data, we intend to extend MRC beyond plain text to real-world tasks of various lengths and formats. | 441 | 1 |
squad_v1_pt | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pt",
"license:mit",
"arxiv:1606.05250"
] | Portuguese translation of the SQuAD dataset. The translation was performed automatically using the Google Cloud API. | 473 | 2 |
squad_v2 | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1606.05250"
] | SQuAD 2.0 combines the 100,000 questions in SQuAD 1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers
to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible but
also determine when no answer is supported by the paragraph and abstain from answering. | 41,619 | 27 |
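A minimal sketch of the abstention signal, assuming the usual schema in which unanswerable questions carry an empty `answers["text"]` list:
```python
from datasets import load_dataset

ds = load_dataset("squad_v2", split="validation")

# Unanswerable questions have no gold spans; a system should abstain on these.
unanswerable = ds.filter(lambda ex: len(ex["answers"]["text"]) == 0)
print(f"{len(unanswerable)} of {len(ds)} validation questions are unanswerable")
```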
squadshifts | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0"
] | null | 3,153 | 2 |
srwac | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:sr",
"license:cc-by-sa-3.0"
] | The Serbian web corpus srWaC was built by crawling the .rs top-level domain in 2014. The corpus was near-deduplicated at the paragraph level, normalised via diacritic restoration, morphosyntactically annotated, and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain, and language identification (Serbian vs. Croatian).
Version 1.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 1.1 contains newer and better linguistic annotations. | 266 | 1 |
sst | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown"
] | The Stanford Sentiment Treebank is the first corpus with fully labeled parse trees, which allows for a
complete analysis of the compositional effects of sentiment in language. | 9,176 | 10 |
stereoset | false | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"stereotype-detection",
"arxiv:2004.09456"
] | StereoSet is a dataset that measures stereotype bias in language models. StereoSet consists of 17,000 sentences that
measure model preferences across gender, race, religion, and profession. | 855 | 2 |
story_cloze | false | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown"
] | The 'Story Cloze Test' is a commonsense reasoning framework for evaluating story understanding,
story generation, and script learning. This test requires a system to choose the correct ending
to a four-sentence story. | 9,754 | 2 |
stsb_mt_sv | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-sts-b",
"language:sv",
"license:unknown",
"arxiv:2009.03116"
] | null | 266 | 1 |
stsb_multi_mt | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-sts-b",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"language:zh",
"license:other",
"arxiv:1708.00055"
] | These are different multilingual translations and the English original of the STS Benchmark dataset. Translations were produced with deepl.com. | 7,309 | 17 |
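A short loading sketch, assuming each translation is exposed as a config named by its language code (as the `language:` tags above suggest):
```python
from datasets import load_dataset

# Load the German translation; other codes ("en", "es", "fr", ...) should load the same way.
ds = load_dataset("stsb_multi_mt", name="de", split="train")
print(ds[0])  # a sentence pair plus its similarity score
```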
style_change_detection | false | [] | The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Detecting these positions is a crucial part of the authorship identification process, and for multi-author document analysis in general.
Access to the dataset needs to be requested from Zenodo. | 403 | 0 |
subjqa | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|yelp_review_full",
"source_datasets:extended|other-amazon_reviews_ucsd",
"source_datasets:extended|other-tripadvisor_reviews",
"language:en",
"license:unknown",
"arxiv:2004.14283"
] | SubjQA is a question answering dataset that focuses on subjective questions and answers.
The dataset consists of roughly 10,000 questions over reviews from 6 different domains: books, movies, grocery,
electronics, TripAdvisor (i.e. hotels), and restaurants. | 6,338 | 3 |
super_glue | false | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_ids:natural-language-inference",
"task_ids:word-sense-disambiguation",
"task_ids:coreference-resolution",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:unknown",
"superglue",
"NLU",
"natural language understanding"
] | SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard. | 674,721 | 84 |
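Each SuperGLUE task is distributed as its own configuration; a minimal sketch, assuming the lowercase task names (e.g. `boolq`) double as config names:
```python
from datasets import load_dataset

# BoolQ is one SuperGLUE task; others ("cb", "copa", "rte", ...) load the same way.
boolq = load_dataset("super_glue", "boolq", split="train")
print(boolq[0])  # a passage, a yes/no question, and a label
```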
superb | false | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:keyword-spotting",
"task_ids:speaker-identification",
"task_ids:audio-intent-classification",
"task_ids:audio-emotion-recognition",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|other-librimix",
"source_datasets:extended|other-speech_commands",
"language:en",
"license:unknown",
"query-by-example-spoken-term-detection",
"audio-slot-filling",
"speaker-diarization",
"automatic-speaker-verification",
"arxiv:2105.01051"
] | Self-supervised learning (SSL) has proven vital for advancing research in
natural language processing (NLP) and computer vision (CV). The paradigm
pretrains a shared model on large volumes of unlabeled data and achieves
state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the
speech processing community lacks a similar setup to systematically explore the
paradigm. To bridge this gap, we introduce Speech processing Universal
PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the
performance of a shared model across a wide range of speech processing tasks
with minimal architecture changes and labeled data. Among multiple usages of the
shared model, we especially focus on extracting the representation learned from
SSL due to its preferable re-usability. We present a simple framework to solve
SUPERB tasks by learning task-specialized lightweight prediction heads on top of
the frozen shared model. Our results demonstrate that the framework is promising
as SSL representations show competitive generalizability and accessibility
across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a
benchmark toolkit to fuel the research in representation learning and general
speech processing.
Note that in order to limit the required storage for preparing this dataset, the
audio is stored in the .wav format and is not converted to a float32 array. To
convert the audio file to a float32 array, please make use of the `.map()`
function as follows:
```python
import soundfile as sf

def map_to_array(batch):
    # Decode the .wav file into a float32 numpy array.
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

# `dataset` here is a previously loaded split of this dataset.
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | 2,388 | 13 |
svhn | false | [
"task_categories:image-classification",
"task_categories:object-detection",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:other"
] | SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirements for data preprocessing and formatting.
It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images)
and comes from a significantly harder, unsolved, real world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. | 556 | 2 |
swag | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1808.05326"
] | Given a partial description like "she opened the hood of the car,"
humans can reason about the situation and anticipate what might come
next ("then, she examined the engine"). SWAG (Situations With Adversarial Generations)
is a large-scale dataset for this task of grounded commonsense
inference, unifying natural language inference and physically grounded reasoning.
The dataset consists of 113k multiple choice questions about grounded situations
(73k training, 20k validation, 20k test).
Each question is a video caption from LSMDC or ActivityNet Captions,
with four answer choices about what might happen next in the scene.
The correct answer is the (real) video caption for the next event in the video;
the three incorrect answers are adversarially generated and human verified,
so as to fool machines but not humans. SWAG aims to be a benchmark for
evaluating grounded commonsense NLI and for learning representations.
The full data contain more information,
but the regular configuration will be more interesting for modeling
(note that the regular data are shuffled). The test set for leaderboard submission
is under the regular configuration. | 4,834 | 7 |
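Since the description distinguishes the full data from the regular configuration, a minimal loading sketch (assuming `regular` and `full` are the config names):
```python
from datasets import load_dataset

# The shuffled "regular" configuration is the one used for test-set leaderboard submission.
swag = load_dataset("swag", "regular", split="validation")
print(swag[0])  # a context sentence plus four candidate endings
```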
swahili | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:sw",
"license:cc-by-4.0"
] | The Swahili dataset was developed specifically for the language modeling task.
The dataset contains 28,000 unique words, with 6.84M, 970k, and 2M words for the train,
valid, and test partitions respectively, which represent the ratio 80:10:10.
The entire dataset is lowercased and has no punctuation marks, and
start- and end-of-sentence markers have been incorporated to facilitate easy tokenization during language modeling. | 283 | 1 |
swahili_news | false | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:sw",
"license:cc-by-4.0"
] | Swahili is spoken by 100-150 million people across East Africa. In Tanzania, it is one of two national languages (the other is English) and the official language of instruction in all schools. News in Swahili is an important part of the media sphere in Tanzania.
News contributes to education, technology, and the economic growth of a country, and news in local languages plays an important cultural role in many African countries. In the modern age, African languages in news and other spheres are at risk of being lost as English becomes the dominant language in online spaces.
The Swahili news dataset was created to narrow the gap in using the Swahili language to create NLP technologies, and to help AI practitioners in Tanzania and across the African continent practice their NLP skills on problems relevant to the Swahili language. The news was collected from different websites, some of which publish news in Swahili only and others in several languages including Swahili.
The dataset was created for the specific task of text classification: each news article is categorized into one of six topics (Local news, International news, Finance news, Health news, Sports news, and Entertainment news). The dataset comes with a specified train/test split: the train set contains 75% of the dataset and the test set contains 25%. | 280 | 1 |
swda | false | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other-Switchboard-1 Telephone Speech Corpus, Release 2",
"language:en",
"license:cc-by-nc-sa-3.0",
"arxiv:1811.05021",
"arxiv:1711.05568",
"arxiv:1709.04250",
"arxiv:1805.06280"
] | The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2 with
turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the
associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.
The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to
align the two resources. In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the
conversations and their participants. | 512 | 7 |
swedish_medical_ner | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:sv",
"license:cc-by-sa-4.0"
] | SwedMedNER is a dataset for training and evaluating Named Entity Recognition systems on medical texts in Swedish.
It is derived from medical articles on the Swedish Wikipedia, Läkartidningen, and 1177 Vårdguiden. | 645 | 0 |
swedish_ner_corpus | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:sv",
"license:cc-by-4.0"
] | Webbnyheter 2012 from Språkbanken, semi-manually annotated and adapted for CoreNLP Swedish NER. Semi-manually means, in this case, bootstrapped from Swedish gazetteers and then manually corrected/reviewed by two independent native-speaking Swedish annotators. No annotator agreement was calculated. | 268 | 0 |
swedish_reviews | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:sv",
"license:unknown"
] | null | 285 | 1 |