Columns: id (string, lengths 2-115); private (bool, 1 class); tags (list); description (string, lengths 0-5.93k); downloads (int64, 0-1.14M); likes (int64, 0-1.79k)
ramybaly/conll2012
false
[]
The CoNLL-2012 shared task involved predicting coreference in English, Chinese, and Arabic, using the final version, v5.0, of the OntoNotes corpus. It was a follow-on to the English-only task organized in 2011. Until the creation of the OntoNotes corpus, resources in this sub-field of language processing were limited to noun phrase coreference, often on a restricted set of entities, such as the ACE entities. OntoNotes provides a large-scale corpus of general anaphoric coreference not restricted to noun phrases or to a specified set of entity types, and covers multiple languages. OntoNotes also provides additional layers of integrated annotation, capturing additional shallow semantic structure. This paper describes the OntoNotes annotation (coreference and other layers), describes the parameters of the shared task including the format, pre-processing information, and evaluation criteria, and presents and discusses the results achieved by the participating systems. The task of coreference has had a complex evaluation history: many potential evaluation conditions have, in the past, made it difficult to judge the improvement of new algorithms over previously reported results. Having a standard test set and standard evaluation parameters, all based on a resource that provides multiple integrated annotation layers (syntactic parses, semantic roles, word senses, named entities, and coreference) in multiple languages, could support joint modeling and help ground and energize ongoing research in the task of entity and event coreference. For more details see https://aclanthology.org/W12-4501.pdf
14
0
ramybaly/nerd
false
[]
Recently, considerable literature has grown up around the theme of few-shot named entity recognition (NER), but little published benchmark data specifically focuses on this practical and challenging task. Current approaches collect existing supervised NER datasets and reorganize them into the few-shot setting for empirical study. These strategies conventionally aim to recognize coarse-grained entity types with few examples, while in practice most unseen entity types are fine-grained. In this paper, we present FEW-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types. FEW-NERD consists of 188,238 sentences from Wikipedia containing 4,601,160 words, each annotated as context or as part of a two-level entity type. To the best of our knowledge, this is the first few-shot NER dataset and the largest human-crafted NER dataset. We construct benchmark tasks with different emphases to comprehensively assess the generalization capability of models. Extensive empirical results and analysis show that FEW-NERD is challenging and the problem requires further research. We make Few-NERD public at https://nigding97.github.io/fewnerd/
0
0
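The two-level entity typing described above (8 coarse-grained types, each refined into fine-grained subtypes) can be sketched with a small helper. The "coarse-fine" label format (e.g. "person-actor") is an assumption for illustration, not taken from the description:

```python
def coarse_type(fine_grained: str) -> str:
    """Recover the coarse-grained type from a fine-grained label.

    Assumes (hypothetically) that labels are written as "coarse-fine"
    strings, e.g. "person-actor" -> "person". Split on the first hyphen
    only, in case the fine-grained part itself contains hyphens.
    """
    return fine_grained.split("-", 1)[0]

print(coarse_type("person-actor"))  # person
```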
ranim/Algerian-Arabic
false
[]
null
0
1
ranpox/xfund
false
[]
null
0
3
rays2pix/example
false
[]
null
0
0
rays2pix/example_dataset
false
[]
null
0
0
rbawden/DiaBLa
false
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:fr", "license:cc-by-sa-4.0" ]
null
4
0
readerbench/ChatLinks
false
[]
null
0
0
rewardsignal/reddit_writing_prompts
false
[]
null
0
4
rgismondi/code-fill-dataset
false
[]
null
0
0
robz/test
false
[]
null
0
0
rocca/sims4-faces
false
[]
null
0
0
ronaldvanos/testdata
false
[]
null
0
0
rony/soccer-dialogues
false
[]
null
1
0
rookieguy12/dataset
false
[]
null
0
0
rosettarandd/rosetta_balcanica
false
[]
null
0
0
roskoN/dailydialog
false
[]
The DailyDialog dataset as provided in the original form, with a bit of preprocessing applied to enable fast prototyping. The splits are as in the original distribution.
360
0
roskoN/dstc8-reddit-corpus
false
[]
The DSTC8 dataset as provided in the original form. The only difference is that the splits are in separate zip files; in the original distribution it is one big archive containing all splits.
2
0
rubenwol/multi_news_qasrl
false
[]
null
0
0
rubrix/cleanlab-label_errors
false
[]
null
0
0
rubrix/gutenberg_spacy-ner
false
[]
null
2,825
0
rubrix/imdb_spacy-ner
false
[]
null
0
0
rubrix/sentiment-banking
false
[]
null
0
0
rucyang/sales
false
[]
null
0
0
rwebe/rwebe
false
[]
null
0
0
s-myk/test
false
[]
null
0
0
s3h/arabic-gec
false
[]
null
0
0
s3h/arabic-grammar-corrections
false
[]
null
0
0
s3h/custom-qalb-classification
false
[]
null
0
0
s3h/customized-qalb-v2
false
[]
null
0
0
s3h/customized-qalb
false
[]
null
0
0
s3h/gec-arabic
false
[]
null
0
0
s3h/gec-cleaned
false
[]
null
0
0
s3h/gec-token-classification
false
[]
null
0
0
s3h/poc-gec
false
[]
null
0
0
s50227harry/test1
false
[]
null
0
0
safik/github-issues-comments
false
[]
null
0
0
safik/github-issues
false
[]
null
0
0
sagnikrayc/mctest
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "explanations-in-question-answering" ]
MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension.
18
1
sagnikrayc/quasar
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en-US", "license:bsd-3-clause", "arxiv:1707.03904" ]
We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37,000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43,000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query.
14
0
sagteam/author_profiling
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ru", "license:apache-2.0" ]
The corpus for author profiling analysis contains Russian-language texts labeled for 5 tasks: 1) gender -- 13,530 texts labeled with the author's gender (female or male); 2) age -- 13,530 texts labeled with the author's age, a number from 12 to 80; in addition, for the classification task we added 5 age groups: 1-19, 20-29, 30-39, 40-49, 50+; 3) age imitation -- 7,574 texts, where crowdsourced authors were asked to write three texts: a) in their natural manner, b) imitating the style of someone younger, c) imitating the style of someone older; 4) gender imitation -- 5,956 texts, where crowdsourced authors were asked to write texts both in their own gender and pretending to be the opposite gender; 5) style imitation -- 5,956 texts, where crowdsourced authors were asked to write a text on behalf of another person of their own gender, distorting their usual style.
0
0
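The five age groups listed in the description map an integer age onto a categorical label; a minimal sketch of that bucketing (the function name is hypothetical, the group boundaries are from the description):

```python
def age_group(age: int) -> str:
    """Bucket an author's age (12-80 in the corpus) into the five
    classification groups named in the dataset description."""
    if age <= 19:
        return "1-19"
    if age <= 29:
        return "20-29"
    if age <= 39:
        return "30-39"
    if age <= 49:
        return "40-49"
    return "50+"

print(age_group(34))  # 30-39
```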
sajadk/IranianCarLicencePlate
false
[]
null
0
0
salesken/Paraphrase_category_detection
false
[]
null
0
0
sangmini/FooReview
false
[]
null
0
0
sangmini/star_tagging
false
[]
null
0
0
samirt8/fr_corpora_parliament_processed
false
[]
null
0
0
samjgorman/sample
false
[]
null
0
0
sammy786/finnish_traindata
false
[]
null
0
0
sanyu/aw
false
[]
null
0
0
sanyu/er
false
[]
null
0
0
sanyu/hh
false
[]
null
0
0
sanyu/vb
false
[]
null
0
0
sarulab-speech/bvcc-voicemos2022
false
[]
This dataset is for internal use only, for the VoiceMOS Challenge.
9
0
sc2qa/sc2q_commoncrawl
false
[ "arxiv:2109.04689" ]
null
0
1
sc2qa/sc2q_commoncrawl_large
false
[ "arxiv:2109.04689" ]
null
19
1
sc2qa/sc2qa_commoncrawl
false
[ "arxiv:2109.04689" ]
null
0
0
sdfufygvjh/fgghuviugviu
false
[]
null
0
0
seamew/ChnSentiCorp
false
[]
null
928
13
seamew/Hotel
false
[]
null
4
0
seamew/THUCNews
false
[]
null
3
0
seamew/THUCNewsText
false
[]
null
13
2
seamew/THUCNewsTitle
false
[]
null
0
0
seamew/Weibo
false
[]
null
24
1
seanbethard/autonlp-data-summarization_model
false
[]
null
0
2
sebastiaan/test-cefr
false
[]
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
0
0
sebastian-hofstaetter/tripclick-training
false
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:other", "annotations_creators:clicks", "language_creators:other", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:tripclick", "language:en-US", "license:apache-2.0", "arxiv:2201.00365" ]
null
7
0
segments/sidewalk-semantic
false
[ "task_categories:image-segmentation", "task_ids:semantic-segmentation", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:expert-generated", "size_categories:n<1K", "source_datasets:original" ]
null
767
16
semeru/completeformer-masked
false
[]
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
0
1
sentence-transformers/embedding-training-data
false
[]
null
7
23
sentence-transformers/msmarco-hard-negatives
false
[]
null
390
3
sentence-transformers/parallel-sentences
false
[]
null
2
8
sentence-transformers/reddit-title-body
false
[]
null
81
3
seregadgl/test_set
false
[]
null
0
0
sevbqewre/vebdesbdty
false
[]
null
0
0
severo/autonlp-data-sentiment_detection-3c8bcd36
false
[]
null
0
0
severo/embellishments
false
[ "annotations_creators:no-annotation", "size_categories:n<1K", "source_datasets:original", "license:cc0-1.0" ]
null
0
2
severo/wit
false
[]
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity-rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
0
1
seyia92coding/steam_games_2019.csv
false
[]
null
0
0
shahp7575/sia_pile_sample
false
[]
null
0
1
shahp7575/sia_tp_sample
false
[]
null
0
0
shahrukhx01/questions-vs-statements
false
[]
null
0
0
shaina/covid19
false
[]
null
1
0
shanya/website_metadata_c4_toy
false
[]
null
0
0
shao/git_data
false
[]
null
0
1
shao/test
false
[]
null
0
0
sharejing/BiPaR
false
[ "arxiv:1910.05040" ]
null
0
0
sheryylli/utr_total_reads
false
[]
null
0
0
shibing624/nli_zh
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "annotations_creators:shibing624", "language_creators:shibing624", "multilinguality:monolingual", "size_categories:100K<n<20M", "source_datasets:https://github.com/shibing624/text2vec", "source_datasets:https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC", "source_datasets:http://icrc.hitsz.edu.cn/info/1037/1162.htm", "source_datasets:http://icrc.hitsz.edu.cn/Article/show/171.html", "source_datasets:https://arxiv.org/abs/1908.11828", "source_datasets:https://github.com/pluto-junzeng/CNSD", "language:zh", "license:cc-by-4.0", "arxiv:1908.11828" ]
Plain-text data in the format (sentence1, sentence2, label). A collection of common Chinese semantic-matching datasets covering 5 tasks: ATEC, BQ, LCQMC, PAWSX, and STS-B.
343
10
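The (sentence1, sentence2, label) record format described above could be parsed as follows; the tab-separated layout and the integer label are assumptions for illustration, not taken from the card:

```python
from typing import NamedTuple

class NLIPair(NamedTuple):
    sentence1: str
    sentence2: str
    label: int  # e.g. a 0/1 similarity or entailment label

def parse_line(line: str) -> NLIPair:
    """Parse one tab-separated (sentence1, sentence2, label) record."""
    s1, s2, label = line.rstrip("\n").split("\t")
    return NLIPair(s1, s2, int(label))
```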
shibing624/source_code
false
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100M<n<200M", "source_datasets:https://github.com/shibing624/code-autocomplete", "source_datasets:https://github.com/bharathgs/Awesome-pytorch-list", "source_datasets:https://github.com/akullpp/awesome-java", "source_datasets:https://github.com/fffaraz/awesome-cpp", "language:en", "license:cc-by-4.0", "license:gfdl" ]
Plain-text data containing high-quality programming source code, including Python, Java, and C++ sources.
17
1
shivam/hindi_pib_processed
false
[]
null
0
0
shivam/marathi_pib_processed
false
[]
null
0
0
shivam/marathi_samanantar_processed
false
[]
null
0
0
shivam/test-translation-2
false
[]
null
0
0
shivam/test-translation
false
[]
null
0
0
shivam/test
false
[]
null
0
0
shivkumarganesh/CoLA
false
[]
null
9
0
shivmoha/squad-unanswerable
false
[]
Combines the 100,000 questions in SQuAD 1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
0
0
shivmoha/squad_adversarial_manual
false
[]
This dataset is prepared with the same idea as the SQuAD adversarial dataset; however, all the examples have been curated manually by the authors and are significantly more difficult.
0
0
shpotes/ms_coco
false
[]
null
0
0
shpotes/tfcol
false
[]
null
0
0