id (string, 2-115 chars) | private (bool, 1 class) | tags (list) | description (string, 0-5.93k chars, nullable) | downloads (int64, 0-1.14M) | likes (int64, 0-1.79k) |
---|---|---|---|---|---|
GEM-submissions/lewtun__this-is-a-test__1647247409 | false | [
"benchmark:gem",
"evaluation",
"benchmark"
] | null | 0 | 0 |
EMBO/BLURB | false | [
"task_categories:question-answering",
"task_categories:token-classification",
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:closed-domain-qa",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2007.15779",
"arxiv:1909.06146"
] | null | 301 | 2 |
Jiejie/asr_book_lm_v2.0 | false | [] | null | 0 | 0 |
GEM-submissions/lewtun__this-is-a-test__1647256250 | false | [
"benchmark:gem",
"evaluation",
"benchmark"
] | null | 0 | 0 |
wikitablequestions | false | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"table-question-answering",
"arxiv:1508.00305"
] | This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables. | 302 | 6 |
gimmaru/github-issues | false | [
"arxiv:2005.00614"
] | null | 0 | 0 |
GEM-submissions/lewtun__this-is-a-test__1647263213 | false | [
"benchmark:gem",
"evaluation",
"benchmark"
] | null | 0 | 0 |
marsyas/gtzan | false | [] | GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050 Hz mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock. | 1 | 0 |
GEM/xwikis | false | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:de",
"language:en",
"language:fr",
"language:cs",
"license:cc-by-sa-4.0",
"arxiv:2202.09583"
] | The XWikis Corpus (Perez-Beltrachini and Lapata, 2021) provides datasets with different language pairs and directions for cross-lingual abstractive document summarisation. The current version includes four languages: English, German, French, and Czech. The dataset is derived from Wikipedia. It is based on the observation that, for a Wikipedia title, the lead section provides an overview conveying salient information, while the body provides detailed information. It thus treats the body and the lead paragraph as a document-summary pair. Furthermore, as a Wikipedia title can be associated with Wikipedia articles in various languages, 1) Wikipedia’s Interlanguage Links are used to find titles across languages and 2) given any two related Wikipedia titles, e.g., Huile d’Olive (French) and Olive Oil (English), the lead paragraph from one title is paired with the body of the other to derive cross-lingual pairs. | 21 | 2 |
lvwerra/my_test | false | [] | null | 0 | 0 |
lvwerra/my_test_2 | false | [] | null | 0 | 0 |
Jiejie/asr_book_lm_v2.1 | false | [] | null | 0 | 0 |
cgarciae/cartoonset | false | [
"size_categories:10K<n<100K",
"license:cc-by-4.0",
"arxiv:1711.05139"
] | Cartoon Set is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork
categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible
combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes. | 25 | 11 |
PradeepReddyThathireddy/Inspiring_Content_Detection_Dataset | false | [] | null | 0 | 0 |
conll2012_ontonotesv5 | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:coreference-resolution",
"task_ids:parsing",
"task_ids:lemmatization",
"task_ids:word-sense-disambiguation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"language:en",
"language:zh",
"license:cc-by-nc-nd-4.0",
"semantic-role-labeling"
] | OntoNotes v5.0 is the final version of the OntoNotes corpus, a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the extended version of OntoNotes v5.0 used in the CoNLL-2012 shared task.
It includes v4 train/dev and v9 test data for English/Chinese/Arabic and the corrected v12 train/dev/test data (English only).
The source of the data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which appears to be the same as the official data, but users should use this dataset at their own risk.
See also the summaries from Papers with Code: [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1).
For more detailed information about the dataset, such as its annotation and tag set, refer to the documents in the Mendeley repo mentioned above. | 2,048 | 17 |
anjandash/java-8m-methods-v2 | false | [
"multilinguality:monolingual",
"language:java",
"license:mit"
] | null | 0 | 0 |
victor/autonlp-data-tweet-sentiment | false | [
"task_categories:text-classification",
"language:en"
] | null | 0 | 0 |
agemagician/uniref50 | false | [] | null | 4,605 | 0 |
hazal/Turkish-Biomedical-corpus-trM | false | [
"language:tr"
] | null | 0 | 2 |
rubrix/go_emotions_training | false | [] | null | 2 | 0 |
Jiejie/asr_book_lm_v2.3 | false | [] | null | 0 | 0 |
malteos/paperswithcode-aspects | false | [] | Papers with aspects from paperswithcode.com dataset | 0 | 0 |
kSaluja/tokens_data | false | [] | null | 0 | 0 |
Dayyan/bwns | false | [] | null | 0 | 0 |
Hiruni99/eng-sin-laws-and-acts | false | [] | null | 0 | 0 |
rubrix/research_titles_multi-label | false | [] | null | 8 | 0 |
rubrix/go_emotions_multi-label | false | [] | null | 4 | 0 |
elricwan/roberta-data | false | [] | null | 1 | 0 |
willcai/wav2vec2_common_voice_accents_3 | false | [] | null | 0 | 0 |
jorge-henao/disco_poetry_spanish | false | [] | null | 0 | 1 |
gcaillaut/enwiki_el | false | [
"task_categories:other",
"annotations_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en-EN",
"license:wtfpl"
] | English Wikipedia dataset for Entity Linking | 0 | 0 |
crabz/stsb-sk | false | [
"task_ids:semantic-similarity-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|stsb_multi_mt",
"language:sk",
"license:unknown"
] | null | 0 | 0 |
mfleck/german_extracted_text | false | [] | null | 0 | 0 |
ebrigham/agnewsadapted | false | [] | AG is a collection of more than 1 million news articles. News articles have been
gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of
activity. ComeToMyHead is an academic news search engine which has been running
since July 2004. The dataset is provided by the academic community for research
purposes in data mining (clustering, classification, etc), information retrieval
(ranking, search, etc), xml, data compression, data streaming, and any other
non-commercial activity. For more information, please refer to the link
http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The AG's news topic classification dataset is constructed by Xiang Zhang
([email protected]) from the dataset above. It is used as a text
classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann
LeCun. Character-level Convolutional Networks for Text Classification. Advances
in Neural Information Processing Systems 28 (NIPS 2015). | 0 | 0 |
yangdong/ecqa | false | [] | null | 63 | 0 |
davanstrien/newspaper_navigator_people | false | [] | null | 0 | 0 |
voidful/NMSQA | false | [
"task_categories:question-answering",
"task_categories:automatic-speech-recognition",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"speech-recognition",
"arxiv:2203.04911"
] | null | 140 | 3 |
shpotes/SJTU | false | [] | null | 0 | 0 |
shpotes/ImVisible | false | [] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 0 | 0 |
LongNN/news_sum | false | [
"license:gpl-3.0"
] | null | 0 | 0 |
tomekkorbak/test | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-small | false | [] | null | 0 | 0 |
MatanBenChorin/temp | false | [] | null | 0 | 0 |
shivam/split-test | false | [] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 0 | 0 |
rubrix/research_papers_multi-label | false | [] | null | 0 | 1 |
wietsedv/udpos28 | false | [] | Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. The annotation scheme is based on (universal) Stanford dependencies (de Marneffe et al., 2006, 2008, 2014), Google universal part-of-speech tags (Petrov et al., 2012), and the Interset interlingua for morphosyntactic tagsets (Zeman, 2008). | 532 | 0 |
nimaster/autonlp-data-devign_raw_test | false | [
"task_categories:text-classification"
] | null | 0 | 0 |
anthonny/hate_speech | false | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es-EC",
"license:unknown"
] | null | 1 | 0 |
umanlp/xscitldr | false | [] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 0 | 0 |
nikit91/qald9 | false | [] | null | 0 | 0 |
n6L3/kaggle | false | [
"license:apache-2.0"
] | null | 0 | 0 |
n6L3/nlp | false | [] | null | 0 | 0 |
Wang123/codeparrot-train | false | [] | null | 0 | 0 |
Wang123/codeparrot-valid | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-full_test | false | [] | null | 0 | 0 |
DrishtiSharma/MESD-Processed-Dataset | false | [] | null | 0 | 0 |
abidlabs/crowdsourced-test3 | false | [] | null | 0 | 0 |
abidlabs/crowdsourced-test4 | false | [] | null | 0 | 0 |
abidlabs/crowdsourced-test5 | false | [] | null | 0 | 0 |
shivam/split | false | [] | null | 0 | 0 |
mrm8488/test2 | false | [
"license:wtfpl"
] | null | 0 | 0 |
mercerchen/fakenews-jsonl | false | [] | null | 0 | 0 |
Mionozmi/Ddy | false | [] | null | 0 | 0 |
franz96521/scientific_papers | false | [] | null | 1 | 0 |
Paulosdeanllons/ODS_BOE | false | [
"license:afl-3.0"
] | null | 0 | 1 |
malteos/test-ds | false | [
"task_categories:text-retrieval",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en-US"
] | null | 0 | 0 |
malteos/test2 | false | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0"
] | null | 0 | 0 |
malteos/aspect-paper-embeddings | false | [] | null | 0 | 0 |
elena-soare/crawled-ecommerce | false | [] | null | 0 | 0 |
abdusah/arabic_speech_massive | false | [] | null | 1 | 0 |
cfilt/iwn_wordlists | false | [
"task_categories:token-classification",
"annotations_creators:Shivam Mhaskar, Diptesh Kanojia",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:as",
"language:bn",
"language:mni",
"language:gu",
"language:hi",
"language:kn",
"language:ks",
"language:kok",
"language:ml",
"language:mr",
"language:or",
"language:ne",
"language:pa",
"language:sa",
"language:ta",
"language:te",
"language:ur",
"license:cc-by-nc-sa-4.0",
"abbreviation-detection"
] | We provide the unique word list from the IndoWordnet (IWN) knowledge base. | 0 | 2 |
arun007/mydata | false | [] | null | 0 | 0 |
tomekkorbak/pile-debug | false | [] | null | 0 | 0 |
malteos/aspect-paper-metadata | false | [] | null | 2 | 0 |
hackathon-pln-es/parallel-sentences | false | [] | null | 0 | 0 |
fofiu/test-dataset | false | [] | null | 0 | 0 |
indonesian-nlp/eli5_id | false | [] | null | 0 | 1 |
tomekkorbak/pile-curse-chunk-1 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-0 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-3 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-2 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-5 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-6 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-4 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-16 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-15 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-14 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-13 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-8 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-9 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-20 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-18 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-7 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-24 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-17 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-21 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-22 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-10 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-26 | false | [] | null | 0 | 0 |
tomekkorbak/pile-curse-chunk-11 | false | [] | null | 0 | 0 |
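
The rows above form a flat dataset-metadata table with the schema given in the header (id, private, tags, description, downloads, likes). As a minimal sketch of how such a dump could be queried, assuming it were exported to a CSV file with the same six columns (the filename `datasets_metadata.csv` and the stringified encoding of the `tags` column are assumptions, not part of the original dump):

```python
import ast

import pandas as pd

# Minimal sketch: load an assumed CSV export of the table above.
df = pd.read_csv("datasets_metadata.csv")

# The `tags` column is assumed to hold stringified Python lists; parse them back.
df["tags"] = df["tags"].apply(ast.literal_eval)

# Example query: English question-answering datasets, ranked by downloads.
is_qa_en = df["tags"].apply(
    lambda tags: "task_categories:question-answering" in tags and "language:en" in tags
)
print(df[is_qa_en].sort_values("downloads", ascending=False)[["id", "downloads", "likes"]])
```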