Column summary (types and observed ranges):

| column | type | observed range |
| --- | --- | --- |
| id | string | lengths 2–115 |
| private | bool | 1 class |
| tags | list | |
| description | string | lengths 0–5.93k |
| downloads | int64 | 0–1.14M |
| likes | int64 | 0–1.79k |

| id | private | tags | description | downloads | likes |
| --- | --- | --- | --- | --- | --- |
| xiaobendanyn/nyt10 | false | [] | null | 4 | 0 |
| xiaobendanyn/tacred | false | [] | null | 1 | 3 |
| xkang/github-issues | false | [] | null | 0 | 0 |
| xuyeliu/notebookCDG | false | ["arxiv:2104.01002"] | null | 0 | 1 |
| yabramuvdi/wfh-problematic | false | [] | null | 0 | 0 |
| yannobla/Sunshine | false | [] | null | 0 | 0 |
| yazdipour/text-to-sparql-kdwd | false | [] | null | 1 | 0 |
| ydshieh/coco_dataset_script | false | [] | COCO is a large-scale object detection, segmentation, and captioning dataset. | 8,967 | 2 |
| yerevann/sst2 | false | [] | null | 0 | 0 |
| yharyarias/tirads_tiroides | false | [] | null | 0 | 1 |
| yhavinga/mc4_nl_cleaned | false | ["task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "multilinguality:en-nl", "source_datasets:extended", "language:nl", "language:en", "license:odc-by", "arxiv:1910.10683"] | A thoroughly cleaned version of the Dutch portion of the multilingual colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI. Based on the Common Crawl dataset (https://commoncrawl.org). This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning detailed in the repository README file. | 5 | 6 |
| yluisfern/PBU | false | [] | null | 0 | 0 |
| yo/devparty | false | [] | null | 0 | 1 |
| yonesuke/Ising2D | false | [] | null | 0 | 0 |
| yonesuke/Vicsek | false | ["license:mit"] | null | 2 | 0 |
| yonesuke/kuramoto | false | [] | null | 0 | 0 |
| ysharma/rickandmorty | false | [] | null | 6 | 0 |
| yuanchuan/annotated_reference_strings | false | ["task_categories:token-classification", "task_ids:parsing", "annotations_creators:other", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc-by-4.0"] | A repository of reference strings annotated with a CSL processor, using citations obtained from various sources. | 0 | 0 |
| yuchenlin/OntoRock | false | [] | null | 0 | 0 |
| yuvalkirstain/asset | false | [] | null | 0 | 0 |
| yuvalkirstain/contract_nli-debug | false | [] | null | 0 | 0 |
| yuvalkirstain/contract_nli_t5 | false | [] | null | 0 | 0 |
| yuvalkirstain/contract_nli_t5_lm | false | [] | null | 0 | 0 |
| yuvalkirstain/qasper_t5 | false | [] | null | 1 | 0 |
| yuvalkirstain/qasper_t5_lm | false | [] | null | 1 | 0 |
| yuvalkirstain/qmsum_t5 | false | [] | null | 0 | 0 |
| yuvalkirstain/qmsum_t5_lm | false | [] | null | 114 | 0 |
| yuvalkirstain/quality | false | [] | null | 0 | 0 |
| yuvalkirstain/quality_debug | false | [] | null | 0 | 0 |
| yuvalkirstain/quality_squad | false | [] | null | 0 | 0 |
| yuvalkirstain/quality_squad_debug | false | [] | null | 0 | 0 |
| yuvalkirstain/quality_t5 | false | [] | null | 0 | 0 |
| yuvalkirstain/quality_t5_lm | false | [] | null | 0 | 0 |
| yuvalkirstain/scrolls_t5 | false | [] | null | 0 | 0 |
| yuvalkirstain/squad_full_doc | false | [] | null | 0 | 0 |
| yuvalkirstain/squad_seq2seq | false | [] | null | 2 | 0 |
| yuvalkirstain/squad_t5 | false | [] | null | 0 | 0 |
| yuvalkirstain/summ_screen_fd_t5 | false | [] | null | 0 | 0 |
| yuvalkirstain/summ_screen_fd_t5_lm | false | [] | null | 0 | 0 |
| yxchar/ag-tlm | false | [] | null | 0 | 0 |
| yxchar/amazon-tlm | false | [] | null | 0 | 0 |
| yxchar/chemprot-tlm | false | [] | null | 8 | 0 |
| yxchar/citation_intent-tlm | false | [] | null | 0 | 1 |
| yxchar/hyp-tlm | false | [] | null | 0 | 0 |
| yxchar/imdb-tlm | false | [] | null | 2 | 0 |
| yxchar/rct-20k-tlm | false | [] | null | 0 | 0 |
| yxchar/sciie-tlm | false | [] | null | 0 | 0 |
| z-uo/female-LJSpeech-italian | false | ["multilinguality:monolingual", "language:it"] | null | 0 | 0 |
| z-uo/male-LJSpeech-italian | false | ["multilinguality:monolingual", "language:it"] | null | 4 | 0 |
| z-uo/squad-it | false | ["task_categories:question-answering", "task_ids:extractive-qa", "multilinguality:monolingual", "size_categories:8k<n<10k", "language:it"] | null | 0 | 0 |
| zapsdcn/ag | false | [] | null | 0 | 0 |
| zapsdcn/amazon | false | [] | null | 0 | 0 |
| zapsdcn/chemprot | false | [] | null | 198 | 0 |
| zapsdcn/citation_intent | false | [] | null | 233 | 0 |
| zapsdcn/hyperpartisan_news | false | [] | null | 22 | 0 |
| zapsdcn/imdb | false | [] | null | 0 | 0 |
| zapsdcn/rct-20k | false | [] | null | 0 | 0 |
| zapsdcn/sciie | false | [] | null | 12 | 0 |
| zf-org/org_dataset | false | [] | null | 0 | 0 |
| zfaB4Hmm/test | false | [] | null | 0 | 0 |
| zhangruihan1/face-recognition-validation | false | [] | null | 1 | 1 |
| zhangruihan1/face-recognition | false | [] | null | 0 | 1 |
| zhangruihan1/fr-cfp_fp | false | [] | null | 0 | 0 |
| zhoujun/hitab | false | [] | null | 1 | 0 |
| zhufy/xquad_split | false | [] | XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into eleven languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi and Romanian. Consequently, the dataset is entirely parallel across 12 languages. | 0 | 0 |
| zj88zj/PubMed_200k_RCT | false | [] | null | 3 | 1 |
| zj88zj/SCIERC | false | [] | null | 0 | 0 |
| zloelias/kinopoisk-reviews-short | false | [] | null | 0 | 0 |
| zloelias/kinopoisk-reviews | false | [] | null | 2 | 0 |
| zloelias/lenta-ru-short | false | [] | null | 0 | 0 |
| zloelias/lenta-ru | false | [] | null | 1 | 0 |
| zwang199/autonlp-data-traffic_nlp_binary | false | ["task_categories:text-classification", "language:en"] | null | 0 | 0 |
| fancyerii/test | false | ["task_categories:text-classification", "task_ids:semantic-similarity-classification", "size_categories:10K<n<100K"] | null | 0 | 0 |
| ArnavL/finetune_preprocessed_yelp | false | [] | null | 0 | 0 |
| huggan/anime-faces | false | ["license:cc0-1.0"] | null | 17 | 4 |
| GEM-submissions/lewtun__this-is-a-test__1646314818 | false | ["benchmark:gem", "evaluation", "benchmark"] | null | 0 | 0 |
| GEM-submissions/lewtun__this-is-a-test__1646316929 | false | ["benchmark:gem", "evaluation", "benchmark"] | null | 0 | 0 |
| v-card/lol | false | [] | null | 0 | 0 |
| fuliucansheng/wheat | false | [] | null | 0 | 0 |
| davanstrien/testhugit | false | [] | null | 0 | 0 |
| testst/dsdfasdfsaf | false | [] | null | 0 | 0 |
| firzens/authors | false | [] | null | 0 | 0 |
| NLPC-UOM/Sinhala-Tamil-Aligned-Parallel-Corpus | false | ["language:si", "license:mit"] | null | 0 | 0 |
| NLPC-UOM/AnanyaSinhalaNERDataset | false | [] | null | 0 | 0 |
| openclimatefix/gfs-reforecast | false | [] | This dataset consists of various NOAA datasets related to operational forecasts, including FNL Analysis files, GFS operational forecasts, and the raw observations used to initialize the grid. | 0 | 1 |
| nlpaueb/finer-139 | false | ["task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "language:en", "license:cc-by-sa-4.0", "arxiv:2203.06482"] | FiNER-139 is a named entity recognition dataset consisting of 10K annual and quarterly English reports (filings) of publicly traded companies downloaded from the U.S. Securities and Exchange Commission (SEC), annotated with 139 XBRL tags in the IOB2 format. | 803 | 8 |
| GEM-submissions/ratishsp__seqplan__1646397329 | false | ["benchmark:gem", "evaluation", "benchmark"] | null | 0 | 0 |
| GEM-submissions/ratishsp__seqplan__1646397829 | false | ["benchmark:gem", "evaluation", "benchmark"] | null | 0 | 0 |
| Alvenir/alvenir_asr_da_eval | false | ["license:cc-by-4.0"] | A dataset of a little more than 5 hours of audio, primarily intended as an evaluation dataset for Danish. | 14 | 5 |
| google/xtreme_s | false | ["task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:extended\|multilingual_librispeech", "source_datasets:extended\|covost2", "language:afr", "language:amh", "language:ara", "language:asm", "language:ast", "language:azj", "language:bel", "language:ben", "language:bos", "language:cat", "language:ceb", "language:cmn", "language:ces", "language:cym", "language:dan", "language:deu", "language:ell", "language:eng", "language:spa", "language:est", "language:fas", "language:ful", "language:fin", "language:tgl", "language:fra", "language:gle", "language:glg", "language:guj", "language:hau", "language:heb", "language:hin", "language:hrv", "language:hun", "language:hye", "language:ind", "language:ibo", "language:isl", "language:ita", "language:jpn", "language:jav", "language:kat", "language:kam", "language:kea", "language:kaz", "language:khm", "language:kan", "language:kor", "language:ckb", "language:kir", "language:ltz", "language:lug", "language:lin", "language:lao", "language:lit", "language:luo", "language:lav", "language:mri", "language:mkd", "language:mal", "language:mon", "language:mar", "language:msa", "language:mlt", "language:mya", "language:nob", "language:npi", "language:nld", "language:nso", "language:nya", "language:oci", "language:orm", "language:ory", "language:pan", "language:pol", "language:pus", "language:por", "language:ron", "language:rus", "language:bul", "language:snd", "language:slk", "language:slv", "language:sna", "language:som", "language:srp", "language:swe", "language:swh", "language:tam", "language:tel", "language:tgk", "language:tha", "language:tur", "language:ukr", "language:umb", "language:urd", "language:uzb", "language:vie", "language:wol", "language:xho", "language:yor", "language:yue", "language:zul", "license:cc-by-4.0", "arxiv:2203.10752", "arxiv:2205.12446", "arxiv:2007.10310"] | XTREME-S covers four task families: speech recognition, classification, speech-to-text translation and retrieval. Covering 102 languages from 10+ language families, 3 different domains and 4 task families, XTREME-S aims to simplify multilingual speech representation evaluation, as well as catalyze research in “universal” speech representation learning. | 367 | 24 |
| anjandash/java-8m-methods-v1 | false | ["multilinguality:monolingual", "language:java", "license:mit"] | null | 0 | 0 |
| PhilSad/data-guided-scp-gptj-lit | false | [] | null | 0 | 0 |
| elkarhizketak | false | ["task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:eu", "license:cc-by-sa-4.0", "dialogue-qa"] | ElkarHizketak is a low-resource conversational Question Answering (QA) dataset in Basque created by Basque speaker volunteers. The dataset contains close to 400 dialogues and more than 1600 question-answer pairs, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section. | 0 | 1 |
| ruanchaves/hashset_distant_sampled | false | ["annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:hi", "language:en", "license:unknown", "word-segmentation", "arxiv:2201.06741"] | HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other baseline datasets (STAN and BOUN), and analyse the results across the datasets to argue that HashSet can act as a good benchmark for hashtag segmentation tasks. HashSet Distant: 3.3M loosely collected camel-cased hashtags containing each hashtag and its segmentation. HashSet Distant Sampled is a sample of 20,000 camel-cased hashtags from the HashSet Distant dataset. | 2 | 0 |
| ruanchaves/hashset_distant | false | ["annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:hi", "language:en", "license:unknown", "word-segmentation", "arxiv:2201.06741"] | HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other baseline datasets (STAN and BOUN), and analyse the results across the datasets to argue that HashSet can act as a good benchmark for hashtag segmentation tasks. HashSet Distant: 3.3M loosely collected camel-cased hashtags containing each hashtag and its segmentation. | 2 | 0 |
| ruanchaves/hashset_manual | false | ["task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:hi", "language:en", "license:unknown", "word-segmentation", "arxiv:2201.06741"] | HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other baseline datasets (STAN and BOUN), and analyse the results across the datasets to argue that HashSet can act as a good benchmark for hashtag segmentation tasks. HashSet Manual contains 1.9k manually annotated hashtags. Each row consists of the hashtag, the segmented hashtag, named entity annotations, and a list storing whether the hashtag contains a mix of Hindi and English tokens and/or contains non-English tokens. | 0 | 0 |
| ruanchaves/stan_large | false | ["annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:agpl-3.0", "word-segmentation"] | The description below was taken from the paper "Multi-task Pairwise Neural Ranking for Hashtag Segmentation" by Maddela et al.: "STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their associated tweets from the same Stanford dataset. STAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation errors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the-art models is only around 10%. Most of the errors were related to named entities. For example, #lionhead, which refers to the “Lionhead” video game company, was labeled as “lion head”. We therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations." | 0 | 0 |
| ruanchaves/stan_small | false | ["annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:unknown", "word-segmentation", "arxiv:1501.03210"] | Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al. | 2 | 0 |
| ruanchaves/boun | false | ["annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:unknown", "word-segmentation"] | Dev-BOUN is a development set of 500 manually segmented hashtags, selected from tweets about movies, TV shows, popular people, sports teams, etc. Test-BOUN is a test set of 500 manually segmented hashtags, selected from the same kinds of tweets. | 2 | 1 |
| ruanchaves/dev_stanford | false | ["annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:unknown", "word-segmentation"] | 1000 hashtags manually segmented by Çelebi et al. for development purposes, randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140. | 2 | 0 |
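
The columns above mirror the per-repository metadata the Hugging Face Hub exposes (repo id, visibility, tags, downloads, likes). As a minimal sketch of how an index like this can be pulled programmatically, assuming the `huggingface_hub` client library (the `author` filter and `limit` value are illustrative, not part of the table):

```python
# Minimal sketch: listing dataset metadata like the rows in the table above.
# Assumes `pip install huggingface_hub`; the author filter is illustrative.
from huggingface_hub import HfApi

api = HfApi()

# Each returned DatasetInfo carries the same fields as the table columns:
# id, private, tags, downloads, likes.
for ds in api.list_datasets(author="ruanchaves", limit=10):
    print(ds.id, ds.private, ds.tags, ds.downloads, ds.likes, sep=" | ")
```

The description column likely comes from each repository's dataset card or loading script rather than from the listing call, so it is not reproduced in the sketch.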