Columns: id (string, length 2–115), private (bool, 1 class), tags (list), description (string, length 0–5.93k), downloads (int64, 0–1.14M), likes (int64, 0–1.79k).

| id | private | tags | description | downloads | likes |
| --- | --- | --- | --- | --- | --- |
| sia-precision-education/pile_cpp | false | [] | null | 0 | 0 |
| sia-precision-education/pile_js | false | [] | null | 0 | 0 |
| sia-precision-education/pile_python | false | [] | null | 2 | 2 |
| sia-precision-education/sia_pile_sample | false | [] | null | 0 | 0 |
| sijpapi/batch13 | false | [] | https://guillaumejaume.github.io/FUNSD/ | 0 | 0 |
| sijpapi/funsd | false | [] | null | 0 | 0 |
| sijpapi/funsds | false | [] | https://guillaumejaume.github.io/FUNSD/ | 0 | 0 |
| silentzone/test | false | [ "license:apache-2.0" ] | null | 0 | 0 |
| sine/zzz | false | [] | null | 0 | 0 |
| sismetanin/rureviews | false | [] | null | 0 | 0 |
| smallv0221/my-test | false | [] | null | 0 | 0 |
| softcatala/Europarl-catalan | false | [ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:extended\|europarl_bilingual", "language:ca", "language:de", "language:en", "license:cc-by-4.0" ] | null | 0 | 0 |
| softcatala/Softcatala-Web-Texts-Dataset | false | [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ca", "license:cc-by-sa-4.0", "license:cc0-1.0" ] | null | 18 | 0 |
| softcatala/Tilde-MODEL-Catalan | false | [ "task_categories:text2text-generation", "task_categories:translation", "language_creators:machine-generated", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:extended\|tilde_model", "language:ca", "language:de", "license:cc-by-4.0", "conditional-text-generation" ] | null | 0 | 0 |
| softcatala/ca_text_corpus | false | [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "license:cc0-1.0" ] | null | 0 | 0 |
| softcatala/catalan-dictionary | false | [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "license:gpl-2.0", "license:lgpl-2.1" ] | null | 30 | 0 |
| softcatala/open-source-english-catalan-corpus | false | [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "language:en", "license:gpl-3.0" ] | null | 0 | 0 |
| solomonk/reddit_mental_health_posts | false | [] | null | 171 | 3 |
| spacemanidol/ms_marco_doc2query | false | [] | null | 0 | 0 |
| spacemanidol/msmarco_passage_ranking | false | [] | null | 0 | 0 |
| spasis/datasets-github-issues | false | [] | null | 1 | 0 |
| spasis/github-issues | false | [] | null | 0 | 0 |
| sshleifer/pseudo_bart_xsum | false | [] | Extreme Summarization (XSum) Dataset. There are two features: document (the input news article) and summary (a one-sentence summary of the article). | 0 | 0 |
| stas/c4-en-10k | false | [ "language:en", "license:apache-2.0" ] | This is a small subset representing the first 10K records of the original C4 dataset, "en" subset, created for testing. The records were extracted after having been shuffled. The full 1TB+ dataset is at https://huggingface.co/datasets/c4. | 43 | 1 |
| stas/openwebtext-10k | false | [] | An open-source replication of the WebText dataset from OpenAI. This is a small subset representing the first 10K records from the original dataset, created for testing. The full 8M-record dataset is at https://huggingface.co/datasets/openwebtext. | 15,159 | 4 |
| stas/oscar-en-10k | false | [ "language:en", "license:apache-2.0" ] | This is a small subset representing 10K records from the original OSCAR dataset, "unshuffled_deduplicated_en" subset, created for testing. The records were extracted after having been shuffled. The full 1TB+ dataset is at https://huggingface.co/datasets/oscar. | 1,310 | 2 |
| stas/wmt14-en-de-pre-processed | false | [] | null | 312 | 0 |
| stas/wmt16-en-ro-pre-processed | false | [] | null | 97 | 0 |
| stevhliu/demo | false | [ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:apache-2.0", "conditional-text-generation" ] | null | 9 | 0 |
| stiel/skjdhjkasdhasjkd | false | [] | null | 0 | 0 |
| stjokerli/TextToText_boolq | false | [] | null | 1 | 0 |
| stjokerli/TextToText_boolq_seqio | false | [] | null | 0 | 0 |
| stjokerli/TextToText_cb | false | [] | null | 4 | 0 |
| stjokerli/TextToText_cb_seqio | false | [] | null | 0 | 0 |
| stjokerli/TextToText_copa | false | [] | null | 0 | 0 |
| stjokerli/TextToText_copa_seqio | false | [] | null | 0 | 0 |
| stjokerli/TextToText_mnli | false | [ "license:mit" ] | null | 2 | 0 |
| stjokerli/TextToText_mnli_seqio | false | [] | null | 0 | 0 |
| stjokerli/TextToText_rte | false | [] | null | 0 | 0 |
| stjokerli/TextToText_rte_seqio | false | [] | null | 0 | 0 |
| subiksha/OwnDataset | false | [] | null | 0 | 0 |
| superb/superb-data | false | [] | null | 0 | 3 |
| susumu2357/squad_v2_sv | false | [ "task_categories:question-answering", "task_ids:extractive-qa", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended\|wikipedia", "language:sv", "license:apache-2.0" ] | SQuAD_v2_sv is a Swedish version of SQuAD2.0. Translation was done automatically using the Google Translate API, but this is not straightforward because: 1) the span which determines the start and the end of the answer in the context may vary after translation, and 2) the translated context may not contain the translated answer if both are translated independently. More details on how to handle these issues will be provided in another blog post. | 50 | 0 |
| svakulenk0/qrecc | false | [ "task_categories:question-answering", "task_ids:open-domain-qa", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended\|natural_questions", "source_datasets:extended\|quac", "language:en", "license:cc-by-3.0", "arxiv:2010.04898" ] | null | 2 | 3 |
| svakulenk0/spoken_kgqa | false | [] | null | 0 | 0 |
| svalabs/all-nli-german-translation-wmt19 | false | [] | null | 0 | 1 |
| svanhvit/iceErrorCorpus | false | [] | Icelandic GEC corpus. The Icelandic Error Corpus (IceEC) is a collection of texts in modern Icelandic annotated for mistakes related to spelling, grammar, and other issues. The texts are organized by genre, of which there are three: student essays, online news texts and Icelandic Wikipedia articles. Each mistake is marked according to error type using an error code, of which there are 253. The corpus consists of 4,046 texts with 56,956 categorized error instances. The corpus is divided into a development corpus, which comprises 90% of the corpus, and a test corpus, which comprises the other 10%. | 0 | 0 |
| svanhvit/icelandic-ner-MIM-GOLD-NER | false | [] | This Icelandic named entity (NE) corpus, MIM-GOLD-NER, is a version of the MIM-GOLD corpus tagged for NEs. Over 48 thousand NEs are tagged in this corpus of one million tokens, which can be used for training named entity recognizers for Icelandic. The MIM-GOLD-NER corpus was developed at Reykjavik University in 2018–2020, funded by the Strategic Research and Development Programme for Language Technology (LT). Two LT students were in charge of the corpus annotation and of training named entity recognizers using machine learning methods. A semi-automatic approach was used for annotating the corpus. Lists of Icelandic person names, location names, and company names were compiled and used for extracting and classifying as many named entities as possible. Regular expressions were then used to find certain numerical entities in the corpus. After this automatic pre-processing step, the whole corpus was reviewed manually to correct any errors. The corpus is tagged for eight named entity types: PERSON – names of humans, animals and other beings, real or fictional. LOCATION – names of locations, real or fictional: buildings, street and place names, and all geographical and geopolitical entities such as cities, countries, counties and regions, as well as planet names and other outer space entities. ORGANIZATION – companies and other organizations, public or private, real or fictional: schools, churches, swimming pools, community centers, musical groups, other affiliations. MISCELLANEOUS – proper nouns that don't belong to the previous three categories, such as products, book and movie titles, and events such as wars, sports tournaments, festivals, concerts, etc. DATE – absolute temporal units of a full day or longer, such as days, months, years and centuries, both written numerically and alphabetically. TIME – absolute temporal units shorter than a full day, such as seconds, minutes, or hours, both written numerically and alphabetically. MONEY – exact monetary amounts in any currency, both written numerically and alphabetically. PERCENT – percentages, both written numerically and alphabetically. MIM-GOLD-NER is intended for training named entity recognizers for Icelandic. It is in the CoNLL format, and the position of each token within the NE is marked using the BIO tagging format. The corpus can be used in its entirety or by training on subsets of the text types that best fit the intended domain. The corpus is distributed with the same special user license as MIM-GOLD, which is based on the MIM license, since the texts in MIM-GOLD were sampled from the MIM corpus. | 0 | 0 |
| tals/test | false | [] | null | 0 | 0 |
| tanfiona/causenet_wiki | false | [] | Crawled Wikipedia data from the CIKM 2020 paper 'CauseNet: Towards a Causality Graph Extracted from the Web.' | 0 | 2 |
| tasosk/airlines | false | [] | null | 4 | 0 |
| tau/fs | false | [] | null | 0 | 0 |
| tau/mrqa | false | [] | The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples, a significantly harder challenge. The dataset is a collection of 18 existing QA datasets (carefully selected subsets of them) converted to the same format (SQuAD format). Of these 18 datasets, six were made available for training, six for development, and the final six for testing. The dataset is released as part of the MRQA 2019 Shared Task. | 0 | 0 |
| tau/scientific_papers | false | [] | null | 0 | 0 |
| tau/scrolls | false | [ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:multiple-choice-qa", "task_ids:natural-language-inference", "language:en", "query-based-summarization", "long-texts", "arxiv:2201.03533", "arxiv:2104.02112", "arxiv:2104.07091", "arxiv:2104.05938", "arxiv:1712.07040", "arxiv:2105.03011", "arxiv:2112.08608", "arxiv:2110.01799" ] | SCROLLS: Standardized CompaRison Over Long Language Sequences. A suite of natural language datasets that require reasoning over long texts. https://scrolls-benchmark.com/ | 8,426 | 9 |
| tesemnikov-av/toxic_dataset_classification | false | [] | null | 0 | 0 |
| tesemnikov-av/toxic_dataset_ner | false | [] | null | 0 | 0 |
| testOrganization01/test05 | false | [] | null | 0 | 0 |
| teven/all_wikipedia_passages | false | [] | null | 0 | 0 |
| teven/c4_15M | false | [] | null | 1 | 0 |
| teven/github_all_lang_filtered | false | [] | null | 0 | 0 |
| teven/matched_passages_wikidata | false | [] | null | 0 | 1 |
| teven/mpww | false | [] | null | 0 | 0 |
| teven/mpww_all_passages | false | [] | null | 0 | 0 |
| teven/prompted_examples | false | [] | null | 0 | 0 |
| teven/pseudo_crawl_en_seeds | false | [] | null | 0 | 0 |
| teven/stackexchange | false | [] | null | 55 | 0 |
| tharindu/MOLD | false | [] | null | 0 | 0 |
| tharindu/SOLID | false | [] | null | 7 | 0 |
| thiemowa/argumentationreviewcorpus | false | [] | null | 0 | 1 |
| thiemowa/empathyreviewcorpus | false | [] | null | 0 | 0 |
| thomwolf/codeparrot-train | false | [] | null | 0 | 0 |
| thomwolf/codeparrot-valid | false | [] | null | 0 | 0 |
| thomwolf/codeparrot | false | [] | null | 0 | 0 |
| thomwolf/github-dataset | false | [] | null | 0 | 0 |
| thomwolf/github-python | false | [] | null | 0 | 5 |
| thomwolf/very-good-dataset | false | [] | null | 0 | 0 |
| thomwolf/very-test-dataset-2 | false | [] | null | 0 | 0 |
| thomwolf/very-test-dataset | false | [] | null | 44 | 0 |
| tianxing1994/temp | false | [] | null | 0 | 0 |
| toddmorrill/github-issues | false | [ "task_categories:text-classification", "task_categories:text-retrieval", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:document-retrieval", "annotations_creators:no-annotation", "multilinguality:monolingual", "size_categories:unknown", "language:'en-US'" ] | null | 0 | 0 |
| toloka/CrowdSpeech | false | [ "task_categories:summarization", "task_categories:automatic-speech-recognition", "task_categories:text2text-generation", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-4.0", "conditional-text-generation", "stuctured-to-text", "speech-recognition", "arxiv:2107.01091" ] | CrowdSpeech is a publicly available large-scale dataset of crowdsourced audio transcriptions. It contains annotations for more than 50 hours of English speech transcriptions from more than 1,000 crowd workers. | 2 | 2 |
| toloka/VoxDIY-RusNews | false | [ "task_categories:summarization", "task_categories:automatic-speech-recognition", "task_categories:text2text-generation", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:ru", "license:cc-by-4.0", "conditional-text-generation", "stuctured-to-text", "speech-recognition", "arxiv:2107.01091" ] | VoxDIY: Benchmark Dataset for Russian Crowdsourced Audio Transcription. | 3 | 1 |
| tommy19970714/common_voice | false | [] | Common Voice is Mozilla's initiative to help teach machines how real people speak. The dataset currently consists of 7,335 validated hours of speech in 60 languages, but we're always adding more voices and languages. | 0 | 0 |
| toriving/kosimcse | false | [] | null | 0 | 0 |
| toriving/talktalk-sentiment-210713-multi-singleturn-custom-multiturn | false | [] | null | 0 | 0 |
| tranduyquang2205/vietnamese_dataset | false | [] | null | 0 | 0 |
| transformersbook/codeparrot-train | false | [] | null | 948 | 1 |
| transformersbook/codeparrot-valid | false | [] | null | 67 | 0 |
| transformersbook/codeparrot | false | [ "python", "code" ] | null | 237 | 25 |
| trnt/github-issues | false | [] | null | 0 | 0 |
| ttj/metadata_arxiv | false | [] | null | 0 | 0 |
| turingbench/TuringBench | false | [ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:apache-2.0" ] | This benchmark environment contains a dataset comprised of texts generated by pre-trained language models. We also have two benchmark tasks: human vs. machine (i.e., binary classification) and authorship attribution (i.e., multi-class classification). These benchmark tasks and the dataset are hosted on the TuringBench website, with leaderboards for each task. | 88 | 0 |
| uasoyasser/rgfes | false | [] | null | 0 | 0 |
| ubamba98/ro_cv7_processed | false | [] | null | 0 | 0 |
| ucberkeley-dlab/measuring-hate-speech | false | [ "task_categories:text-classification", "task_ids:hate-speech-detection", "task_ids:sentiment-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2009.10277", "counterspeech", "hate-speech", "text-regression", "irt" ] | null | 782 | 9 |
| uit-nlp/vietnamese_students_feedback | false | [ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:topic-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:vi", "license:unknown" ] | Students' feedback is a vital resource for interdisciplinary research combining sentiment analysis and education. The Vietnamese Students' Feedback Corpus (UIT-VSFC) is a resource consisting of over 16,000 sentences, human-annotated for two different tasks: sentiment-based and topic-based classification. To assess the quality of the corpus, we measured annotator agreement and classification performance on UIT-VSFC, obtaining inter-annotator agreement of over 91% for sentiments and over 71% for topics. In addition, we built a baseline model with a Maximum Entropy classifier and achieved approximately 88% sentiment F1-score and over 84% topic F1-score. | 60 | 2 |
| ujjawal1612/quora | false | [] | null | 0 | 0 |
| unicamp-dl/mmarco | false | [ "arxiv:2108.13897", "arxiv:2105.06813" ] | mMARCO translated datasets | 622 | 22 |
| unicamp-dl/mrobust | false | [ "arxiv:2108.13897", "arxiv:2105.06813", "arxiv:2209.13738" ] | Robust04 translated datasets | 105 | 0 |
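Each record in the listing follows the same six-column schema (id, private, tags, description, downloads, likes). A minimal sketch of parsing one flattened six-line record into a typed structure; the `DatasetRecord` class and `parse_record` helper are hypothetical names introduced for illustration, not part of any library:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DatasetRecord:
    # Mirrors the listing's column order:
    # id, private, tags, description, downloads, likes.
    id: str
    private: bool
    tags: List[str]
    description: Optional[str]
    downloads: int
    likes: int


def parse_record(lines: List[str]) -> DatasetRecord:
    # One record is six consecutive fields; "null" marks a missing
    # description, and counts may use thousands separators ("15,159").
    id_, private, tags, desc, downloads, likes = (s.strip() for s in lines)
    return DatasetRecord(
        id=id_,
        private=(private == "true"),
        # Tags arrive as a JSON-ish list, e.g. [ "license:mit" ].
        tags=[t.strip().strip('"')
              for t in tags.strip("[]").split(",") if t.strip()],
        description=None if desc == "null" else desc,
        downloads=int(downloads.replace(",", "")),
        likes=int(likes.replace(",", "")),
    )


record = parse_record([
    "stas/openwebtext-10k",
    "false",
    "[]",
    "null",
    "15,159",
    "4",
])
print(record.id, record.downloads)  # stas/openwebtext-10k 15159
```

Records with populated tag lists parse the same way, e.g. `[ "license:apache-2.0" ]` becomes `["license:apache-2.0"]`.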