Schema of the listing (one record per dataset; fields appear in this order):

id           string  (length 2 to 115)
private      bool    (1 class: all false)
tags         list
description  string  (length 0 to 5.93k, null when absent)
downloads    int64   (range 0 to 1.14M)
likes        int64   (range 0 to 1.79k)
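The schema above maps naturally onto a small record type. A minimal Python sketch, assuming nothing beyond the listed fields (the class name, defaults, and the example values are illustrative, taken from one row further down):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DatasetRecord:
    """One row of the listing; names and types mirror the schema above."""
    id: str                           # repo id, 2 to 115 chars in this dump
    private: bool                     # single class: every row here is false
    tags: List[str] = field(default_factory=list)
    description: Optional[str] = None # null when the card has no summary
    downloads: int = 0                # int64, 0 to 1.14M in this dump
    likes: int = 0                    # int64, 0 to 1.79k in this dump

# Example row, values copied from the PolyAI/banking77 record below.
banking77 = DatasetRecord(
    id="PolyAI/banking77",
    private=False,
    tags=["task_categories:text-classification", "language:en"],
    description="BANKING77 provides a fine-grained set of 77 banking intents.",
    downloads=14,
    likes=3,
)
```

A record with a `null` description is simply constructed without the `description` argument, leaving the `None` default in place.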
saattrupdan/doc-nli
false
[]
null
11
0
SocialGrep/the-reddit-nft-dataset
false
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
A comprehensive dataset of Reddit's NFT discussion.
4
1
princeton-nlp/glue_fairseq_format
false
[]
null
0
0
murdockthedude/sv_corpora_parliament_processed
false
[]
null
0
0
aakanksha/udpos
false
[]
Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. The annotation consists in a linguistically motivated word segmentation; a morphological layer comprising lemmas, universal part-of-speech tags, and standardized morphological features; and a syntactic layer focusing on syntactic relations between predicates, arguments and modifiers.
0
0
fut501/ds1
false
[ "license:apache-2.0" ]
null
3
0
spoiled/ecqa_explanation_classify
false
[]
null
0
0
wza/TimeTravel
false
[]
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
0
0
aps/bioasq_task_b
false
[]
The data are intended to be used as training and development data for BioASQ 10, which will take place during 2022. There is one file containing the data: - training10b.json The file contains the data of the first nine editions of the challenge: 4234 questions [1] with their relevant documents, snippets, concepts and RDF triples, exact and ideal answers. Differences with BioASQ-training9b.json - 492 new questions added from BioASQ9 - The question with id 56c1f01eef6e394741000046 had identical body with 602498cb1cb411341a00009e. All relevant elements from both questions are available in the merged question with id 602498cb1cb411341a00009e. - The question with id 5c7039207c78d69471000065 had identical body with 601c317a1cb411341a000014. All relevant elements from both questions are available in the merged question with id 601c317a1cb411341a000014. - The question with id 5e4b540b6d0a27794100001c had identical body with 602828b11cb411341a0000fc. All relevant elements from both questions are available in the merged question with id 602828b11cb411341a0000fc. - The question with id 5fdb42fba43ad31278000027 had identical body with 5d35eb01b3a638076300000f. All relevant elements from both questions are available in the merged question with id 5d35eb01b3a638076300000f. - The question with id 601d76311cb411341a000045 had identical body with 6060732b94d57fd87900003d. All relevant elements from both questions are available in the merged question with id 6060732b94d57fd87900003d. [1] 4234 questions : 1252 factoid, 1148 yesno, 1018 summary, 816 list
8
0
zaraTahhhir/urduprusdataset
false
[ "license:mit" ]
null
4
0
Zaratahir123/urduprusdataset
false
[ "license:mit" ]
null
2
0
junliang/symptom
false
[]
null
0
0
janck/bigscience-lama
false
[ "task_categories:text-retrieval", "task_categories:text-classification", "task_ids:fact-checking-retrieval", "task_ids:text-scoring", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "probing" ]
null
0
0
Calin/eurosat-demo
false
[]
null
0
0
StanBienaives/jade-considerants
false
[]
null
0
0
Zaratahir123/groupData
false
[ "license:mit" ]
null
2
0
Zaratahir123/test
false
[ "license:mit" ]
null
2
0
kaizan/amisum_v1
false
[]
null
1
0
shreyasmani/whrdata2021
false
[ "license:other" ]
null
0
0
shweta2911/SCIERC
false
[]
null
1
0
PolyAI/banking77
false
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2003.04807" ]
The BANKING77 dataset provides a very fine-grained set of intents in the banking domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection.
14
3
EAST/autotrain-data-Rule
false
[ "task_categories:text-classification", "language:zh" ]
null
0
0
osyvokon/pavlick-formality-scores
false
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en-US", "license:cc-by-3.0" ]
null
19
1
yerevann/common_voice_9_0
false
[]
null
0
0
NLPC-UOM/Writing-style-classification
false
[ "task_categories:text-classification", "language_creators:crowdsourced", "multilinguality:monolingual", "language:si", "license:mit" ]
null
0
0
ibrahimmoazzam/mysprecordings
false
[]
null
0
0
mrm8488/ImageNet1K-val
false
[]
null
11
0
mrm8488/ImageNet1K-train
false
[]
null
0
0
AmazonScience/massive
false
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:af-ZA", "multilinguality:am-ET", "multilinguality:ar-SA", "multilinguality:az-AZ", "multilinguality:bn-BD", "multilinguality:ca-ES", "multilinguality:cy-GB", "multilinguality:da-DK", "multilinguality:de-DE", "multilinguality:el-GR", "multilinguality:en-US", "multilinguality:es-ES", "multilinguality:fa-IR", "multilinguality:fi-FI", "multilinguality:fr-FR", "multilinguality:he-IL", "multilinguality:hi-IN", "multilinguality:hu-HU", "multilinguality:hy-AM", "multilinguality:id-ID", "multilinguality:is-IS", "multilinguality:it-IT", "multilinguality:ja-JP", "multilinguality:jv-ID", "multilinguality:ka-GE", "multilinguality:km-KH", "multilinguality:kn-IN", "multilinguality:ko-KR", "multilinguality:lv-LV", "multilinguality:ml-IN", "multilinguality:mn-MN", "multilinguality:ms-MY", "multilinguality:my-MM", "multilinguality:nb-NO", "multilinguality:nl-NL", "multilinguality:pl-PL", "multilinguality:pt-PT", "multilinguality:ro-RO", "multilinguality:ru-RU", "multilinguality:sl-SL", "multilinguality:sq-AL", "multilinguality:sv-SE", "multilinguality:sw-KE", "multilinguality:ta-IN", "multilinguality:te-IN", "multilinguality:th-TH", "multilinguality:tl-PH", "multilinguality:tr-TR", "multilinguality:ur-PK", "multilinguality:vi-VN", "multilinguality:zh-CN", "multilinguality:zh-TW", "size_categories:100K<n<1M", "source_datasets:original", "license:cc-by-4.0", "natural-language-understanding", "arxiv:2204.08582" ]
MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
3,675
22
odellus/beerqa
false
[ "license:cc-by-4.0" ]
null
4
0
codeparrot/codeparrot-valid-more-filtering
false
[]
null
0
0
TahaRazzaq/urduprusdataset
false
[]
null
0
0
codeparrot/codeparrot-train-more-filtering
false
[]
null
1
1
mathigatti/spanish_imdb_synopsis
false
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:text2text-generation", "annotations_creators:no-annotation", "multilinguality:monolingual", "language:es", "license:apache-2.0" ]
null
24
0
TalTechNLP/VoxLingua107
false
[ "license:cc-by-nc-4.0" ]
null
0
0
zzzzzzttt/subtrain
false
[]
null
0
0
abidlabs/crowdsourced-speech-demo2
false
[]
null
0
0
strombergnlp/danfever
false
[ "task_categories:text-classification", "task_ids:fact-checking", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:cc-by-4.0", "knowledge-verification" ]
null
0
2
JbIPS/stanford-dogs
false
[ "license:mit" ]
null
2
0
strombergnlp/broad_twitter_corpus
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities. For more details see [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/)
3
2
strombergnlp/ipm_nel
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "named-entity-linking" ]
This data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises the addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities and then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface forms; for example, this means linking "Paris" to the correct instance of a city named that (e.g. Paris, France vs. Paris, Texas). The data concentrates on ten types of named entities: company, facility, geographic location, movie, musical artist, person, product, sports team, TV show, and other. The file is tab separated, in CoNLL format, with line breaks between tweets. Data preserves the tokenisation used in the Ritter datasets. PoS labels are not present for all tweets, but where they could be found in the Ritter data, they're given. In cases where a URI could not be agreed, or was not present in DBpedia, there is a NIL. See the paper for a full description of the methodology. For more details see http://www.derczynski.com/papers/ner_single.pdf or https://www.sciencedirect.com/science/article/abs/pii/S0306457314001034
0
1
strombergnlp/shaj
false
[ "task_ids:hate-speech-detection", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "arxiv:2107.13592", "doi:10.57967/hf/0514" ]
This is an abusive/offensive language detection dataset for Albanian. The data is formatted following the OffensEval convention, with three tasks: * Subtask A: Offensive (OFF) or not (NOT) * Subtask B: Untargeted (UNT) or targeted insult (TIN) * Subtask C: Type of target: individual (IND), group (GRP), or other (OTH) * The subtask A field should always be filled. * The subtask B field should only be filled if there's "offensive" (OFF) in A. * The subtask C field should only be filled if there's "targeted" (TIN) in B. The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon" See the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details.
0
1
strombergnlp/dkstance
false
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:cc-by-4.0", "stance-detection" ]
This dataset presents a series of stories on Reddit and the conversation around them, annotated for stance. Stories are also annotated for veracity. For more details see https://aclanthology.org/W19-6122/
0
1
strombergnlp/polstance
false
[ "task_categories:text-classification", "task_ids:sentiment-analysis", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:da", "license:cc-by-4.0", "stance-detection" ]
Political stance in Danish. Examples represent statements by politicians and are annotated as for, against, or neutral towards a given topic/article.
0
1
strombergnlp/bornholmsk
false
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:da", "license:cc-by-4.0" ]
This corpus introduces language processing resources and tools for Bornholmsk, a language spoken on the island of Bornholm, with roots in Danish and closely related to Scanian. Sammenfattnijng på borrijnholmst: Dæjnna artikkelijn introduserer natursprågsresurser å varktoi for borrijnholmst, ed språg a dær snakkes på ön Borrijnholm me rødder i danst å i nær familia me skånst.
0
1
strombergnlp/twitter_pos_vcb
false
[ "task_categories:token-classification", "task_ids:part-of-speech", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
Part-of-speech tagging is a basic NLP task. However, Twitter text is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style. This data is the vote-constrained bootstrapped data generated to support state-of-the-art results. The data is about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset. The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged jointly using the CMU ARK tagger and Ritter's T-POS tagger. Only when both taggers' outputs are completely compatible over a whole tweet is that tweet added to the dataset. This data is recommended for use as training data **only**, and not as evaluation data. For more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf
0
2
strombergnlp/zulu_stance
false
[ "task_categories:text-classification", "task_ids:fact-checking", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:zu", "license:cc-by-4.0", "stance-detection", "arxiv:2205.03153" ]
This is a stance detection dataset in the Zulu language. The data was translated to Zulu by Zulu native speakers, from English source texts. Misinformation has become a major concern in recent years given its spread across our information sources. In the past years, many NLP tasks have been introduced in this area, with some systems reaching good results on English language datasets. Existing AI-based approaches for fighting misinformation in the literature suggest automatic stance detection as an integral first step to success. Our paper aims at utilizing this progress made for English to transfer that knowledge into other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve results for stance detection in Zulu, the target language in this work, similar to those found for English. We also provide a stance detection dataset in the Zulu language.
0
1
Elfsong/clef_data
false
[]
null
0
1
tomasmcz/word2vec_analogy
false
[ "license:apache-2.0" ]
null
0
1
rockdrigoma/spanish-nahuatl-flagging
false
[]
null
0
2
Zaratahir123/23100065
false
[ "license:mit" ]
null
2
0
Zaratahir123/21030019
false
[]
null
0
0
Zaratahir123/23100133
false
[]
null
0
0
dwen/rocstories
false
[]
null
4
0
Zaratahir123/Group2Data
false
[]
null
0
0
BigScienceBiasEval/bias-shades
false
[ "license:cc-by-sa-4.0" ]
This is a preliminary version of the bias SHADES dataset for evaluating LMs for social biases.
0
0
jamescalam/reddit-topics
false
[]
null
7
2
aps/dynasent
false
[]
null
0
0
spoiled/pre-answer
false
[]
null
0
0
spoiled/with_label
false
[]
null
0
0
spoiled/with_random_label
false
[]
null
0
0
smallv0221/dd
false
[ "license:apache-2.0" ]
null
0
0
gusevski/factrueval2016
false
[ "arxiv:2005.00614" ]
null
1
1
multiIR/langfiles2018-6_sr
false
[]
null
0
0
Mim/autotrain-data-procell-expert
false
[ "task_categories:text-classification" ]
null
0
0
doddle124578/speechcorpus
false
[]
null
0
0
nielsr/funsd-image-feature
false
[]
null
38
0
nielsr/funsd-layoutlmv3
false
[]
https://guillaumejaume.github.io/FUNSD/
2,446
17
soyasis/wikihow_small
false
[ "language:en", "license:mit" ]
null
2
0
muibk/wmt21_metrics_task
false
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:found", "language_creators:machine-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:100K<n<1M", "language:bn-hi", "language:cs-en", "language:de-en", "language:de-fr", "language:en-cs", "language:en-de", "language:en-ha", "language:en-is", "language:en-ja", "language:en-ru", "language:en-zh", "language:fr-de", "language:ha-en", "language:hi-bn", "language:is-en", "language:ja-en", "language:ru-en", "language:xh-zh", "language:zh-en", "language:zu-xh", "license:unknown" ]
This shared task will examine automatic evaluation metrics for machine translation. We will provide you with MT system outputs along with the source text and the human reference translations. We are looking for automatic metric scores for translations at the system level and segment level. We will calculate the system-level and segment-level correlations of your scores with human judgements. We invite submissions of reference-free metrics in addition to reference-based metrics.
5
0
rish16/cs4243-database-dict
false
[ "license:mit" ]
null
2
0
iamholmes/tiny-imdb
false
[]
null
10
0
Samip/scotch_try
false
[]
Scotch is a dataset of about 19 million functions collected from open-source repositories on GitHub with permissive licenses. Each function has its corresponding code context, and about 4 million functions have corresponding docstrings. The dataset includes functions written in the programming languages Python, Java, JavaScript, and Go.
0
0
jamescalam/world-cities-geo
false
[]
null
15
2
aps/dynahate
false
[]
We present a human-and-model-in-the-loop process for dynamically generating datasets and training better-performing and more robust hate detection models. We provide a new dataset of ~40,000 entries, generated and labelled by trained annotators over four rounds of dynamic data creation. It includes ~15,000 challenging perturbations, and each hateful entry has fine-grained labels for the type and target of hate. Hateful entries make up 54% of the dataset, which is substantially higher than in comparable datasets. We show that model performance is substantially improved using this approach. Models trained on later rounds of data collection perform better on test sets and are harder for annotators to trick. They also perform better on HATECHECK, a suite of functional tests for online hate detection. See https://arxiv.org/abs/2012.15761 for more details.
48
1
dalle-mini/vqgan-pairs
false
[ "task_categories:other", "source_datasets:Open Images", "license:cc-by-4.0", "license:cc-by-2.0", "license:unknown", "super-resolution", "image-enhancement" ]
null
1
1
Seledorn/SwissProt-EC
false
[ "language:protein sequences", "Protein", "Enzyme Commission", "EC" ]
null
0
0
Seledorn/SwissProt-Pfam
false
[ "language:protein sequences", "Protein", "PFam" ]
null
0
0
Seledorn/SwissProt-GO
false
[ "language:protein sequences", "Protein", "Gene Ontology", "GO" ]
null
8
0
samhellkill/spacekitty-v1
false
[ "license:other" ]
null
0
0
charly/test
false
[ "license:apache-2.0" ]
null
0
0
JEFFREY-VERDIERE/Creditcard
false
[]
null
0
0
lightonai/SwissProt-EC-leaf
false
[ "language:protein sequences", "Protein", "Enzyme Commission" ]
null
0
0
bousejin/eurosat-demo
false
[]
null
1
0
osyvokon/wiki-edits-uk
false
[ "task_categories:other", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:uk-UA", "license:cc-by-3.0" ]
null
0
1
defector/autotrain-data-company
false
[ "language:en" ]
null
0
0
valentij/train.csv
false
[]
null
0
0
valentij/test.csv
false
[]
null
0
0
Filippo/osdg_cd
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0" ]
The OSDG Community Dataset (OSDG-CD) is a public dataset of thousands of text excerpts, which were validated by approximately 1,000 OSDG Community Platform (OSDG-CP) citizen scientists from over 110 countries, with respect to the Sustainable Development Goals (SDGs).
0
1
claytonsamples/eurosat-demo
false
[]
null
0
0
Anon126/my-raft-submission
false
[ "benchmark:raft" ]
0
0
Rodion/uno_sustainable_development_goals
false
[ "license:afl-3.0" ]
null
2
0
obokkkk/NMT_test
false
[]
null
0
0
NazaGara/wikiner-es
false
[]
Dataset used to train a NER model
0
0
hongdijk/kor_nlu_hufsice2
false
[ "license:other" ]
null
0
0
gary109/crop14_balance
false
[]
null
0
0
hongdijk/kor_nlu_hufs
false
[ "license:cc-by-sa-4.0" ]
null
0
0
Diegomejia/ds1ucb
false
[ "license:mit" ]
null
2
0
multiIR/cc_multi_2021-1_en
false
[]
null
0
0
JEFFREY-VERDIERE/Cornell_Grasping_Dataset
false
[]
null
0
0