id: string, lengths 2 to 115
private: bool, 1 class
tags: list
description: string, lengths 0 to 5.93k
downloads: int64, 0 to 1.14M
likes: int64, 0 to 1.79k
ought/raft
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "source_datasets:extended|ade_corpus_v2", "source_datasets:extended|banking77", "language:en", "license:other", "arxiv:2109.14076" ]
Large pre-trained language models have shown promise for few-shot learning, completing text-based tasks given only a few task-specific examples. Will models soon solve classification tasks that have so far been reserved for human research assistants? [RAFT](https://raft.elicit.org) is a few-shot classification benchmark that tests language models: - across multiple domains (lit review, tweets, customer interaction, etc.) - on economically valuable classification tasks (someone inherently cares about the task) - in a setting that mirrors deployment (50 examples per task, info retrieval allowed, hidden test set)
9,371
19
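A minimal loading sketch for the `ought/raft` entry above, assuming the Hugging Face `datasets` library; the config name `ade_corpus_v2` is one RAFT subtask implied by the source_datasets tags, not something stated in the description itself.

```python
from datasets import load_dataset

# Load one RAFT subtask; the config name is an assumption based on the tags above.
raft = load_dataset("ought/raft", "ade_corpus_v2")

# The card above says 50 labeled examples per task; the test labels are hidden.
print(raft["train"].num_rows)  # expected to be 50
print(raft["train"][0])        # one few-shot example
```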
outman/test
false
[]
null
0
0
papluca/language-identification
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:extended|amazon_reviews_multi", "source_datasets:extended|xnli", "source_datasets:extended|stsb_multi_mt", "language:ar", "language:bg", "language:de", "language:el", "language:en", "language:es", "language:fr", "language:hi", "language:it", "language:ja", "language:nl", "language:pl", "language:pt", "language:ru", "language:sw", "language:th", "language:tr", "language:ur", "language:vi", "language:zh" ]
null
189
10
pariajm/sharif_emotional_speech_dataset
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:radio-plays", "language:fa", "license:apache-2.0" ]
null
8
1
parivartanayurveda/Malesexproblemsayurvedictreatment
false
[]
null
0
0
pasinit/scotus
false
[]
Dataset extracted from case laws of Supreme Court of United States.
2
0
pasinit/xlwic
false
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:bg", "language:zh", "language:hr", "language:da", "language:nl", "language:et", "language:fa", "language:ja", "language:ko", "language:it", "language:fr", "language:de", "license:cc-by-nc-4.0" ]
A system's task on any of the XL-WiC datasets is to identify the intended meaning of a word in a context of a given language. XL-WiC is framed as a binary classification task. Each instance in XL-WiC has a target word w, either a verb or a noun, for which two contexts are provided. Each of these contexts triggers a specific meaning of w. The task is to identify if the occurrences of w in the two contexts correspond to the same meaning or not. XL-WiC provides dev and test sets in the following 12 languages: Bulgarian (BG), Danish (DA), German (DE), Estonian (ET), Farsi (FA), French (FR), Croatian (HR), Italian (IT), Japanese (JA), Korean (KO), Dutch (NL), Chinese (ZH), and training sets in the following 3 languages: German (DE), French (FR), Italian (IT).
104
2
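A sketch of the XL-WiC instance format described above. The field names and label convention here are hypothetical illustrations, not the actual column names of `pasinit/xlwic`.

```python
# Hypothetical illustration of one XL-WiC binary-classification instance.
example = {
    "target_word": "bank",
    "context_1": "She sat on the bank of the river.",
    "context_2": "He deposited the money at the bank.",
    "label": 0,  # 0 = different meanings, 1 = same meaning (assumed convention)
}

def same_sense(instance):
    """Return True if both contexts trigger the same sense of the target word."""
    return bool(instance["label"])

print(same_sense(example))  # False for this pair
```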
patrickvonplaten/ami_single_headset_segmented_and_chunked
false
[]
null
0
0
patrickvonplaten/common_voice_6_tr
false
[]
null
1
0
patrickvonplaten/common_voice_processed_turkish
false
[]
null
0
0
patrickvonplaten/helena_coworking
false
[]
null
0
0
patrickvonplaten/librispeech_asr_dummy
false
[]
LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert the audio file to a float32 array, please make use of the `.map()` function as follows:

```python
import soundfile as sf

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
11,855
0
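The snippet in the description above assumes a `dataset` object already exists; a hedged loading sketch for this dummy repository follows. The `clean` config and `validation` split names are assumptions, not taken from the card.

```python
from datasets import load_dataset

# Config and split names are assumptions about this dummy repository.
dataset = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# Each row points at a .flac file, ready for the map_to_array snippet above.
print(dataset[0]["file"])
```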
patrickvonplaten/librispeech_local
false
[]
LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert the audio file to a float32 array, please make use of the `.map()` function as follows:

```python
import soundfile as sf

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
0
0
patrickvonplaten/librispeech_local_dummy
false
[]
null
0
0
patrickvonplaten/scientific_papers_dummy
false
[]
The scientific papers dataset contains two sets of long and structured documents, obtained from the ArXiv and PubMed OpenAccess repositories. Both "arxiv" and "pubmed" have the following features:
- article: the body of the document, paragraphs separated by "\n".
- abstract: the abstract of the document, paragraphs separated by "\n".
- section_names: titles of sections, separated by "\n".
1
0
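A small sketch of reading the three features listed in the description above, assuming the `datasets` library; the `arxiv` config and `train` split names are assumptions carried over from the upstream scientific_papers dataset that this dummy mirrors.

```python
from datasets import load_dataset

# Config and split names are assumptions about this dummy repository.
papers = load_dataset("patrickvonplaten/scientific_papers_dummy", "arxiv", split="train")

example = papers[0]
paragraphs = example["article"].split("\n")            # body paragraphs, "\n"-separated
abstract_paragraphs = example["abstract"].split("\n")  # abstract paragraphs
sections = example["section_names"].split("\n")        # one section title per line

print(len(paragraphs), len(abstract_paragraphs), sections[:3])
```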
patrickvonplaten/sensitive_data_sv
false
[]
null
0
0
pdesoyres/test
false
[]
null
0
0
peixian/equity_evaluation_corpus
false
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "gender-classification" ]
Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems and resources. Further, there is a lack of benchmark datasets for examining inappropriate biases in system predictions. Here, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We used the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 ‘Affect in Tweets’. We found that several of the systems showed statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available, and encourage its use to evaluate biases in sentiment and other NLP tasks.
0
2
peixian/rtGender
false
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
RtGender is a corpus for studying responses to gender online, including posts and responses from Facebook, TED, Fitocracy, and Reddit where the gender of the source poster/speaker is known.
0
1
pelican/test_100
false
[]
null
0
0
persiannlp/parsinlu_entailment
false
[ "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|translated|mnli", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154" ]
A Persian textual entailment task (deciding whether `sent1` entails `sent2`).
10
0
persiannlp/parsinlu_query_paraphrasing
false
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|quora|google", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154" ]
A Persian query paraphrasing task (deciding whether two given questions are paraphrases or not). The questions are partly mined using Google auto-complete, and partly translated from the Quora paraphrasing dataset.
0
0
persiannlp/parsinlu_reading_comprehension
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|wikipedia|google", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154" ]
A Persian reading comprehension task (generating an answer, given a question and a context paragraph). The questions are mined using Google auto-complete; their answers and the corresponding evidence documents are manually annotated by native speakers.
9
0
persiannlp/parsinlu_sentiment
false
[ "task_ids:sentiment-analysis", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|translated|mnli", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154" ]
A Persian sentiment analysis task (deciding whether a given sentence contains a particular sentiment).
206
3
persiannlp/parsinlu_translation_en_fa
false
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:fa", "multilinguality:en", "size_categories:1K<n<10K", "source_datasets:extended", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154" ]
A Persian translation dataset (English -> Persian).
17
0
persiannlp/parsinlu_translation_fa_en
false
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:fa", "multilinguality:en", "size_categories:1K<n<10K", "source_datasets:extended", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154" ]
A Persian translation dataset (Persian -> English).
0
0
peterbonnesoeur/autonlp-data-test_text_summarization
false
[]
null
0
0
peterhsu/github-issues
false
[]
null
0
0
philschmid/germeval18
false
[]
null
241
2
philschmid/prompted-germanquad
false
[]
null
0
0
philschmid/test_german_squad
false
[]
null
0
0
phoelti/squad_dev
false
[]
null
0
0
phongdtd/VinDataVLSP
false
[ "license:apache-2.0" ]
null
3
0
phongdtd/youtube_casual_audio
false
[ "task_categories:automatic-speech-recognition", "source_datasets:extended|youtube" ]
null
0
3
phonlab-tcd/cngv1
false
[]
Corpus of written Irish.
0
0
phonlab-tcd/corpuscrawler-ga
false
[]
Irish web corpus, crawled with Corpus Crawler. Uses a list of URLs, collected by the crawler, to retrieve the files from the crawler's cache.
0
1
piEsposito/br-quad-2.0
false
[]
Translates SQuAD 2.0 from English to Portuguese using the Google Cloud API.
0
0
piEsposito/br_quad_20
false
[]
Translates SQuAD 2.0 from English to Portuguese using the Google Cloud API.
0
0
piEsposito/squad_20_ptbr
false
[]
Translates SQuAD 2.0 from English to Portuguese using the Google Cloud API.
8
2
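The three piEsposito entries above describe translating SQuAD 2.0 with the Google Cloud API. A hedged sketch of that kind of call with the google-cloud-translate v2 client follows, purely as an illustration; it is not the authors' actual translation script.

```python
# Illustrative only: translating one SQuAD 2.0 question English -> Portuguese
# with the Google Cloud Translation v2 client (credentials must be configured
# via GOOGLE_APPLICATION_CREDENTIALS).
from google.cloud import translate_v2 as translate

client = translate.Client()

question = "In what country is Normandy located?"
result = client.translate(question, source_language="en", target_language="pt")
print(result["translatedText"])
```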
pierreant-p/jcvd-or-linkedin
false
[]
null
0
0
pierreguillou/lener_br_finetuning_language_model
false
[ "task_ids:language-modeling", "multilinguality:monolingual", "language:pt", "lener_br" ]
null
1
2
pierreguillou/test_datasetdict
false
[]
null
0
0
pierresi/cord
false
[]
https://github.com/clovaai/cord/
0
0
pietrolesci/ag_news
false
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "size_categories:100K<n<1M", "source_datasets:ag_news", "language:en", "license:unknown" ]
AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), XML, data compression, data streaming, and any other non-commercial activity. For more information, please refer to http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html . The AG's news topic classification dataset is constructed by Xiang Zhang ([email protected]) from the dataset above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
83
1
pile-of-law/pile-of-law
false
[ "task_categories:fill-mask", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:2207.00220" ]
We curate a large corpus of legal and administrative data. The utility of this data is twofold: (1) to aggregate legal and administrative data sources that demonstrate different norms and legal standards for data filtering; (2) to collect a dataset that can be used in the future for pretraining legal-domain language models, a key direction in access-to-justice initiatives.
1,072
62
pki/autonlp-data-cybersecurity
false
[]
null
79
0
pmc/open_access
false
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc0-1.0", "license:cc-by-4.0", "license:cc-by-sa-4.0", "license:cc-by-nd-4.0", "license:cc-by-nc-4.0", "license:cc-by-nc-sa-4.0", "license:cc-by-nc-nd-4.0", "license:other", "license:unknown" ]
The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more liberal redistribution and reuse than a traditional copyrighted work. The PMC Open Access Subset is one part of the PMC Article Datasets.
32
7
polinaeterna/benchmark
false
[]
null
0
0
polinaeterna/benchmark_dataset
false
[]
null
0
0
polinaeterna/dummy_dataset
false
[]
null
0
0
MLCommons/ml_spoken_words
false
[ "task_categories:audio-classification", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:extended|common_voice", "language:ar", "language:as", "language:br", "language:ca", "language:cnh", "language:cs", "language:cv", "language:cy", "language:de", "language:dv", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fr", "language:fy", "language:ga", "language:gn", "language:ha", "language:ia", "language:id", "language:it", "language:ka", "language:ky", "language:lt", "language:lv", "language:mn", "language:mt", "language:nl", "language:or", "language:pl", "language:pt", "language:rm", "language:ro", "language:ru", "language:rw", "language:sah", "language:sk", "language:sl", "language:sv", "language:ta", "language:tr", "language:tt", "language:uk", "language:vi", "language:zh", "license:cc-by-4.0", "other-keyword-spotting" ]
Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages collectively spoken by over 5 billion people, for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset has many use cases, ranging from voice-enabled consumer devices to call center automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level audio to produce per-word timing estimates for extraction. All alignments are included in the dataset.
29
12
polinaeterna/test_opus
false
[]
null
0
0
poperson1205/mrtydi-v1.1-korean-fixed
false
[]
null
1
0
prajin/ne_corpora_parliament_processed
false
[]
null
2
0
princeton-nlp/datasets-for-simcse
false
[]
null
3
0
pritamdeka/cord-19-abstract
false
[]
null
0
1
pritamdeka/cord-19-fulltext
false
[]
null
0
1
priya3301/Graduation_admission
false
[]
null
0
0
priya3301/tes
false
[]
null
0
0
priya3301/test
false
[]
null
0
0
prk/testsq
false
[]
SQuAD 2.0 combines the 100,000 questions in SQuAD 1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
0
0
proffttega/ILLUMINATI
false
[]
null
0
0
proffttega/doc
false
[]
null
0
0
proffttega/join_illuminati_to_become_rich
false
[]
null
0
0
proffttega/persian_daily_news
false
[]
null
0
0
project2you/asr
false
[]
null
0
0
projecte-aina/ancora-ca-ner
false
[ "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-4.0", "arxiv:2107.07903" ]
AnCora Catalan NER. This is a dataset for Named Entity Recognition (NER) from the AnCora corpus, adapted for Machine Learning and Language Model evaluation purposes. Since multiwords (including Named Entities) in the original AnCora corpus are aggregated as a single lexical item using underscores (e.g. "Ajuntament_de_Barcelona"), we split them to align with the word-per-line format, and added conventional Begin-Inside-Outside (IOB) tags to mark and classify Named Entities. We did not filter out the different categories of NEs from AnCora (weak and strong). We did 6 minor edits by hand. The AnCora corpus is used under the [CC-by](https://creativecommons.org/licenses/by/4.0/) licence. This dataset was developed by BSC TeMU as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB).
0
0
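A sketch of the multiword splitting and IOB tagging described in the ancora-ca-ner entry above; the `ORG` entity class and the tab-separated output layout are assumptions for illustration.

```python
# Turn an aggregated AnCora multiword named entity into word-per-line IOB format.
# The "ORG" label is an assumed class, not taken from the dataset card.
multiword = "Ajuntament_de_Barcelona"

tokens = multiword.split("_")
tags = ["B-ORG"] + ["I-ORG"] * (len(tokens) - 1)

for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
# Ajuntament    B-ORG
# de            I-ORG
# Barcelona     I-ORG
```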
projecte-aina/casum
false
[ "task_categories:summarization", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-nc-4.0", "arxiv:2202.06871" ]
CaSum is a summarization dataset. It is extracted from a newswire corpus crawled from the Catalan News Agency. The corpus consists of 217,735 instances composed of a headline and a body.
4
0
projecte-aina/catalan_general_crawling
false
[ "task_categories:fill-mask", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "license:cc-by-4.0", "arxiv:2107.07903" ]
The Catalan General Crawling Corpus is a 435-million-token web corpus of Catalan built from the web. It has been obtained by crawling the 500 most popular .cat and .ad domains during July 2020. It consists of 434.817.705 tokens, 19.451.691 sentences and 1.016.114 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus.
0
0
projecte-aina/catalan_government_crawling
false
[ "task_categories:fill-mask", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ca", "license:cc0-1.0", "arxiv:2107.07903" ]
The Catalan Government Crawling Corpus is a 39-million-token web corpus of Catalan built from the web. It has been obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government during September and October 2020. It consists of 39.117.909 tokens, 1.565.433 sentences and 71.043 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus.
0
0
projecte-aina/catalan_textual_corpus
false
[ "task_categories:fill-mask", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "source_datasets:extended|opus_dogc", "source_datasets:extended|cawac", "source_datasets:extended|oscar", "source_datasets:extended|open_subtitles", "source_datasets:extended|wikipedia", "source_datasets:extended|projecte-aina/catalan_general_crawling", "source_datasets:extended|projecte-aina/catalan_government_crawling", "language:ca", "license:cc-by-sa-4.0", "arxiv:2107.07903" ]
The Catalan Textual Corpus is a 1760-million-token web corpus of Catalan built from several sources: existing corpora such as DOGC, CaWac (non-dedup version), Oscar (unshuffled version), Open Subtitles and the Catalan Wikipedia; and three brand new crawlings: the Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains; the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government; and the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency. It consists of 1.758.388.896 tokens, 73.172.152 sentences and 12.556.365 documents. Documents are separated by single new lines. These boundaries have been preserved as long as the license allowed it.
0
1
projecte-aina/parlament_parla
false
[ "task_categories:automatic-speech-recognition", "task_categories:text-generation", "task_ids:language-modeling", "task_ids:speaker-identification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:ca", "license:cc-by-4.0" ]
This is the ParlamentParla speech corpus for Catalan prepared by Col·lectivaT. The audio segments were extracted from recordings of the Catalan Parliament (Parlament de Catalunya) plenary sessions, which took place between 2007/07/11 and 2018/07/17. We aligned the transcriptions with the recordings and extracted the corpus. The content belongs to the Catalan Parliament and the data is released in conformance with their terms of use. Preparation of this corpus was partly supported by the Department of Culture of the Catalan autonomous government, and v2.0 was supported by the Barcelona Supercomputing Center, within the framework of the AINA project of the Departament de Polítiques Digitals. As of v2.0 the corpus is separated into 211 hours of clean and 400 hours of other-quality segments. Furthermore, each speech segment is tagged with its speaker, and each speaker with their gender. The statistics are detailed in the readme file. For more information, go to https://github.com/CollectivaT-dev/ParlamentParla or mail [email protected].
0
1
projecte-aina/sts-ca
false
[ "task_categories:text-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-4.0", "arxiv:2107.07903" ]
Semantic Textual Similarity in Catalan. The STS corpus is a benchmark for evaluating Semantic Textual Similarity in Catalan. It consists of more than 3000 sentence pairs, annotated with the semantic similarity between them, using a scale from 0 (no similarity at all) to 5 (semantic equivalence). Annotation was done manually by 4 different annotators following our guidelines, based on previous work from the SemEval challenges (https://www.aclweb.org/anthology/S13-1004.pdf). The source data are scraped sentences from the Catalan Textual Corpus (https://doi.org/10.5281/zenodo.4519349), used under the CC-by-SA-4.0 licence (https://creativecommons.org/licenses/by-sa/4.0/). The dataset is released under the same licence. This dataset was developed by BSC TeMU as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB). This is version 1.0.2 of the dataset, with the complete human and automatic annotations and the analysis scripts. It also has a more accurate license. This dataset can be used to build and score semantic similarity models.
3
0
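Since the sts-ca entry above says pairs are scored on a 0 to 5 similarity scale and that the dataset is meant for scoring models, here is a hedged sketch of the usual STS evaluation step, Pearson correlation between gold and predicted scores; the numbers are made up for illustration.

```python
from scipy.stats import pearsonr

# Hypothetical gold scores (0 = no similarity, 5 = semantic equivalence) and predictions.
gold = [4.5, 0.0, 2.0, 5.0, 1.5]
pred = [4.1, 0.3, 2.4, 4.8, 1.0]

correlation, _ = pearsonr(gold, pred)
print(f"Pearson r = {correlation:.3f}")
```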
projecte-aina/teca
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-nc-nd-4.0", "arxiv:2107.07903" ]
TECA consists of two subsets of textual entailment in Catalan, *catalan_TE1* and *vilaweb_TE*, which contain 14997 and 6166 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction or neutral). This dataset was developed by BSC TeMU as part of the AINA project and intended as part of the Catalan Language Understanding Benchmark (CLUB).
0
0
projecte-aina/tecla
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-nc-nd-4.0" ]
TeCla: Text Classification Catalan dataset. A Catalan news corpus for text classification, crawled from the ACN (Catalan News Agency) site: www.acn.cat.
0
0
projecte-aina/vilaquad
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ca", "license:cc-by-sa-4.0", "arxiv:2107.07903", "arxiv:1606.05250" ]
This dataset contains 2095 Catalan-language news articles along with 1 to 5 questions referring to each fragment (or context). VilaQuad articles are extracted from the daily Vilaweb (www.vilaweb.cat) and used under a CC-by-nc-sa-nd (https://creativecommons.org/licenses/by-nc-nd/3.0/deed.ca) licence. This dataset can be used to build extractive-QA systems and Language Models. Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA), MT4ALL and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).
0
0
projecte-aina/vilasum
false
[ "task_categories:summarization", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-nc-4.0", "arxiv:2202.06871" ]
VilaSum is a summarization dataset for evaluation. It is extracted from a newswire corpus crawled from Vilaweb. The corpus consists of 13,843 instances that are composed by the headline and the body.
2
0
projecte-aina/viquiquad
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ca", "license:cc-by-sa-4.0", "arxiv:2107.07903", "arxiv:1606.05250" ]
ViquiQuAD: an extractive QA dataset from Catalan Wikipedia. This dataset contains 3111 contexts extracted from a set of 597 high quality original (no translations) articles in the Catalan Wikipedia "Viquipèdia" (ca.wikipedia.org), and 1 to 5 questions with their answer for each fragment. Viquipedia articles are used under CC-by-sa licence. This dataset can be used to build extractive-QA and Language Models. Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA), MT4ALL and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).
0
0
projecte-aina/wnli-ca
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|glue", "language:ca", "license:cc-by-4.0" ]
Professional translation into Catalan of the Winograd NLI dataset as published in the GLUE Benchmark. The Winograd NLI dataset presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
0
0
projecte-aina/xquad-ca
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-sa-4.0", "arxiv:2107.07903", "arxiv:1606.05250", "arxiv:1910.11856" ]
Professional translation into Catalan of the XQuAD dataset (https://github.com/deepmind/xquad). XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Romanian was added later. We added Catalan as the 13th language, also using professional native Catalan translators. The XQuAD and XQuAD-Ca datasets are released under a CC-by-sa licence.
0
0
psrpsj/stop_words
false
[]
null
0
0
pstroe/cc100-latin
false
[]
null
0
2
puffy310/yandset
false
[ "license:apache-2.0" ]
null
0
0
pulmo/chest_xray
false
[]
null
1
0
qa4pc/QA4PC
false
[]
null
0
0
qanastek/ANTILLES
false
[ "task_categories:token-classification", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "size_categories:100K<n<1M", "source_datasets:original", "language:fr" ]
null
1
1
qanastek/ECDC
false
[ "task_categories:translation", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:en-sv", "multilinguality:en-pl", "multilinguality:en-hu", "multilinguality:en-lt", "multilinguality:en-sk", "multilinguality:en-ga", "multilinguality:en-fr", "multilinguality:en-cs", "multilinguality:en-el", "multilinguality:en-it", "multilinguality:en-lv", "multilinguality:en-da", "multilinguality:en-nl", "multilinguality:en-bg", "multilinguality:en-is", "multilinguality:en-ro", "multilinguality:en-no", "multilinguality:en-pt", "multilinguality:en-es", "multilinguality:en-et", "multilinguality:en-mt", "multilinguality:en-sl", "multilinguality:en-fi", "multilinguality:en-de", "size_categories:100K<n<1M", "source_datasets:extended", "language:en", "license:other" ]
null
7
1
qanastek/ELRC-Medical-V2
false
[ "task_categories:translation", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended", "language:en", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv" ]
null
10
6
qanastek/EMEA-V3
false
[ "task_categories:translation", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:bg", "multilinguality:cs", "multilinguality:da", "multilinguality:de", "multilinguality:el", "multilinguality:en", "multilinguality:es", "multilinguality:et", "multilinguality:fi", "multilinguality:fr", "multilinguality:hu", "multilinguality:it", "multilinguality:lt", "multilinguality:lv", "multilinguality:mt", "multilinguality:nl", "multilinguality:pl", "multilinguality:pt", "multilinguality:ro", "multilinguality:sk", "multilinguality:sl", "multilinguality:sv", "size_categories:100K<n<1M", "source_datasets:extended", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv" ]
null
1,610
4
qanastek/WMT-16-PubMed
false
[ "task_categories:translation", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv" ]
WMT'16 Biomedical Translation Task - PubMed parallel datasets http://www.statmt.org/wmt16/biomedical-translation-task.html
0
2
qfortier/instagram_ny
false
[]
null
0
0
quarter100/boolq_log
false
[]
null
0
0
quis/vnexpress-train
false
[]
null
0
0
qwant/squad_fr
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:extended|squad", "language:fr", "license:cc-by-4.0" ]
SQuAD-fr is a French translated version of the Stanford Question Answering Dataset (SQuAD), the reference corpus to evaluate question answering models' performances in English. It consists of 100K question-answer pairs on 500+ articles derived from the original English dataset and represents a large-scale dataset for closed-domain question answering on factoid questions in French. SQuAD-fr serves as a means of data augmentation on FQuAD and PIAF benchmarks, with 90K+ translated training pairs.
29
2
radhakri119/sv_corpora_parliament_processed
false
[]
null
0
0
rahular/itihasa
false
[ "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:unknown", "source_datasets:original", "language:sa", "language:en", "license:apache-2.0", "conditional-text-generation" ]
A Sanskrit-English machine translation dataset.
42
0
rajeshradhakrishnan/malayalam_2020_wiki
false
[]
null
0
0
rajeshradhakrishnan/malayalam_news
false
[]
The AI4Bharat-IndicNLP dataset is an ongoing effort to create a collection of large-scale, general-domain corpora for Indian languages. Currently, it contains 2.7 billion words for 10 Indian languages from two language families. We share pre-trained word embeddings trained on these corpora. We create news article category classification datasets for 9 languages to evaluate the embeddings. We evaluate the IndicNLP embeddings on multiple evaluation tasks.
0
1
rajeshradhakrishnan/malayalam_wiki
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0" ]
Common Crawl - Malayalam.
0
1
ramitsurana/sanskrit
false
[]
null
0
0