Dataset index. Columns: id (string, 2–115 chars); private (bool, a single class: all false); tags (list); description (string, 0–5.93k chars, nullable); downloads (int64, 0–1.14M); likes (int64, 0–1.79k).

Each entry below is printed as "id | private | downloads | likes", with tags and description on indented lines when they are non-empty.
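For context, an index like this can be regenerated from the Hub API. The sketch below is a minimal example, assuming the installed huggingface_hub version exposes private, downloads, likes, and tags on the returned DatasetInfo objects (attribute availability varies by version; this is not the script that produced this listing).

```python
# Minimal sketch: rebuild an index like the one below from the Hugging Face Hub.
# Assumes DatasetInfo exposes id, private, tags, downloads, and likes; attribute
# availability varies across huggingface_hub versions.
from huggingface_hub import list_datasets

for info in list_datasets(author="jimregan", limit=10):
    print(
        info.id,
        f"private: {info.private}",
        f"downloads: {info.downloads}",
        f"likes: {info.likes}",
        sep=" | ",
    )
    if info.tags:
        print(f"  tags: {info.tags}")
```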
jakemarcus/MATH | private: false | downloads: 134 | likes: 0
jamescalam/climate-fever-similarity | private: false | downloads: 128 | likes: 0
jamol1741/test_dataset | private: false | downloads: 128 | likes: 0
jcmc/ga-IE_opus_dgt_train | private: false | downloads: 258 | likes: 0
jcmc/ga_mc4_processed | private: false | downloads: 258 | likes: 0
jdepoix/junit_test_completion | private: false | downloads: 130 | likes: 0
jegormeister/dutch-snli | private: false | downloads: 260 | likes: 0
  description: This is the Dutch version of the original SNLI dataset. The translation was performed using Google Translate. The original SNLI is available at https://nlp.stanford.edu/projects/snli/
jel/covid | private: false | downloads: 128 | likes: 0
jeree/fr_corpora_parliament_processed | private: false | downloads: 258 | likes: 0
jfarray/TFM | private: false | downloads: 128 | likes: 0
jfrenz/legalglue | private: false | downloads: 3,993 | likes: 5
  tags: [ "task_categories:text-classification", "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:multi-label-classification", "task_ids:topic-classification", "multilinguality:multilingual", "source_datasets:extended", "language:en", "language:da", "language:de", "language:nl", "language:sv", "language:bg", "language:cs", "language:hr", "language:pl", "language:sk", "language:sl", "language:es", "language:fr", "language:it", "language:pt", "language:ro", "language:et", "language:fi", "language:hu", "language:lt", "language:lv", "language:el", "language:mt", "german-ler", "lener-br", "arxiv:2003.13016", "arxiv:2110.00806", "arxiv:2109.00904" ]
  description: The Legal General Language Understanding Evaluation (LegalGLUE) benchmark is a collection of datasets for evaluating model performance across a diverse set of legal NLP tasks.
jgammack/MTL-abstracts | private: false | downloads: 258 | likes: 0
jgammack/SAE-door-abstracts | private: false | downloads: 261 | likes: 0
  tags: [ "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:unknown" ]
jgammack/THESES-abstracts | private: false | downloads: 263 | likes: 0
jglaser/binding_affinity | private: false | downloads: 480 | likes: 1
  tags: [ "molecules", "chemistry", "SMILES" ]
  description: A dataset to fine-tune language models on protein-ligand binding affinity prediction.
jhonparra18/spanish_billion_words_clean | private: false | downloads: 257 | likes: 2
jhqwqq/2 | private: false | downloads: 130 | likes: 0
jianhong/dateset1 | private: false | downloads: 130 | likes: 0
jianhong/dateset2 | private: false | downloads: 130 | likes: 0
jiminsun/atc0_demo | private: false | downloads: 130 | likes: 0
jimregan/clarinpl_sejmsenat | private: false | downloads: 256 | likes: 0
  tags: [ "task_categories:other", "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:other" ]
  description: A collection of 97 hours of parliamentary speeches published on the ClarinPL website. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .wav format and is not converted to a float32 array. To convert an audio file to a float32 array, please make use of the `.map()` function as follows:

```python
import soundfile as sf

def map_to_array(batch):
    # Decode the .wav file into a float32 waveform array.
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```

jimregan/clarinpl_studio | private: false | downloads: 256 | likes: 1
  tags: [ "task_categories:other", "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:other", "arxiv:1706.00245" ]
  description: The corpus consists of 317 speakers recorded in 554 sessions, where each session consists of 20 read sentences and 10 phonetically rich words. The size of the audio portion of the corpus amounts to around 56 hours, with transcriptions containing 356,674 words from a vocabulary of size 46,361. As above, the audio is stored in the .wav format to limit the required storage, and can be converted to float32 arrays with the same `.map()` recipe.
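To make the conversion recipe in the two entries above concrete, here is a minimal end-to-end sketch; the "train" split name is an assumption (check the dataset card for the actual splits), and this is an illustration rather than the dataset's documented usage.

```python
# Hypothetical end-to-end use of the conversion recipe above: load the corpus,
# then decode each .wav file to a float32 array. The "train" split name is an
# assumption, not guaranteed by the dataset script.
import soundfile as sf
from datasets import load_dataset

dataset = load_dataset("jimregan/clarinpl_sejmsenat", split="train")

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
print(dataset[0]["speech"][:10])  # first few samples of the first utterance
```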
jimregan/foinse | private: false | downloads: 130 | likes: 0
  description: Foinse was an Irish-language magazine site. This script uses a list of articles retrieved from the Wayback Machine to build a corpus.
jimregan/lasid | private: false | downloads: 129 | likes: 0
  description: Linguistic Atlas and Survey of Irish Dialects, volume 1.
jinmang2/KorQuADv1 | private: false | downloads: 92 | likes: 0
  description: KorQuAD 1.0 (Korean Question Answering Dataset v1.0) is a dataset created for Korean machine reading comprehension. Every answer is a span within the corresponding Wikipedia article paragraph. It is structured in the same way as the Stanford Question Answering Dataset (SQuAD) v1.0.
jinmang2/common-sense-mrc | private: false | downloads: 0 | likes: 0
jinmang2/load_klue_re | private: false | downloads: 3 | likes: 0
  description: KLUE (Korean Language Understanding Evaluation): the KLUE benchmark is a series of datasets to evaluate the natural language understanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible to anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain unambiguous annotations for all datasets. Furthermore, we build an evaluation system and carefully choose evaluation metrics for every task, thus establishing fair comparison across Korean language models.
jinmang2/medical-mask | private: false | downloads: 0 | likes: 0
jinmang2/pred | private: false | downloads: 1 | likes: 0
  description: a truncated excerpt of the dataset's loading script (reformatted below; the _DESCRIPTION and _CITATION definitions, and the body of _split_generators, are cut off in the source):

```python
# Truncated excerpt of the jinmang2/pred loading script. _DESCRIPTION and
# _CITATION are defined earlier in the original file.
import datasets

_LICENSE = "CC-BY-SA-4.0"
_URL = "https://github.com/boostcampaitech2/data-annotation-nlp-level3-nlp-14"
_DATA_URLS = {
    "train": "https://huggingface.co/datasets/jinmang2/pred/resolve/main/train.csv",
    "dev": "https://huggingface.co/datasets/jinmang2/pred/resolve/main/dev.csv",
}
_VERSION = "0.0.0"
# Korean relation labels (the first, 관계_없음, means "no relation").
_LABEL = [
    "관계_없음",
    "이론:대체어",
    "이론:상위_이론",
    "이론:하위_이론",
    "이론:상위_학문분야",
    "학문분야:하위_이론",
    "인물:소속이론또는학문분야",
    "용어:치료기법",
    "용어:약",
    "용어:증상또는질환",
    "용어:대체어",
]

class PredConfig(datasets.BuilderConfig):
    def __init__(self, data_url, **kwargs):
        super().__init__(version=datasets.Version(_VERSION), **kwargs)
        self.data_url = data_url

class Pred(datasets.GeneratorBasedBuilder):
    DEFAULT_CONFIG_NAME = "pred"
    BUILDER_CONFIGS = [
        PredConfig(
            name="pred",
            data_url=_DATA_URLS,
            description=_DESCRIPTION,
        )
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "sentence": datasets.Value("string"),
                    "subject_entity": {
                        "word": datasets.Value("string"),
                        "start_idx": datasets.Value("int32"),
                        "end_idx": datasets.Value("int32"),
                        "type": datasets.Value("string"),
                    },
                    "object_entity": {
                        "word": datasets.Value("string"),
                        "start_idx": datasets.Value("int32"),
                        "end_idx": datasets.Value("int32"),
                        "type": datasets.Value("string"),
                    },
                    "label": datasets.ClassLabel(names=_LABEL),
                }
            ),
            homepage=_URL,
            license=_LICENSE,
            citation=_CITATION,
            supervised_keys=None,
        )

    def _split_generators(self, dl_manager):
        ...  # truncated in the source
```
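The excerpt above stops inside `_split_generators`. For readers unfamiliar with the `datasets` builder API, here is a generic sketch of how such a script typically continues; it is NOT the missing code from jinmang2/pred, and it assumes a config carrying a data_url mapping like PredConfig above.

```python
# Generic sketch of the usual continuation of a GeneratorBasedBuilder script;
# names and the CSV column are illustrative, not taken from jinmang2/pred.
import csv

import datasets


class SketchBuilder(datasets.GeneratorBasedBuilder):
    def _info(self):
        # Declare the schema; real scripts list every feature explicitly.
        return datasets.DatasetInfo(
            features=datasets.Features({"sentence": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Download the files named in the config; one SplitGenerator per split.
        paths = dl_manager.download_and_extract(self.config.data_url)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION, gen_kwargs={"filepath": paths["dev"]}
            ),
        ]

    def _generate_examples(self, filepath):
        # Yield (key, example) pairs matching the features declared in _info().
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.DictReader(f)):
                yield idx, {"sentence": row["sentence"]}
```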
jiyoojeong/targetizer | private: false | downloads: 1 | likes: 0
jlh/coco | private: false | downloads: 2 | likes: 0
jmamou/augmented-glue-sst2 | private: false | downloads: 5 | likes: 0
  tags: [ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en-US", "license:unknown" ]
joelito/ler | private: false | downloads: 1 | likes: 0
  description: We describe a dataset developed for Named Entity Recognition in German federal court decisions. It consists of approx. 67,000 sentences with over 2 million tokens. The resource contains 54,000 manually annotated entities, mapped to 19 fine-grained semantic classes: person, judge, lawyer, country, city, street, landscape, organization, company, institution, court, brand, law, ordinance, European legal norm, regulation, contract, court decision, and legal literature. The legal documents were, furthermore, automatically annotated with more than 35,000 TimeML-based time expressions. The dataset, which is available under a CC-BY 4.0 license in the CoNLL-2002 format, was developed for training an NER service for German legal documents in the EU project Lynx.
joelito/sem_eval_2010_task_8 | private: false | downloads: 1 | likes: 0
  description: SemEval-2010 Task 8 focuses on multi-way classification of semantic relations between pairs of nominals. The task was designed to compare different approaches to semantic relation classification and to provide a standard testbed for future research.
johnpaulbin/autonlp-data-asag-v2 | private: false | downloads: 89 | likes: 0
jonatli/youtube-sponsor | private: false | downloads: 1 | likes: 0
jonfd/ICC | private: false | downloads: 0 | likes: 1
  tags: [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "language:is", "license:cc-by-4.0" ]
jozierski/ecomwebtexts-pl | private: false | downloads: 1 | likes: 0
jpcorb20/multidogo | private: false | downloads: 0 | likes: 0
  tags: [ "task_categories:text-classification", "task_categories:other", "task_ids:intent-classification", "task_ids:dialogue-modeling", "task_ids:slot-filling", "task_ids:named-entity-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10k<n<100k", "source_datasets:original", "language:en", "license:other" ]
jsfactory/mental_health_reddit_posts | private: false | downloads: 0 | likes: 0
ju-bezdek/conll2003-SK-NER | private: false | downloads: 2 | likes: 0
  tags: [ "task_categories:other", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|conll2003", "language:sk", "license:unknown", "structure-prediction" ]
  description: This is a translated version of the original CoNLL-2003 dataset (translated from English to Slovak via Google Translate). Annotation was done mostly automatically with word-matching scripts; records where some tags were not matched were annotated manually (10%). Unlike the original CoNLL-2003 dataset, this one contains only NER tags.
julien-c/dummy-dataset-from-colab | private: false | downloads: 1 | likes: 0
julien-c/persistent-space-dataset | private: false | downloads: 0 | likes: 2
julien-c/reactiongif | private: false | downloads: 1 | likes: 1
  tags: [ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:2105.09967" ]
juliensimon/autonlp-data-song-lyrics-demo | private: false | downloads: 0 | likes: 0
  tags: [ "task_categories:text-classification", "language:en" ]
juliensimon/autonlp-data-song-lyrics | private: false | downloads: 2 | likes: 0
  tags: [ "task_categories:text-classification", "language:en" ]
juniorrios/roi_leish_test | private: false | downloads: 0 | likes: 0
juny116/few_glue | private: false | downloads: 152 | likes: 1
  tags: [ "arxiv:2012.15723" ]
  description: SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, improved resources, and a new public leaderboard.
justinqbui/covid_fact_checked_google_api | private: false | downloads: 0 | likes: 0
justinqbui/covid_fact_checked_polifact | private: false | downloads: 0 | likes: 1
k-halid/ar | private: false | downloads: 0 | likes: 0
  description: The corpus is a part of the MultiUN corpus. It is a collection of translated documents from the United Nations. The corpus was downloaded from the following website: [open parallel corpus](http://opus.datasetsl.eu/)
k0t1k/test | private: false | downloads: 0 | likes: 0
karinev/lanuitdudroit | private: false | downloads: 0 | likes: 0
kartikay/review-summarizer | private: false | downloads: 0 | likes: 1
katanaml/cord | private: false | downloads: 188 | likes: 1
  description: https://huggingface.co/datasets/katanaml/cord
katoensp/VR-OP | private: false | downloads: 0 | likes: 0
kaushikacharya/github-issues | private: false | downloads: 0 | likes: 0
kenlevine/CUAD | private: false | downloads: 0 | likes: 0
keshan/clean-si-mc4 | private: false | downloads: 0 | likes: 0
  description: A colossal, cleaned version of Common Crawl's web crawl corpus, based on the Common Crawl dataset (https://commoncrawl.org). This is the processed version of Google's mC4 dataset by AllenAI.
keshan/large-sinhala-asr-dataset | private: false | downloads: 0 | likes: 0
  description: This data set contains ~185K transcribed audio data for Sinhala. The data set consists of wave files and a TSV file. The file utt_spk_text.tsv contains a FileID, an anonymized UserID, and the transcription of the audio in the file. The data set has been manually quality-checked, but there might still be errors. See the LICENSE.txt file for license information. Copyright 2016, 2017, 2018 Google, Inc.
keshan/multispeaker-tts-sinhala | private: false | downloads: 0 | likes: 0
  description: This data set contains multi-speaker, high-quality transcribed audio data for Sinhala. The data set consists of wave files and a TSV file. The file si_lk.lines.txt contains a FileID, which in turn contains the UserID and the transcription of the audio in the file. The data set has been manually quality-checked, but there might still be errors. Part of this dataset was collected by Google in Sri Lanka and the rest was contributed by the Path to Nirvana organization.
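The two Sinhala speech entries above describe a simple TSV metadata layout (a FileID, a UserID, and a transcription per line). A minimal reading sketch follows; the tab separation, absence of a header row, and column order are assumptions taken from the prose, not a published schema.

```python
# Minimal sketch for reading the utt_spk_text.tsv metadata described above.
# Column order and the lack of a header row are assumptions from the
# description, not a published schema.
import csv

with open("utt_spk_text.tsv", encoding="utf-8", newline="") as f:
    reader = csv.reader(f, delimiter="\t")
    for file_id, user_id, transcription in reader:
        print(file_id, user_id, transcription[:40])
```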
keshan/wit-dataset | private: false | downloads: 0 | likes: 1
  description: Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity-rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
kevinassobo/sales_2015_dataset | private: false | downloads: 0 | likes: 0
kevinjesse/ManyTypes4TypeScript | private: false | downloads: 0 | likes: 1
  tags: [ "annotations_creators:found", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:code", "license:cc-by-4.0" ]
kevinlu1248/personificationgen | private: false | downloads: 0 | likes: 0
khalidsaifullaah/detecThreats | private: false | downloads: 0 | likes: 0
khanbaba/online_love | private: false | downloads: 0 | likes: 0
kiamehr74/CoarseWSD-20 | private: false | downloads: 0 | likes: 1
  description: The CoarseWSD-20 dataset is a coarse-grained sense disambiguation dataset built from Wikipedia (nouns only), targeting 2 to 5 senses of 20 ambiguous words. It was specifically designed to provide an ideal setting for evaluating WSD models (e.g. no senses in test sets missing from training), both quantitatively and qualitatively.
kingabzpro/Rick-bot-flags | private: false | downloads: 0 | likes: 1
kingabzpro/ar_corpora_parliament_processed | private: false | downloads: 0 | likes: 0
kingabzpro/ga_corpora_parliament_processed | private: false | downloads: 0 | likes: 0
kingabzpro/pan_corpora_parliament_processed | private: false | downloads: 0 | likes: 0
kingabzpro/savtadepth-flags | private: false | downloads: 0 | likes: 1
kingabzpro/tt_corpora_parliament_processed | private: false | downloads: 0 | likes: 0
kiyoung2/aistage-mrc | private: false | downloads: 0 | likes: 4
kiyoung2/temp | private: false | downloads: 0 | likes: 0
kleinay/qa_srl | private: false | downloads: 0 | likes: 0
  description: The dataset contains question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence. This dataset loads the train split from "QASRL Bank", a.k.a. "QASRL-v2" or "QASRL-LS" (Large Scale), which was constructed via crowdsourcing and presented at (FitzGerald et al., ACL 2018), and the dev and test splits from QASRL-GS (Gold Standard), introduced in (Roit et al., ACL 2020).
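To make the wh-question / answer-span format described above concrete, here is a hypothetical QA-SRL style example; the sentence, field names, and structure are illustrative only and do not reflect the dataset's actual schema.

```python
# Hypothetical QA-SRL style example (illustrative only; not the dataset schema).
example = {
    "sentence": "The company sold its headquarters in 2019.",
    "predicate": "sold",
    "qa_pairs": [
        {"question": "Who sold something?", "answer": "The company"},
        {"question": "What was sold?", "answer": "its headquarters"},
        {"question": "When was something sold?", "answer": "in 2019"},
    ],
}
```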
kmfoda/booksum | private: false | downloads: 1,287 | likes: 11
  tags: [ "license:bsd-3-clause", "arxiv:2105.08209" ]
kmfoda/name_finder_v1 | private: false | downloads: 0 | likes: 0
kmyoo/klue-tc-dev | private: false | downloads: 0 | likes: 0
knilakshan20/wikigold | private: false | downloads: 3 | likes: 0
  description: WikiGold dataset, with the original dataset labels converted to IOB format. The data-loading file is based on https://github.com/huggingface/datasets/blob/master/datasets/conllpp/conllpp.py and https://huggingface.co/docs/datasets/add_dataset.html
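For reference, a tiny illustration of the IOB (inside-outside-beginning) tagging scheme mentioned in the wikigold entry above; the sentence is invented, not taken from the dataset.

```python
# Illustration of IOB NER tags: B- opens an entity span, I- continues it,
# and O marks tokens outside any entity. The sentence is invented.
tokens = ["John",  "Smith", "visited", "New",   "York",  "."]
tags   = ["B-PER", "I-PER", "O",       "B-LOC", "I-LOC", "O"]
```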
krandiash/sc09 | private: false | downloads: 0 | likes: 0
kresnik/librispeech_asr_test | private: false | downloads: 20 | likes: 2
  description: LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert an audio file to a float32 array, please make use of the `.map()` function as follows:

```python
import soundfile as sf

def map_to_array(batch):
    # Decode the .flac file into a float32 waveform array.
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```

kresnik/zeroth_korean | private: false | downloads: 91 | likes: 5
  description: This is the Zeroth-Korean corpus, licensed under Attribution 4.0 International (CC BY 4.0). The data set contains transcribed audio data for Korean: 51.6 hours of transcribed Korean audio for training (22,263 utterances, 105 people, 3000 sentences) and 1.2 hours of transcribed Korean audio for testing (457 utterances, 10 people). This corpus also contains a pre-trained/designed language model, lexicon, and morpheme-based segmenter (Morfessor). The Zeroth project introduces a free Korean speech corpus and aims to make Korean speech recognition more broadly accessible to everyone. This project was developed in collaboration between Lucas Jo (@Atlas Guide Inc.) and Wonkyum Lee (@Gridspace Inc.). Contact: Lucas Jo ([email protected]), Wonkyum Lee ([email protected])
kroshan/BioASQ | private: false | downloads: 1 | likes: 1
kroshan/qa_evaluator | private: false | downloads: 0 | likes: 0
kudo-research/mustc-en-es-text-only | private: false | downloads: 0 | likes: 0
  tags: [ "annotations_creators:other", "language_creators:other", "multilinguality:translation", "size_categories:unknown", "language:en", "language:es", "license:cc-by-nc-nd-4.0" ]
kyryl0s/ukbbc | private: false | downloads: 0 | likes: 3
  tags: [ "license:wtfpl" ]
laion/filtered-wit | private: false | downloads: 0 | likes: 2
  tags: [ "arxiv:2103.00020" ]
laion/laion_100m_vqgan_f8 | private: false | downloads: 0 | likes: 2
lara-martin/Scifi_TV_Shows | private: false | downloads: 12 | likes: 2
  tags: [ "task_categories:other", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-4.0", "Story Generation" ]
larcane/ko-WIT | private: false | downloads: 0 | likes: 0
laugustyniak/abusive-clauses-pl | private: false | downloads: 58 | likes: 2
  tags: [ "task_categories:text-classification", "annotations_creators:hired_annotators", "language_creators:found", "multilinguality:monolingual", "size_categories:10<n<10K", "language:pl", "license:cc-by-nc-sa-4.0" ]
lavis-nlp/german_legal_sentences | private: false | downloads: 5 | likes: 2
  tags: [ "task_categories:text-retrieval", "task_ids:semantic-similarity-scoring", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n>1M", "source_datasets:original", "language:de", "license:unknown", "arxiv:2005.13342", "arxiv:2010.10252" ]
  description: German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching in the domain of German legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics; for this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/).
layboard/layboard.in | private: false | downloads: 0 | likes: 1
lbox/lbox_open | private: false | downloads: 182 | likes: 1
  tags: [ "license:cc-by-nc-4.0" ]
lc-col/sv_corpora_parliament_processed | private: false | downloads: 0 | likes: 0
leetdavid/celera | private: false | downloads: 0 | likes: 0
leetdavid/market-positivity-bert-tokenized | private: false | downloads: 0 | likes: 0
leiping/jj | private: false | downloads: 0 | likes: 0