Columns: id (string, 2–115 chars); private (bool, 1 class); tags (list); description (string, 0–5.93k chars); downloads (int64, 0–1.14M); likes (int64, 0–1.79k).

id | private | tags | description | downloads | likes
leiping/teeee | false | [] | null | 0 | 0
leoapolonio/AMI_Meeting_Corpus | false | [] | null | 2 | 0
leonadase/fdner | false | [] | A named entity recognition corpus for knowledge related to the fault diagnosis domain. | 0 | 0
leonadase/mycoll3 | false | [] | The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on a separate line and there is an empty line after each sentence. The first item on each line is a word, the second a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags and the named entity tags have the format I-TYPE, which means that the word is inside a phrase of type TYPE. Only when two phrases of the same type immediately follow each other does the first word of the second phrase take the tag B-TYPE, to show that it starts a new phrase. A word with tag O is not part of a phrase. Note that this dataset uses the IOB2 tagging scheme, whereas the original dataset uses IOB1. For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419 | 0 | 0
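The mycoll3 description above specifies the four-column CoNLL-2003 layout only in prose, so a minimal parsing sketch may help. The sample sentence and its tags below are invented for illustration (shown in IOB2, where every phrase-initial token takes a B- prefix) and are not taken from the dataset.

```python
# Minimal sketch: parsing the four-column CoNLL-2003 layout described above.
# The sample is illustrative only; tags follow IOB2.
sample = """\
U.N. NNP B-NP B-ORG
official NN I-NP O
Ekeus NNP B-NP B-PER
heads VBZ B-VP O
for IN B-PP O
Baghdad NNP B-NP B-LOC
. . O O
"""

sentences, current = [], []
for line in sample.splitlines():
    if not line.strip():  # an empty line marks a sentence boundary
        if current:
            sentences.append(current)
        current = []
        continue
    word, pos, chunk, ner = line.split(" ")
    current.append({"word": word, "pos": pos, "chunk": chunk, "ner": ner})
if current:  # flush the last sentence if the file lacks a trailing blank line
    sentences.append(current)

print(sentences[0][0])  # {'word': 'U.N.', 'pos': 'NNP', 'chunk': 'B-NP', 'ner': 'B-ORG'}
```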
lewtun/asr-preds-test | false | ["benchmark:superb"] | null | 0 | 0
lewtun/asr_dummy | false | [] | (description below) | 66 | 0

Description for lewtun/asr_dummy: Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV). The paradigm pretrains a shared model on large volumes of unlabeled data and achieves state-of-the-art (SOTA) results for various tasks with minimal adaptation. However, the speech processing community lacks a similar setup to systematically explore the paradigm. To bridge this gap, we introduce the Speech processing Universal PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. Among multiple usages of the shared model, we especially focus on extracting the representation learned from SSL due to its preferable re-usability. We present a simple framework to solve SUPERB tasks by learning task-specialized lightweight prediction heads on top of the frozen shared model. Our results demonstrate that the framework is promising, as SSL representations show competitive generalizability and accessibility across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a benchmark toolkit to fuel research in representation learning and general speech processing. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert the audio files to float32 arrays, make use of the `.map()` function as follows:

```python
import soundfile as sf

def map_to_array(batch):
    # Read the .flac file into a floating-point waveform array.
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
lewtun/benchmark-test | false | [] | null | 0 | 0
lewtun/binary_classification_dummy | false | [] | null | 0 | 0
lewtun/bulk-superb-s3p-superb-49606 | false | ["benchmark:superb"] | null | 0 | 1
lewtun/drug-reviews | false | [] | null | 6 | 4
lewtun/gem-multi-dataset-predictions | false | [] | null | 0 | 0
lewtun/gem-sub-03 | false | ["benchmark:gem"] | null | 0 | 0
lewtun/gem-test-predictions | false | [] | null | 0 | 0
lewtun/gem-test-references | false | [] | null | 0 | 0
lewtun/github-issues-test | false | [] | null | 0 | 0
lewtun/github-issues | false | ["arxiv:2005.00614"] | null | 448 | 4
lewtun/mnist-preds | false | ["benchmark:test"] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 0 | 0
lewtun/my-awesome-dataset | false | ["annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:apache-2.0"] | null | 0 | 0
lewtun/s3prl-sd-dummy | false | [] | null | 0 | 0
lewtun/test | false | [] | null | 0 | 0
lewtun/text_classification_dummy | false | [] | null | 1 | 0
lgrobol/openminuscule | false | ["task_categories:text-generation", "task_ids:language-modeling", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100k<n<1M", "source_datasets:original", "language:en", "language:fr", "license:cc-by-4.0"] | null | 6 | 0
lhoestq/conll2003 | false | [] | null | 5 | 0
lhoestq/custom_squad | false | ["task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|wikipedia", "language:en", "license:cc-by-4.0", "arxiv:1606.05250"] | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | 13 | 0
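Since the description above defines answers as character spans of the reading passage, a sketch of a SQuAD-style record may help. The field names follow the canonical squad schema; assuming (not verified) that this repo uses the same layout, and all values are invented for illustration.

```python
# Hypothetical record in the canonical SQuAD layout; values invented.
example = {
    "id": "0001",
    "title": "Example_Article",
    "context": "The library was built in 1901 and renovated in 1988.",
    "question": "When was the library built?",
    "answers": {
        "text": ["1901"],
        "answer_start": [25],  # character offset of the span in `context`
    },
}

# The answer span can be recovered directly from the context:
start = example["answers"]["answer_start"][0]
ans = example["answers"]["text"][0]
assert example["context"][start:start + len(ans)] == ans
```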
lhoestq/demo1 | false | [] | null | 4,491 | 0
lhoestq/squad | false | [] | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | 0 | 0
lhoestq/test | false | ["annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:mit"] | This is a test dataset. | 185 | 0
lhoestq/test2 | false | [] | null | 0 | 0
lhoestq/test_commit_descriptions | false | [] | null | 0 | 0
lhoestq/test_zip_txt | false | [] | null | 0 | 0
lhoestq/wikipedia_bn | false | [] | Bengali Wikipedia from the dump of 03/20/2021. The data was processed using the Hugging Face datasets wikipedia script in early April 2021. The dataset was built from the Wikipedia dump (https://dumps.wikimedia.org/). Each example contains the content of one full Wikipedia article, cleaned to strip markdown and unwanted sections (references, etc.). | 0 | 0
liam168/nlp_c4_sentiment | false | [] | null | 0 | 0
lidia/202111 | false | [] | null | 0 | 0
lijingxin/github-issues | false | [] | null | 0 | 0
lijingxin/squad_zen | false | [] | null | 14 | 1
lijingxin/squad_zh_1 | false | [] | null | 0 | 1
limjiayi/hateful_memes_expanded | false | [] | null | 1 | 0
lincoln/newsquadfr | false | ["task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:private", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "source_datasets:newspaper", "source_datasets:online", "language:fr-FR", "license:cc-by-nc-sa-4.0"] | null | 0 | 2
linhd-postdata/pulpo | false | [] | null | 77 | 0
linhd-postdata/stanzas | false | [] | Stanzas | 0 | 0
liweili/c4_200m | false | ["task_categories:text-generation", "source_datasets:allenai/c4", "language:en", "grammatical-error-correction"] | GEC dataset generated from C4. | 62 | 12
lkarjun/Malayalam-Articles | false | [] | null | 0 | 0
lkiouiou/o9ui7877687 | false | [] | null | 0 | 0
lkndsjkndgskjngkjsndkj/jsjdjsdvkjvszlhdskb | false | [] | null | 0 | 0
llangnickel/long-covid-classification-data | false | ["task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-4.0"] | null | 22 | 0
lohanna/testedjkcxkf | false | [] | null | 0 | 0
loretoparisi/spoken-punctuation | false | [] | null | 0 | 1
lorsorlah/Dadedadedam | false | [] | null | 0 | 0
loveguruji609/dfdfsdfsdfsdfsdfsd | false | [] | null | 0 | 0
lpsc-fiuba/melisa | false | ["task_categories:text-classification", "task_ids:language-modeling", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "source_datasets:original", "language:es", "language:pt", "license:other"] | null | 0 | 2
lsb/ancient-latin-passages | false | ["license:agpl-3.0"] | null | 0 | 0
lsb/million-english-numbers | false | ["arxiv:1803.09010"] | null | 0 | 0
lucien/sciencemission | false | [] | null | 0 | 0
lucien/voacantonesed | false | [] | null | 0 | 0
lucien/wsaderfffjjjhhh | false | [] | null | 0 | 0
lucio/common_voice_eval | false | [] | null | 0 | 0
lukasmasuch/my-test-repo-3 | false | [] | null | 0 | 0
lukasmasuch/my-test-repo-4 | false | [] | null | 0 | 0
lukasmasuch/test-2 | false | [] | null | 0 | 0
lukasmasuch/test-3 | false | [] | null | 0 | 0
lukasmasuch/test | false | [] | null | 0 | 0
lukesjordan/worldbank-project-documents | false | ["task_categories:table-to-text", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:abstractive-qa", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "task_ids:language-modeling", "task_ids:named-entity-recognition", "task_ids:text-simplification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:other", "conditional-text-generation", "structure-prediction"] | null | 10 | 1
luketheduke/stsb | false | [] | null | 0 | 0
luofengge/mydata | false | [] | null | 0 | 0
luofengge/testDataset | false | [] | null | 0 | 0
luomingshuang/GRID_audio | false | [] | null | 0 | 0
luomingshuang/GRID_text | false | [] | null | 0 | 0
luomingshuang/grid_lip_160_80 | false | [] | null | 0 | 0
luozhouyang/dureader | false | [] | null | 30 | 3
luozhouyang/kgclue-knowledge | false | [] | null | 0 | 0
luozhouyang/question-answering-datasets | false | [] | null | 4 | 0
lvwerra/abc-test | false | [] | null | 0 | 0
lvwerra/abc | false | [] | null | 0 | 0
codeparrot/codeparrot-clean-train | false | [] | null | 753 | 6
codeparrot/codeparrot-clean-valid | false | [] | null | 656 | 3
codeparrot/codeparrot-clean | false | ["python", "code"] | null | 224 | 23
lvwerra/codeparrot-valid-clean-minimal | false | [] | null | 22 | 0
lvwerra/codeparrot-valid | false | [] | null | 1 | 0
lvwerra/github-alphacode | false | [] | null | 3 | 0
codeparrot/github-code | false | ["task_categories:text-generation", "task_ids:language-modeling", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:unknown", "language:code", "license:other"] | The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totalling 1TB of text data. The dataset was created from the GitHub dataset on BigQuery. | 1,240 | 86
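Given the 1TB size mentioned in the description, streaming is the natural way to work with codeparrot/github-code. The sketch below is an assumed usage pattern, not part of the card; it relies only on the standard `datasets` streaming API.

```python
# Minimal sketch, assuming the standard `datasets` streaming API:
# iterate over the dataset without downloading the full 1TB up front.
from datasets import load_dataset

ds = load_dataset("codeparrot/github-code", split="train", streaming=True)

for example in ds.take(3):  # inspect a few files
    print(sorted(example.keys()))
```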
lvwerra/important_dataset | false | [] | null | 0 | 0
lvwerra/lm_ar_wikipedia | false | [] | null | 0 | 0
lvwerra/red-wine | false | [] | null | 0 | 2
lvwerra/repo-images | false | [] | null | 0 | 0
lvwerra/test | false | [] | null | 0 | 0
lysandre/image-to-text | false | [] | null | 0 | 0
lysandre/my-cool-dataset | false | [] | null | 0 | 0
m3hrdadfi/recipe_nlg_lite | false | [] | RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation (lite version). The published dataset contains 7,198 cooking recipes (>7K). It is processed more carefully and provides more samples than any other dataset in the area. | 4 | 1
mad/IndonesiaNewsDataset | false | [] | null | 0 | 0
maindadwitiya/weather_dataset | false | [] | null | 2 | 0
maji/npo_mission_statement_ucf | false | [] | null | 0 | 0
majod/CleanNaturalQuestionsDataset | false | [] | null | 0 | 0
makanan/umich | false | [] | null | 0 | 0
malay-huggingface/pembalakan | false | [] | null | 0 | 0
mammut/mammut-corpus-venezuela-test-set | false | ["task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:es", "license:cc-by-nc-nd-4.0"] | null | 0 | 0
mammut/mammut-corpus-venezuela | false | ["task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:es", "license:cc-by-nc-nd-4.0"] | null | 0 | 0
manifoldix/sg_testset_fhnw | false | [] | null | 0 | 0
manifoldix/swg_parliament_fhnw | false | [] | null | 0 | 0
manishk31/Demo | false | [] | null | 0 | 0
manu/fr_corpora_parliament_processed-lowercased | false | [] | null | 0 | 0