id (string) | private (bool) | tags (list) | description (string, nullable) | downloads (int64) | likes (int64) |
---|---|---|---|---|---|
illuin/fr_corpora_parliament_processed | false | [
"language:fr"
] | null | 0 | 0 |
marinone94/nst_no | false | [] | This database was created by Nordic Language Technology for the development of automatic speech recognition and dictation in Norwegian.
In this version, the organization of the data has been altered to improve the usefulness of the database.
In the original version of the material, the files were organized in a specific folder structure where the folder names were meaningful.
However, the file names were not meaningful, and there were also cases of files with identical names in different folders.
This proved to be impractical, since users had to keep the original folder structure in order to use the data.
The files have been renamed, such that the file names are unique and meaningful regardless of the folder structure.
The original metadata files were in spl format. These have been converted to JSON format.
The converted metadata files are also anonymized and the text encoding has been converted from ANSI to UTF-8.
See the documentation file for a full description of the data and the changes made to the database. | 0 | 0 |
marinone94/nst_sv | false | [] | This database was created by Nordic Language Technology for the development
of automatic speech recognition and dictation in Swedish.
In this updated version, the organization of the data has been altered to improve the usefulness of the database.
In the original version of the material,
the files were organized in a specific folder structure where the folder names were meaningful.
However, the file names were not meaningful, and there were also cases of files with identical names in different folders.
This proved to be impractical, since users had to keep the original folder structure in order to use the data.
The files have been renamed, such that the file names are unique and meaningful regardless of the folder structure.
The original metadata files were in spl format. These have been converted to JSON format.
The converted metadata files are also anonymized and the text encoding has been converted from ANSI to UTF-8.
See the documentation file for a full description of the data and the changes made to the database. | 2 | 0 |
mariosasko/test_multi_dir_dataset | false | [] | null | 0 | 0 |
markscrivo/OddsOn | false | [] | null | 0 | 0 |
martodaniel/terere | false | [] | null | 0 | 0 |
masked-neuron/amazon | false | [] | null | 0 | 0 |
masked-neuron/ccd | false | [] | The consumer complaint data set is derived from the Consumer Complaint Database
for the purpose of benchmarking quantification / label shift algorithms. The
data set consists of records of complaints about consumer financial products and
services that the Consumer Financial Protection Bureau sent to companies for
response. Each record has a corresponding product / sub product field which can
be used as labels for text classification. | 0 | 0 |
masked-neuron/qb | false | [] | null | 0 | 0 |
mattchurgin/sv_corpora_parliament_processed | false | [] | null | 0 | 0 |
matteopilotto/github-issues | false | [] | null | 0 | 0 |
maximedb/mcqa_light | false | [] | MQA is a multilingual corpus of questions and answers parsed from the Common Crawl. Questions are divided between Frequently Asked Questions (FAQ) pages and Community Question Answering (CQA) pages. | 0 | 0 |
maximedb/mfaq_light | false | [] | MQA is a multilingual corpus of questions and answers parsed from the Common Crawl. Questions are divided between Frequently Asked Questions (FAQ) pages and Community Question Answering (CQA) pages. | 0 | 0 |
maximedb/paws-x-all | false | [] | PAWS-X, a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages.
This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine
translated training pairs in six typologically distinct languages: French, Spanish, German,
Chinese, Japanese, and Korean. English is included by default. All translated
pairs are sourced from examples in PAWS-Wiki.
For further details, see the accompanying paper: PAWS-X: A Cross-lingual Adversarial Dataset
for Paraphrase Identification (https://arxiv.org/abs/1908.11828)
NOTE: There might be some missing or wrong labels in the dataset and we have replaced them with -1. | 0 | 0 |
maximedb/vaccinchat | false | [] | null | 0 | 0 |
maximedb/vaccinchat_retrieval | false | [] | null | 0 | 0 |
maximedb/wow | false | [] | In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date. The most popular sequence to sequence models typically "generate and hope" generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction. | 8 | 0 |
maxmoynan/SemEval2017-Task4aEnglish | false | [] | null | 7 | 0 |
maydogan/TRSAv1 | false | [] | null | 0 | 1 |
mbateman/github-issues | false | [
"arxiv:2005.00614"
] | null | 0 | 0 |
medzaf/test | false | [] | null | 0 | 0 |
meghanabhange/chaii | false | [] | null | 0 | 0 |
meghanabhange/hilm141021 | false | [
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"license:other"
] | null | 0 | 0 |
meghanabhange/hitalm141021 | false | [
"annotations_creators:other",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:ta",
"license:other"
] | null | 0 | 0 |
meghanabhange/hitalmsandbox | false | [] | null | 0 | 0 |
meghanabhange/talm141021 | false | [
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:ta",
"license:other"
] | null | 0 | 0 |
merve/coco | false | [] | null | 2 | 2 |
merve/folk-mythology-tales | false | [] | null | 0 | 1 |
merve/poetry | false | [] | null | 3 | 6 |
merve/qqp | false | [] | null | 7 | 0 |
metaeval/blimp_classification | false | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"cola"
] | Acceptable/non-acceptable sentences (recast as a classification task) | 20 | 1 |
metaeval/crowdflower | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en"
] | Collection of CrowdFlower classification datasets | 130 | 0 |
metaeval/ethics | false | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en"
] | Probing for ethics understanding | 595 | 3 |
metaeval/linguisticprobing | false | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en"
] | 10 probing tasks designed to capture simple linguistic features of sentences. | 120 | 0 |
metaeval/recast | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"nli",
"natural-language-inference"
] | A diverse collection of tasks recast as natural language inference tasks. | 112 | 0 |
metalearning/kaggale-nlp-tutorial | false | [] | null | 0 | 0 |
metamong1/summarization_optimization | false | [] | AI Hub document summarization data | 1 | 1 |
michaelbenayoun/wikipedia-bert-128 | false | [] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 0 | 0 |
microsoft/codexglue_method_generation | false | [] | null | 23 | 2 |
midas/citeulike180 | false | [] | \ | 0 | 0 |
midas/cstr | false | [] | \ | 0 | 0 |
midas/duc2001 | false | [] | \ | 260 | 1 |
midas/inspec | false | [
"arxiv:1910.08840"
] | Benchmark dataset for automatic identification of keyphrases from text published with the work - Improved automatic keyword extraction given more linguistic knowledge. Anette Hulth. In Proceedings of EMNLP 2003. p. 216-223. | 158 | 3 |
midas/inspec_ke_tagged | false | [] | null | 0 | 0 |
midas/kdd | false | [] | \ | 0 | 0 |
midas/kp20k | false | [] | \ | 10 | 2 |
midas/kpcrowd | false | [] | \ | 23 | 1 |
midas/kptimes | false | [] | \ | 0 | 0 |
midas/krapivin | false | [] | \ | 0 | 0 |
midas/ldke3k_medium | false | [] | null | 0 | 0 |
midas/ldke3k_small | false | [] | null | 0 | 0 |
midas/ldkp10k | false | [] | This new dataset is designed to solve kp NLP task and is crafted with a lot of care. | 0 | 2 |
midas/ldkp3k | false | [] | This new dataset is designed to solve kp NLP task and is crafted with a lot of care. | 0 | 2 |
midas/ldkp3k_small | false | [] | null | 0 | 0 |
midas/nus | false | [] | \ | 0 | 0 |
midas/oagkx | false | [] | \ | 3 | 0 |
midas/openkp | false | [] | \ | 253 | 2 |
midas/pubmed | false | [] | \ | 0 | 0 |
midas/semeval2010 | false | [
"arxiv:1910.08840"
] | \ | 17 | 0 |
midas/semeval2010_ke_tagged | false | [] | null | 0 | 0 |
midas/semeval2017 | false | [
"arxiv:1704.02853",
"arxiv:1910.08840"
] | \ | 46 | 1 |
midas/semeval2017_ke_tagged | false | [] | null | 1 | 0 |
midas/test_ldkp | false | [] | This new dataset is designed to solve kp NLP task and is crafted with a lot of care. | 0 | 0 |
midas/www | false | [] | \ | 0 | 0 |
mideind/icelandic-common-crawl-corpus-IC3 | false | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:is",
"license:unknown"
] | null | 0 | 0 |
mideind/icelandic-error-corpus-IceEC | false | [
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:is",
"license:cc-by-4.0"
] | The Icelandic Error Corpus (IceEC) is a collection of texts in modern Icelandic annotated for mistakes related to spelling, grammar, and other issues. The texts are organized by genre. The current version includes sentences from student essays, online news texts and Wikipedia articles.
Sentences within the student essays had to be shuffled due to the license under which they were originally published, but neither the online news texts nor the Wikipedia articles needed to be shuffled. | 0 | 1 |
miesnerjacob/github-issues | false | [] | null | 0 | 0 |
mikeee/model-z | false | [] | null | 0 | 0 |
mirari/sv_corpora_parliament_processed | false | [] | null | 0 | 0 |
mishig/sample_images | false | [] | null | 0 | 0 |
mksaad/Arabic_news | false | [] | null | 0 | 0 |
ml6team/cnn_dailymail_nl | false | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/cnn_dailymail",
"language:nl",
"license:mit"
] | This dataset is the CNN/Dailymail dataset translated to Dutch.
This is the original dataset:
```
from datasets import load_dataset

load_dataset("cnn_dailymail", '3.0.0')
```
And this is the HuggingFace translation pipeline:
```
from transformers import pipeline

pipeline(
    task='translation_en_to_nl',
    model='Helsinki-NLP/opus-mt-en-nl',
    tokenizer='Helsinki-NLP/opus-mt-en-nl')
``` | 36 | 12 |
ml6team/xsum_nl | false | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|xsum",
"language:nl",
"license:unknown"
] | null | 0 | 2 |
mldmm/glass_alloy_composition | false | [] | This is an alloy composition dataset | 0 | 0 |
mmcquade11-test/reuters-for-summarization-two | false | [] | null | 0 | 0 |
mmm-da/rutracker_anime_torrent_titles | false | [] | null | 0 | 0 |
mnaylor/evaluating-student-writing | false | [] | null | 0 | 0 |
mnemlaghi/widdd | false | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"arxiv:1810.09164"
] | WiDDD stands for WIkiData Disambig with Descriptions. The original dataset comes from the [Cetoli et al.](https://arxiv.org/pdf/1810.09164.pdf) paper and is aimed at solving Named Entity Disambiguation. This dataset tries to extract relevant information from entity descriptions only, instead of working with graphs. To do so, we mapped every Wikidata id (correct id and wrong id) in the original paper to its Wikidata description. If no description is found, the row is discarded in this version. | 2 | 1 |
mohamed-illiyas/wav2vec-malayalam-data | false | [] | null | 0 | 0 |
mohamed-illiyas/wav2vec-malayalam-new-data | false | [] | null | 0 | 0 |
mohamed-illiyas/wav2vec2-base-lj-demo-colab | false | [] | null | 0 | 0 |
morganchen1007/1215 | false | [] | null | 0 | 0 |
morganchen1007/1216 | false | [] | null | 0 | 0 |
morganchen1007/1216_00 | false | [] | null | 0 | 0 |
morganchen1007/test_1213_00 | false | [] | null | 0 | 0 |
mostol/wiktionary-ipa | false | [] | null | 0 | 2 |
moumeneb1/French_arpa_lm | false | [] | null | 0 | 0 |
moumeneb1/filtered | false | [] | null | 0 | 0 |
moumeneb1/filtered_300 | false | [] | null | 0 | 0 |
moumeneb1/fr_lm_dataset | false | [] | null | 0 | 0 |
moumeneb1/large_vocabulary_dataset | false | [] | null | 0 | 0 |
moumeneb1/osc_processed_lm | false | [] | null | 0 | 0 |
moumeneb1/testing | false | [] | null | 0 | 0 |
moxi43/github-issues | false | [] | null | 0 | 0 |
mpierrau/sv_corpora_parliament_processed | false | [] | null | 0 | 0 |
mr-robot/ec | false | [] | null | 0 | 0 |
mrm8488/fake-news | false | [] | null | 0 | 0 |
mrm8488/goemotions | false | [
"arxiv:2005.00547"
] | null | 6 | 5 |
mrojas/abbreviation | false | [] | \ | 0 | 0 |
mrojas/body | false | [] | \ | 0 | 0 |
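Each row above is pipe-delimited in the column order given by the header. A minimal sketch of parsing a single-line row into a typed record (the column names are taken from the header; this assumes the description contains no pipe characters and the tags list fits on one line):

```python
import json

# Column order follows the table header.
FIELDS = ["id", "private", "tags", "description", "downloads", "likes"]

def parse_row(row: str) -> dict:
    # Drop the trailing pipe, split on the remaining pipes, and trim whitespace.
    parts = [p.strip() for p in row.strip().strip("|").split("|")]
    rec = dict(zip(FIELDS, parts))
    rec["private"] = rec["private"] == "true"          # bool column
    rec["tags"] = json.loads(rec["tags"])              # JSON list column
    rec["description"] = None if rec["description"] == "null" else rec["description"]
    rec["downloads"] = int(rec["downloads"])           # int64 columns
    rec["likes"] = int(rec["likes"])
    return rec

print(parse_row("merve/poetry | false | [] | null | 3 | 6 |"))
```

Rows whose tags span several lines would need to be joined into one line before being passed to `parse_row`.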