id (string) | private (bool) | tags (list) | description (string, nullable) | downloads (int64) | likes (int64) |
---|---|---|---|---|---|
laion/laion-high-resolution | false | license:cc-by-4.0 | | 49 | 31 |
nadhifikbarw/id_ner_nimas | false | task_categories:token-classification, language:id | | 0 | 0 |
peandrew/conceptnet_en_simple | false | | | 0 | 0 |
jeremyf/fanfiction_z | false | language:en, fanfiction | | 49 | 1 |
kejian/pile-severetoxicTEST-chunk-0 | false | | | 0 | 0 |
hidude562/textsources | false | | | 0 | 0 |
hidude562/BadWikipedia | false | | | 0 | 0 |
hsiehpinghan/github-issues | false | | | 0 | 0 |
peandrew/conceptnet_en_nomalized | false | | | 0 | 1 |
parvezmrobin/MCMD | false | | | 0 | 0 |
nateraw/imagenet-sketch | false | license:mit | ImageNet-Sketch consists of 50,000 images, 50 for each of the 1,000 ImageNet classes. We constructed the dataset with Google Image queries "sketch of __", where __ is the standard class name, searching only within the "black and white" color scheme. We initially queried 100 images per class and then manually cleaned the results, deleting irrelevant images and images of similar but different classes. For classes left with fewer than 50 images after cleaning, we augmented the dataset by flipping and rotating the images. | 12 | 0 |
shzhang/tutorial_datasets_github_issues | false | | | 0 | 0 |
johnowhitaker/imagewoof2-320 | false | | | 0 | 0 |
johnowhitaker/imagenette2-320 | false | | | 0 | 0 |
peandrew/dialy_dialogue_with_recoginized_concept_raw | false | | | 40 | 0 |
sayanhf22/amazonfashionmeta | false | | | 0 | 0 |
bananabot/engMollywoodSummaries | false | license:wtfpl | | 0 | 1 |
ufukhaman/uspto_balanced_200k_ipc_classification | false | license:mit | | 2 | 0 |
vinaykudari/acled-ie-actors | false | | | 0 | 0 |
nguyenvulebinh/fsd50k | false | license:cc-by-4.0 | | 0 | 0 |
pile-of-law/eoir_privacy | false | task_categories:text-classification, language_creators:found, multilinguality:monolingual, language:en, license:cc-by-nc-sa-4.0, arxiv:2207.00220 | A living legal dataset. | 17 | 3 |
lilitket/voxlingua107 | false | license:apache-2.0 | | 0 | 0 |
kejian/pile-severetoxic-chunk-0 | false | | | 0 | 0 |
kejian/pile-severetoxic-balanced | false | | | 0 | 0 |
avacaondata/covidqa_translated-intermediate | false | | | 0 | 0 |
avacaondata/covidqa_translated | false | | | 0 | 0 |
strombergnlp/rustance | false | task_categories:text-classification, task_ids:fact-checking, task_ids:sentiment-classification, annotations_creators:expert-generated, language_creators:found, multilinguality:monolingual, size_categories:n<1K, source_datasets:original, language:ru, license:cc-by-4.0, stance-detection, arxiv:1809.01574 | This is a stance prediction dataset in Russian. It contains comments on news articles; each row holds a comment, the title of the news article it responds to, and the stance of the comment towards the article. | 1 | 1 |
FelixE/t4d01 | false | | | 0 | 0 |
deepakvk/conversational_dialogues_001 | false | | | 0 | 0 |
Fhrozen/AudioSet2K22 | false | task_categories:audio-classification, annotations_creators:unknown, language_creators:unknown, size_categories:100K<n<100M, source_datasets:unknown, license:cc-by-sa-4.0, audio-slot-filling | | 9 | 1 |
Maddy132/bottles | false | license:afl-3.0 | | 1 | 0 |
LukeSajkowski/github-issues | false | | | 0 | 0 |
ccdv/WCEP-10 | false | task_categories:summarization, task_categories:text2text-generation, multilinguality:monolingual, size_categories:1K<n<10K, language:en, conditional-text-generation, arxiv:2005.10070, arxiv:2110.08499 | WCEP-10 dataset for summarization, from the papers "A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal" by D. Gholipour et al. and "PRIMER: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization" by W. Xiao et al. | 28 | 2 |
HugoLaurencon/libri_light | false | | Libri-light is a large dataset of 60K hours of unlabelled speech from audiobooks in English. It is a benchmark for the training of automatic speech recognition (ASR) systems with limited or no supervision. | 0 | 1 |
IljaSamoilov/ERR-transcription-to-subtitles | false | license:afl-3.0 | | 0 | 0 |
mmillet/copy | false | license:other | | 0 | 0 |
benyang123/code | false | | | 0 | 0 |
domischwimmbeck/germdata | false | | | 0 | 0 |
jerpint/imagenette | false | | Imagenette is a subset of 10 easily classified classes from Imagenet (tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, parachute). 'Imagenette' is pronounced just like 'Imagenet', except with a corny inauthentic French accent. If you've seen Peter Sellers in The Pink Panther, then think something like that. It's important to ham up the accent as much as possible, otherwise people might not be sure whether you're referring to 'Imagenette' or 'Imagenet'. (Note to native French speakers: to avoid confusion, be sure to use a corny inauthentic American accent when saying 'Imagenet'. Think something like the philosophy restaurant skit from Monty Python's The Meaning of Life.) This version of the dataset allows researchers/practitioners to quickly try out ideas and share with others. The dataset comes in three variants: full size, 320 px, and 160 px; the '320 px' and '160 px' versions have their shortest side resized to that size, with their aspect ratio maintained. Too easy for you? In that case, you might want to try Imagewoof, a subset of 10 classes from Imagenet that aren't so easy to classify, since they're all dog breeds: Australian terrier, Border terrier, Samoyed, Beagle, Shih-Tzu, English foxhound, Rhodesian ridgeback, Dingo, Golden retriever, Old English sheepdog. (No, we will not enter into any discussion of whether a dingo is in fact a dog. Any suggestion to the contrary is un-Australian. Thank you for your cooperation.) Full-size, 320 px, and 160 px downloads are available. | 1 | 0 |
dianatixi/NoticiasEcuador | false | | | 0 | 0 |
Pengfei/test22 | false | | | 0 | 0 |
Eigen/twttone | false | | | 0 | 0 |
milesbutler/consumer_complaints | false | license:mit | | 8 | 0 |
CEBaB/CEBaB | false | | | 1,607 | 5 |
domenicrosati/QA2D | false | task_categories:text2text-generation, task_ids:text-simplification, annotations_creators:machine-generated, annotations_creators:crowdsourced, annotations_creators:found, language_creators:machine-generated, language_creators:crowdsourced, multilinguality:monolingual, size_categories:10K<n<100K, source_datasets:original, source_datasets:extended\|squad, source_datasets:extended\|race, source_datasets:extended\|newsqa, source_datasets:extended\|qamr, source_datasets:extended\|movieQA, license:mit, arxiv:1809.02922 | | 2 | 1 |
mdroth/github_issues_300 | false | | | 0 | 0 |
fancyerii/github-issues | false | | | 0 | 0 |
DFKI-SLT/brat | false | task_categories:token-classification, task_ids:parsing, annotations_creators:expert-generated, language_creators:found | | 0 | 1 |
kejian/pile-severetoxic-random100k | false | | | 0 | 0 |
kejian/pile-severetoxic-balanced2 | false | | | 0 | 0 |
Pavithree/eli5_55 | false | | | 0 | 0 |
SberDevices/Golos | false | arxiv:1910.10261, arxiv:2106.10161 | | 0 | 2 |
drAbreu/sd-nlp-2 | false | task_categories:text-classification, task_ids:multi-class-classification, task_ids:named-entity-recognition, task_ids:parsing, annotations_creators:expert-generated, language_creators:expert-generated, multilinguality:monolingual, size_categories:10K<n<100K, language:en, license:cc-by-4.0 | This dataset is based on the SourceData database and is intended to facilitate training of NLP tasks in the cell and molecular biology domain. | 0 | 0 |
laugustyniak/political-advertising-pl | false | task_categories:token-classification, task_ids:named-entity-recognition, task_ids:part-of-speech, annotations_creators:hired_annotators, language_creators:found, multilinguality:monolingual, size_categories:10<n<10K, language:pl, license:other | | 0 | 1 |
mteb/raw_arxiv | false | language:en | | 4 | 0 |
strombergnlp/offenseval_2020 | false | task_categories:text-classification, task_ids:hate-speech-detection, annotations_creators:expert-generated, language_creators:found, multilinguality:multilingual, size_categories:10K<n<100K, source_datasets:original, arxiv:2006.07235, arxiv:2004.02192, arxiv:1908.04531, arxiv:2004.14454, arxiv:2003.07459 | OffensEval 2020 features a multilingual dataset with five languages: Arabic, Danish, English, Greek, and Turkish. The annotation follows the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019, which breaks offensive content down by type and target across three sub-tasks: Sub-task A, offensive language identification; Sub-task B, automatic categorization of offense types; Sub-task C, offense target identification. The English training data isn't included here, since the text isn't available and needs rehydration of 9 million tweets; see [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp). | 0 | 1 |
MilaNLProc/honest | false | task_categories:text-classification, task_ids:hate-speech-detection, annotations_creators:no-annotation, language_creators:expert-generated, multilinguality:multilingual, size_categories:n<1K, source_datasets:original, license:mit | The HONEST dataset comprises a set of templates for measuring hurtful sentence completions in language models. The templates are provided in six languages (English, Italian, French, Portuguese, Romanian, and Spanish) for binary gender, and in English for LGBTQAI+ individuals. WARNING: this dataset contains content that is offensive and/or hateful in nature. | 299 | 3 |
Maddy132/sample1 | false | | | 0 | 0 |
laurens88/laurensthesis | false | | | 0 | 0 |
Pavithree/askHistorians_55 | false | | | 0 | 0 |
mteb/arxiv-clustering-s2s | false | language:en | | 198 | 0 |
Bolishetti/dataset | false | | | 0 | 0 |
Pavithree/askScience_55 | false | | | 0 | 0 |
Pavithree/eli5Split_55 | false | | | 0 | 0 |
mteb/arxiv-clustering-p2p | false | language:en | | 128 | 0 |
LukeSajkowski/github-issues-embeddings | false | | | 0 | 0 |
tomekkorbak/cursing-debugging | false | | | 0 | 0 |
mteb/raw_biorxiv | false | language:en | | 0 | 0 |
mteb/raw_medrxiv | false | language:en | | 5 | 0 |
facebook/voxpopuli | false | task_categories:automatic-speech-recognition, multilinguality:multilingual, language:en, language:de, language:fr, language:es, language:pl, language:it, language:ro, language:hu, language:cs, language:nl, language:fi, language:hr, language:sk, language:sl, language:et, language:lt, license:cc0-1.0, license:other, arxiv:2101.00390 | A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. | 579 | 9 |
leo19941227/g2p | false | license:apache-2.0 | | 0 | 0 |
allenai/mup | false | license:odc-by | | 154 | 1 |
s3prl/g2p | false | license:apache-2.0 | | 0 | 0 |
Leyo/TGIF | false | task_categories:question-answering, task_categories:visual-question-answering, task_ids:closed-domain-qa, annotations_creators:expert-generated, language_creators:crowdsourced, multilinguality:monolingual, size_categories:100K<n<1M, source_datasets:original, language:en, license:other, arxiv:1604.02748 | The Tumblr GIF (TGIF) dataset contains 100K animated GIFs and 120K sentences describing the visual content of the animated GIFs. The GIFs were collected from Tumblr, from randomly selected posts published between May and June of 2015; this release provides their URLs. The sentences were collected via crowdsourcing, with a carefully designed annotation interface that ensures a high-quality dataset. One sentence is provided per animated GIF for the training and validation splits, and three sentences per GIF for the test split. The dataset shall be used to evaluate animated GIF/video description techniques. | 0 | 1 |
anuragshas/bn_opus100_processed | false | | | 0 | 0 |
bigscience/bloom-book-prompts | false | | | 0 | 1 |
HFFErica/labelled | false | | | 0 | 0 |
anuragshas/ur_opus100_processed_cv9 | false | | | 0 | 0 |
strombergnlp/nordic_langid | false | task_categories:text-classification, annotations_creators:found, language_creators:found, multilinguality:multilingual, size_categories:100K<n<1M, source_datasets:original, language:da, language:nn, language:nb, language:fo, language:is, language:sv, license:cc-by-sa-3.0, language-identification | Automatic language identification is a challenging problem, and discriminating between closely related languages is especially difficult. This dataset supports a machine learning approach to automatic language identification for the Nordic languages, which often suffer miscategorisation by existing state-of-the-art tools; concretely, the focus is on discriminating between six Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål), Faroese and Icelandic. Two variants are provided, 10K and 50K, holding 10,000 and 50,000 examples for each language respectively. | 0 | 3 |
alisaqallah/emotion-with-length | false | | | 4 | 0 |
HuggingFaceM4/howto100m | false | | HowTo100M is a large-scale dataset of narrated videos with an emphasis on instructional videos where content creators teach complex tasks with an explicit intention of explaining the visual content on screen. It features 136M video clips with captions sourced from 1.2M YouTube videos (15 years of video), covering 23k activities from domains such as cooking, hand crafting, personal care, gardening or fitness. Each video is associated with a narration available as subtitles automatically downloaded from YouTube. | 2 | 0 |
bigscience/collaborative_catalog | false | license:cc-by-4.0 | | 302 | 0 |
lk2/lk3 | false | license:afl-3.0 | | 0 | 0 |
FollishBoi/autotrain-data-tpsmay22 | false | | | 0 | 0 |
eduardopds/github-issues | false | | | 0 | 0 |
HuggingFaceM4/epic_kitchens_100 | false | license:cc-by-nc-4.0 | EPIC-KITCHENS-100 is a large-scale dataset in first-person (egocentric) vision: multi-faceted, audio-visual, non-scripted recordings in native environments, i.e. the wearers' homes, capturing all daily activities in the kitchen over multiple days. Annotations are collected using a novel 'Pause-and-Talk' narration interface. EPIC-KITCHENS-100 extends the EPIC-KITCHENS dataset released in 2018 to 100 hours of footage. | 1 | 0 |
avacaondata/bioasq22-es | false | | | 0 | 0 |
enoriega/odinsynth_dataset | false | | Supervised training data for odinsynth. | 0 | 0 |
enoriega/odinsynth_temp | false | | | 0 | 0 |
YYan/csnc_retrieval | false | license:other | | 0 | 0 |
manirai91/yt-nepali-movie-reviews | false | license:apache-2.0 | | 0 | 0 |
NbAiLab/NST_hesitate | false | | This database was created by Nordic Language Technology for the development of automatic speech recognition and dictation in Norwegian. In this version, the organization of the data has been altered to improve the usefulness of the database. The acoustic databases were developed by the firm Nordisk språkteknologi holding AS (NST), which went bankrupt in 2003. In 2006, a consortium consisting of the University of Oslo, the University of Bergen, the Norwegian University of Science and Technology, the Norwegian Language Council and IBM bought the bankruptcy estate of NST in order to ensure that the language resources developed by NST were preserved. In 2009, the Norwegian Ministry of Culture charged the National Library of Norway with the task of creating a Norwegian language bank, which they initiated in 2010. The resources from NST were transferred to the National Library in May 2011 and are now made available in Språkbanken, for the time being without any further modification. Språkbanken is open for feedback from users about how the resources can be improved, and we are also interested in improved versions of the databases that users wish to share with other users. Please send responses and feedback to [email protected]. | 0 | 0 |
mteb/biorxiv-clustering-s2s | false | language:en | | 1,719 | 0 |
mteb/biorxiv-clustering-p2p | false | language:en | | 86 | 0 |
mteb/medrxiv-clustering-s2s | false | language:en | | 1,835 | 0 |
mteb/medrxiv-clustering-p2p | false | language:en | | 84 | 0 |
HuggingFaceM4/charades | false | task_categories:other, annotations_creators:crowdsourced, language_creators:crowdsourced, multilinguality:monolingual, size_categories:1K<n<10K, source_datasets:original, language:en, license:other, arxiv:1604.01753 | Charades is a dataset composed of 9,848 videos of daily indoor activities collected through Amazon Mechanical Turk. 267 different users were each presented with a sentence that includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence (as in a game of Charades). The dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. | 21 | 0 |
RuiqianLi/Li_singlish | false | license:apache-2.0 | This is a public-domain speech dataset consisting of 3,579 short audio clips of Singlish. | 0 | 0 |
mteb/stackexchange-clustering-p2p | false | language:en | | 106 | 0 |
pere/italian_tweets_500k | false | | Italian tweets. | 0 | 0 |
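
A listing with the same columns can be regenerated with the Hugging Face Hub client, and any `id` from the table can be passed to `datasets.load_dataset`. The snippet below is a minimal sketch, not the pipeline that produced this table; the `author` filter and the exact `DatasetInfo` fields shown are assumptions that may vary across `huggingface_hub` versions, and some datasets also require a configuration name (see their dataset cards).

```python
# Minimal sketch: list Hub datasets with the columns used above
# (id, private, tags, downloads, likes), then load one by id.
# Assumes `pip install huggingface_hub datasets`; DatasetInfo field
# availability may differ between huggingface_hub versions.
from huggingface_hub import list_datasets
from datasets import load_dataset

for info in list_datasets(author="mteb", limit=5):
    print(info.id, info.private, info.tags, info.downloads, info.likes)

# Any id from the table can be passed to load_dataset; some datasets
# also require a configuration name (check the dataset card first).
ds = load_dataset("mteb/biorxiv-clustering-s2s")
print(ds)
```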