id | private | tags | description | downloads | likes |
---|---|---|---|---|---|
ruanchaves/test_stanford | false | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:1501.03210"
] | Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al. | 2 | 0 |
batterydata/paper-abstracts | false | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0"
] | null | 3 | 0 |
Davis/Swahili-tweet-sentiment | false | [
"license:mit"
] | null | 3 | 2 |
codyburker/yelp_review_sampled | false | [] | null | 175 | 0 |
OrfeasTsk/TriviaQA | false | [] | null | 0 | 0 |
ruanchaves/nru_hse | false | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:ru",
"license:unknown",
"word-segmentation",
"arxiv:1911.03270"
] | 2000 real hashtags collected from several pages about civil services on vk.com (a Russian social network)
and then segmented manually. | 0 | 0 |
ruanchaves/loyola | false | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation"
] | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier. | 0 | 0 |
AhmedSSoliman/QRCD | false | [] | null | 0 | 0 |
mbartolo/synQA | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:1606.05250"
] | SynQA is a Reading Comprehension dataset created in the work "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (https://aclanthology.org/2021.emnlp-main.696/).
It consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (https://arxiv.org/abs/1606.05250) training set.
In this work, we use synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, and finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (https://adversarialqa.github.io/) dataset by 3.7 F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
For full details on how the dataset was created, kindly refer to the paper. | 1 | 1 |
Paulosdeanllons/sedar | false | [
"license:afl-3.0"
] | null | 0 | 0 |
ruanchaves/bt11 | false | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation"
] | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
BT11 is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier. | 0 | 0 |
ruanchaves/binkley | false | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation"
] | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Binkley is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier. | 0 | 0 |
ruanchaves/jhotdraw | false | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation"
] | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Jhotdraw is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier. | 0 | 0 |
ruanchaves/lynx | false | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation"
] | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Lynx is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier. | 0 | 0 |
ruanchaves/snap | false | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation"
] | Automatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper "Segmenting hashtags using automatically created training data". | 2 | 1 |
crystina-z/nocs-mrtydi | false | [] | null | 0 | 0 |
crystina-z/nocs-mrtydi-corpus | false | [] | null | 0 | 0 |
rocca/emojis | false | [] | null | 1 | 0 |
Siyam/mydata | false | [] | null | 0 | 0 |
flxclxc/english-norwegian-bible-set | false | [] | null | 0 | 0 |
Carlisle/msmarco-passage-non-abs | false | [
"license:mit"
] | null | 2 | 0 |
Carlisle/msmarco-passage-abs | false | [
"license:mit"
] | null | 2 | 0 |
wypoon/github-issues | false | [] | null | 0 | 0 |
gustavecortal/fr_covid_news | false | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-label-classification",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:fr",
"license:unknown"
] | null | 4 | 1 |
teven/hal_tests | false | [] | null | 0 | 0 |
Cyberfish/multiwoz2.1 | false | [] | null | 0 | 0 |
Jiejie/asr_book_lm_v1.1 | false | [] | null | 0 | 0 |
mateiut1/sv_corpora_parliament_processed | false | [] | null | 0 | 0 |
jackwyndham/gloru | false | [] | null | 0 | 0 |
flxclxc/en-no-semantic-search-set | false | [] | null | 0 | 0 |
m-newhauser/senator-tweets | false | [] | null | 5 | 0 |
FinScience/FS-distilroberta-fine-tuned | false | [
"language:en"
] | null | 0 | 0 |
flxclxc/en_no_with_embeddings | false | [] | null | 0 | 0 |
Carlisle/msmacro-test | false | [
"license:mit"
] | null | 2 | 0 |
Carlisle/msmacro-passage-non-abs-small | false | [
"license:mit"
] | null | 2 | 0 |
flxclxc/en_no_with_embeddings2 | false | [] | null | 0 | 0 |
Carlisle/msmacro-test-corpus | false | [
"license:mit"
] | null | 2 | 0 |
msollami-sf/processed_mnist | false | [] | null | 0 | 0 |
Sunghun/1 | false | [] | null | 0 | 0 |
Sunghun/Example1 | false | [] | null | 0 | 0 |
Sunghun/Example2 | false | [] | null | 0 | 0 |
pensieves/mimicause | false | [
"license:apache-2.0",
"arxiv:2110.07090"
] | MIMICause Dataset: A dataset for representation and automatic extraction of causal relation types from clinical notes.
The dataset has 2714 samples with both explicit and implicit causality, in which entities are in the same sentence or in different sentences.
The dataset has the following nine semantic causal relations (with directionality) between entities E1 and E2 in a text snippet:
(1) Cause(E1,E2)
(2) Cause(E2,E1)
(3) Enable(E1,E2)
(4) Enable(E2,E1)
(5) Prevent(E1,E2)
(6) Prevent(E2,E1)
(7) Hinder(E1,E2)
(8) Hinder(E2,E1)
(9) Other | 0 | 2 |
ayberkuckun/hu_corpora_parliament_processed | false | [] | null | 0 | 0 |
chiarab/tweet-text-full | false | [] | null | 1 | 0 |
chiarab/tweets-dict | false | [] | null | 0 | 0 |
helloway/data-test | false | [] | null | 0 | 0 |
FanFan/sentiment-amazon-test | false | [] | null | 0 | 0 |
nielsr/rvl-cdip-demo | false | [] | null | 11 | 0 |
z-uo/qasper-squad | false | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en"
] | null | 1 | 0 |
lhoestq/test_none_image | false | [] | null | 0 | 0 |
jquiros/suicide | false | [] | null | 4 | 2 |
Noricum/der_standard_processed | false | [] | null | 0 | 0 |
nielsr/rvlcdip-demo | false | [] | null | 115 | 0 |
davanstrien/iiif_biblissima_w_image | false | [] | null | 0 | 0 |
shpotes/bosch-small-traffic-lights-dataset | false | [
"license:other"
] | This dataset contains 13427 camera images at a resolution of 1280x720 pixels with about
24000 annotated traffic lights. The annotations include bounding boxes of traffic lights as well
as the current state (active light) of each traffic light. The camera images are provided both as raw
12-bit HDR images taken with a red-clear-clear-blue filter and as reconstructed 8-bit RGB color
images. The RGB images are provided for debugging and can also be used for training. However, the
RGB conversion process has some drawbacks: some of the converted images may contain artifacts and
the color distribution may seem unusual. | 2 | 3 |
Huseyin/tummul | false | [] | null | 0 | 0 |
Marianina/Example2 | false | [] | null | 0 | 0 |
Carlosholivan/base | false | [
"license:apache-2.0"
] | null | 0 | 0 |
Marianina/sentiment-banking | false | [] | null | 0 | 0 |
franz96521/BilletesMexico | false | [] | null | 0 | 0 |
crystina-z/no-nonself-mrtydi | false | [] | null | 0 | 0 |
jquiros/clean | false | [] | null | 0 | 0 |
SocialGrep/the-antiwork-subreddit-dataset | false | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0"
] | This dataset follows the notorious subreddit /r/Antiwork, a place for many Redditors to share resources and discuss grievances with the current labour market. | 0 | 1 |
laion/laion2B-en | false | [
"license:cc-by-4.0"
] | null | 642 | 96 |
christianloyal/loyal_clinc_MLE | false | [
"license:mit"
] | null | 2 | 0 |
crystina-z/no-nonself-mrtydi-corpus | false | [] | null | 0 | 0 |
laion/laion2B-multi | false | [
"license:cc-by-4.0"
] | null | 19 | 26 |
chiarab/jan-2021-unlabeled-full | false | [] | null | 0 | 0 |
chiarab/combined-train | false | [] | null | 0 | 0 |
chiarab/may-2020-unlabeled-full | false | [] | null | 0 | 0 |
hadehuang/testdataset | false | [] | null | 0 | 0 |
abdusah/adi5 | false | [] | null | 1 | 0 |
nortizf/risk_multilabel | false | [] | null | 2 | 0 |
khcy82dyc/zzzz | false | [
"license:apache-2.0"
] | null | 0 | 0 |
ai4bharat/IndicParaphrase | false | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437"
] | This is the paraphrasing dataset released as part of IndicNLG Suite. Each
input is paired with up to 5 references. We create this dataset in eleven
languages, namely as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total
size of the dataset is 5.57M. | 31 | 1 |
albertvillanova/wikipedia | false | [] | null | 0 | 1 |
rubrix/sst2_with_predictions | false | [] | null | 0 | 1 |
nthngdy/oscar-mini | false | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:pnb",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sa",
"language:sah",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:yi",
"language:zh",
"license:cc0-1.0",
"arxiv:2010.14571"
] | The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. | 7,196 | 1 |
laion/laion1B-nolang | false | [
"license:cc-by-4.0"
] | null | 3 | 3 |
ia-bentebib/tweet_eval_sentiment_fr | false | [] | null | 0 | 0 |
drAbreu/bc4chemd_ner | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:GitHub",
"language:en",
"license:unknown"
] | The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/ | 44 | 1 |
fuliucansheng/unitorch-datasets | false | [] | null | 0 | 0 |
Non-Residual-Prompting/C2Gen | false | [
"task_categories:text-generation",
"size_categories:<100K",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1911.03705"
] | The task of C2Gen is to generate commonsensical text that includes the given words while also adhering to the given context. | 2 | 0 |
FanFan/sentiment-amazon-clean | false | [] | null | 0 | 0 |
davanstrien/iiif_manuscripts_label_ge_100 | false | [] | null | 0 | 0 |
christianloyal/loyal_clinc_MLE_unlabeled | false | [] | null | 0 | 0 |
CLUTRR/v1 | false | [
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:unknown",
"arxiv:1908.06177"
] | CLUTRR (Compositional Language Understanding and Text-based Relational Reasoning),
a diagnostic benchmark suite, was first introduced in (https://arxiv.org/abs/1908.06177)
to test the systematic generalization and inductive reasoning capabilities of NLU systems. | 31 | 2 |
damlab/uniprot | false | [] | null | 13 | 2 |
agemagician/u50_test | false | [] | null | 0 | 0 |
chiarab/vaccine-sentiment-clean | false | [] | null | 0 | 0 |
chiarab/vaccine-sentiment-clean-2 | false | [] | null | 0 | 0 |
juched/spotifinders | false | [] | null | 0 | 0 |
ArnavL/TWTEval-Pretraining-Processed | false | [] | null | 0 | 0 |
juched/spotifinders-dataset | false | [
"license:mit"
] | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | 2 | 0 |
PaddlePaddle/dureader_robust | false | [
"license:apache-2.0"
] | DureaderRobust is a Chinese reading comprehension dataset, designed to evaluate MRC models from three aspects: over-sensitivity, over-stability and generalization. | 461 | 0 |
kyleinincubated/autonlp-data-cat33 | false | [
"task_categories:text-classification",
"language:zh"
] | null | 0 | 0 |
Georgii/poetry-genre | false | [] | null | 0 | 1 |
joangaes/depression | false | [] | null | 0 | 0 |
ai4bharat/IndicHeadlineGeneration | false | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:27K<n<341K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437"
] | This is the new headline generation dataset released as part of IndicNLG Suite. Each
input document is paired with an output title. We create this dataset in eleven
languages, namely as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total
size of the dataset is 1.43M. | 42 | 0 |
ai4bharat/IndicSentenceSummarization | false | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:5K<n<112K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437"
] | This is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages, namely as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total
size of the dataset is 431K. | 21 | 0 |
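Each single-line row above carries six pipe-separated fields: id, private, tags (a JSON list), description (`null` when absent), downloads, and likes. A minimal parsing sketch for such single-line rows (the `parse_row` helper is hypothetical, and multi-line rows whose tag lists span several lines would need joining first):

```python
import json

def parse_row(row: str) -> dict:
    """Parse one single-line listing row into its six fields.

    Assumes ' | ' separators, a trailing '|', and that neither the
    description nor the tags contain a literal pipe character.
    """
    parts = [p.strip() for p in row.rstrip("|").split("|")]
    dataset_id, private, tags, description, downloads, likes = parts
    return {
        "id": dataset_id,
        "private": private == "true",          # column is a lowercase bool
        "tags": json.loads(tags),              # tags are a JSON list
        "description": None if description == "null" else description,
        "downloads": int(downloads),
        "likes": int(likes),
    }

row = "codyburker/yelp_review_sampled | false | [] | null | 175 | 0 |"
print(parse_row(row))
```

This keeps the row format self-describing: booleans and counts become native types, and the JSON tag list parses directly with the standard library.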