id | private | tags | description | downloads | likes |
---|---|---|---|---|---|
mrojas/disease | false | [] | \ | 0 | 0 |
mrojas/family | false | [] | \ | 0 | 0 |
mrojas/finding | false | [] | \ | 0 | 0 |
mrojas/medication | false | [] | \ | 0 | 0 |
mrojas/procedure | false | [] | \ | 0 | 0 |
mrp/Thai-Semantic-Textual-Similarity-Benchmark | false | [] | null | 0 | 0 |
msarmi9/korean-english-multitarget-ted-talks-task | false | [
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:translation",
"multilinguality:multilingual",
"language:en",
"language:ko",
"license:cc-by-nc-nd-4.0"
] | null | 21 | 0 |
msivanes/github-issues | false | [] | null | 0 | 0 |
mswedrowski/multiwiki_90k | false | [] | null | 0 | 0 |
mtfelix/datasetdemo | false | [] | null | 0 | 0 |
mtlew/0001_Angry_test | false | [] | null | 0 | 0 |
muhtasham/autonlp-data-Doctor_DE | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"language:de"
] | null | 0 | 0 |
mulcyber/europarl-mono | false | [] | Europarl Monolingual Dataset.
The Europarl parallel corpus is extracted from the proceedings of the
European Parliament (from 2000 to 2011). It includes versions in 21 European
languages: Romance (French, Italian, Spanish, Portuguese, Romanian),
Germanic (English, Dutch, German, Danish, Swedish), Slavic (Bulgarian,
Czech, Polish, Slovak, Slovene), Finno-Ugric (Finnish, Hungarian, Estonian),
Baltic (Latvian, Lithuanian), and Greek.
Upstream URL: https://www.statmt.org/europarl/ | 3 | 0 |
indonesian-nlp/mc4-id | false | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended",
"language:id",
"license:odc-by",
"arxiv:1910.10683"
] | A thoroughly cleaned version of the Indonesian portion of the multilingual
colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning
detailed in the repository README file. | 0 | 1 |
mustafa12/db_ee | false | [] | null | 0 | 0 |
mustafa12/edaaaas | false | [] | null | 0 | 0 |
mustafa12/thors | false | [] | null | 0 | 0 |
mvarma/medwiki | false | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|wikipedia",
"language:en-US",
"language:en",
"license:cc-by-4.0",
"arxiv:2110.08228"
] | MedWiki is a large-scale sentence dataset collected from Wikipedia with medical entity (UMLS) annotations. This dataset is intended for pretraining. | 21 | 1 |
mvip/tr_corpora_parliament_processed | false | [] | null | 0 | 0 |
mvip/tr_corpora_parliament_processed_non_hatted | false | [] | null | 0 | 0 |
nateraw/auto-cats-and-dogs | false | [
"task_categories:other",
"auto-generated",
"image-classification"
] | null | 2 | 0 |
nateraw/auto-exp-2 | false | [
"task_categories:other",
"auto-generated",
"image-classification"
] | null | 0 | 0 |
nateraw/beans | false | [
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit"
] | Beans is a dataset of images of beans taken in the field using smartphone
cameras. It consists of 3 classes: 2 disease classes and the healthy class.
Diseases depicted include Angular Leaf Spot and Bean Rust. Data was annotated
by experts from the National Crops Resources Research Institute (NaCRRI) in
Uganda and collected by the Makerere AI research lab. | 0 | 0 |
nateraw/beans_old | false | [] | null | 0 | 0 |
nateraw/blahblah | false | [] | null | 0 | 0 |
nateraw/bulk-dummy | false | [] | null | 0 | 0 |
nateraw/cats-and-dogs | false | [] | null | 8 | 0 |
nateraw/cats_vs_dogs | false | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown"
] | null | 0 | 0 |
nateraw/dummy-csv-dataset | false | [] | null | 0 | 0 |
nateraw/filings-10k | false | [] | null | 0 | 0 |
nateraw/food101 | false | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-foodspotting",
"language:en",
"license:unknown"
] | null | 4 | 1 |
nateraw/food101_old | false | [
"task_categories:other",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"source_datasets:extended|other-foodspotting",
"license:unknown"
] | null | 0 | 0 |
nateraw/huggingpics-data-2 | false | [] | null | 0 | 0 |
nateraw/huggingpics-data | false | [] | null | 0 | 0 |
nateraw/image-folder | false | [] | null | 30 | 0 |
nateraw/imagefolder | false | [] | null | 4 | 1 |
nateraw/imagenette | false | [] | Imagenette is a subset of 10 easily classified classes from the Imagenet
dataset. It was originally prepared by Jeremy Howard of FastAI. The objective
behind putting together a small version of the Imagenet dataset was mainly
that running new ideas/algorithms/experiments on the whole of Imagenet takes a
lot of time.
This version of the dataset allows researchers/practitioners to quickly try out
ideas and share with others. The dataset comes in three variants:
* Full size
* 320 px
* 160 px
Note: The v2 config corresponds to the new 70/30 train/valid split (released
on Dec 6, 2019). | 0 | 2 |
nateraw/img-demo | false | [] | null | 0 | 0 |
nateraw/punks | false | [] | null | 0 | 0 |
nateraw/rock_paper_scissors | false | [] | null | 0 | 0 |
nateraw/sync_food101 | false | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-foodspotting",
"language:en",
"license:unknown"
] | null | 0 | 0 |
nateraw/test | false | [] | null | 0 | 0 |
nateraw/wit | false | [] | The Wikipedia-based Image Text (WIT) Dataset is a large multimodal, multilingual dataset. WIT is composed of a curated set
of 37.6 million entity-rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its
size enables WIT to be used as a pretraining dataset for multimodal machine learning models. | 0 | 0 |
nathanlsl/news | false | [] | null | 0 | 0 |
naver-clova-conversation/klue-tc-dev-tsv | false | [] | null | 0 | 0 |
naver-clova-conversation/klue-tc-tsv | false | [] | null | 2 | 0 |
navjordj/nak_nb | false | [] | null | 0 | 0 |
ncats/EpiSet4BinaryClassification | false | [
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:unknown",
"language:en",
"license:cc-by-4.0"
] | INSERT DESCRIPTION | 3 | 0 |
ncats/EpiSet4NER-v1 | false | [
"task_ids:named-entity-recognition",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:other"
] | **REWRITE**
EpiSet4NER is a dataset generated from 620 rare disease abstracts labeled using statistical and rule-based methods. The test set was then manually corrected by a rare disease expert.
For more details see *INSERT PAPER* and https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard | 0 | 1 |
ncats/GARD_EpiSet4TextClassification | false | [] | INSERT DESCRIPTION | 0 | 0 |
ncduy/github-issues | false | [] | null | 0 | 0 |
ncduy/mt-en-vi | false | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:own",
"source_datasets:open_subtitles",
"source_datasets:tatoeba",
"source_datasets:opus_tedtalks",
"source_datasets:qed_amara",
"source_datasets:opus_wikipedia",
"language:en",
"language:vi",
"license:mit"
] | null | 23 | 1 |
ncoop57/athena_data | false | [] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 0 | 0 |
ncoop57/csnc_human_judgement | false | [] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 0 | 0 |
ncoop57/rico_captions | false | [] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 0 | 1 |
neelalex/raft-predictions | false | [
"benchmark:raft"
] | This dataset contains a corpus of AI papers. The first task is to determine whether or not a datapoint is an AI safety paper. The second task is to determine what type of paper it is. | 0 | 1 |
nferruz/UR50_2021_04 | false | [
"size_categories:unknown"
] | null | 3 | 1 |
ngdiana/hu_severity | false | [] | null | 1 | 0 |
ngdiana/uaspeech | false | [] | null | 1 | 0 |
ngdiana/uaspeech_severity | false | [] | null | 0 | 0 |
ngdiana/uaspeech_severity_high | false | [] | null | 1 | 0 |
ngdiana/uaspeech_severity_low | false | [] | null | 0 | 0 |
nickmuchi/financial-classification | false | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:1K<n<10K",
"language:en",
"finance"
] | null | 20 | 4 |
nickmuchi/trade-the-event-finance | false | [] | null | 2 | 1 |
nid989/FNC-1 | false | [] | null | 39 | 2 |
nielsr/FUNSD_layoutlmv2 | false | [
"language:en",
"arxiv:1905.13538"
] | https://guillaumejaume.github.io/FUNSD/ | 408 | 3 |
nielsr/XFUN | false | [] | null | 113 | 3 |
nielsr/funsd | false | [] | https://guillaumejaume.github.io/FUNSD/ | 20,953 | 5 |
nlpconnect/dpr-nq-reader-v2 | false | [] | null | 0 | 0 |
nlpconnect/dpr-nq-reader | false | [] | null | 0 | 0 |
nlpconnect/ms_marco_subset_v2.1 | false | [] | null | 0 | 0 |
nlpufg/brwac-pt | false | [] | null | 0 | 0 |
nlpufg/brwac | false | [] | null | 0 | 0 |
nlpufg/oscar-pt | false | [] | null | 0 | 0 |
nlpyeditepe/tr-qnli | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"language:tr-TR",
"license:mit"
] | null | 0 | 0 |
nlpyeditepe/tr_rte | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"language:tr-TR",
"license:mit"
] | null | 0 | 0 |
nntadotzip/iuQAchatbot | false | [] | null | 0 | 0 |
notional/notional-python | false | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:py",
"license:unknown"
] | null | 5 | 1 |
nouamanetazi/ar_common_voice_processed | false | [] | null | 0 | 0 |
nouamanetazi/ar_opus100_processed | false | [] | null | 0 | 0 |
ntagg/data1 | false | [] | null | 0 | 0 |
nthngdy/bananas | false | [] | null | 1 | 0 |
nthngdy/ccnews_split | false | [] | CC-News contains news articles from news sites all over the world. The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/. This version of the dataset has 708,241 articles. It represents a small portion of the English-language subset of the CC-News dataset, created using news-please (Hamborg et al., 2017) to collect and extract the English-language portion of CC-News. | 3 | 0 |
nthngdy/openwebtext_split | false | [] | An open-source replication of the WebText dataset from OpenAI. | 0 | 0 |
ntutexas/amazon | false | [] | null | 0 | 0 |
nucklehead/ht-voice-dataset | false | [] | null | 0 | 0 |
nykodmar/cs_corpora_parliament_processed | false | [] | null | 0 | 0 |
oelkrise/CRT | false | [] | null | 0 | 0 |
omar-sharif/BAD-Bengali-Aggressive-Text-Dataset | false | [] | null | 0 | 1 |
openclimatefix/eumetsat_uk_hrv | false | [] | The EUMETSAT Spinning Enhanced Visible and InfraRed Imager (SEVIRI) rapid scanning service (RSS) takes an image of the northern third of the Meteosat disc every five minutes (see the EUMETSAT website for more information on SEVIRI RSS). The original EUMETSAT dataset contains data from 2008 to the present day across 12 channels, covering a wide geographical extent: North Africa, Saudi Arabia, all of Europe, and Western Russia. In contrast, this dataset on Google Cloud is a small subset of the full SEVIRI RSS dataset: it contains a single channel, the "high resolution visible" (HRV) channel, and spans January 2020 to November 2021. Its geographical extent is likewise a small subset of the total SEVIRI RSS extent, covering the United Kingdom and North Western Europe.
This dataset is slightly transformed: It does not contain the original numerical values.
The original data is copyright EUMETSAT. EUMETSAT has given permission to redistribute this transformed data. The data was transformed by Open Climate Fix using satip.
This public dataset is hosted in Google Cloud Storage and available free to use. | 3 | 1 |
openclimatefix/gfs | false | [
"license:mit"
] | null | 2 | 0 |
openclimatefix/goes-l2 | false | [
"license:mit"
] | null | 2 | 0 |
openclimatefix/goes-mrms | false | [] | null | 14 | 0 |
openclimatefix/goes | false | [
"license:mit"
] | The National Oceanic and Atmospheric Administration (NOAA) operates a constellation of Geostationary Operational Environmental Satellites (GOES) to provide continuous weather imagery and monitoring of meteorological and space environment data for the protection of life and property across the United States. GOES satellites provide critical atmospheric, oceanic, climatic and space weather products supporting weather forecasting and warnings, climatologic analysis and prediction, ecosystems management, safe and efficient public and private transportation, and other national priorities.
The satellites provide advanced imaging with increased spatial resolution, 16 spectral channels, and up to 1 minute scan frequency for more accurate forecasts and timely warnings.
The real-time feed and full historical archive of original resolution Advanced Baseline Imager (ABI) radiance data (Level 1b) and full resolution Cloud and Moisture Imager (CMI) products (Level 2) are freely available on Amazon S3 for anyone to use. | 2 | 2 |
openclimatefix/hrrr | false | [
"license:mit"
] | null | 2 | 0 |
orisuchy/Descriptive_Sentences_He | false | [
"license:afl-3.0"
] | null | 0 | 2 |
osanseviero/codeparrot-train | false | [] | null | 0 | 0 |
osanseviero/llama_test | false | [] | null | 1 | 0 |
osanseviero/test | false | [] | null | 0 | 0 |
ought/raft-submission | false | [] | null | 0 | 3 |
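Any entry above can be pulled from the Hugging Face Hub by the value in its `id` column. A minimal sketch in Python, assuming the `datasets` library is installed; split names vary per dataset, and script-backed datasets may additionally require `trust_remote_code=True` in recent `datasets` releases:

```python
from datasets import load_dataset

# Pick any `id` from the table above; nielsr/funsd is the
# most-downloaded entry in this slice of the listing.
dataset = load_dataset("nielsr/funsd")

# Inspect the splits and features, then look at one training example.
# The "train" split is an assumption; check `dataset` for actual splits.
print(dataset)
print(dataset["train"][0])
```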