sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts
---|---|---|---|---|---|---|---|---|---|---|---|---
a0b014ffa0bf56b0a490676d298b3d73ca52b8d6 |
<div align="center">
<img width="640" alt="keremberke/pokemon-classification" src="https://huggingface.co/datasets/keremberke/pokemon-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Porygon', 'Goldeen', 'Hitmonlee', 'Hitmonchan', 'Gloom', 'Aerodactyl', 'Mankey', 'Seadra', 'Gengar', 'Venonat', 'Articuno', 'Seaking', 'Dugtrio', 'Machop', 'Jynx', 'Oddish', 'Dodrio', 'Dragonair', 'Weedle', 'Golduck', 'Flareon', 'Krabby', 'Parasect', 'Ninetales', 'Nidoqueen', 'Kabutops', 'Drowzee', 'Caterpie', 'Jigglypuff', 'Machamp', 'Clefairy', 'Kangaskhan', 'Dragonite', 'Weepinbell', 'Fearow', 'Bellsprout', 'Grimer', 'Nidorina', 'Staryu', 'Horsea', 'Electabuzz', 'Dratini', 'Machoke', 'Magnemite', 'Squirtle', 'Gyarados', 'Pidgeot', 'Bulbasaur', 'Nidoking', 'Golem', 'Dewgong', 'Moltres', 'Zapdos', 'Poliwrath', 'Vulpix', 'Beedrill', 'Charmander', 'Abra', 'Zubat', 'Golbat', 'Wigglytuff', 'Charizard', 'Slowpoke', 'Poliwag', 'Tentacruel', 'Rhyhorn', 'Onix', 'Butterfree', 'Exeggcute', 'Sandslash', 'Pinsir', 'Rattata', 'Growlithe', 'Haunter', 'Pidgey', 'Ditto', 'Farfetchd', 'Pikachu', 'Raticate', 'Wartortle', 'Vaporeon', 'Cloyster', 'Hypno', 'Arbok', 'Metapod', 'Tangela', 'Kingler', 'Exeggutor', 'Kadabra', 'Seel', 'Voltorb', 'Chansey', 'Venomoth', 'Ponyta', 'Vileplume', 'Koffing', 'Blastoise', 'Tentacool', 'Lickitung', 'Paras', 'Clefable', 'Cubone', 'Marowak', 'Nidorino', 'Jolteon', 'Muk', 'Magikarp', 'Slowbro', 'Tauros', 'Kabuto', 'Spearow', 'Sandshrew', 'Eevee', 'Kakuna', 'Omastar', 'Ekans', 'Geodude', 'Magmar', 'Snorlax', 'Meowth', 'Pidgeotto', 'Venusaur', 'Persian', 'Rhydon', 'Starmie', 'Charmeleon', 'Lapras', 'Alakazam', 'Graveler', 'Psyduck', 'Rapidash', 'Doduo', 'Magneton', 'Arcanine', 'Electrode', 'Omanyte', 'Poliwhirl', 'Mew', 'Alolan Sandslash', 'Mewtwo', 'Weezing', 'Gastly', 'Victreebel', 'Ivysaur', 'MrMime', 'Shellder', 'Scyther', 'Diglett', 'Primeape', 'Raichu']
```
### Number of Images
```json
{"train": 4869, "valid": 1390, "test": 732}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/pokemon-classification", name="full")
example = ds['train'][0]
```
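To see what a record contains, inspect the features; a minimal sketch (the `image` and `labels` feature names are assumptions — confirm them against the printed features):
```python
print(ds['train'].features)
image = example['image']   # PIL image, 224x224 per the pre-processing notes below
label = example['labels']  # assumed: integer index into the label list above
```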
### Roboflow Dataset Page
[https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14](https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14?ref=roboflow2huggingface)
### Citation
```
@misc{ pokedex_dataset,
title = { Pokedex Dataset },
type = { Open Source Dataset },
author = { Lance Zhang },
howpublished = { \url{ https://universe.roboflow.com/robert-demo-qvail/pokedex } },
url = { https://universe.roboflow.com/robert-demo-qvail/pokedex },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
| keremberke/pokemon-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Gaming",
"region:us"
]
| 2023-01-15T18:40:15+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface", "Gaming"]} | 2023-01-15T18:41:29+00:00 | []
| []
| TAGS
#task_categories-image-classification #roboflow #roboflow2huggingface #Gaming #region-us
|
<div align="center">
<img width="640" alt="keremberke/pokemon-classification" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
Public Domain
### Dataset Summary
This dataset was exported via URL on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nPublic Domain",
"### Dataset Summary\nThis dataset was exported via URL on December 20, 2022 at 5:34 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 6991 images.\nPokemon are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 224x224 (Fit (black edges))\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-image-classification #roboflow #roboflow2huggingface #Gaming #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nPublic Domain",
"### Dataset Summary\nThis dataset was exported via URL on December 20, 2022 at 5:34 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 6991 images.\nPokemon are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 224x224 (Fit (black edges))\n\nNo image augmentation techniques were applied."
]
|
d85cd40e7fcc63db6f6a3a2d509df4006e4a9ecc | The data comes from tweets collected and classified through Crowdbreaks.org [Müller, Martin M., and Marcel Salathé. "Crowdbreaks: Tracking Health Trends Using Public Social Media Data and Crowdsourcing." Frontiers in public health 7 (2019).]. Tweets have been classified as pro-vaccine (1), neutral (0) or anti-vaccine (-1). | allevelly/dataset | [
"license:creativeml-openrail-m",
"region:us"
]
| 2023-01-15T19:30:56+00:00 | {"license": "creativeml-openrail-m"} | 2023-01-15T19:35:20+00:00 | []
| []
| TAGS
#license-creativeml-openrail-m #region-us
| The data comes from tweets collected and classified through URL [Müller, Martin M., and Marcel Salathé. "Crowdbreaks: Tracking Health Trends Using Public Social Media Data and Crowdsourcing." Frontiers in public health 7 (2019).]. Tweets have been classified as pro-vaccine (1), neutral (0) or anti-vaccine (-1). | []
| [
"TAGS\n#license-creativeml-openrail-m #region-us \n"
]
|
e987f0f12e99e9d25aea1c3bcaa21394282864b2 | **Warning: THIS dataset is NOT suitable for use by minors. The dataset contains X-rated/NSFW content.**
# E621 Rising: Mini Image Dataset v1
**9,999** images (~4GB) downloaded from `e621.net` with [tags](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-curated/raw/main/meta/tag-counts.json).
This is a small sample of the E621 Rising: Raw Dataset [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-raw).
## Image Processing
* Only `jpg` and `png` images were considered
* Image width and height have been clamped to `(0, 4096]px`; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to `jpg` format
* All images have been converted to TrueColor `RGB`
* All images have been verified to load with `Pillow`
* Metadata from E621 is [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-raw/tree/main/meta) | hearmeneigh/e621-rising-v1-mini | [
"size_categories:1K<n<10K",
"not-for-all-audiences",
"region:us"
]
| 2023-01-15T21:05:19+00:00 | {"size_categories": ["1K<n<10K"], "pretty_name": "E621 Rising: Mini Image Dataset v1", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4051563749.765, "num_examples": 9999}], "download_size": 3979423376, "dataset_size": 4051563749.765}, "viewer": false, "tags": ["not-for-all-audiences"]} | 2023-05-12T15:35:30+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #not-for-all-audiences #region-us
| Warning: THIS dataset is NOT suitable for use by minors. The dataset contains X-rated/NSFW content.
# E621 Rising: Mini Image Dataset v1
9,999 images (~4GB) downloaded from 'URL' with tags.
This is a small sample of the E621 Rising: Raw Dataset available here.
## Image Processing
* Only 'jpg' and 'png' images were considered
* Image width and height have been clamped to '(0, 4096]px'; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to 'jpg' format
* All images have been converted to TrueColor 'RGB'
* All images have been verified to load with 'Pillow'
* Metadata from E621 is available here | [
"# E621 Rising: Mini Image Dataset v1\n\n9,999 images (~4GB) downloaded from 'URL' with tags.\n\nThis is a small sample of the E621 Rising: Raw Dataset available here.",
"## Image Processing\n* Only 'jpg' and 'png' images were considered\n* Image width and height have been clamped to '(0, 4096]px'; larger images have been resized to meet the limit\n* Alpha channels have been removed\n* All images have been converted to 'jpg' format\n* All images have been converted to TrueColor 'RGB'\n* All images have been verified to load with 'Pillow'\n* Metadata from E621 is available here"
]
| [
"TAGS\n#size_categories-1K<n<10K #not-for-all-audiences #region-us \n",
"# E621 Rising: Mini Image Dataset v1\n\n9,999 images (~4GB) downloaded from 'URL' with tags.\n\nThis is a small sample of the E621 Rising: Raw Dataset available here.",
"## Image Processing\n* Only 'jpg' and 'png' images were considered\n* Image width and height have been clamped to '(0, 4096]px'; larger images have been resized to meet the limit\n* Alpha channels have been removed\n* All images have been converted to 'jpg' format\n* All images have been converted to TrueColor 'RGB'\n* All images have been verified to load with 'Pillow'\n* Metadata from E621 is available here"
]
|
9f7b568454c0fc27942cc932d37538cbabbfa725 |
# Hand-picked class images:
`mai.class.768`: **contains most of the Images of the below datasets, not including animefull**
- 1082 hand-picked images containing at least `1girl`, generated by various finetuned models
- other inputs include `cowboy shot`, `a clean illustration of`, `best quality`, etc
`mk11_mixed_1girl_clip1_768.zip`: 893 images;
- mk11_last + some similar ones (mk9, mk7, mk12f, etc); **clip1**;
- various sampler/cfg/steps; with/without hires fix
- **manually picked**
```
1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DDIM, CFG scale: 6.5, Seed: 1049498024, Size: 768x768, Model hash: e02601f3, Denoising strength: 0.7, First pass size: 384x384
a clean illustration of 1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3047399039, Size: 768x768, Model hash: e02601f3, Model: tmp_models_miko11_last, Batch size: 2, Batch pos: 0, Denoising strength: 0.7, First pass size: 384x384
a clean illustration of 1girl, (best quality), cowboy shot, by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 25, Sampler: Euler a, CFG scale: 7.5, Seed: 4047034818, Size: 768x768, Model hash: e02601f3, Denoising strength: 0.7, First pass size: 384x384
```
<br>
`NOTmk11_mixed_clip1_768.zip`: 141 images; **manually picked**
- images that look good, possibly from evt_v2, evt_v3, gd series, claus series, etc
- cl17_miko9_040; CMA10hf3_mk12f_cl17_03(d0c); d0c_nice_035_clip1; evtv3_clip1; mk11_cl11_030; mk11f.class.768.clip2; mk12f; others
```
a clean illustration of 1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DDIM, CFG scale: 6.5, Seed: 3011937418, Size: 768x768, Model hash: e02601f3, Denoising strength: 0.7, First pass size: 384x384
a clean illustration of 1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DDIM, CFG scale: 6.5, Seed: 3755499482, Size: 768x768, Model hash: 2a535ddd, Denoising strength: 0.7, First pass size: 384x384
```
<br>
`mk11_bqsks_1girl_clip2_768`: 236 images; mk11_last.ckpt
```
1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DDIM, CFG scale: 6.5, Seed: 3053897408, Size: 768x768, Model hash: e02601f3, Clip skip: 2
```
<br>
<br>
# Manually-inspected:
`cropped_hands.512.class`: 5958 images of cropped hands from [anime crop dataset](https://www.gwern.net/Crops#hand-model)
- inspected & removed most of the non-hand images
- upscaled to 512x512
<br>
<br>
# Auto-generated:
Previously generated Class Images
`animefull_1girl_clip2_512.zip`: 746 images
```
1girl
Steps: 35, Sampler: DDIM, CFG scale: 7, Seed: 5109255, Size: 512x512, Model hash: e6e8e1fc, Clip skip: 2
```
<br>
`animefull_mabq_1girl_clip2_512.zip`: 102 images
```
masterpiece, best quality, 1girl
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 25, Sampler: DDIM, CFG scale: 7, Seed: 2653130834, Size: 512x512, Model hash: e6e8e1fc, Clip skip: 2
```
<br>
| trojblue/RegImages | [
"license:openrail",
"region:us"
]
| 2023-01-16T00:50:10+00:00 | {"license": "openrail"} | 2023-03-04T18:19:38+00:00 | []
| []
| TAGS
#license-openrail #region-us
|
# Hand-picked class images:
'URL.768': contains most of the Images of the below datasets, not including animefull
- 1082 hand-picked images containing at least '1girl', generated by various finetuned models
- other inputs include 'cowboy shot', 'a clean illustration of', 'best quality', etc
'mk11_mixed_1girl_clip1_768.zip': 893 images;
- mk11_last + some similar ones (mk9, mk7, mk12f, etc); clip1;
- various sampler/cfg/steps; with/without hires fix
- manually picked
<br>
'NOTmk11_mixed_clip1_768.zip': 141 images; manually picked
- images that look good, possibly from evt_v2, evt_v3, gd series, claus series, etc
- cl17_miko9_040; CMA10hf3_mk12f_cl17_03(d0c); d0c_nice_035_clip1; evtv3_clip1; mk11_cl11_030; URL.768.clip2; mk12f; others
<br>
'mk11_bqsks_1girl_clip2_768': 236 images; mk11_last.ckpt
<br>
<br>
# Manually-inspected:
'cropped_hands.URL': 5958 images of cropped hands from anime crop dataset
- inspected & removed most of the non-hand images
- upscaled to 512x512
<br>
<br>
# Auto-generated:
Previously generated Class Images
'animefull_1girl_clip2_512.zip': 746 images
<br>
'animefull_mabq_1girl_clip2_512.zip': 102 images
<br>
| [
"# Hand-picked class images:\n\n\n'URL.768': contains most of the Images of the below datasets, not including animefull\n- 1082 hand-picked images containing at least '1girl', generated by various finetuned models\n- other inputs include 'cowboy shot', 'a clean illustration of', 'best quality', etc\n \n\n\n'mk11_mixed_1girl_clip1_768.zip': 893 images; \n- mk11_last + some similar ones (mk9, mk7, mk12f, etc); clip1;\n- various sampler/cfg/steps; with/without hires fix\n- manually picked\n\n\n\n<br>\n\n'NOTmk11_mixed_clip1_768.zip': 141 images; manually picked\n- images that look good, possibly from evt_v2, evt_v3, gd series, claus series, etc\n- cl17_miko9_040; CMA10hf3_mk12f_cl17_03(d0c); d0c_nice_035_clip1; evtv3_clip1; mk11_cl11_030; URL.768.clip2; mk12f; others\n\n\n\n<br>\n\n'mk11_bqsks_1girl_clip2_768': 236 images; mk11_last.ckpt\n\n\n\n<br>\n<br>",
"# Manually-inspected:\n\n\n'cropped_hands.URL': 5958 images of cropped hands from anime crop dataset\n- inspected & removed most of the non-hand images\n- upscaled to 512x512\n\n\n\n\n<br>\n<br>",
"# Auto-generated:\n\nไนๅ็ๆ็Class Images\n\n'animefull_1girl_clip2_512.zip': 746 images\n\n\n<br>\n\n\n'animefull_mabq_1girl_clip2_512.zip': 102 images\n\n\n<br>"
]
| [
"TAGS\n#license-openrail #region-us \n",
"# Hand-picked class images:\n\n\n'URL.768': contains most of the Images of the below datasets, not including animefull\n- 1082 hand-picked images containing at least '1girl', generated by various finetuned models\n- other inputs include 'cowboy shot', 'a clean illustration of', 'best quality', etc\n \n\n\n'mk11_mixed_1girl_clip1_768.zip': 893 images; \n- mk11_last + some similar ones (mk9, mk7, mk12f, etc); clip1;\n- various sampler/cfg/steps; with/without hires fix\n- manually picked\n\n\n\n<br>\n\n'NOTmk11_mixed_clip1_768.zip': 141 images; manually picked\n- images that look good, possibly from evt_v2, evt_v3, gd series, claus series, etc\n- cl17_miko9_040; CMA10hf3_mk12f_cl17_03(d0c); d0c_nice_035_clip1; evtv3_clip1; mk11_cl11_030; URL.768.clip2; mk12f; others\n\n\n\n<br>\n\n'mk11_bqsks_1girl_clip2_768': 236 images; mk11_last.ckpt\n\n\n\n<br>\n<br>",
"# Manually-inspected:\n\n\n'cropped_hands.URL': 5958 images of cropped hands from anime crop dataset\n- inspected & removed most of the non-hand images\n- upscaled to 512x512\n\n\n\n\n<br>\n<br>",
"# Auto-generated:\n\nไนๅ็ๆ็Class Images\n\n'animefull_1girl_clip2_512.zip': 746 images\n\n\n<br>\n\n\n'animefull_mabq_1girl_clip2_512.zip': 102 images\n\n\n<br>"
]
|
271eef3bfe83ae04b2feadc47a041b151392edd5 |
This dataset is derived from the RICO SCA presented by Google Research in the seq2act paper. It is a synthetically generated dataset for the UI RefExp task.
See original repo for details and licensing info:
https://github.com/google-research/google-research/blob/master/seq2act/data_generation/README.md#generate-ricosca-dataset
The splits in this dataset are consistent with the splits in the crowdsourced [UIBert RefExp](https://huggingface.co/datasets/ivelin/ui_refexp_saved) dataset. Training split examples do not include images from the Validation or Test examples in the UIBert RefExp dataset. Correspondingly, the images in the Validation and Test splits here match the images in the Validation and Test splits of UIBert RefExp.
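A minimal loading sketch; the `rico_sca_refexp` config name and the field layout are taken from the `dataset_info` metadata recorded for this card:
```python
from datasets import load_dataset

ds = load_dataset("ivelin/rico_sca_refexp_synthetic", name="rico_sca_refexp")

sample = ds["train"][0]
label = sample["labels"][0]
print(label["prompt"])               # synthetic referring expression
print(label["target_bounding_box"])  # {"xmin": ..., "ymin": ..., "xmax": ..., "ymax": ...}
```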
| ivelin/rico_sca_refexp_synthetic | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-01-16T01:18:23+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "pretty_name": "RICO SCA RefExp", "dataset_info": [{"config_name": "rico_sca_refexp", "features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "string"}, {"name": "labels", "list": [{"name": "prompt", "dtype": "string"}, {"name": "target_bounding_box", "struct": [{"name": "xmin", "dtype": "float32"}, {"name": "ymin", "dtype": "float32"}, {"name": "xmax", "dtype": "float32"}, {"name": "ymax", "dtype": "float32"}]}]}], "splits": [{"name": "train", "num_bytes": 2605508469, "num_examples": 24063}, {"name": "validation", "num_bytes": 21192787, "num_examples": 160}, {"name": "test", "num_bytes": 22057836, "num_examples": 185}], "download_size": 6514703641, "dataset_size": 2605508469}]} | 2023-01-19T20:11:53+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
|
This dataset is derived from the RICO SCA presented by Google Research in the seq2act paper. It is a synthetically generated dataset for the UI RefExp task.
See original repo for details and licensing info:
URL
The splits in this dataset are consistent with the splits in the crowdsourced UIBert RefExp dataset. Training split examples do not include images from the Validation or Test examples in the UIBert RefExp dataset. Correspondingly, the images in the Validation and Test splits here match the images in the Validation and Test splits of UIBert RefExp.
| []
| [
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n"
]
|
ed2ac81fd8ca23e630c0877bf6e0363ffdba9a11 | Consists of 11,876 question-answer pairs for chatbot training.
https://github.com/songys/Chatbot_data
---
dataset_info:
features:
- name: index
dtype: int64
- name: Q
dtype: string
- name: A
dtype: string
splits:
- name: train
num_bytes: 773618
num_examples: 9465
- name: test
num_bytes: 246115
num_examples: 2358
download_size: 557106
dataset_size: 1019733
---
# Dataset Card for "chatbot_emotion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jeongah/chatbot_emotion | [
"region:us"
]
| 2023-01-16T02:55:04+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "Q", "dtype": "string"}, {"name": "A", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 773618, "num_examples": 9465}, {"name": "test", "num_bytes": 246115, "num_examples": 2358}], "download_size": 557106, "dataset_size": 1019733}} | 2023-01-16T04:29:58+00:00 | []
| []
| TAGS
#region-us
| Consists of 11,876 question-answer pairs for chatbot training.
URL
---
dataset_info:
features:
- name: index
dtype: int64
- name: Q
dtype: string
- name: A
dtype: string
splits:
- name: train
num_bytes: 773618
num_examples: 9465
- name: test
num_bytes: 246115
num_examples: 2358
download_size: 557106
dataset_size: 1019733
---
# Dataset Card for "chatbot_emotion"
More Information needed | [
"# Dataset Card for \"chatbot_emotion\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"chatbot_emotion\"\n\nMore Information needed"
]
|
1cb382c54fe39823c40f4899760f870bdf78d714 | # Dataset Card for "speech2text2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | qbaro/speech2text2 | [
"region:us"
]
| 2023-01-16T04:13:31+00:00 | {"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "sampling_rate", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 2091887413, "num_examples": 2994}, {"name": "valid", "num_bytes": 275249571, "num_examples": 361}], "download_size": 2351520332, "dataset_size": 2367136984}} | 2023-01-16T04:26:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "speech2text2"
More Information needed | [
"# Dataset Card for \"speech2text2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"speech2text2\"\n\nMore Information needed"
]
|
7b60795487a35accda5ba59a3cfe1dfde7acd1e7 | Consists of 4,452 train sentences and 1,113 test sentences.
Sentences containing profanity are labeled with a spam value of 1; sentences without profanity are labeled 0.
https://github.com/2runo/Curse-detection-data
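A minimal loading sketch. Note that the `dataset_info` block below lists `sentence` and `' spam'` columns while the hub metadata for this card records `document` and `' label'`, so it is worth printing the column names before indexing:
```python
from datasets import load_dataset

ds = load_dataset("jeongah/curse-detection-data")

# Column names differ between the YAML below and the hub metadata
# (sentence/' spam' vs. document/' label'); check before indexing.
print(ds["train"].column_names)
print(ds["train"][0])
```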
---
dataset_info:
features:
- name: index
dtype: int64
- name: sentence
dtype: string
- name: ' spam'
dtype: int64
splits:
- name: train
num_bytes: 429333
num_examples: 4452
- name: test
num_bytes: 106670
num_examples: 1113
download_size: 364457
dataset_size: 536003
---
# Dataset Card for "curse-detection-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jeongah/curse-detection-data | [
"region:us"
]
| 2023-01-16T05:27:47+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "document", "dtype": "string"}, {"name": " label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 429333, "num_examples": 4452}, {"name": "test", "num_bytes": 106670, "num_examples": 1113}], "download_size": 364473, "dataset_size": 536003}} | 2023-01-16T06:41:20+00:00 | []
| []
| TAGS
#region-us
| Consists of 4,452 train sentences and 1,113 test sentences.
Sentences containing profanity are labeled with a spam value of 1; sentences without profanity are labeled 0.
URL
---
dataset_info:
features:
- name: index
dtype: int64
- name: sentence
dtype: string
- name: ' spam'
dtype: int64
splits:
- name: train
num_bytes: 429333
num_examples: 4452
- name: test
num_bytes: 106670
num_examples: 1113
download_size: 364457
dataset_size: 536003
---
# Dataset Card for "curse-detection-data"
More Information needed | [
"# Dataset Card for \"curse-detection-data\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"curse-detection-data\"\n\nMore Information needed"
]
|
341735d2902a73423a6cf145ac6759eb36d64e34 | # Dataset Card for "artfaces"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/artfaces | [
"region:us"
]
| 2023-01-16T07:21:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "style", "dtype": "string"}, {"name": "src_image", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 65636859.275, "num_examples": 30163}], "download_size": 51043102, "dataset_size": 65636859.275}} | 2023-01-16T07:21:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "artfaces"
More Information needed | [
"# Dataset Card for \"artfaces\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"artfaces\"\n\nMore Information needed"
]
|
23935f59573d24083168480d48aff51cbb0408b3 | # AutoTrain Dataset for project: consunmer-complain-multiclass-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project consunmer-complain-multiclass-classification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": null,
"text": "This is awful and borderline abuse. I can't imagine thinking that's even slightly okay",
"target": 5
},
{
"feat_Unnamed: 0": null,
"text": "i didnt feel so hot",
"target": 3
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1', '2', '3', '4', '5'], id=None)"
}
```
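A minimal loading sketch using the field names above (the `train`/`valid` split names follow the table below):
```python
from datasets import load_dataset

ds = load_dataset(
    "harperlucy2023/autotrain-data-consunmer-complain-multiclass-classification"
)
row = ds["train"][0]
print(row["text"], row["target"])
```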
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 20663 |
| valid | 5167 |
| harperlucy2023/autotrain-data-consunmer-complain-multiclass-classification | [
"task_categories:text-classification",
"language:en",
"region:us"
]
| 2023-01-16T09:25:28+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2023-01-16T09:45:42+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #language-English #region-us
| AutoTrain Dataset for project: consunmer-complain-multiclass-classification
===========================================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project consunmer-complain-multiclass-classification.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
| [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
|
f366e69fc822e3bfc75cc6666ea4883f986ce3da | # Dataset Card for "PickaPic-selected-prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/PickaPic-selected-prompts | [
"region:us"
]
| 2023-01-16T09:59:33+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10527, "num_examples": 200}], "download_size": 0, "dataset_size": 10527}} | 2023-01-17T16:01:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "PickaPic-selected-prompts"
More Information needed | [
"# Dataset Card for \"PickaPic-selected-prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"PickaPic-selected-prompts\"\n\nMore Information needed"
]
|
4738131d40903d0576531a93bc000888c78c045d |
# Dataset Card for LIFD Magnetic Field Data
You will need the package
https://chaosmagpy.readthedocs.io/en/master/
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LIFD DataSets homepage](https://cemac.github.io/LIFD_ML_Datasets/)
- **Repository:** [LIFD GitHub Repo](https://github.com/cemac/LIFD_ML_Datasets/)
- **Point of Contact:** [*coming soon*]()
### Dataset Summary
A description of the dataset:
The gufm1 model is a global geomagnetic model based on spherical harmonics, covering the period 1590 - 1990, and is described in the publication:
[Andrew Jackson, Art R. T. Jonkers and Matthew R. Walker (2000), "Four centuries of geomagnetic secular variation from historical records", Phil. Trans. R. Soc. A. 358, 957–990, http://doi.org/10.1098/rsta.2000.0569](https://royalsocietypublishing.org/doi/10.1098/rsta.2000.0569)
### Supported Tasks and Leaderboards
### Data Fields
The dataset has dimensions (181, 361, 401), whose axes represent co-latitude, longitude, and time, and whose values are the radial magnetic field at the core-mantle boundary (radius 3485 km) in nT.
The colatitude takes values (in degrees): 0, 1, 2, 3, …, 180; longitude (degrees) takes values -180, -179, …, 180; and time is yearly: 1590, 1591, …, 1990.
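A minimal indexing sketch for the grid described above; the array `br` is a placeholder, and only the axis conventions come from this card:
```python
import numpy as np

# Axes per the card: (colatitude, longitude, time) -> shape (181, 361, 401)
br = np.zeros((181, 361, 401))  # placeholder for the radial field data (nT)

# Uniform 1-degree / 1-year grids make indexing simple offsets:
i = 45 - 0        # colatitude 45 degrees (axis starts at 0)
j = 0 - (-180)    # longitude 0 degrees (axis starts at -180)
k = 1900 - 1590   # year 1900 (axis starts at 1590)
print(br[i, j, k])
```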
## Dataset Creation
The native model representation is converted into a discrete dataset in physical space and time, using the Python package [Chaosmagpy](https://chaosmagpy.readthedocs.io/en/master/)
### Source Data
## Additional Information
### Dataset Curators
### Licensing Information
MIT Licence
### Citation Information
### Contributions
| cemachelen/LIFD_Magnetic_Field_Data | [
"task_categories:feature-extraction",
"task_categories:image-to-image",
"task_categories:time-series-forecasting",
"task_categories:object-detection",
"task_categories:unconditional-image-generation",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:gufm1 model",
"language:en",
"license:mit",
"region:us"
]
| 2023-01-16T10:43:30+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["gufm1 model"], "task_categories": ["feature-extraction", "image-to-image", "time-series-forecasting", "object-detection", "unconditional-image-generation"], "task_ids": ["multivariate-time-series-forecasting"], "pretty_name": "LIFD Magnetic Fields", "tags": []} | 2023-12-04T10:19:32+00:00 | []
| [
"en"
]
| TAGS
#task_categories-feature-extraction #task_categories-image-to-image #task_categories-time-series-forecasting #task_categories-object-detection #task_categories-unconditional-image-generation #task_ids-multivariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #source_datasets-gufm1 model #language-English #license-mit #region-us
|
# Dataset Card for LIFD Magnetic Field Data
You will need the package
URL
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Dataset Structure
- Data Fields
- Dataset Creation
- Source Data
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: LIFD DataSets homepage
- Repository: LIFD GitHub Repo
- Point of Contact: [*coming soon*]()
### Dataset Summary
A description of the dataset:
The gufm1 model is a global geomagnetic model based on spherical harmonics, covering the period 1590 - 1990, and is described in the publication:
Andrew Jackson, Art R. T. Jonkers and Matthew R. Walker (2000), "Four centuries of geomagnetic secular variation from historical records", Phil. Trans. R. Soc. A. 358, 957–990, URL
### Supported Tasks and Leaderboards
### Data Fields
The dataset has dimensions (181, 361, 401), whose axes represent co-latitude, longitude, and time, and whose values are the radial magnetic field at the core-mantle boundary (radius 3485 km) in nT.
The colatitude takes values (in degrees): 0, 1, 2, 3, …, 180; longitude (degrees) takes values -180, -179, …, 180; and time is yearly: 1590, 1591, …, 1990.
## Dataset Creation
The native model representation is converted into a discrete dataset in physical space and time, using the Python package Chaosmagpy
### Source Data
## Additional Information
### Dataset Curators
### Licensing Information
MIT Licence
### Contributions
| [
"# Dataset Card for LFID Magnetic Field Data\n\nYou will need the package\nURL",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n- Dataset Creation\n - Source Data\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: LIFD DataSets homepage\n- Repository: LIFD GitHub Repo\n- Point of Contact: [*coming soon*]()",
"### Dataset Summary\n\nA description of the dataset:\n\nThe gufm1 model is a global geomagnetic model based on spherical harmonics, covering the period 1590 - 1990, and is described in the publication:\nAndrew Jackson, Art R. T. Jonkers and Matthew R. Walker (2000), โFour centuries of geomagnetic secular variation from historical recordsโ, Phil. Trans. R. Soc. A.358957โ990, URL",
"### Supported Tasks and Leaderboards",
"### Data Fields\n\nThe dataset has dimension (181, 361, 401) whose axes represent co-latitude, longitude, time, and whose values are the radial magnetic field at the core-mantle boundary (radius 3485km) in nT.\nThe colatitude takes values (in degrees): 0,1,2,3,โฆ180; longitude (degrees) takes values -180,-179,โฆ.180; and time is yearly 1590, 1591, โฆ1990.",
"## Dataset Creation\n\nThe native model representation is converted into a discrete dataset in physical space and time, using the Python package Chaosmagpy",
"### Source Data",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\nMIT Licence",
"### Contributions"
]
| [
"TAGS\n#task_categories-feature-extraction #task_categories-image-to-image #task_categories-time-series-forecasting #task_categories-object-detection #task_categories-unconditional-image-generation #task_ids-multivariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #source_datasets-gufm1 model #language-English #license-mit #region-us \n",
"# Dataset Card for LFID Magnetic Field Data\n\nYou will need the package\nURL",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n- Dataset Creation\n - Source Data\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: LIFD DataSets homepage\n- Repository: LIFD GitHub Repo\n- Point of Contact: [*coming soon*]()",
"### Dataset Summary\n\nA description of the dataset:\n\nThe gufm1 model is a global geomagnetic model based on spherical harmonics, covering the period 1590 - 1990, and is described in the publication:\nAndrew Jackson, Art R. T. Jonkers and Matthew R. Walker (2000), โFour centuries of geomagnetic secular variation from historical recordsโ, Phil. Trans. R. Soc. A.358957โ990, URL",
"### Supported Tasks and Leaderboards",
"### Data Fields\n\nThe dataset has dimension (181, 361, 401) whose axes represent co-latitude, longitude, time, and whose values are the radial magnetic field at the core-mantle boundary (radius 3485km) in nT.\nThe colatitude takes values (in degrees): 0,1,2,3,โฆ180; longitude (degrees) takes values -180,-179,โฆ.180; and time is yearly 1590, 1591, โฆ1990.",
"## Dataset Creation\n\nThe native model representation is converted into a discrete dataset in physical space and time, using the Python package Chaosmagpy",
"### Source Data",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\nMIT Licence",
"### Contributions"
]
|
be211332f4ea671c2bc7918a43bac4aa74cb429a |
# Dataset Card for ScandiWiki
## Dataset Description
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Total amount of disk used:** 4485.90 MB
### Dataset Summary
ScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmål,
Norwegian Nynorsk, Swedish, Icelandic and Faroese.
### Supported Tasks and Leaderboards
This dataset is intended for general language modelling.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian Bokmål (`nb`),
Norwegian Nynorsk (`nn`), Icelandic (`is`) and Faroese (`fo`).
## Dataset Structure
### Data Instances
- **Total amount of disk used:** 4485.90 MB
An example from the `train` split of the `fo` subset looks as follows.
```
{
'id': '3380',
'url': 'https://fo.wikipedia.org/wiki/Enk%C3%B6pings%20kommuna',
'title': 'Enköpings kommuna',
'text': 'Enköpings kommuna (svenskt: Enköpings kommun), er ein kommuna í Uppsala län í Svøríki. Enköpings kommuna hevur umleið 40.656 íbúgvar (2013).\n\nKeldur \n\nKommunur í Svøríki'
}
```
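A minimal loading sketch; the subset names follow the language codes listed above:
```python
from datasets import load_dataset

ds = load_dataset("alexandrainst/scandi-wiki", "fo", split="train")
print(ds[0]["title"], ds[0]["url"])
```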
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `text`: a `string` feature.
### Data Subsets
| name | samples |
|----------|----------:|
| sv | 2,469,978 |
| nb | 596,593 |
| da | 287,216 |
| nn | 162,776 |
| is | 55,418 |
| fo | 12,582 |
## Dataset Creation
### Curation Rationale
It takes quite a long time to parse the Wikipedia dump as well as to deduplicate it, so
this dataset is primarily for convenience.
### Source Data
The original data is from the [wikipedia
dataset](https://huggingface.co/datasets/wikipedia).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/), in accordance with the same
license of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia).
| alexandrainst/scandi-wiki | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_categories:feature-extraction",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:wikipedia",
"language:da",
"language:sv",
"language:no",
"language:nb",
"language:nn",
"language:is",
"language:fo",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-01-16T12:29:34+00:00 | {"language": ["da", "sv", false, "nb", "nn", "is", "fo"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["wikipedia"], "task_categories": ["fill-mask", "text-generation", "feature-extraction"], "task_ids": ["language-modeling"], "pretty_name": "ScandiWiki"} | 2023-01-16T13:55:38+00:00 | []
| [
"da",
"sv",
"no",
"nb",
"nn",
"is",
"fo"
]
| TAGS
#task_categories-fill-mask #task_categories-text-generation #task_categories-feature-extraction #task_ids-language-modeling #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-wikipedia #language-Danish #language-Swedish #language-Norwegian #language-Norwegian Bokmål #language-Norwegian Nynorsk #language-Icelandic #language-Faroese #license-cc-by-sa-4.0 #region-us
| Dataset Card for ScandiWiki
===========================
Dataset Description
-------------------
* Point of Contact: Dan Saattrup Nielsen
* Total amount of disk used: 4485.90 MB
### Dataset Summary
ScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmål,
Norwegian Nynorsk, Swedish, Icelandic and Faroese.
### Supported Tasks and Leaderboards
This dataset is intended for general language modelling.
### Languages
The dataset is available in Danish ('da'), Swedish ('sv'), Norwegian Bokmål ('nb'),
Norwegian Nynorsk ('nn'), Icelandic ('is') and Faroese ('fo').
Dataset Structure
-----------------
### Data Instances
* Total amount of disk used: 4485.90 MB
An example from the 'train' split of the 'fo' subset looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'id': a 'string' feature.
* 'url': a 'string' feature.
* 'title': a 'string' feature.
* 'text': a 'string' feature.
### Data Subsets
Dataset Creation
----------------
### Curation Rationale
It takes quite a long time to parse the Wikipedia dump as well as to deduplicate it, so
this dataset is primarily for convenience.
### Source Data
The original data is from the wikipedia
dataset.
Additional Information
----------------------
### Dataset Curators
Dan Saattrup Nielsen from the The Alexandra
Institute curated this dataset.
### Licensing Information
The dataset is licensed under the CC BY-SA 4.0
license, in accordance with the same
license of the wikipedia dataset.
| [
"### Dataset Summary\n\n\nScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmรฅl,\nNorwegian Nynorsk, Swedish, Icelandic and Faroese.",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is intended for general language modelling.",
"### Languages\n\n\nThe dataset is available in Danish ('da'), Swedish ('sv'), Norwegian Bokmรฅl ('nb'),\nNorwegian Nynorsk ('nn'), Icelandic ('is') and Faroese ('fo').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Total amount of disk used: 4485.90 MB\n\n\nAn example from the 'train' split of the 'fo' subset looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'id': a 'string' feature.\n* 'url': a 'string' feature.\n* 'title': a 'string' feature.\n* 'text': a 'string' feature.",
"### Data Subsets\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nIt takes quite a long time to parse the Wikipedia dump as well as to deduplicate it, so\nthis dataset is primarily for convenience.",
"### Source Data\n\n\nThe original data is from the wikipedia\ndataset.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDan Saattrup Nielsen from the The Alexandra\nInstitute curated this dataset.",
"### Licensing Information\n\n\nThe dataset is licensed under the CC BY-SA 4.0\nlicense, in accordance with the same\nlicense of the wikipedia dataset."
]
| [
"TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_categories-feature-extraction #task_ids-language-modeling #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-wikipedia #language-Danish #language-Swedish #language-Norwegian #language-Norwegian Bokmรฅl #language-Norwegian Nynorsk #language-Icelandic #language-Faroese #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmรฅl,\nNorwegian Nynorsk, Swedish, Icelandic and Faroese.",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is intended for general language modelling.",
"### Languages\n\n\nThe dataset is available in Danish ('da'), Swedish ('sv'), Norwegian Bokmรฅl ('nb'),\nNorwegian Nynorsk ('nn'), Icelandic ('is') and Faroese ('fo').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Total amount of disk used: 4485.90 MB\n\n\nAn example from the 'train' split of the 'fo' subset looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'id': a 'string' feature.\n* 'url': a 'string' feature.\n* 'title': a 'string' feature.\n* 'text': a 'string' feature.",
"### Data Subsets\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nIt takes quite a long time to parse the Wikipedia dump as well as to deduplicate it, so\nthis dataset is primarily for convenience.",
"### Source Data\n\n\nThe original data is from the wikipedia\ndataset.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDan Saattrup Nielsen from the The Alexandra\nInstitute curated this dataset.",
"### Licensing Information\n\n\nThe dataset is licensed under the CC BY-SA 4.0\nlicense, in accordance with the same\nlicense of the wikipedia dataset."
]
|
6d77cf32bc2906b848430bc8155f88dece2d1254 |
# hand.json
Metadata for 3,000 images about "Hand" retrieved from Unsplash.
# portrait.json
Metadata for 10,000 images about "Portrait" retrieved from Unsplash.
# pose.json
Metadata for 10,000 images about "Pose" retrieved from Unsplash.
# Tool
- [unsplash-wizard](https://github.com/p1atdev/unsplash-wizard)
```bash
deno task build
./unsplash download ./hand.json -o ./hand --color --relatedTags --likes 50
```
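A minimal sketch for working with the downloaded metadata in Python, assuming `hand.json` holds a list of `Photo` objects matching the type definition below:
```python
import json

with open("hand.json", encoding="utf-8") as f:
    photos = json.load(f)  # assumed: a list of Photo objects (see below)

# Keep only well-liked photos and print their small-size URLs
popular = [p for p in photos if p["likes"] >= 100]
for photo in popular[:5]:
    print(photo["id"], photo["urls"]["small"])
```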
# Type Definition
```typescript
interface Photo {
id: string
color: string
description: string | null
alt_description: string | null
tags: string[]
likes: number
urls: {
raw: string
full: string
regular: string
small: string
thumb: string
small_s3: string
}
width: number
height: number
related_tags: string[]
location: {
name: string | null
city: string | null
country: string | null
position: {
latitude: number | null
longitude: number | null
}
}
exif: {
make: string | null
model: string | null
exposure_time: string | null
aperture: string | null
focal_length: string | null
iso: number | null
}
views: number
downloads: number
}
``` | p1atdev/resplash | [
"language:en",
"license:mit",
"region:us"
]
| 2023-01-16T12:30:11+00:00 | {"language": ["en"], "license": "mit"} | 2023-01-18T12:42:03+00:00 | []
| [
"en"
]
| TAGS
#language-English #license-mit #region-us
|
# URL
Metadata for 3,000 images about "Hand" retrieved from Unsplash.
# URL
Metadata for 10,000 images about "Portrait" retrieved from Unsplash.
# URL
Metadata for 10,000 images about "Pose" retrieved from Unsplash.
# Tool
- unsplash-wizard
# Type Definition
| [
"# URL\n\n3,000 image data about \"Hand\" retrieved from Unsplash.",
"# URL\n\n10,000 image data about \"Portrait\" retrieved from Unsplash.",
"# URL\n\n10,000 image data about \"Pose\" retrieved from Unsplash.",
"# Tool\n\n- unsplash-wizard",
"# Type Definition"
]
| [
"TAGS\n#language-English #license-mit #region-us \n",
"# URL\n\n3,000 image data about \"Hand\" retrieved from Unsplash.",
"# URL\n\n10,000 image data about \"Portrait\" retrieved from Unsplash.",
"# URL\n\n10,000 image data about \"Pose\" retrieved from Unsplash.",
"# Tool\n\n- unsplash-wizard",
"# Type Definition"
]
|
6cb367d92796f6c007070df6838a9e0015036301 | Regularization dataset with photorealistic men in fantasy armor for small-scale finetunes/LoRAs.
Produced with various Stable Diffusion derivatives.
Body horrors and extreme crops were hand-pruned, though some were left in.
Prompts were cycled for a variety of poses and environments, and to reduce full frontal static portraits and 'sameface' (the set still suffers from it, though).
Work in progress | AntaFluorescent/man_in_armor | [
"size_categories:n<1K",
"license:cc0-1.0",
"region:us"
]
| 2023-01-16T12:35:53+00:00 | {"license": "cc0-1.0", "size_categories": ["n<1K"]} | 2023-01-19T02:42:20+00:00 | []
| []
| TAGS
#size_categories-n<1K #license-cc0-1.0 #region-us
| Regularization dataset with photorealistic men in fantasy armor for small-scale finetunes/LoRAs.
Produced with various Stable Diffusion derivatives.
Body horrors and extreme crops were hand-pruned, though some were left in.
Prompts were cycled for a variety of poses and environments, and to reduce full frontal static portraits and 'sameface' (the set still suffers from it, though).
Work in progress | []
| [
"TAGS\n#size_categories-n<1K #license-cc0-1.0 #region-us \n"
]
|
a798d6a570781433c592737494184fb1104a5d05 | # AutoTrain Dataset for project: test1
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test1.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Konjam porunga Vishwasam trailor varatum appo therium yaaru gethu nu",
"target": 0
},
{
"text": "Last 2 dialogues bigil ku vecha mathri oru feel....",
"target": 4
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['Mixed_feelings', 'Negative', 'Positive', 'not-Tamil', 'unknown_state'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 12593 |
| valid | 3151 |
| dmontaner/autotrain-data-test1 | [
"task_categories:text-classification",
"language:en",
"region:us"
]
| 2023-01-16T13:01:30+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2023-01-16T13:03:19+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #language-English #region-us
| AutoTrain Dataset for project: test1
====================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project test1.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
| [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
|
285509a3b132668cf7911ceafbe6c48ed6ecf4bb | # Dataset Card for "bert_dataset_202203"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nthngdy/bert_dataset_202203 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:en",
"license:apache-2.0",
"language-modeling",
"masked-language-modeling",
"region:us"
]
| 2023-01-16T14:40:52+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation", "fill-mask"], "pretty_name": "BERT Dataset (BookCorpus + Wikipedia 03/2022)", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24635440616, "num_examples": 146707688}], "download_size": 14651841592, "dataset_size": 24635440616}, "tags": ["language-modeling", "masked-language-modeling"]} | 2023-01-17T10:10:06+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #task_categories-fill-mask #language-English #license-apache-2.0 #language-modeling #masked-language-modeling #region-us
| # Dataset Card for "bert_dataset_202203"
More Information needed | [
"# Dataset Card for \"bert_dataset_202203\"\n\nMore Information needed"
]
| [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #language-English #license-apache-2.0 #language-modeling #masked-language-modeling #region-us \n",
"# Dataset Card for \"bert_dataset_202203\"\n\nMore Information needed"
]
|
07cc4a29341ef26e8614ae1139847f4d4888727d |
# Dataset Card for KorFin-ABSA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
KorFin-ASC is an extension of KorFin-ABSA, comprising 8,818 samples annotated with (aspect, polarity) pairs.
The samples were collected from [KLUE-TC](https://klue-benchmark.com/tasks/66/overview/description) and
analyst reports from [Naver Finance](https://finance.naver.com).
Annotation of the dataset is described in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Aspect-Based Sentiment Classification
### Languages
Korean
## Dataset Structure
### Data Instances
Each instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).
```
{
    "title": "LGU+ 1분기 영업익 1천706억원…마케팅 비용 감소",
    "aspect": "LG U+",
    "sentiment": "NEUTRAL",
    "url": "https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008363739",
    "annotator_id": "A_01",
    "Type": "single"
}
```
### Data Fields
* title: the news title / sentence containing the aspect
* aspect: the target entity the sentiment is expressed toward
* sentiment: the polarity label (POSITIVE/NEGATIVE/NEUTRAL)
* url: the source URL of the sentence
* annotator_id: the identifier of the annotator
* Type: the annotation type (e.g., single)
### Data Splits
The dataset currently does not contain standard data splits.
## Additional Information
You can download the data via:
```python
from datasets import load_dataset
dataset = load_dataset("amphora/KorFin-ASC")
```
Please find more information about the code and how the data was collected in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
The best-performing model on this dataset can be found at [link](https://huggingface.co/amphora/KorFinASC-XLM-RoBERTa).
### Licensing Information
KorFin-ASC is licensed under the terms of [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite this data using:
```
@article{son2023removing,
title={Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance},
author={Son, Guijin and Lee, Hanwool and Kang, Nahyeon and Hahm, Moonjeong},
journal={arXiv preprint arXiv:2301.03136},
year={2023}
}
```
### Contributions
Thanks to [@Albertmade](https://github.com/h-albert-lee), [@amphora](https://github.com/guijinSON) for making this dataset. | amphora/korfin-asc | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:klue",
"language:ko",
"license:cc-by-sa-4.0",
"sentiment analysis",
"aspect based sentiment analysis",
"finance",
"arxiv:2301.03136",
"region:us"
]
| 2023-01-16T14:53:48+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ko"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["klue"], "task_categories": ["text-classification"], "task_ids": ["topic-classification", "sentiment-classification"], "pretty_name": "KorFin-ABSA", "tags": ["sentiment analysis", "aspect based sentiment analysis", "finance"]} | 2023-01-16T15:26:46+00:00 | [
"2301.03136"
]
| [
"ko"
]
| TAGS
#task_categories-text-classification #task_ids-topic-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-klue #language-Korean #license-cc-by-sa-4.0 #sentiment analysis #aspect based sentiment analysis #finance #arxiv-2301.03136 #region-us
|
# Dataset Card for KorFin-ABSA
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
### Dataset Summary
The KorFin-ASC is an extension of KorFin-ABSA including 8818 samples with (aspect, polarity) pairs annotated.
The samples were collected from KLUE-TC and
analyst reports from Naver Finance.
Annotation of the dataset is described in the paper Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance.
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Aspect-Based Sentiment Classification
### Languages
Korean
## Dataset Structure
### Data Instances
Each instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).
### Data Fields
* title:
* aspect:
* sentiment:
* url:
* annotator_id:
* url:
### Data Splits
The dataset currently does not contain standard data splits.
## Additional Information
You can download the data via:
Please find more information about the code and how the data was collected in the paper Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance.
The best-performing model on this dataset can be found at link.
### Licensing Information
KorFin-ASC is licensed under the terms of the cc-by-sa-4.0
Please cite this data using:
### Contributions
Thanks to @Albertmade, @amphora for making this dataset. | [
"# Dataset Card for KorFin-ABSA",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description",
"### Dataset Summary\n\nThe KorFin-ASC is an extension of KorFin-ABSA including 8818 samples with (aspect, polarity) pairs annotated. \nThe samples were collected from KLUE-TC and \nanalyst reports from Naver Finance. \nAnnotation of the dataset is described in the paper Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance.",
"### Supported Tasks and Leaderboards\n\nThis dataset supports the following tasks:\n\n* Aspect-Based Sentiment Classification",
"### Languages\n\nKorean",
"## Dataset Structure",
"### Data Instances\n\nEach instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).",
"### Data Fields\n\n* title: \n* aspect: \n* sentiment: \n* url: \n* annotator_id: \n* url:",
"### Data Splits\n\nThe dataset currently does not contain standard data splits.",
"## Additional Information\n\nYou can download the data via:\n \nPlease find more information about the code and how the data was collected in the paper Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance.\nThe best-performing model on this dataset can be found at link.",
"### Licensing Information\n\nKorFin-ASC is licensed under the terms of the cc-by-sa-4.0\n\n\n\nPlease cite this data using:",
"### Contributions\n\nThanks to @Albertmade, @amphora for making this dataset."
]
| [
"TAGS\n#task_categories-text-classification #task_ids-topic-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-klue #language-Korean #license-cc-by-sa-4.0 #sentiment analysis #aspect based sentiment analysis #finance #arxiv-2301.03136 #region-us \n",
"# Dataset Card for KorFin-ABSA",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description",
"### Dataset Summary\n\nThe KorFin-ASC is an extension of KorFin-ABSA including 8818 samples with (aspect, polarity) pairs annotated. \nThe samples were collected from KLUE-TC and \nanalyst reports from Naver Finance. \nAnnotation of the dataset is described in the paper Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance.",
"### Supported Tasks and Leaderboards\n\nThis dataset supports the following tasks:\n\n* Aspect-Based Sentiment Classification",
"### Languages\n\nKorean",
"## Dataset Structure",
"### Data Instances\n\nEach instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).",
"### Data Fields\n\n* title: \n* aspect: \n* sentiment: \n* url: \n* annotator_id: \n* url:",
"### Data Splits\n\nThe dataset currently does not contain standard data splits.",
"## Additional Information\n\nYou can download the data via:\n \nPlease find more information about the code and how the data was collected in the paper Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance.\nThe best-performing model on this dataset can be found at link.",
"### Licensing Information\n\nKorFin-ASC is licensed under the terms of the cc-by-sa-4.0\n\n\n\nPlease cite this data using:",
"### Contributions\n\nThanks to @Albertmade, @amphora for making this dataset."
]
|
533dfaba159e53e81e76224437091c1d667e6872 | # Dataset Card for "raven_properties"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jkwiatkowski/raven_properties | [
"region:us"
]
| 2023-01-16T15:34:05+00:00 | {"dataset_info": {"features": [{"name": "Description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7234653, "num_examples": 42000}, {"name": "val", "num_bytes": 2410755, "num_examples": 14000}, {"name": "test", "num_bytes": 2412471, "num_examples": 14000}], "download_size": 997897, "dataset_size": 12057879}} | 2023-01-16T16:56:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "raven_properties"
More Information needed | [
"# Dataset Card for \"raven_properties\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"raven_properties\"\n\nMore Information needed"
]
|
cdef4ff24bb27140d0e4e239ad795904343194ad | # Dataset Card for "twitter_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | StoneSeller/twitter_raw | [
"region:us"
]
| 2023-01-16T17:36:36+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "Q", "dtype": "string"}, {"name": "A", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2149019, "num_examples": 10607}, {"name": "valid", "num_bytes": 478895, "num_examples": 2652}], "download_size": 1304645, "dataset_size": 2627914}} | 2023-01-16T17:36:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "twitter_raw"
More Information needed | [
"# Dataset Card for \"twitter_raw\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"twitter_raw\"\n\nMore Information needed"
]
|
73f3245a756410b696934d4f048787174ce5a715 | # Open Images Dataset V7 (test set)
Original paper: [A Step Toward More Inclusive People Annotations for Fairness](https://arxiv.org/abs/2105.02317)
Homepage: https://storage.googleapis.com/openimages/web/extended.html
Bibtex:
```
@inproceedings{miap_aies,
title = {A Step Toward More Inclusive People Annotations for Fairness},
author = {Candice Schumann and Susanna Ricco and Utsav Prabhu and Vittorio Ferrari and Caroline Rebecca Pantofaru},
booktitle = {Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES)},
year = {2021}
}
``` | nlphuji/open_images_dataset_v7 | [
"arxiv:2105.02317",
"region:us"
]
| 2023-01-16T18:20:56+00:00 | {} | 2023-01-17T11:49:56+00:00 | [
"2105.02317"
]
| []
| TAGS
#arxiv-2105.02317 #region-us
| # Open Images Dataset V7 (test set)
Original paper: A Step Toward More Inclusive People Annotations for Fairness
Homepage: URL
Bibtex:
| [
"# Open Images Dataset V7 (test set)\n\nOriginal paper: A Step Toward More Inclusive People Annotations for Fairness\n\nHomepage: URL\n\nBibtex:"
]
| [
"TAGS\n#arxiv-2105.02317 #region-us \n",
"# Open Images Dataset V7 (test set)\n\nOriginal paper: A Step Toward More Inclusive People Annotations for Fairness\n\nHomepage: URL\n\nBibtex:"
]
|
727d7f6446526483efcd7ca677ea795f36b8942d | # Dollar Street (test set)
Original paper: [The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World](https://openreview.net/forum?id=qnfYsave0U4)
Homepage: https://www.kaggle.com/datasets/mlcommons/the-dollar-street-dataset
Bibtex:
```
@inproceedings{
rojas2022the,
title={The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World},
author={William A Gaviria Rojas and Sudnya Diamos and Keertan Ranjan Kini and David Kanter and Vijay Janapa Reddi and Cody Coleman},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=qnfYsave0U4}
}
``` | nlphuji/dollar_street_test | [
"region:us"
]
| 2023-01-16T19:12:34+00:00 | {} | 2023-01-17T21:05:24+00:00 | []
| []
| TAGS
#region-us
| # Dollar Street (test set)
Original paper: The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World
Homepage: URL
Bibtex:
| [
"# Dollar Street (test set)\n\nOriginal paper: The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World\n\nHomepage: URL\n\nBibtex:"
]
| [
"TAGS\n#region-us \n",
"# Dollar Street (test set)\n\nOriginal paper: The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World\n\nHomepage: URL\n\nBibtex:"
]
|
c84603c049571d37f3d9a48772f12083ab41ac95 | # FairFace (val set)
Original paper: [Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation](https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf)
Homepage: https://github.com/joojs/fairface
Bibtex:
```
@inproceedings{karkkainenfairface,
title={FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation},
author={Karkkainen, Kimmo and Joo, Jungseock},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
year={2021},
pages={1548--1558}
}
``` | nlphuji/fairface_val_padding_125 | [
"region:us"
]
| 2023-01-16T19:50:46+00:00 | {} | 2023-01-18T22:59:22+00:00 | []
| []
| TAGS
#region-us
| # FairFace (val set)
Original paper: Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation
Homepage: URL
Bibtex:
| [
"# FairFace (val set)\n\nOriginal paper: Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation\n\nHomepage: URL\n\nBibtex:"
]
| [
"TAGS\n#region-us \n",
"# FairFace (val set)\n\nOriginal paper: Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation\n\nHomepage: URL\n\nBibtex:"
]
|
799c189759a6c6eff6cf0840a002181fc54aaa47 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | alikanakar/dreambooth-hackathon-images | [
"region:us"
]
| 2023-01-16T20:00:05+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 13484975.0, "num_examples": 20}], "download_size": 0, "dataset_size": 13484975.0}} | 2023-01-16T20:17:49+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dreambooth-hackathon-images"
More Information needed | [
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
]
|
030dcb9ec61c436299b1df10d90ae1cbe1d1b401 |
<div align="center">
<img width="640" alt="keremberke/indoor-scene-classification" src="https://huggingface.co/datasets/keremberke/indoor-scene-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['meeting_room', 'cloister', 'stairscase', 'restaurant', 'hairsalon', 'children_room', 'dining_room', 'lobby', 'museum', 'laundromat', 'computerroom', 'grocerystore', 'hospitalroom', 'buffet', 'office', 'warehouse', 'garage', 'bookstore', 'florist', 'locker_room', 'inside_bus', 'subway', 'fastfood_restaurant', 'auditorium', 'studiomusic', 'airport_inside', 'pantry', 'restaurant_kitchen', 'casino', 'movietheater', 'kitchen', 'waitingroom', 'artstudio', 'toystore', 'kindergarden', 'trainstation', 'bedroom', 'mall', 'corridor', 'bar', 'classroom', 'shoeshop', 'dentaloffice', 'videostore', 'laboratorywet', 'tv_studio', 'church_inside', 'operating_room', 'jewelleryshop', 'bathroom', 'clothingstore', 'closet', 'winecellar', 'livingroom', 'nursery', 'gameroom', 'inside_subway', 'deli', 'bakery', 'library', 'prisoncell', 'gym', 'concert_hall', 'greenhouse', 'elevator', 'poolinside', 'bowling']
```
### Number of Images
```json
{'train': 10885, 'test': 1558, 'valid': 3128}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/indoor-scene-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5](https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5?ref=roboflow2huggingface)
### Citation
```
```
### License
MIT
### Dataset Summary
This dataset was exported via roboflow.com on October 24, 2022 at 4:09 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 15571 images.
Indoor-scenes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| keremberke/indoor-scene-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Retail",
"Pest Control",
"Benchmark",
"region:us"
]
| 2023-01-16T20:56:17+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface", "Retail", "Pest Control", "Benchmark"]} | 2023-01-16T21:04:18+00:00 | []
| []
| TAGS
#task_categories-image-classification #roboflow #roboflow2huggingface #Retail #Pest Control #Benchmark #region-us
|
<div align="center">
<img width="640" alt="keremberke/indoor-scene-classification" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
MIT
### Dataset Summary
This dataset was exported via URL on October 24, 2022 at 4:09 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 15571 images.
Indoor-scenes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nMIT",
"### Dataset Summary\nThis dataset was exported via URL on October 24, 2022 at 4:09 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 15571 images.\nIndoor-scenes are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 416x416 (Stretch)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-image-classification #roboflow #roboflow2huggingface #Retail #Pest Control #Benchmark #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nMIT",
"### Dataset Summary\nThis dataset was exported via URL on October 24, 2022 at 4:09 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 15571 images.\nIndoor-scenes are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 416x416 (Stretch)\n\nNo image augmentation techniques were applied."
]
|
a549a284a1fefdc761ad459ee85f50c5ad8138ef |
<div align="center">
<img width="640" alt="keremberke/german-traffic-sign-detection" src="https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['animals', 'construction', 'cycles crossing', 'danger', 'no entry', 'pedestrian crossing', 'school crossing', 'snow', 'stop', 'bend', 'bend left', 'bend right', 'give way', 'go left', 'go left or straight', 'go right', 'go right or straight', 'go straight', 'keep left', 'keep right', 'no overtaking', 'no overtaking -trucks-', 'no traffic both ways', 'no trucks', 'priority at next intersection', 'priority road', 'restriction ends', 'restriction ends -overtaking -trucks--', 'restriction ends -overtaking-', 'restriction ends 80', 'road narrows', 'roundabout', 'slippery road', 'speed limit 100', 'speed limit 120', 'speed limit 20', 'speed limit 30', 'speed limit 50', 'speed limit 60', 'speed limit 70', 'speed limit 80', 'traffic signal', 'uneven road']
```
### Number of Images
```json
{'test': 54, 'valid': 108, 'train': 383}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/german-traffic-sign-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1](https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ gtsdb---german-traffic-sign-detection-benchmark_dataset,
title = { GTSDB - German Traffic Sign Detection Benchmark Dataset },
type = { Open Source Dataset },
author = { Mohamed Traore },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark } },
url = { https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jul },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:04 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 545 images.
Signs are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| keremberke/german-traffic-sign-detection | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Self Driving",
"Transportation",
"region:us"
]
| 2023-01-16T21:04:50+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Self Driving", "Transportation"]} | 2023-01-16T21:06:06+00:00 | []
| []
| TAGS
#task_categories-object-detection #roboflow #roboflow2huggingface #Self Driving #Transportation #region-us
|
<div align="center">
<img width="640" alt="keremberke/german-traffic-sign-detection" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on January 16, 2023 at 9:04 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 545 images.
Signs are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 16, 2023 at 9:04 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 545 images.\nSigns are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #Self Driving #Transportation #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 16, 2023 at 9:04 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 545 images.\nSigns are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n\nNo image augmentation techniques were applied."
]
|
9d6cd89e55db7fbc129449387b3da7debcf7b6c4 |
<div align="center">
<img width="640" alt="keremberke/satellite-building-segmentation" src="https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['building']
```
### Number of Images
```json
{'train': 6764, 'valid': 1934, 'test': 967}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/satellite-building-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1](https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ buildings-instance-segmentation_dataset,
title = { Buildings Instance Segmentation Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } },
url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:09 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 9665 images.
Buildings are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| keremberke/satellite-building-segmentation | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"Aerial",
"Logistics",
"Construction",
"Damage Risk",
"Other",
"region:us"
]
| 2023-01-16T21:09:30+00:00 | {"task_categories": ["image-segmentation"], "tags": ["roboflow", "roboflow2huggingface", "Aerial", "Logistics", "Construction", "Damage Risk", "Other"]} | 2023-01-18T09:41:34+00:00 | []
| []
| TAGS
#task_categories-image-segmentation #roboflow #roboflow2huggingface #Aerial #Logistics #Construction #Damage Risk #Other #region-us
|
<div align="center">
<img width="640" alt="keremberke/satellite-building-segmentation" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on January 16, 2023 at 9:09 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 9665 images.
Buildings are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 16, 2023 at 9:09 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 9665 images.\nBuildings are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-image-segmentation #roboflow #roboflow2huggingface #Aerial #Logistics #Construction #Damage Risk #Other #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 16, 2023 at 9:09 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 9665 images.\nBuildings are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n\nNo image augmentation techniques were applied."
]
|
694c61350faf9a6622586d6cf50f45e1631862dc |
<div align="center">
<img width="640" alt="keremberke/hard-hat-detection" src="https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['hardhat', 'no-hardhat']
```
### Number of Images
```json
{'test': 2001, 'train': 13782, 'valid': 3962}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/hard-hat-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2](https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2?ref=roboflow2huggingface)
### Citation
```
@misc{ hard-hats-fhbh5_dataset,
title = { Hard Hats Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 } },
url = { https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:17 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 19745 images.
Hardhat-ppe are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| keremberke/hard-hat-detection | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Construction",
"Utilities",
"Manufacturing",
"Logistics",
"Ppe",
"Assembly Line",
"Warehouse",
"Factory",
"Damage Risk",
"region:us"
]
| 2023-01-16T21:22:25+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Construction", "Utilities", "Manufacturing", "Logistics", "Ppe", "Assembly Line", "Warehouse", "Factory", "Construction", "Logistics", "Utilities", "Damage Risk", "Ppe"]} | 2023-01-16T21:39:24+00:00 | []
| []
| TAGS
#task_categories-object-detection #roboflow #roboflow2huggingface #Construction #Utilities #Manufacturing #Logistics #Ppe #Assembly Line #Warehouse #Factory #Damage Risk #region-us
|
<div align="center">
<img width="640" alt="keremberke/hard-hat-detection" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on January 16, 2023 at 9:17 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 19745 images.
Hardhat-ppe are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 16, 2023 at 9:17 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 19745 images.\nHardhat-ppe are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #Construction #Utilities #Manufacturing #Logistics #Ppe #Assembly Line #Warehouse #Factory #Damage Risk #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 16, 2023 at 9:17 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 19745 images.\nHardhat-ppe are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
]
|
08e0f818471ccb445da08d847a20d3a654e0d50e |
<div align="center">
<img width="640" alt="keremberke/excavator-detector" src="https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['excavators', 'dump truck', 'wheel loader']
```
### Number of Images
```json
{'test': 144, 'train': 2245, 'valid': 267}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/excavator-detector", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3](https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ excavators-cwlh0_dataset,
title = { Excavators Dataset },
type = { Open Source Dataset },
author = { Mohamed Sabek },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 } },
url = { https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on April 4, 2022 at 8:56 AM GMT
It includes 2656 images.
Excavator are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| keremberke/excavator-detector | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Manufacturing",
"Construction",
"Machinery",
"region:us"
]
| 2023-01-16T21:40:15+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Manufacturing", "Construction", "Machinery"]} | 2023-01-16T21:43:21+00:00 | []
| []
| TAGS
#task_categories-object-detection #roboflow #roboflow2huggingface #Manufacturing #Construction #Machinery #region-us
|
<div align="center">
<img width="640" alt="keremberke/excavator-detector" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on April 4, 2022 at 8:56 AM GMT
It includes 2656 images.
Excavator are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on April 4, 2022 at 8:56 AM GMT\n\nIt includes 2656 images.\nExcavator are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #Manufacturing #Construction #Machinery #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on April 4, 2022 at 8:56 AM GMT\n\nIt includes 2656 images.\nExcavator are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
]
|
5c714d8eb8a75d11a4c984ced60c3aa10cc89cb8 | # Dataset Card for "nilc-masked-punctuation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tiagoblima/nilc-masked-punctuation | [
"region:us"
]
| 2023-01-17T00:09:50+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "reference", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 376331, "num_examples": 1236}], "download_size": 228368, "dataset_size": 376331}} | 2023-01-17T00:11:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "nilc-masked-punctuation"
More Information needed | [
"# Dataset Card for \"nilc-masked-punctuation\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"nilc-masked-punctuation\"\n\nMore Information needed"
]
|
9a8114051c0c4015bc8fe02801a047ea7d461fc3 | # Dataset Card for "pmcoa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gabrielaltay/pmcoa | [
"region:us"
]
| 2023-01-17T00:15:57+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "pmid", "dtype": "string"}, {"name": "accession_id", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "last_updated", "dtype": "string"}, {"name": "retracted", "dtype": "string"}, {"name": "citation", "dtype": "string"}, {"name": "decoded_as", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "year", "dtype": "int32"}, {"name": "doi", "dtype": "string"}, {"name": "oa_subset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 206274456770, "num_examples": 4935779}, {"name": "validation", "num_bytes": 4046140044, "num_examples": 87794}], "download_size": 111297924087, "dataset_size": 210320596814}} | 2023-01-17T01:13:20+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "pmcoa"
More Information needed | [
"# Dataset Card for \"pmcoa\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"pmcoa\"\n\nMore Information needed"
]
|
960448f73503112d4226baeb8eb41d3fb5ae2506 |
## Dataset Description
- **Repository:** https://reasonwithpal.com/
- **Paper:** [PaL: Program-Aided Language Model](https://arxiv.org/abs/2211.10435)
### Dataset Summary
This is a harder version of the GSM8K math reasoning dataset (https://huggingface.co/datasets/gsm8k).
We construct it by replacing the numbers in the GSM8K questions with larger, less common numbers.
### Supported Tasks and Leaderboards
This dataset is used to evaluate math reasoning.
### Languages
English - Numbers
## Dataset Structure
```python
dataset = load_dataset("reasoning-machines/gsm-hard")
DatasetDict({
train: Dataset({
features: ['input', 'code', 'target'],
num_rows: 1319
})
})
```
### Data Fields
train/dev/test:
- input: The question
- code: The corresponding code solution to the question
- target: The answer
### Citation Information
```
@article{gao2022pal,
title={PAL: Program-aided Language Models},
author={Gao, Luyu and Madaan, Aman and Zhou, Shuyan and Alon, Uri and Liu, Pengfei and Yang, Yiming and Callan, Jamie and Neubig, Graham},
journal={arXiv preprint arXiv:2211.10435},
year={2022}
}
``` | reasoning-machines/gsm-hard | [
"task_categories:text2text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:gsm8k (https://huggingface.co/datasets/gsm8k)",
"language:code",
"license:mit",
"math_reasoning",
"symbolic_reasoning",
"arxiv:2211.10435",
"region:us"
]
| 2023-01-17T03:05:50+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["gsm8k (https://huggingface.co/datasets/gsm8k)"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "gsm-hard", "tags": ["math_reasoning", "symbolic_reasoning"]} | 2023-01-17T03:21:10+00:00 | [
"2211.10435"
]
| [
"code"
]
| TAGS
#task_categories-text2text-generation #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-gsm8k (https-//huggingface.co/datasets/gsm8k) #language-code #license-mit #math_reasoning #symbolic_reasoning #arxiv-2211.10435 #region-us
|
## Dataset Description
- Repository: URL
- Paper: PaL: Program-Aided Language Model
### Dataset Summary
This is the harder version of gsm8k math reasoning dataset (URL
We construct this dataset by replacing the numbers in the questions of GSM8K with larger numbers that are less common.
### Supported Tasks and Leaderboards
This dataset is used to evaluate math reasoning
### Languages
English - Numbers
## Dataset Structure
### Data Fields
train/dev/test:
- input: The question
- code: The corresponding code solution to the question
- target: The answer
| [
"## Dataset Description\n- Repository: URL\n- Paper: PaL: Program-Aided Language Model",
"### Dataset Summary\nThis is the harder version of gsm8k math reasoning dataset (URL\nWe construct this dataset by replacing the numbers in the questions of GSM8K with larger numbers that are less common.\n\u0001",
"### Supported Tasks and Leaderboards\nThis dataset is used to evaluate math reasoning",
"### Languages\nEnglish - Numbers",
"## Dataset Structure",
"### Data Fields\ntrain/dev/test:\n- input: The question\n- code: The corresponding code solution to the question\n- target: The answer"
]
| [
"TAGS\n#task_categories-text2text-generation #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-gsm8k (https-//huggingface.co/datasets/gsm8k) #language-code #license-mit #math_reasoning #symbolic_reasoning #arxiv-2211.10435 #region-us \n",
"## Dataset Description\n- Repository: URL\n- Paper: PaL: Program-Aided Language Model",
"### Dataset Summary\nThis is the harder version of gsm8k math reasoning dataset (URL\nWe construct this dataset by replacing the numbers in the questions of GSM8K with larger numbers that are less common.\n\u0001",
"### Supported Tasks and Leaderboards\nThis dataset is used to evaluate math reasoning",
"### Languages\nEnglish - Numbers",
"## Dataset Structure",
"### Data Fields\ntrain/dev/test:\n- input: The question\n- code: The corresponding code solution to the question\n- target: The answer"
]
|
4c6c8c51d5b175257930879e1354d7c1f88c3a53 |
# Quakeflow_NC
## Introduction
This dataset is part of the data (1970-2020) from [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/).
Cite the NCEDC and PhaseNet:
Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.
Acknowledge the NCEDC:
Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
```
Group: / len:16227
|- Group: /nc71111584 len:2
| |-* begin_time = 2020-01-02T07:01:19.620
| |-* depth_km = 3.69
| |-* end_time = 2020-01-02T07:03:19.620
| |-* event_id = nc71111584
| |-* event_time = 2020-01-02T07:01:48.240
| |-* event_time_index = 2862
| |-* latitude = 37.6545
| |-* longitude = -118.8798
| |-* magnitude = -0.15
| |-* magnitude_type = D
| |-* num_stations = 2
| |- Dataset: /nc71111584/NC.MCB..HH (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
| | | |-* distance_km = 1.9
| | | |-* dt_s = 0.01
| | | |-* elevation_m = 2391.0
| | | |-* emergence_angle = 159.0
| | | |-* event_id = ['nc71111584' 'nc71111584']
| | | |-* latitude = 37.6444
| | | |-* location =
| | | |-* longitude = -118.8968
| | | |-* network = NC
| | | |-* phase_index = [3000 3101]
| | | |-* phase_polarity = ['U' 'N']
| | | |-* phase_remark = ['IP' 'ES']
| | | |-* phase_score = [1 2]
| | | |-* phase_time = ['2020-01-02T07:01:49.620' '2020-01-02T07:01:50.630']
| | | |-* phase_type = ['P' 'S']
| | | |-* snr = [2.82143 3.055604 1.8412642]
| | | |-* station = MCB
| | | |-* unit = 1e-6m/s
| |- Dataset: /nc71111584/NC.MCB..HN (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
......
```
## How to use
### Requirements
- datasets
- h5py
- fsspec
- torch (for PyTorch)
### Usage
Import the necessary packages:
```python
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader
from datasets import load_dataset
```
We have 6 configurations for the dataset:
- "station"
- "event"
- "station_train"
- "event_train"
- "station_test"
- "event_test"
"station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations with no suffix are the full dataset, while the configurations with suffix "_train" and "_test" only have corresponding split of the full dataset. Train split contains data from 1970 to 2019, while test split contains data in 2020.
The sample of `station` is a dictionary with the following keys:
- `data`: the waveform with shape `(3, nt)`, the default time length is 8192
- `phase_pick`: the probability of the phase pick with shape `(3, nt)`, the first dimension is noise, P and S
- `event_location`: the event location with shape `(4,)`, including latitude, longitude, depth and time
- `station_location`: the station location with shape `(3,)`, including latitude, longitude and depth
The sample of `event` is a dictionary with the following keys:
- `data`: the waveform with shape `(n_station, 3, nt)`, the default time length is 8192
- `phase_pick`: the probability of the phase pick with shape `(n_station, 3, nt)`, the first dimension is noise, P and S
- `event_center`: the probability of the event time with shape `(n_station, feature_nt)`, default feature time length is 512
- `event_location`: the space-time coordinates of the event with shape `(n_station, 4, feature_nt)`
- `event_location_mask`: the probability mask of the event time with shape `(n_station, feature_nt)`
- `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude and depth
The default configuration is `station_test`. You can specify the configuration by argument `name`. For example:
```python
# load dataset
# ATTENTION: streaming (IterableDataset) is hard to support on top of HDF5,
# so we recommend loading the dataset directly and converting it to an iterable afterwards.
# The dataset is very large, so the first load can take a while.
# to load "station_test" with the test split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test")
# or
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# to load "event" with train split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event", split="train")
```
#### Usage for `station`
Then you can convert the dataset into a PyTorch-style iterable dataset and view the first sample:
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")

# for the PyTorch DataLoader, we need to divide the dataset into several shards
num_workers = 4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)

# Automatic tensor conversion via .with_format("torch") is not implemented yet
# for iterable datasets, so we add the formatting manually here.
# If you use the map-style dataset directly, quakeflow_nc.with_format("torch") is enough.
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})

if not isinstance(quakeflow_nc, torch.utils.data.IterableDataset):
    raise TypeError("quakeflow_nc is not an IterableDataset")

# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break

dataloader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers)

for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
```
#### Usage for `event`
Then you can convert the dataset into a PyTorch-style dataset and view the first sample (don't forget to reorder the keys):
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event_test", split="test")

# for the PyTorch DataLoader, we need to divide the dataset into several shards
num_workers = 4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})

if not isinstance(quakeflow_nc, torch.utils.data.IterableDataset):
    raise TypeError("quakeflow_nc is not an IterableDataset")

# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break

# batch_size=1 because the number of stations differs from event to event
dataloader = DataLoader(quakeflow_nc, batch_size=1, num_workers=num_workers)

for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
``` | AI4EPS/quakeflow_nc | [
"license:mit",
"doi:10.57967/hf/0716",
"region:us"
]
| 2023-01-17T06:40:21+00:00 | {"license": "mit"} | 2024-01-06T21:20:05+00:00 | []
| []
| TAGS
#license-mit #doi-10.57967/hf/0716 #region-us
|
# Quakeflow_NC
## Introduction
This dataset is part of the data (1970-2020) from the NCEDC (Northern California Earthquake Data Center) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at AI4EPS.
Cite the NCEDC and PhaseNet:
Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.
Acknowledge the NCEDC:
Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
## How to use
### Requirements
- datasets
- h5py
- fsspec
- torch (for PyTorch)
### Usage
Import the necessary packages:
We have 6 configurations for the dataset:
- "station"
- "event"
- "station_train"
- "event_train"
- "station_test"
- "event_test"
"station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations with no suffix are the full dataset, while the configurations with suffix "_train" and "_test" only have corresponding split of the full dataset. Train split contains data from 1970 to 2019, while test split contains data in 2020.
The sample of 'station' is a dictionary with the following keys:
- 'data': the waveform with shape '(3, nt)', the default time length is 8192
- 'phase_pick': the probability of the phase pick with shape '(3, nt)', the first dimension is noise, P and S
- 'event_location': the event location with shape '(4,)', including latitude, longitude, depth and time
- 'station_location': the station location with shape '(3,)', including latitude, longitude and depth
The sample of 'event' is a dictionary with the following keys:
- 'data': the waveform with shape '(n_station, 3, nt)', the default time length is 8192
- 'phase_pick': the probability of the phase pick with shape '(n_station, 3, nt)', the first dimension is noise, P and S
- 'event_center': the probability of the event time with shape '(n_station, feature_nt)', default feature time length is 512
- 'event_location': the space-time coordinates of the event with shape '(n_station, 4, feature_nt)'
- 'event_location_mask': the probability mask of the event time with shape '(n_station, feature_nt)'
- 'station_location': the space coordinates of the station with shape '(n_station, 3)', including latitude, longitude and depth
The default configuration is 'station_test'. You can select a configuration with the 'name' argument. For example:
#### Usage for 'station'
Then you can convert the dataset into a PyTorch-style iterable dataset and view the first sample:
#### Usage for 'event'
Then you can convert the dataset into a PyTorch-style dataset and view the first sample (don't forget to reorder the keys):
| [
"# Quakeflow_NC",
"## Introduction\nThis dataset is part of the data (1970-2020) from NCEDC (Northern California Earthquake Data Center) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at AI4EPS)\n\nCite the NCEDC and PhaseNet:\n\nZhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.\n\nNCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.\n\nAcknowledge the NCEDC:\n\nWaveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.",
"## How to use",
"### Requirements\n- datasets\n- h5py\n- fsspec\n- torch (for PyTorch)",
"### Usage\nImport the necessary packages:\n\nWe have 6 configurations for the dataset: \n- \"station\"\n- \"event\"\n- \"station_train\"\n- \"event_train\"\n- \"station_test\"\n- \"event_test\"\n\n\"station\" yields station-based samples one by one, while \"event\" yields event-based samples one by one. The configurations with no suffix are the full dataset, while the configurations with suffix \"_train\" and \"_test\" only have corresponding split of the full dataset. Train split contains data from 1970 to 2019, while test split contains data in 2020.\n\nThe sample of 'station' is a dictionary with the following keys:\n- 'data': the waveform with shape '(3, nt)', the default time length is 8192\n- 'phase_pick': the probability of the phase pick with shape '(3, nt)', the first dimension is noise, P and S\n- 'event_location': the event location with shape '(4,)', including latitude, longitude, depth and time\n- 'station_location': the station location with shape '(3,)', including latitude, longitude and depth\n\nThe sample of 'event' is a dictionary with the following keys:\n- 'data': the waveform with shape '(n_station, 3, nt)', the default time length is 8192\n- 'phase_pick': the probability of the phase pick with shape '(n_station, 3, nt)', the first dimension is noise, P and S\n- 'event_center': the probability of the event time with shape '(n_station, feature_nt)', default feature time length is 512\n- 'event_location': the space-time coordinates of the event with shape '(n_staion, 4, feature_nt)'\n- 'event_location_mask': the probability mask of the event time with shape '(n_station, feature_nt)'\n- 'station_location': the space coordinates of the station with shape '(n_station, 3)', including latitude, longitude and depth\n\nThe default configuration is 'station_test'. You can specify the configuration by argument 'name'. For example:",
"#### Usage for 'station'\nThen you can change the dataset into PyTorch format iterable dataset, and view the first sample:",
"#### Usage for 'event'\n\nThen you can change the dataset into PyTorch format dataset, and view the first sample (Don't forget to reorder the keys):"
]
| [
"TAGS\n#license-mit #doi-10.57967/hf/0716 #region-us \n",
"# Quakeflow_NC",
"## Introduction\nThis dataset is part of the data (1970-2020) from NCEDC (Northern California Earthquake Data Center) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at AI4EPS)\n\nCite the NCEDC and PhaseNet:\n\nZhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.\n\nNCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.\n\nAcknowledge the NCEDC:\n\nWaveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.",
"## How to use",
"### Requirements\n- datasets\n- h5py\n- fsspec\n- torch (for PyTorch)",
"### Usage\nImport the necessary packages:\n\nWe have 6 configurations for the dataset: \n- \"station\"\n- \"event\"\n- \"station_train\"\n- \"event_train\"\n- \"station_test\"\n- \"event_test\"\n\n\"station\" yields station-based samples one by one, while \"event\" yields event-based samples one by one. The configurations with no suffix are the full dataset, while the configurations with suffix \"_train\" and \"_test\" only have corresponding split of the full dataset. Train split contains data from 1970 to 2019, while test split contains data in 2020.\n\nThe sample of 'station' is a dictionary with the following keys:\n- 'data': the waveform with shape '(3, nt)', the default time length is 8192\n- 'phase_pick': the probability of the phase pick with shape '(3, nt)', the first dimension is noise, P and S\n- 'event_location': the event location with shape '(4,)', including latitude, longitude, depth and time\n- 'station_location': the station location with shape '(3,)', including latitude, longitude and depth\n\nThe sample of 'event' is a dictionary with the following keys:\n- 'data': the waveform with shape '(n_station, 3, nt)', the default time length is 8192\n- 'phase_pick': the probability of the phase pick with shape '(n_station, 3, nt)', the first dimension is noise, P and S\n- 'event_center': the probability of the event time with shape '(n_station, feature_nt)', default feature time length is 512\n- 'event_location': the space-time coordinates of the event with shape '(n_staion, 4, feature_nt)'\n- 'event_location_mask': the probability mask of the event time with shape '(n_station, feature_nt)'\n- 'station_location': the space coordinates of the station with shape '(n_station, 3)', including latitude, longitude and depth\n\nThe default configuration is 'station_test'. You can specify the configuration by argument 'name'. For example:",
"#### Usage for 'station'\nThen you can change the dataset into PyTorch format iterable dataset, and view the first sample:",
"#### Usage for 'event'\n\nThen you can change the dataset into PyTorch format dataset, and view the first sample (Don't forget to reorder the keys):"
]
|
72269a262d92a4461a3dc00cb2081783810a5def | # Dataset Card for "beautiful_interesting_spectacular_photo_Marilyn_Monroe_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_Marilyn_Monroe_25000 | [
"region:us"
]
| 2023-01-17T07:34:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 148583825.0, "num_examples": 265}], "download_size": 148582108, "dataset_size": 148583825.0}} | 2023-01-17T07:34:34+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_Marilyn_Monroe_25000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_Marilyn_Monroe_25000\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_Marilyn_Monroe_25000\"\n\nMore Information needed"
]
|
5656dbae459cf15b3a112d46bb6b5484cabcd2d2 |
# Dataset Card for DocLayNet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and an upper bound on achievable prediction accuracy with ML models
5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined as in this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
"doc_category": "financial_reports" // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
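As a quick sanity check, the COCO files can be read with plain `json`; a minimal sketch (the file path is an assumption based on the usual COCO export layout, so adjust it to where you unpacked the dataset):
```python
import json
from collections import Counter

# Minimal sketch: read one COCO annotation file and count pages per
# document category. "COCO/train.json" is an assumed path.
with open("COCO/train.json") as f:
    coco = json.load(f)

print(Counter(img["doc_category"] for img in coco["images"]))
print(len(coco["annotations"]), "bounding-box annotations")
```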
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
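With the `datasets` library, the splits can be loaded in one call (a minimal sketch, assuming the default configuration of the `ds4sd/DocLayNet` loader):
```python
from datasets import load_dataset

# Minimal sketch: load the dataset and print the size of each split.
doclaynet = load_dataset("ds4sd/DocLayNet")
for split_name, split in doclaynet.items():
    print(split_name, len(split))
```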
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used to train the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [[email protected]](mailto:[email protected]).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743--3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
| ds4sd/DocLayNet | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"license:other",
"layout-segmentation",
"COCO",
"document-understanding",
"PDF",
"region:us"
]
| 2023-01-17T07:51:59+00:00 | {"annotations_creators": ["crowdsourced"], "license": "other", "size_categories": ["10K<n<100K"], "task_categories": ["object-detection", "image-segmentation"], "task_ids": ["instance-segmentation"], "pretty_name": "DocLayNet", "tags": ["layout-segmentation", "COCO", "document-understanding", "PDF"]} | 2023-01-25T17:01:19+00:00 | []
| []
| TAGS
#task_categories-object-detection #task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-10K<n<100K #license-other #layout-segmentation #COCO #document-understanding #PDF #region-us
|
# Dataset Card for DocLayNet
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Annotations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact:
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and an upper bound on achievable prediction accuracy with ML models
5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square '1025 x 1025px'
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined as in this example
The 'doc_category' field uses one of the following constants:
### Data Splits
The dataset provides three splits
- 'train'
- 'val'
- 'test'
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used to train the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the Deep Search team at IBM Research.
You can contact us at deepsearch-core@URL.
Curators:
- Christoph Auer, @cau-git
- Michele Dolfi, @dolfim-ibm
- Ahmed Nassar, @nassarofficial
- Peter Staar, @PeterStaar-IBM
### Licensing Information
License: CDLA-Permissive-1.0
### Contributions
Thanks to @dolfim-ibm, @cau-git for adding this dataset.
| [
"# Dataset Card for DocLayNet",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.",
"### Supported Tasks and Leaderboards\n\nWe are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL",
"## Dataset Structure",
"### Data Fields\n\nDocLayNet provides four types of data assets:\n\n1. PNG images of all pages, resized to square '1025 x 1025px'\n2. Bounding-box annotations in COCO format for each PNG image\n3. Extra: Single-page PDF files matching each PNG image\n4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content\n\nThe COCO image record are defined like this example\n\n\n\nThe 'doc_category' field uses one of the following constants:",
"### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'",
"## Dataset Creation",
"### Annotations",
"#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.",
"#### Who are the annotators?\n\nAnnotations are crowdsourced.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM",
"### Licensing Information\n\nLicense: CDLA-Permissive-1.0",
"### Contributions\n\nThanks to @dolfim-ibm, @cau-git for adding this dataset."
]
| [
"TAGS\n#task_categories-object-detection #task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-10K<n<100K #license-other #layout-segmentation #COCO #document-understanding #PDF #region-us \n",
"# Dataset Card for DocLayNet",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.",
"### Supported Tasks and Leaderboards\n\nWe are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL",
"## Dataset Structure",
"### Data Fields\n\nDocLayNet provides four types of data assets:\n\n1. PNG images of all pages, resized to square '1025 x 1025px'\n2. Bounding-box annotations in COCO format for each PNG image\n3. Extra: Single-page PDF files matching each PNG image\n4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content\n\nThe COCO image record are defined like this example\n\n\n\nThe 'doc_category' field uses one of the following constants:",
"### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'",
"## Dataset Creation",
"### Annotations",
"#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.",
"#### Who are the annotators?\n\nAnnotations are crowdsourced.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM",
"### Licensing Information\n\nLicense: CDLA-Permissive-1.0",
"### Contributions\n\nThanks to @dolfim-ibm, @cau-git for adding this dataset."
]
|
4f3d26f6e6fe500cc866c471056265d9c4a5ad5e |
# Dataset Card for "super_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 55.66 MB
- **Size of the generated dataset:** 238.01 MB
- **Total amount of disk used:** 293.67 MB
### Dataset Summary
SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard.
BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
passage and a yes/no question about the passage. The questions are provided anonymously and
unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
Wikipedia article containing the answer. Following the original work, we evaluate with accuracy.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### axb
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.26 MB
An example of 'test' looks as follows.
```
```
#### axg
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.05 MB
- **Total amount of disk used:** 0.06 MB
An example of 'test' looks as follows.
```
```
#### boolq
- **Size of downloaded dataset files:** 3.93 MB
- **Size of the generated dataset:** 9.92 MB
- **Total amount of disk used:** 13.85 MB
An example of 'train' looks as follows.
```
```
#### cb
- **Size of downloaded dataset files:** 0.07 MB
- **Size of the generated dataset:** 0.19 MB
- **Total amount of disk used:** 0.27 MB
An example of 'train' looks as follows.
```
```
#### copa
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.16 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### axb
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### axg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### boolq
- `question`: a `string` feature.
- `passage`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `False` (0), `True` (1).
#### cb
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `contradiction` (1), `neutral` (2).
#### copa
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `choice1` (0), `choice2` (1).
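For instance, the `boolq` fields can be inspected directly with the canonical `super_glue` loader (a minimal sketch):
```python
from datasets import load_dataset

# Minimal sketch: load BoolQ and look at the first training example.
boolq = load_dataset("super_glue", "boolq", split="train")
example = boolq[0]
print(example["question"])
print(example["passage"][:100], "...")
print(boolq.features["label"].int2str(example["label"]))  # "False" or "True"
```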
### Data Splits
#### axb
| |test|
|---|---:|
|axb|1104|
#### axg
| |test|
|---|---:|
|axg| 356|
#### boolq
| |train|validation|test|
|-----|----:|---------:|---:|
|boolq| 9427| 3270|3245|
#### cb
| |train|validation|test|
|---|----:|---------:|---:|
|cb | 250| 56| 250|
#### copa
| |train|validation|test|
|----|----:|---------:|---:|
|copa| 400| 100| 500|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{clark2019boolq,
title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
booktitle={NAACL},
year={2019}
}
@article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
Note that each SuperGLUE dataset has its own citation. Please see the source to
get the correct citation for each contained dataset.
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. | Xieyiyiyi/ceshi0119 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_ids:natural-language-inference",
"task_ids:word-sense-disambiguation",
"task_ids:coreference-resolution",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:unknown",
"superglue",
"NLU",
"natural language understanding",
"region:us"
]
| 2023-01-17T10:08:24+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other"], "task_categories": ["text-classification", "token-classification", "question-answering"], "task_ids": ["natural-language-inference", "word-sense-disambiguation", "coreference-resolution", "extractive-qa"], "pretty_name": "SuperGLUE", "tags": ["superglue", "NLU", "natural language understanding"], "dataset_info": [{"config_name": "boolq", "features": [{"name": "question", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 2107997, "num_examples": 3245}, {"name": "train", "num_bytes": 6179206, "num_examples": 9427}, {"name": "validation", "num_bytes": 2118505, "num_examples": 3270}], "download_size": 4118001, "dataset_size": 10405708}, {"config_name": "cb", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "contradiction", "2": "neutral"}}}}], "splits": [{"name": "test", "num_bytes": 93660, "num_examples": 250}, {"name": "train", "num_bytes": 87218, "num_examples": 250}, {"name": "validation", "num_bytes": 21894, "num_examples": 56}], "download_size": 75482, "dataset_size": 202772}, {"config_name": "copa", "features": [{"name": "premise", "dtype": "string"}, {"name": "choice1", "dtype": "string"}, {"name": "choice2", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "choice1", "1": "choice2"}}}}], "splits": [{"name": "test", "num_bytes": 60303, "num_examples": 500}, {"name": "train", "num_bytes": 49599, "num_examples": 400}, {"name": "validation", "num_bytes": 12586, "num_examples": 100}], "download_size": 43986, "dataset_size": 122488}, {"config_name": "multirc", "features": [{"name": "paragraph", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "idx", "struct": [{"name": "paragraph", "dtype": "int32"}, {"name": "question", "dtype": "int32"}, {"name": "answer", "dtype": "int32"}]}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 14996451, "num_examples": 9693}, {"name": "train", "num_bytes": 46213579, "num_examples": 27243}, {"name": "validation", "num_bytes": 7758918, "num_examples": 4848}], "download_size": 1116225, "dataset_size": 68968948}, {"config_name": "record", "features": [{"name": "passage", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "entities", "sequence": "string"}, {"name": "entity_spans", "sequence": [{"name": "text", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "end", "dtype": "int32"}]}, {"name": "answers", "sequence": "string"}, {"name": "idx", "struct": [{"name": "passage", "dtype": "int32"}, {"name": "query", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 179232052, "num_examples": 100730}, {"name": "validation", "num_bytes": 17479084, "num_examples": 10000}, {"name": "test", "num_bytes": 17200575, "num_examples": 10000}], "download_size": 51757880, "dataset_size": 213911711}, 
{"config_name": "rte", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}], "splits": [{"name": "test", "num_bytes": 975799, "num_examples": 3000}, {"name": "train", "num_bytes": 848745, "num_examples": 2490}, {"name": "validation", "num_bytes": 90899, "num_examples": 277}], "download_size": 750920, "dataset_size": 1915443}, {"config_name": "wic", "features": [{"name": "word", "dtype": "string"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "start1", "dtype": "int32"}, {"name": "start2", "dtype": "int32"}, {"name": "end1", "dtype": "int32"}, {"name": "end2", "dtype": "int32"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 180593, "num_examples": 1400}, {"name": "train", "num_bytes": 665183, "num_examples": 5428}, {"name": "validation", "num_bytes": 82623, "num_examples": 638}], "download_size": 396213, "dataset_size": 928399}, {"config_name": "wsc", "features": [{"name": "text", "dtype": "string"}, {"name": "span1_index", "dtype": "int32"}, {"name": "span2_index", "dtype": "int32"}, {"name": "span1_text", "dtype": "string"}, {"name": "span2_text", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 31572, "num_examples": 146}, {"name": "train", "num_bytes": 89883, "num_examples": 554}, {"name": "validation", "num_bytes": 21637, "num_examples": 104}], "download_size": 32751, "dataset_size": 143092}, {"config_name": "wsc.fixed", "features": [{"name": "text", "dtype": "string"}, {"name": "span1_index", "dtype": "int32"}, {"name": "span2_index", "dtype": "int32"}, {"name": "span1_text", "dtype": "string"}, {"name": "span2_text", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 31568, "num_examples": 146}, {"name": "train", "num_bytes": 89883, "num_examples": 554}, {"name": "validation", "num_bytes": 21637, "num_examples": 104}], "download_size": 32751, "dataset_size": 143088}, {"config_name": "axb", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}], "splits": [{"name": "test", "num_bytes": 238392, "num_examples": 1104}], "download_size": 33950, "dataset_size": 238392}, {"config_name": "axg", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}], "splits": [{"name": "test", "num_bytes": 53581, "num_examples": 356}], "download_size": 10413, "dataset_size": 53581}]} | 2024-01-29T12:47:23+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #task_categories-token-classification #task_categories-question-answering #task_ids-natural-language-inference #task_ids-word-sense-disambiguation #task_ids-coreference-resolution #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other #language-English #license-unknown #superglue #NLU #natural language understanding #region-us
| Dataset Card for "super\_glue"
==============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper:
* Point of Contact:
* Size of downloaded dataset files: 55.66 MB
* Size of the generated dataset: 238.01 MB
* Total amount of disk used: 293.67 MB
### Dataset Summary
SuperGLUE (URL) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard.
BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
passage and a yes/no question about the passage. The questions are provided anonymously and
unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
Wikipedia article containing the answer. Following the original work, we evaluate with accuracy.
### Supported Tasks and Leaderboards
### Languages
Dataset Structure
-----------------
### Data Instances
#### axb
* Size of downloaded dataset files: 0.03 MB
* Size of the generated dataset: 0.23 MB
* Total amount of disk used: 0.26 MB
An example of 'test' looks as follows.
#### axg
* Size of downloaded dataset files: 0.01 MB
* Size of the generated dataset: 0.05 MB
* Total amount of disk used: 0.06 MB
An example of 'test' looks as follows.
#### boolq
* Size of downloaded dataset files: 3.93 MB
* Size of the generated dataset: 9.92 MB
* Total amount of disk used: 13.85 MB
An example of 'train' looks as follows.
#### cb
* Size of downloaded dataset files: 0.07 MB
* Size of the generated dataset: 0.19 MB
* Total amount of disk used: 0.27 MB
An example of 'train' looks as follows.
#### copa
* Size of downloaded dataset files: 0.04 MB
* Size of the generated dataset: 0.12 MB
* Total amount of disk used: 0.16 MB
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
#### axb
* 'sentence1': a 'string' feature.
* 'sentence2': a 'string' feature.
* 'idx': a 'int32' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'not\_entailment' (1).
#### axg
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'idx': a 'int32' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'not\_entailment' (1).
#### boolq
* 'question': a 'string' feature.
* 'passage': a 'string' feature.
* 'idx': a 'int32' feature.
* 'label': a classification label, with possible values including 'False' (0), 'True' (1).
#### cb
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'idx': a 'int32' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'contradiction' (1), 'neutral' (2).
#### copa
* 'premise': a 'string' feature.
* 'choice1': a 'string' feature.
* 'choice2': a 'string' feature.
* 'question': a 'string' feature.
* 'idx': a 'int32' feature.
* 'label': a classification label, with possible values including 'choice1' (0), 'choice2' (1).
### Data Splits
#### axb
#### axg
#### boolq
#### cb
#### copa
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset.
| [
"### Dataset Summary\n\n\nSuperGLUE (URL is a new benchmark styled after\nGLUE with a new set of more difficult language understanding tasks, improved\nresources, and a new public leaderboard.\n\n\nBoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short\npassage and a yes/no question about the passage. The questions are provided anonymously and\nunsolicited by users of the Google search engine, and afterwards paired with a paragraph from a\nWikipedia article containing the answer. Following the original work, we evaluate with accuracy.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### axb\n\n\n* Size of downloaded dataset files: 0.03 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.26 MB\n\n\nAn example of 'test' looks as follows.",
"#### axg\n\n\n* Size of downloaded dataset files: 0.01 MB\n* Size of the generated dataset: 0.05 MB\n* Total amount of disk used: 0.06 MB\n\n\nAn example of 'test' looks as follows.",
"#### boolq\n\n\n* Size of downloaded dataset files: 3.93 MB\n* Size of the generated dataset: 9.92 MB\n* Total amount of disk used: 13.85 MB\n\n\nAn example of 'train' looks as follows.",
"#### cb\n\n\n* Size of downloaded dataset files: 0.07 MB\n* Size of the generated dataset: 0.19 MB\n* Total amount of disk used: 0.27 MB\n\n\nAn example of 'train' looks as follows.",
"#### copa\n\n\n* Size of downloaded dataset files: 0.04 MB\n* Size of the generated dataset: 0.12 MB\n* Total amount of disk used: 0.16 MB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### axb\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'not\\_entailment' (1).",
"#### axg\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'not\\_entailment' (1).",
"#### boolq\n\n\n* 'question': a 'string' feature.\n* 'passage': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'False' (0), 'True' (1).",
"#### cb\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'contradiction' (1), 'neutral' (2).",
"#### copa\n\n\n* 'premise': a 'string' feature.\n* 'choice1': a 'string' feature.\n* 'choice2': a 'string' feature.\n* 'question': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'choice1' (0), 'choice2' (1).",
"### Data Splits",
"#### axb",
"#### axg",
"#### boolq",
"#### cb",
"#### copa\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset."
]
| [
"TAGS\n#task_categories-text-classification #task_categories-token-classification #task_categories-question-answering #task_ids-natural-language-inference #task_ids-word-sense-disambiguation #task_ids-coreference-resolution #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other #language-English #license-unknown #superglue #NLU #natural language understanding #region-us \n",
"### Dataset Summary\n\n\nSuperGLUE (URL is a new benchmark styled after\nGLUE with a new set of more difficult language understanding tasks, improved\nresources, and a new public leaderboard.\n\n\nBoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short\npassage and a yes/no question about the passage. The questions are provided anonymously and\nunsolicited by users of the Google search engine, and afterwards paired with a paragraph from a\nWikipedia article containing the answer. Following the original work, we evaluate with accuracy.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### axb\n\n\n* Size of downloaded dataset files: 0.03 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.26 MB\n\n\nAn example of 'test' looks as follows.",
"#### axg\n\n\n* Size of downloaded dataset files: 0.01 MB\n* Size of the generated dataset: 0.05 MB\n* Total amount of disk used: 0.06 MB\n\n\nAn example of 'test' looks as follows.",
"#### boolq\n\n\n* Size of downloaded dataset files: 3.93 MB\n* Size of the generated dataset: 9.92 MB\n* Total amount of disk used: 13.85 MB\n\n\nAn example of 'train' looks as follows.",
"#### cb\n\n\n* Size of downloaded dataset files: 0.07 MB\n* Size of the generated dataset: 0.19 MB\n* Total amount of disk used: 0.27 MB\n\n\nAn example of 'train' looks as follows.",
"#### copa\n\n\n* Size of downloaded dataset files: 0.04 MB\n* Size of the generated dataset: 0.12 MB\n* Total amount of disk used: 0.16 MB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### axb\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'not\\_entailment' (1).",
"#### axg\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'not\\_entailment' (1).",
"#### boolq\n\n\n* 'question': a 'string' feature.\n* 'passage': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'False' (0), 'True' (1).",
"#### cb\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'contradiction' (1), 'neutral' (2).",
"#### copa\n\n\n* 'premise': a 'string' feature.\n* 'choice1': a 'string' feature.\n* 'choice2': a 'string' feature.\n* 'question': a 'string' feature.\n* 'idx': a 'int32' feature.\n* 'label': a classification label, with possible values including 'choice1' (0), 'choice2' (1).",
"### Data Splits",
"#### axb",
"#### axg",
"#### boolq",
"#### cb",
"#### copa\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset."
]
|
af6e95118fce8a71f8d7eebf279c403b1b9b8876 | # Dataset Card for "praang-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ihanif/praang-images | [
"region:us"
]
| 2023-01-17T11:27:10+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 7404618.0, "num_examples": 23}], "download_size": 5551951, "dataset_size": 7404618.0}} | 2023-01-17T11:27:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "praang-images"
More Information needed | [
"# Dataset Card for \"praang-images\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"praang-images\"\n\nMore Information needed"
]
|
7d02c47036a5eddb519c924eb937f3ccaceb5743 |
# Dataset Card for "football-dataset"
Dummy dataset of 6 football players with a caption that can be used to fine-tune any Image Captioning model. | ybelkada/football-dataset | [
"region:us"
]
| 2023-01-17T11:46:21+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2073622.0, "num_examples": 6}], "download_size": 2074835, "dataset_size": 2073622.0}} | 2023-01-17T11:47:41+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for "football-dataset"
Dummy dataset of 6 football players with a caption that can be used to fine-tune any Image Captioning model. | [
"# Dataset Card for \"football-dataset\"\n\nDummy dataset of 6 football players with a caption that can be used to fine-tune any Image Captioning model."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"football-dataset\"\n\nDummy dataset of 6 football players with a caption that can be used to fine-tune any Image Captioning model."
]
|
81d5ce0c103d9fe05879b50949ed41c40b96de69 |

# Dataset Card for CommitPack
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigcode-project/octopack
- **Paper:** [OctoPack: Instruction Tuning Code Large Language Models](https://arxiv.org/abs/2308.07124)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> CommitPack is a 4TB dataset of commits scraped from GitHub repositories that are permissively licensed.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigcode-project/octopack).
- **Languages:** 350
- **OctoPack:**
<table>
<tr>
<th>Data</th>
<td><a href=https://huggingface.co/datasets/bigcode/commitpack>CommitPack</a></td>
<td>4TB of GitHub commits across 350 programming languages</td>
</tr>
<tr>
<th></th>
<td><a href=https://huggingface.co/datasets/bigcode/commitpackft>CommitPackFT</a></td>
<td>Filtered version of CommitPack for high-quality commit messages that resemble instructions</td>
</tr>
<tr>
<th>Model</th>
<td><a href=https://huggingface.co/bigcode/octocoder>OctoCoder</a></td>
<td>StarCoder (16B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th></th>
<td><a href=https://huggingface.co/bigcode/octogeex>OctoGeeX</a></td>
<td>CodeGeeX2 (6B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th>Evaluation</th>
<td><a href=https://huggingface.co/datasets/bigcode/humanevalpack>HumanEvalPack</a></td>
<td>Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages</td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
'commit': '0c17311f7fd511f5dae8f8e4acc2dce1a2de3cf5',
'old_file': 'main.py',
'new_file': 'main.py',
'old_contents': "import numpy as np\nimport matplotlib.pyplot as plt\n\n# generate sample data\nx_data = np.linspace(-5, 5, 20)\ny_data = np.random.normal(0.0, 1.0, x_data.size)\n\nplt.plot(x_data, y_data, 'o')\nplt.show()\n",
'new_contents': "import math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# generate sample data\nx_data = np.linspace(-math.pi, math.pi, 30)\ny_data = np.sin(x_data) + np.random.normal(0.0, 0.1, x_data.size)\n\nplt.plot(x_data, y_data, 'o')\nplt.show()\n\n",
'subject': 'Change to sin() function with noise',
'message': 'Change to sin() function with noise\n',
'lang': 'Python',
'license': 'mit',
'repos': 'MorganR/basic-gaussian-process',
'returncode': 0,
'stderr': ''
}
```
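Because each record stores both file versions, the edit a commit makes can be recovered with a standard diff. A minimal sketch (`record` is a hypothetical name for any dict shaped like the example above):

```python
import difflib

def commit_diff(record: dict) -> str:
    """Render a record's before/after contents as a unified diff."""
    old = record["old_contents"].splitlines(keepends=True)
    new = record["new_contents"].splitlines(keepends=True)
    return "".join(difflib.unified_diff(
        old, new, fromfile=record["old_file"], tofile=record["new_file"]))
```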
### Data Fields
The data fields are the same among all splits:
- `commit`: unique commit id
- `old_file`: name of the file before the commit
- `new_file`: name of the file after the commit
- `old_contents`: contents of the file before the commit
- `new_contents`: contents of the file after the commit
- `subject`: subject of the commit (this is used for all experiments in the paper)
- `message`: message of the commit (commonly the same as the subject)
- `lang`: programming language
- `license`: license of the repository the code stems from, one of `['mit', 'artistic-2.0', 'isc', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'unknown', 'apache-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-2.1', 'bsd-2-clause']`
- `repos`: name of the repository the code stems from (if multiple, they are comma separated)
- `returncode`: if applicable, the error code during scraping (0 = no error)
- `stderr`: if applicable, the error that occurred during scraping (empty = no error)
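Given the dataset's size, streaming a single language subset is usually the practical way to explore it. A minimal sketch with the `datasets` library (the `"python"` config name is an assumption based on the `lang` field; check the repository for the exact subset names):

```python
from datasets import load_dataset

# Stream one language subset so the full 4TB corpus is never downloaded.
ds = load_dataset("bigcode/commitpack", "python", split="train", streaming=True)

for sample in ds.take(1):
    print(sample["commit"], "-", sample["subject"])
```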
### Data Splits
| Name | Megabytes | % of total | Samples | % of total |
| --- | --- | --- | --- | --- |
| total | 3709175.78 | 100.0% | 57700105 | 100.0% |
| json | 583293.816 | 15.7257% | 3495038 | 6.0572% |
| xml | 279208.676 | 7.5275% | 1923159 | 3.333% |
| text | 270662.596 | 7.2971% | 1389525 | 2.4082% |
| javascript | 262824.844 | 7.0858% | 5401937 | 9.3621% |
| objective-c++ | 239009.3 | 6.4437% | 32227 | 0.0559% |
| python | 234311.564 | 6.3171% | 6189601 | 10.7272% |
| c | 200876.804 | 5.4157% | 2779478 | 4.8171% |
| c++ | 186585.256 | 5.0304% | 2402294 | 4.1634% |
| markdown | 171849.952 | 4.6331% | 7645354 | 13.2502% |
| java | 127103.448 | 3.4267% | 3744377 | 6.4894% |
| html | 105305.284 | 2.839% | 2366841 | 4.102% |
| yaml | 100466.64 | 2.7086% | 2592787 | 4.4936% |
| go | 86444.624 | 2.3306% | 1183612 | 2.0513% |
| csv | 82946.192 | 2.2362% | 79268 | 0.1374% |
| php | 74961.64 | 2.021% | 2555419 | 4.4288% |
| jupyter-notebook | 66854.08 | 1.8024% | 94000 | 0.1629% |
| gettext-catalog | 62296.88 | 1.6795% | 168327 | 0.2917% |
| sql | 56802.764 | 1.5314% | 132772 | 0.2301% |
| unity3d-asset | 39535.008 | 1.0659% | 17867 | 0.031% |
| typescript | 39254.804 | 1.0583% | 572136 | 0.9916% |
| web-ontology-language | 36435.464 | 0.9823% | 7458 | 0.0129% |
| ruby | 35830.74 | 0.966% | 2928702 | 5.0757% |
| c# | 33669.652 | 0.9077% | 923157 | 1.5999% |
| nix | 33547.92 | 0.9045% | 221281 | 0.3835% |
| shell | 25109.952 | 0.677% | 1017977 | 1.7643% |
| perl | 21148.928 | 0.5702% | 374266 | 0.6486% |
| tex | 17471.108 | 0.471% | 89283 | 0.1547% |
| css | 16306.632 | 0.4396% | 548818 | 0.9512% |
| restructuredtext | 15613.888 | 0.421% | 494037 | 0.8562% |
| rust | 15011.296 | 0.4047% | 296214 | 0.5134% |
| groff | 12020.188 | 0.3241% | 32923 | 0.0571% |
| ini | 8375.164 | 0.2258% | 297100 | 0.5149% |
| scala | 8325.96 | 0.2245% | 316064 | 0.5478% |
| coffeescript | 6795.14 | 0.1832% | 292446 | 0.5068% |
| haskell | 6306.12 | 0.17% | 217325 | 0.3766% |
| swift | 5902.716 | 0.1591% | 319289 | 0.5534% |
| lua | 5763.12 | 0.1554% | 139091 | 0.2411% |
| svg | 5645.44 | 0.1522% | 27095 | 0.047% |
| gas | 5585.384 | 0.1506% | 15121 | 0.0262% |
| ocaml | 5355.4 | 0.1444% | 81360 | 0.141% |
| erlang | 5043.32 | 0.136% | 93685 | 0.1624% |
| makefile | 4238.512 | 0.1143% | 343379 | 0.5951% |
| asciidoc | 4138.588 | 0.1116% | 96671 | 0.1675% |
| emacs-lisp | 3988.652 | 0.1075% | 83228 | 0.1442% |
| scss | 3944.936 | 0.1064% | 288190 | 0.4995% |
| clojure | 3523.408 | 0.095% | 158674 | 0.275% |
| org | 3126.22 | 0.0843% | 30198 | 0.0523% |
| common-lisp | 2954.904 | 0.0797% | 74628 | 0.1293% |
| diff | 2586.048 | 0.0697% | 21021 | 0.0364% |
| groovy | 2569.14 | 0.0693% | 110057 | 0.1907% |
| html+erb | 2450.676 | 0.0661% | 225379 | 0.3906% |
| nesc | 2439.564 | 0.0658% | 473 | 0.0008% |
| dart | 2395.796 | 0.0646% | 56873 | 0.0986% |
| powershell | 2289.276 | 0.0617% | 55381 | 0.096% |
| f# | 2289.236 | 0.0617% | 66840 | 0.1158% |
| dm | 2223.144 | 0.0599% | 55584 | 0.0963% |
| kotlin | 2219.248 | 0.0598% | 124266 | 0.2154% |
| pascal | 2194.676 | 0.0592% | 42511 | 0.0737% |
| jsx | 2124.744 | 0.0573% | 139148 | 0.2412% |
| viml | 1948.208 | 0.0525% | 74062 | 0.1284% |
| actionscript | 1844.148 | 0.0497% | 28819 | 0.0499% |
| cython | 1736.588 | 0.0468% | 25927 | 0.0449% |
| turtle | 1698.948 | 0.0458% | 3882 | 0.0067% |
| less | 1616.564 | 0.0436% | 88634 | 0.1536% |
| mathematica | 1475.044 | 0.0398% | 925 | 0.0016% |
| xslt | 1441.456 | 0.0389% | 27956 | 0.0485% |
| scheme | 1249.244 | 0.0337% | 30546 | 0.0529% |
| perl6 | 1223.16 | 0.033% | 12167 | 0.0211% |
| edn | 1186.94 | 0.032% | 2289 | 0.004% |
| fortran | 1178.548 | 0.0318% | 13463 | 0.0233% |
| java-server-pages | 1173.072 | 0.0316% | 53574 | 0.0928% |
| standard-ml | 1133.476 | 0.0306% | 20097 | 0.0348% |
| cmake | 1132.068 | 0.0305% | 58446 | 0.1013% |
| json5 | 1108.2 | 0.0299% | 1827 | 0.0032% |
| vala | 1104.512 | 0.0298% | 14822 | 0.0257% |
| vue | 1093.8 | 0.0295% | 68967 | 0.1195% |
| freemarker | 1032.332 | 0.0278% | 36216 | 0.0628% |
| graphql | 1004.844 | 0.0271% | 2009 | 0.0035% |
| twig | 958.96 | 0.0259% | 39588 | 0.0686% |
| tcl | 869.832 | 0.0235% | 16407 | 0.0284% |
| pod | 859.016 | 0.0232% | 14922 | 0.0259% |
| dockerfile | 849.728 | 0.0229% | 259379 | 0.4495% |
| yacc | 845.704 | 0.0228% | 8230 | 0.0143% |
| postscript | 800.728 | 0.0216% | 903 | 0.0016% |
| racket | 796.64 | 0.0215% | 16615 | 0.0288% |
| eagle | 785.684 | 0.0212% | 2237 | 0.0039% |
| haxe | 772.896 | 0.0208% | 28447 | 0.0493% |
| julia | 752.068 | 0.0203% | 22695 | 0.0393% |
| handlebars | 740.816 | 0.02% | 49842 | 0.0864% |
| smarty | 720.944 | 0.0194% | 41065 | 0.0712% |
| visual-basic | 681.516 | 0.0184% | 10511 | 0.0182% |
| literate-haskell | 673.74 | 0.0182% | 10729 | 0.0186% |
| smalltalk | 665.892 | 0.018% | 11741 | 0.0203% |
| isabelle | 655.82 | 0.0177% | 8359 | 0.0145% |
| nimrod | 652.86 | 0.0176% | 12023 | 0.0208% |
| zig | 621.384 | 0.0168% | 4290 | 0.0074% |
| m4 | 603.584 | 0.0163% | 12465 | 0.0216% |
| max | 603.56 | 0.0163% | 2259 | 0.0039% |
| elixir | 558.116 | 0.015% | 35473 | 0.0615% |
| mako | 543.012 | 0.0146% | 8943 | 0.0155% |
| arduino | 534.176 | 0.0144% | 32350 | 0.0561% |
| jade | 531.4 | 0.0143% | 46993 | 0.0814% |
| haml | 502.012 | 0.0135% | 74792 | 0.1296% |
| elm | 481.968 | 0.013% | 18542 | 0.0321% |
| purebasic | 474.276 | 0.0128% | 36 | 0.0001% |
| coldfusion | 470.78 | 0.0127% | 9263 | 0.0161% |
| lean | 470.032 | 0.0127% | 7507 | 0.013% |
| r | 454.32 | 0.0122% | 12858 | 0.0223% |
| cuda | 437.668 | 0.0118% | 11450 | 0.0198% |
| textile | 425.116 | 0.0115% | 18491 | 0.032% |
| robotframework | 421.612 | 0.0114% | 9211 | 0.016% |
| abap | 409.62 | 0.011% | 1955 | 0.0034% |
| rdoc | 397.028 | 0.0107% | 38760 | 0.0672% |
| llvm | 382.2 | 0.0103% | 10727 | 0.0186% |
| ada | 380.7 | 0.0103% | 13258 | 0.023% |
| batchfile | 372.16 | 0.01% | 43674 | 0.0757% |
| qml | 361.452 | 0.0097% | 19360 | 0.0336% |
| jasmin | 359.82 | 0.0097% | 4782 | 0.0083% |
| assembly | 343.62 | 0.0093% | 8126 | 0.0141% |
| g-code | 334.964 | 0.009% | 3690 | 0.0064% |
| cucumber | 331.38 | 0.0089% | 26677 | 0.0462% |
| html+php | 323.348 | 0.0087% | 18381 | 0.0319% |
| kicad | 321.936 | 0.0087% | 759 | 0.0013% |
| api-blueprint | 317.852 | 0.0086% | 4765 | 0.0083% |
| eiffel | 311.48 | 0.0084% | 373 | 0.0006% |
| toml | 292.676 | 0.0079% | 63517 | 0.1101% |
| modelica | 284.616 | 0.0077% | 2611 | 0.0045% |
| bitbake | 277.576 | 0.0075% | 43239 | 0.0749% |
| lex | 275.96 | 0.0074% | 705 | 0.0012% |
| stylus | 273.056 | 0.0074% | 21967 | 0.0381% |
| protocol-buffer | 254.124 | 0.0069% | 9202 | 0.0159% |
| unknown | 252.228 | 0.0068% | 30570 | 0.053% |
| nit | 244.54 | 0.0066% | 4951 | 0.0086% |
| factor | 241.192 | 0.0065% | 15378 | 0.0267% |
| xs | 239.04 | 0.0064% | 3215 | 0.0056% |
| sass | 230.648 | 0.0062% | 23144 | 0.0401% |
| parrot-internal-representation | 230.196 | 0.0062% | 6231 | 0.0108% |
| html+django | 217.04 | 0.0059% | 10535 | 0.0183% |
| mediawiki | 214.324 | 0.0058% | 10188 | 0.0177% |
| logos | 212.296 | 0.0057% | 1733 | 0.003% |
| genshi | 209.3 | 0.0056% | 956 | 0.0017% |
| coldfusion-cfc | 208.164 | 0.0056% | 4410 | 0.0076% |
| xtend | 179.544 | 0.0048% | 7775 | 0.0135% |
| sqf | 168.656 | 0.0045% | 7778 | 0.0135% |
| vhdl | 155.948 | 0.0042% | 2185 | 0.0038% |
| antlr | 143.548 | 0.0039% | 3651 | 0.0063% |
| systemverilog | 140.192 | 0.0038% | 3944 | 0.0068% |
| hcl | 136.752 | 0.0037% | 13379 | 0.0232% |
| asp | 136.104 | 0.0037% | 4286 | 0.0074% |
| nsis | 129.124 | 0.0035% | 4048 | 0.007% |
| inform-7 | 120.188 | 0.0032% | 184 | 0.0003% |
| slim | 119.036 | 0.0032% | 18726 | 0.0325% |
| groovy-server-pages | 117.368 | 0.0032% | 6695 | 0.0116% |
| ceylon | 116.144 | 0.0031% | 7256 | 0.0126% |
| fish | 111.28 | 0.003% | 15351 | 0.0266% |
| processing | 108.58 | 0.0029% | 5912 | 0.0102% |
| component-pascal | 105.5 | 0.0028% | 43 | 0.0001% |
| lasso | 104.168 | 0.0028% | 67 | 0.0001% |
| glsl | 99.488 | 0.0027% | 9478 | 0.0164% |
| saltstack | 98.196 | 0.0026% | 12314 | 0.0213% |
| xbase | 94.424 | 0.0025% | 1670 | 0.0029% |
| autohotkey | 94.22 | 0.0025% | 1452 | 0.0025% |
| liquid | 93.792 | 0.0025% | 2651 | 0.0046% |
| purescript | 92.412 | 0.0025% | 5024 | 0.0087% |
| agda | 92.06 | 0.0025% | 4956 | 0.0086% |
| inno-setup | 91.36 | 0.0025% | 3014 | 0.0052% |
| oz | 90.476 | 0.0024% | 1551 | 0.0027% |
| chapel | 89.62 | 0.0024% | 26447 | 0.0458% |
| arc | 87.212 | 0.0024% | 758 | 0.0013% |
| opencl | 86.432 | 0.0023% | 2489 | 0.0043% |
| graphviz-dot | 85.804 | 0.0023% | 1525 | 0.0026% |
| pawn | 85.424 | 0.0023% | 580 | 0.001% |
| jsoniq | 75.152 | 0.002% | 1343 | 0.0023% |
| bluespec | 72.38 | 0.002% | 2500 | 0.0043% |
| smali | 71.38 | 0.0019% | 174 | 0.0003% |
| krl | 69.868 | 0.0019% | 1879 | 0.0033% |
| maple | 68.284 | 0.0018% | 1311 | 0.0023% |
| unrealscript | 67.668 | 0.0018% | 585 | 0.001% |
| ooc | 63.188 | 0.0017% | 3416 | 0.0059% |
| pure-data | 62.624 | 0.0017% | 603 | 0.001% |
| xquery | 61.956 | 0.0017% | 2237 | 0.0039% |
| digital-command-language | 59.644 | 0.0016% | 833 | 0.0014% |
| moonscript | 59.208 | 0.0016% | 1951 | 0.0034% |
| awk | 57.176 | 0.0015% | 2206 | 0.0038% |
| pike | 52.872 | 0.0014% | 1262 | 0.0022% |
| livescript | 51.228 | 0.0014% | 5194 | 0.009% |
| solidity | 50.856 | 0.0014% | 3689 | 0.0064% |
| monkey | 48.256 | 0.0013% | 1367 | 0.0024% |
| jsonld | 48.012 | 0.0013% | 462 | 0.0008% |
| zephir | 42.684 | 0.0012% | 1265 | 0.0022% |
| crystal | 41.924 | 0.0011% | 4217 | 0.0073% |
| rhtml | 41.02 | 0.0011% | 4551 | 0.0079% |
| stata | 40.684 | 0.0011% | 1344 | 0.0023% |
| idris | 39.896 | 0.0011% | 3025 | 0.0052% |
| raml | 39.388 | 0.0011% | 948 | 0.0016% |
| openscad | 37.732 | 0.001% | 2178 | 0.0038% |
| red | 35.26 | 0.001% | 1108 | 0.0019% |
| c2hs-haskell | 34.472 | 0.0009% | 1021 | 0.0018% |
| cycript | 33.96 | 0.0009% | 197 | 0.0003% |
| applescript | 33.512 | 0.0009% | 1304 | 0.0023% |
| mupad | 32.488 | 0.0009% | 178 | 0.0003% |
| literate-agda | 31.384 | 0.0008% | 567 | 0.001% |
| boo | 31.172 | 0.0008% | 26289 | 0.0456% |
| sourcepawn | 29.528 | 0.0008% | 717 | 0.0012% |
| qmake | 29.508 | 0.0008% | 3632 | 0.0063% |
| ragel-in-ruby-host | 28.296 | 0.0008% | 888 | 0.0015% |
| io | 27.952 | 0.0008% | 1247 | 0.0022% |
| desktop | 27.648 | 0.0007% | 5021 | 0.0087% |
| propeller-spin | 26.772 | 0.0007% | 625 | 0.0011% |
| thrift | 26.748 | 0.0007% | 1007 | 0.0017% |
| volt | 25.052 | 0.0007% | 1660 | 0.0029% |
| xproc | 24.212 | 0.0007% | 914 | 0.0016% |
| igor-pro | 23.748 | 0.0006% | 388 | 0.0007% |
| lolcode | 23.74 | 0.0006% | 24861 | 0.0431% |
| html+eex | 21.412 | 0.0006% | 2100 | 0.0036% |
| logtalk | 20.428 | 0.0006% | 1035 | 0.0018% |
| mirah | 20.104 | 0.0005% | 706 | 0.0012% |
| gnuplot | 19.676 | 0.0005% | 889 | 0.0015% |
| literate-coffeescript | 19.016 | 0.0005% | 1041 | 0.0018% |
| jflex | 18.608 | 0.0005% | 555 | 0.001% |
| emberscript | 18.392 | 0.0005% | 1024 | 0.0018% |
| cobol | 17.0 | 0.0005% | 24953 | 0.0432% |
| yang | 16.94 | 0.0005% | 597 | 0.001% |
| rebol | 16.468 | 0.0004% | 239 | 0.0004% |
| linker-script | 16.084 | 0.0004% | 1604 | 0.0028% |
| cartocss | 15.916 | 0.0004% | 555 | 0.001% |
| urweb | 13.068 | 0.0004% | 304 | 0.0005% |
| rmarkdown | 13.032 | 0.0004% | 750 | 0.0013% |
| darcs-patch | 13.008 | 0.0004% | 80 | 0.0001% |
| csound | 12.852 | 0.0003% | 229 | 0.0004% |
| squirrel | 12.844 | 0.0003% | 531 | 0.0009% |
| apl | 12.56 | 0.0003% | 586 | 0.001% |
| hlsl | 12.168 | 0.0003% | 1529 | 0.0026% |
| latte | 11.888 | 0.0003% | 1380 | 0.0024% |
| pony | 11.836 | 0.0003% | 624 | 0.0011% |
| ioke | 10.86 | 0.0003% | 373 | 0.0006% |
| hy | 10.512 | 0.0003% | 879 | 0.0015% |
| uno | 10.356 | 0.0003% | 628 | 0.0011% |
| pan | 10.336 | 0.0003% | 637 | 0.0011% |
| xojo | 10.308 | 0.0003% | 642 | 0.0011% |
| papyrus | 10.256 | 0.0003% | 130 | 0.0002% |
| stan | 10.252 | 0.0003% | 540 | 0.0009% |
| slash | 9.904 | 0.0003% | 640 | 0.0011% |
| supercollider | 9.796 | 0.0003% | 318 | 0.0006% |
| vcl | 9.456 | 0.0003% | 747 | 0.0013% |
| smt | 9.032 | 0.0002% | 117 | 0.0002% |
| glyph | 8.948 | 0.0002% | 7 | 0.0% |
| wisp | 8.736 | 0.0002% | 262 | 0.0005% |
| renpy | 8.3 | 0.0002% | 421 | 0.0007% |
| clips | 7.728 | 0.0002% | 450 | 0.0008% |
| dns-zone | 7.56 | 0.0002% | 54 | 0.0001% |
| sas | 7.536 | 0.0002% | 269 | 0.0005% |
| rouge | 7.196 | 0.0002% | 396 | 0.0007% |
| ec | 7.032 | 0.0002% | 94 | 0.0002% |
| dylan | 6.82 | 0.0002% | 280 | 0.0005% |
| tcsh | 6.524 | 0.0002% | 748 | 0.0013% |
| aspectj | 6.332 | 0.0002% | 451 | 0.0008% |
| netlogo | 6.304 | 0.0002% | 140 | 0.0002% |
| gap | 6.096 | 0.0002% | 46 | 0.0001% |
| fancy | 5.952 | 0.0002% | 675 | 0.0012% |
| coq | 5.744 | 0.0002% | 80 | 0.0001% |
| click | 5.74 | 0.0002% | 9 | 0.0% |
| capn-proto | 5.644 | 0.0002% | 330 | 0.0006% |
| flux | 5.572 | 0.0002% | 47 | 0.0001% |
| forth | 5.512 | 0.0001% | 265 | 0.0005% |
| ats | 5.424 | 0.0001% | 383 | 0.0007% |
| netlinx | 5.172 | 0.0001% | 144 | 0.0002% |
| clean | 5.068 | 0.0001% | 171 | 0.0003% |
| parrot-assembly | 4.664 | 0.0001% | 227 | 0.0004% |
| alloy | 4.644 | 0.0001% | 203 | 0.0004% |
| lfe | 4.576 | 0.0001% | 287 | 0.0005% |
| gdscript | 4.488 | 0.0001% | 460 | 0.0008% |
| augeas | 4.444 | 0.0001% | 395 | 0.0007% |
| sparql | 4.404 | 0.0001% | 1036 | 0.0018% |
| lilypond | 4.308 | 0.0001% | 265 | 0.0005% |
| scilab | 4.088 | 0.0001% | 375 | 0.0006% |
| autoit | 4.06 | 0.0001% | 279 | 0.0005% |
| myghty | 3.864 | 0.0001% | 105 | 0.0002% |
| blitzmax | 3.74 | 0.0001% | 220 | 0.0004% |
| creole | 3.416 | 0.0001% | 337 | 0.0006% |
| harbour | 3.336 | 0.0001% | 107 | 0.0002% |
| piglatin | 3.168 | 0.0001% | 513 | 0.0009% |
| opa | 3.164 | 0.0001% | 211 | 0.0004% |
| sage | 3.032 | 0.0001% | 414 | 0.0007% |
| ston | 2.848 | 0.0001% | 414 | 0.0007% |
| maxscript | 2.8 | 0.0001% | 47 | 0.0001% |
| lsl | 2.68 | 0.0001% | 74 | 0.0001% |
| gentoo-ebuild | 2.576 | 0.0001% | 601 | 0.001% |
| nu | 2.38 | 0.0001% | 170 | 0.0003% |
| bro | 2.34 | 0.0001% | 333 | 0.0006% |
| xc | 2.02 | 0.0001% | 88 | 0.0002% |
| j | 1.808 | 0.0% | 142 | 0.0002% |
| metal | 1.724 | 0.0% | 151 | 0.0003% |
| module-management-system | 1.544 | 0.0% | 91 | 0.0002% |
| webidl | 1.508 | 0.0% | 96 | 0.0002% |
| tea | 1.468 | 0.0% | 29 | 0.0001% |
| redcode | 1.272 | 0.0% | 149 | 0.0003% |
| shen | 1.2 | 0.0% | 71 | 0.0001% |
| pov-ray-sdl | 1.136 | 0.0% | 104 | 0.0002% |
| x10 | 1.008 | 0.0% | 33 | 0.0001% |
| brainfuck | 0.964 | 0.0% | 167 | 0.0003% |
| ninja | 0.952 | 0.0% | 187 | 0.0003% |
| golo | 0.896 | 0.0% | 115 | 0.0002% |
| webassembly | 0.86 | 0.0% | 83 | 0.0001% |
| self | 0.824 | 0.0% | 15 | 0.0% |
| labview | 0.808 | 0.0% | 61 | 0.0001% |
| octave | 0.804 | 0.0% | 12 | 0.0% |
| pogoscript | 0.804 | 0.0% | 74 | 0.0001% |
| d | 0.796 | 0.0% | 20 | 0.0% |
| http | 0.736 | 0.0% | 140 | 0.0002% |
| ecl | 0.664 | 0.0% | 48 | 0.0001% |
| chuck | 0.584 | 0.0% | 99 | 0.0002% |
| gosu | 0.524 | 0.0% | 60 | 0.0001% |
| parrot | 0.52 | 0.0% | 17 | 0.0% |
| opal | 0.472 | 0.0% | 69 | 0.0001% |
| objective-j | 0.456 | 0.0% | 37 | 0.0001% |
| kit | 0.412 | 0.0% | 48 | 0.0001% |
| gams | 0.376 | 0.0% | 18 | 0.0% |
| prolog | 0.276 | 0.0% | 35 | 0.0001% |
| clarion | 0.268 | 0.0% | 13 | 0.0% |
| mask | 0.252 | 0.0% | 37 | 0.0001% |
| brightscript | 0.244 | 0.0% | 28 | 0.0% |
| scaml | 0.184 | 0.0% | 31 | 0.0001% |
| matlab | 0.164 | 0.0% | 29 | 0.0001% |
| idl | 0.148 | 0.0% | 1 | 0.0% |
| ags-script | 0.124 | 0.0% | 31 | 0.0001% |
| lookml | 0.12 | 0.0% | 10 | 0.0% |
| apacheconf | 0.108 | 0.0% | 59 | 0.0001% |
| oxygene | 0.104 | 0.0% | 9 | 0.0% |
| txl | 0.096 | 0.0% | 3 | 0.0% |
| grammatical-framework | 0.088 | 0.0% | 39 | 0.0001% |
| renderscript | 0.064 | 0.0% | 54 | 0.0001% |
| mtml | 0.052 | 0.0% | 13 | 0.0% |
| unified-parallel-c | 0.052 | 0.0% | 6 | 0.0% |
| dogescript | 0.04 | 0.0% | 10 | 0.0% |
| gentoo-eclass | 0.04 | 0.0% | 6 | 0.0% |
| zimpl | 0.04 | 0.0% | 7 | 0.0% |
| irc-log | 0.036 | 0.0% | 9 | 0.0% |
| fantom | 0.028 | 0.0% | 11 | 0.0% |
| numpy | 0.028 | 0.0% | 1 | 0.0% |
| cirru | 0.024 | 0.0% | 4 | 0.0% |
| xpages | 0.024 | 0.0% | 7 | 0.0% |
| nginx | 0.02 | 0.0% | 6 | 0.0% |
| objdump | 0.02 | 0.0% | 1 | 0.0% |
| python-traceback | 0.02 | 0.0% | 10 | 0.0% |
| realbasic | 0.012 | 0.0% | 1 | 0.0% |
| befunge | 0.008 | 0.0% | 2 | 0.0% |
| bison | 0.008 | 0.0% | 1 | 0.0% |
| m | 0.008 | 0.0% | 1 | 0.0% |
| omgrofl | 0.008 | 0.0% | 1 | 0.0% |
## Additional Information
### Licensing Information
Each sample comes from a code repository with a permissive license. The license is provided by the `license` field for each sample.
### Citation Information
```bibtex
@article{muennighoff2023octopack,
title={OctoPack: Instruction Tuning Code Large Language Models},
author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre},
journal={arXiv preprint arXiv:2308.07124},
year={2023}
}
```
| bigcode/commitpack | [
"language:code",
"license:mit",
"arxiv:2308.07124",
"region:us"
]
| 2023-01-17T11:53:28+00:00 | {"language": ["code"], "license": "mit", "pretty_name": "CommitPack"} | 2023-08-20T06:13:13+00:00 | [
"2308.07124"
]
| [
"code"
]
| TAGS
#language-code #license-mit #arxiv-2308.07124 #region-us
| !Octopack
Dataset Card for CommitPack
===========================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
* Additional Information
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Repository: URL
* Paper: OctoPack: Instruction Tuning Code Large Language Models
* Point of Contact: Niklas Muennighoff
### Dataset Summary
>
> CommitPack is a 4TB dataset of commits scraped from GitHub repositories that are permissively licensed.
>
>
>
* Creation: The dataset can be recreated using instructions available here.
* Languages: 350
* OctoPack:
Dataset Structure
-----------------
### Data Instances
An example looks as follows:
### Data Fields
The data fields are the same among all splits:
* 'commit': unique commit id
* 'old\_file': name of the file before the commit
* 'new\_file': name of the file after the commit
* 'old\_contents': contents of the file before the commit
* 'new\_contents': contents of the file after the commit
* 'subject': subject of the commit (this is used for all experiments in the paper)
* 'message': message of the commit (commonly the same as the subject)
* 'lang': programming language
* 'license': license of the repository the code stems from, one of '['mit', 'artistic-2.0', 'isc', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'unknown', 'apache-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-2.1', 'bsd-2-clause']'
* 'repos': name of the repository the code stems from (if multiple, they are comma separated)
* 'returncode': if applicable, the error code during scraping (0 = no error)
* 'stderr': if applicable, the error that occurred during scraping (empty = no error)
### Data Splits
Additional Information
----------------------
### Licensing Information
Each sample comes from a code repository with a permissive license. The license is provided by the 'license' field for each sample.
| [
"### Dataset Summary\n\n\n\n> \n> CommitPack is is a 4TB dataset of commits scraped from GitHub repositories that are permissively licensed.\n> \n> \n> \n\n\n* Creation: The dataset can be recreated using instructions available here.\n* Languages: 350\n* OctoPack:\n\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example looks as follows:",
"### Data Fields\n\n\nThe data fields are the same among all splits:\n\n\n* 'commit': unique commit id\n* 'old\\_file': name of the file before the commit\n* 'new\\_file': name of the file after the commit\n* 'old\\_contents': contents of the file before the commit\n* 'new\\_contents': contents of the file after the commit\n* 'subject': subject of the commit (this is used for all experiments in the paper)\n* 'message': message of the commit (commonly the same as the subject)\n* 'lang': programming language\n* 'license': license of the repository the code stems from, one of '['mit', 'artistic-2.0', 'isc', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'unknown', 'apache-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-2.1', 'bsd-2-clause']'\n* 'repos': name of the the repository the code stems from (if multiple, they are comma separated)\n* 'returncode': if applicable errorcode during scraping (0 = no error)\n* 'stderr': if applicable the error that occured during scraping (empty = no error)",
"### Data Splits\n\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nEach sample comes from a code repository with a permissive license. The license is provided by the 'license' field for each sample."
]
| [
"TAGS\n#language-code #license-mit #arxiv-2308.07124 #region-us \n",
"### Dataset Summary\n\n\n\n> \n> CommitPack is is a 4TB dataset of commits scraped from GitHub repositories that are permissively licensed.\n> \n> \n> \n\n\n* Creation: The dataset can be recreated using instructions available here.\n* Languages: 350\n* OctoPack:\n\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example looks as follows:",
"### Data Fields\n\n\nThe data fields are the same among all splits:\n\n\n* 'commit': unique commit id\n* 'old\\_file': name of the file before the commit\n* 'new\\_file': name of the file after the commit\n* 'old\\_contents': contents of the file before the commit\n* 'new\\_contents': contents of the file after the commit\n* 'subject': subject of the commit (this is used for all experiments in the paper)\n* 'message': message of the commit (commonly the same as the subject)\n* 'lang': programming language\n* 'license': license of the repository the code stems from, one of '['mit', 'artistic-2.0', 'isc', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'unknown', 'apache-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-2.1', 'bsd-2-clause']'\n* 'repos': name of the the repository the code stems from (if multiple, they are comma separated)\n* 'returncode': if applicable errorcode during scraping (0 = no error)\n* 'stderr': if applicable the error that occured during scraping (empty = no error)",
"### Data Splits\n\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nEach sample comes from a code repository with a permissive license. The license is provided by the 'license' field for each sample."
]
|
16d2d61d2e3989a492ce1bc2aa74d541f3b5f0f6 |
# Dataset Card for LIFD Seismic Data
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LIFD DataSets homepage](https://github.com/cemac/LIFD_ML_Datasets)
- **Repository:** [LIFD GitHub Repo](https://github.com/cemac/LIFD_ML_Datasets)
- **Point of Contact:** [*coming soon*]()
### Dataset Summary
A description of the dataset:
### Supported Tasks and Leaderboards
*coming soon - Kaggle links?*
### Data Fields
SAC files
## Dataset Creation
All seismic data were downloaded through the IRIS Wilber 3 system (https://ds.iris.edu/wilber3/) or IRIS Web Services (https://service.iris.edu/), including the following seismic networks: (1) the AZ (ANZA; UC San Diego, 1982); (2) the TA (Transportable Array; IRIS, 2003); (3) the US (USNSN, Albuquerque, 1990); (4) the IU (GSN; Albuquerque, 1988).
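As an illustration of the retrieval path described above, waveforms can be requested from IRIS Web Services programmatically. A hedged sketch using the third-party ObsPy client (the network/station/channel and time values are placeholders, not drawn from this dataset):

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Query the IRIS FDSN web service for a short waveform window.
client = Client("IRIS")
t0 = UTCDateTime("2019-07-06T03:19:53")  # placeholder event time
st = client.get_waveforms(network="IU", station="ANMO", location="00",
                          channel="BHZ", starttime=t0, endtime=t0 + 600)
st.write("example.sac", format="SAC")  # SAC, matching this dataset's format
```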
### Source Data
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
| cemachelen/LIFD_Seismic_Data | [
"task_categories:feature-extraction",
"task_categories:image-to-image",
"task_categories:time-series-forecasting",
"task_categories:object-detection",
"task_categories:unconditional-image-generation",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
]
| 2023-01-17T12:59:25+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["feature-extraction", "image-to-image", "time-series-forecasting", "object-detection", "unconditional-image-generation"], "task_ids": ["multivariate-time-series-forecasting"], "pretty_name": "LIFD Seismic Data", "tags": []} | 2023-01-19T14:32:45+00:00 | []
| [
"en"
]
| TAGS
#task_categories-feature-extraction #task_categories-image-to-image #task_categories-time-series-forecasting #task_categories-object-detection #task_categories-unconditional-image-generation #task_ids-multivariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #language-English #license-mit #region-us
|
# Dataset Card for LIFD Seismic Data
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Dataset Structure
- Data Fields
- Dataset Creation
- Source Data
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: LIFD DataSets homepage
- Repository: LIFD GitHub Repo (URL)
- Point of Contact: [*coming soon*]()
### Dataset Summary
A description of the dataset:
### Supported Tasks and Leaderboards
*coming soon - Kaggle links?*
### Data Fields
SAC files
## Dataset Creation
All seismic data were downloaded through the IRIS Wilber 3 system (URL or IRIS Web Services (URL including the following seismic networks: (1) the AZ (ANZA; UC San Diego, 1982); (2) the TA (Transportable Array; IRIS, 2003); (3) the US (USNSN, Albuquerque, 1990); (4) the IU (GSN; Albuquerque, 1988).
### Source Data
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for LFID Seismic Data",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n- Dataset Creation\n - Source Data\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: LIFD DataSets homepage\n- Repository: LIFD GitHub Repo](URL\n- Point of Contact: [*coming soon*]()",
"### Dataset Summary\n\nA description of the dataset:",
"### Supported Tasks and Leaderboards\n\n*coming soon - Kaggle links?*",
"### Data Fields\n\nSAC files",
"## Dataset Creation\n\nAll seismic data were downloaded through the IRIS Wilber 3 system (URL or IRIS Web Services (URL including the following seismic networks: (1) the AZ (ANZA; UC San Diego, 1982); (2) the TA (Transportable Array; IRIS, 2003); (3) the US (USNSN,ย Albuquerque, 1990); (4) the IU (GSN;ย Albuquerque, 1988).",
"### Source Data",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#task_categories-feature-extraction #task_categories-image-to-image #task_categories-time-series-forecasting #task_categories-object-detection #task_categories-unconditional-image-generation #task_ids-multivariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #language-English #license-mit #region-us \n",
"# Dataset Card for LFID Seismic Data",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n- Dataset Creation\n - Source Data\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: LIFD DataSets homepage\n- Repository: LIFD GitHub Repo](URL\n- Point of Contact: [*coming soon*]()",
"### Dataset Summary\n\nA description of the dataset:",
"### Supported Tasks and Leaderboards\n\n*coming soon - Kaggle links?*",
"### Data Fields\n\nSAC files",
"## Dataset Creation\n\nAll seismic data were downloaded through the IRIS Wilber 3 system (URL or IRIS Web Services (URL including the following seismic networks: (1) the AZ (ANZA; UC San Diego, 1982); (2) the TA (Transportable Array; IRIS, 2003); (3) the US (USNSN,ย Albuquerque, 1990); (4) the IU (GSN;ย Albuquerque, 1988).",
"### Source Data",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
|
83d060a6e2eb69b6c89369676ef3a88bcb23a4ff | The API I used to get the calorie values may be inaccurate. | breadlicker45/Calorie-dataset | [
"license:other",
"region:us"
]
| 2023-01-17T13:15:22+00:00 | {"license": "other"} | 2023-02-10T22:28:47+00:00 | []
| []
| TAGS
#license-other #region-us
| The API I used to get the calorie values may be inaccurate. | []
| [
"TAGS\n#license-other #region-us \n"
]
|
04669dcb51c15513cdc808ff7920b25be05781d1 |
# Ekman Taxonomy of KOTE (Korean Online That-gul Emotions) datasets
I mapped the 44 emotion types in the KOTE dataset to a 7-category Ekman taxonomy (Disgust, Anger, Fear, Sadness, Surprise, Joy, + No Emotion).
For the mapping, I referred to the clustering results in the KOTE paper (https://arxiv.org/pdf/2205.05300.pdf).
The distance between each emotion and each Ekman basic emotion (Disgust, Anger, Fear, Sadness, Surprise, Joy, + No Emotion) was calculated, and each emotion was mapped to the nearest basic emotion.
# Emotion Grouping
Disgust: fed up, shock, disgust, contempt
Anger: anger, irritation, dissatisfaction, preposterous
Fear: pathetic, distrust, disappointment, embarrassment, shame, guilt, gessepany, fear, anxiety
Sadness: compassion, sadness, sorrow, despair, exhaustion, laziness, reluctant, boredom
No Emotion: no emotion, arrogance, resolute
Surprise: realization, surprise, respect, Interest
Joy: Expectancy, Welcome, Care, attracted, Excitement, joy, happiness, admiration, pride, gratitude, relief, comfort
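Expressed as code, the grouping above is a plain lookup table. A minimal sketch (label strings follow this card's wording and are lowercased, which is an assumption about the raw KOTE label names):

```python
# Map each fine-grained KOTE emotion to its Ekman category (44 labels total).
EKMAN_MAP = {
    **dict.fromkeys(["fed up", "shock", "disgust", "contempt"], "Disgust"),
    **dict.fromkeys(["anger", "irritation", "dissatisfaction", "preposterous"], "Anger"),
    **dict.fromkeys(["pathetic", "distrust", "disappointment", "embarrassment",
                     "shame", "guilt", "gessepany", "fear", "anxiety"], "Fear"),
    **dict.fromkeys(["compassion", "sadness", "sorrow", "despair", "exhaustion",
                     "laziness", "reluctant", "boredom"], "Sadness"),
    **dict.fromkeys(["no emotion", "arrogance", "resolute"], "No Emotion"),
    **dict.fromkeys(["realization", "surprise", "respect", "interest"], "Surprise"),
    **dict.fromkeys(["expectancy", "welcome", "care", "attracted", "excitement",
                     "joy", "happiness", "admiration", "pride", "gratitude",
                     "relief", "comfort"], "Joy"),
}
```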
annotations_creators: https://github.com/searle-j/KOTE, language: "Korean", license: mit
| kjhkjh95/kote_ekman | [
"arxiv:2205.05300",
"region:us"
]
| 2023-01-17T13:59:02+00:00 | {} | 2023-01-17T15:18:28+00:00 | [
"2205.05300"
]
| []
| TAGS
#arxiv-2205.05300 #region-us
|
# Ekman Taxonomy of KOTE (Korean Online That-gul Emotions) datasets
I mapped the 44 emotion types in the KOTE dataset to a 7-category Ekman taxonomy (Disgust, Anger, Fear, Sadness, Surprise, Joy, + No Emotion).
For the mapping, I referred to the clustering results in the KOTE paper (URL
The distance between each emotion and each Ekman basic emotion (Disgust, Anger, Fear, Sadness, Surprise, Joy, + No Emotion) was calculated, and each emotion was mapped to the nearest basic emotion.
# Emotion Grouping
Disgust: fed up, shock, disgust, contempt
Anger: anger, irritation, dissatisfaction, preposterous
Fear: pathetic, distrust, disappointment, embarrassment, shame, guilt, gessepany, fear, anxiety
Sadness: compassion, sadness, sorrow, despair, exhaustion, laziness, reluctant, boredom
No Emotion: no emotion, arrogance, resolute
Surprise: realization, surprise, respect, Interest
Joy: Expectancy, Welcome, Care, attracted, Excitement, joy, happiness, admiration, pride, gratitude, relief, comfort
annotations_creators: URL language: "Korean", license: mit
| [
"# Ekman Taxomony of KOTE(Korean Online That-gul Emotions) datasets\n\nI mapped 44 emotion types in the KOTE dataset to 7 Ekman Taxonomy (Disgust, Fear, Sadness, Surprise, Joy, + No Emotion).\nFor the mapping, I referred to the clustering results in the KOTE paper (URL\nThe distance between each emotion and Ekman basic emotion (Disgust, Fear, Sadness, Surprise, Joy, + No Emotion) was calculated and configured to map to the nearest basic emotion.",
"# Emotion Grouping\n\n\nDisgust: fed up, shock, disgust, contempt\nAnger: anger, irritation, dissatisfaction, preposterous\nFear: pathetic, distrust, disappointment, embarrassment, shame, guilt, gessepany, fear, anxiety\nSadness: compassion, sadness, sorrow, despair, exhaustion, laziness, reluctant, boredom\nNo Emotion: no emotion arrogance, resolute\nSurprise: realization, surprise, respect, Interest\nJoy: Expectancy, Welcome, Care, attracted, Excitement, joy, happiness, admiration, pride, gratitude, relief, comfort\n\n\nannotations_creators: URL language: \"Korean\", license: mit"
]
| [
"TAGS\n#arxiv-2205.05300 #region-us \n",
"# Ekman Taxomony of KOTE(Korean Online That-gul Emotions) datasets\n\nI mapped 44 emotion types in the KOTE dataset to 7 Ekman Taxonomy (Disgust, Fear, Sadness, Surprise, Joy, + No Emotion).\nFor the mapping, I referred to the clustering results in the KOTE paper (URL\nThe distance between each emotion and Ekman basic emotion (Disgust, Fear, Sadness, Surprise, Joy, + No Emotion) was calculated and configured to map to the nearest basic emotion.",
"# Emotion Grouping\n\n\nDisgust: fed up, shock, disgust, contempt\nAnger: anger, irritation, dissatisfaction, preposterous\nFear: pathetic, distrust, disappointment, embarrassment, shame, guilt, gessepany, fear, anxiety\nSadness: compassion, sadness, sorrow, despair, exhaustion, laziness, reluctant, boredom\nNo Emotion: no emotion arrogance, resolute\nSurprise: realization, surprise, respect, Interest\nJoy: Expectancy, Welcome, Care, attracted, Excitement, joy, happiness, admiration, pride, gratitude, relief, comfort\n\n\nannotations_creators: URL language: \"Korean\", license: mit"
]
|
3e786a1f95948505a9cdd19172822822e15f6fbf | # Dataset Card for "kpe_long_docs_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | RobertoMCA97/kpe_long_docs_test | [
"region:us"
]
| 2023-01-17T15:01:18+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "sequence": "string"}, {"name": "doc_bio_tags", "sequence": "string"}], "splits": [{"name": "semeval2010_test", "num_bytes": 11151877, "num_examples": 100}, {"name": "nus", "num_bytes": 23814618, "num_examples": 211}, {"name": "duc2001", "num_bytes": 3523199, "num_examples": 308}, {"name": "ldkp3k_test", "num_bytes": 285969940, "num_examples": 3413}], "download_size": 77767836, "dataset_size": 324459634}} | 2023-01-17T15:02:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "kpe_long_docs_test"
More Information needed | [
"# Dataset Card for \"kpe_long_docs_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"kpe_long_docs_test\"\n\nMore Information needed"
]
|
373906f601c0b9b701a460d8231b9881dd01c0c6 | # AutoTrain Dataset for project: attempt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project attempt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<800x1000 RGB PIL image>",
"target": 13
},
{
"image": "<254x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 277 |
| valid | 80 |
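A minimal loading sketch (split names follow the table above; AutoTrain data repositories usually load directly with `datasets`, but verify against the repo's file layout):

```python
from datasets import load_dataset

ds = load_dataset("AdamOswald1/autotrain-data-attempt")
sample = ds["train"][0]
print(sample["image"].size, sample["target"])  # PIL image size and class index
```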
| AdamOswald1/autotrain-data-attempt | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-17T15:12:55+00:00 | {"task_categories": ["image-classification"]} | 2023-01-17T15:21:15+00:00 | []
| []
| TAGS
#task_categories-image-classification #region-us
| AutoTrain Dataset for project: attempt
======================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project attempt.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
| [
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
|
fcadb7ed3488a139f6cc7ef204423811678f6744 |
# Dataset Card for FaceMask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Repository:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
Face mask dataset with images of people wearing masks correctly, wearing them incorrectly, or not wearing them.
### Supported Tasks and Leaderboards
- `image-classification`: Based on a face image, the goal of this task is to predict whether a mask is worn correctly, worn incorrectly, or not worn.
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x16BAA72A4A8>,
'labels': 1
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
Class Label Mappings:
```json
{
"mask_weared_incorrect": 0,
"with_mask": 1,
"without_mask": 2,
}
```
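A minimal sketch of loading the dataset and decoding a label back to its class name:

```python
from datasets import load_dataset

ds = load_dataset("poolrf2001/FaceMask")
example = ds["train"][0]
labels = ds["train"].features["labels"]          # ClassLabel feature
print(example["labels"], labels.int2str(example["labels"]))
```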
### Data Splits
| |train|validation|test|
|-------------|----:|---------:|---:|
|# of examples|1500 |180 |180 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@ONLINE {facemaskdata,
author="Pool",
title="FaceMask dataset",
month="January",
year="2023",
url="https://github.com/poolrf2001/maskFace"
}
```
### Contributions
| poolrf2001/FaceMask | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
]
| 2023-01-17T16:37:30+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "FaceMask", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "mask_weared_incorrect", "1": "with_mask", "2": "without_mask"}}}}], "splits": [{"name": "train", "num_bytes": 38806014, "num_examples": 1500}, {"name": "validation", "num_bytes": 4758962, "num_examples": 180}, {"name": "test", "num_bytes": 4693735, "num_examples": 180}], "download_size": 48258711, "dataset_size": 49140913}} | 2023-01-17T22:58:52+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #region-us
Dataset Card for FaceMask
=========================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: (URL
* Repository: (URL
* Paper: N/A
* Leaderboard: N/A
* Point of Contact: N/A
### Dataset Summary
Face mask dataset with images of people wearing masks correctly, wearing them incorrectly, or not wearing them.
### Supported Tasks and Leaderboards
* 'image-classification': Based on a face image, the goal of this task is to predict whether a mask is worn correctly, worn incorrectly, or not worn.
### Languages
English
Dataset Structure
-----------------
### Data Instances
A sample from the training set is provided below:
### Data Fields
The data instances have the following fields:
* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'.
* 'labels': an 'int' classification label.
Class Label Mappings:
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nBeans leaf dataset with images of diseased and health leaves.",
"### Supported Tasks and Leaderboards\n\n\n* 'image-classification': Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from the training set is provided below:",
"### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label.\n\n\nClass Label Mappings:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #region-us \n",
"### Dataset Summary\n\n\nBeans leaf dataset with images of diseased and health leaves.",
"### Supported Tasks and Leaderboards\n\n\n* 'image-classification': Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from the training set is provided below:",
"### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label.\n\n\nClass Label Mappings:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
|
f79ac28deed233b642a05c14820e8b6fbe6a1d8f | # AutoTrain Dataset for project: alt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project alt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x600 RGB PIL image>",
"target": 1
},
{
"image": "<1024x590 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 243 |
| valid | 243 |
| AdamOswald1/autotrain-data-alt | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-17T17:09:01+00:00 | {"task_categories": ["image-classification"]} | 2023-01-17T17:12:46+00:00 | []
| []
| TAGS
#task_categories-image-classification #region-us
| AutoTrain Dataset for project: alt
==================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project alt.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
| [
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
|
de9747e81dd1af03d17291798431b56423ab1db4 |
## Dataset Description
- **Homepage:** [Face Mask Detection Dataset](https://www.kaggle.com/datasets/vijaykumar1799/face-mask-detection)
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
## Dataset Summary
A dataset from [kaggle](https://www.kaggle.com/datasets/vijaykumar1799/face-mask-detection). Origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data
### Introduction
-
### PROBLEM STATEMENT
-
### About Files
- Train - contains all the images that are to be used for training your model. In this folder you will find 15 folders namely - 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop' which contain the images of the respective human activities.
- Test - contains 5400 images of Human Activities. For these images you are required to make predictions as the respective class names - 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop'.
- Testing_set.csv - this is the order of the predictions for each image that is to be submitted on the platform. Make sure the predictions you download have their image's filename in the same order as given in this file.
- sample_submission: This is a csv file that contains the sample submission for the data sprint.
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label. All `test` data is labeled 0.
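A short sketch of the row-first access pattern recommended above, which decodes only the image actually requested:

```python
from datasets import load_dataset

ds = load_dataset("poolrf2001/mask", split="train")

# Fast: decodes a single image.
img = ds[0]["image"]

# Slow: would materialize and decode the entire image column first.
# img = ds["image"][0]

print(img.size, ds[0]["labels"])
```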
### Class Label Mappings:
```
{
'mask_weared_incorrect': 0,
'with_mask': 1,
'without_mask': 2
}
```
### Data Splits
| | train | test | validation|
|---------------|--------|------|----------:|
| # of examples | 1500 | 180 | 180 |
### Data Size
- download: 46 MiB
- generated: 46.8 MiB
- total: 92.8 MiB
```pycon
>>> from datasets import load_dataset
>>> ds = load_dataset("poolrf2001/mask")
>>> ds
DatasetDict({
test: Dataset({
features: ['image', 'labels'],
num_rows: 180
})
train: Dataset({
features: ['image', 'labels'],
num_rows: 1500
})
validation: Dataset({
features: ['image', 'labels'],
num_rows: 180
})
})
>>> ds["train"].features
{'image': Image(decode=True, id=None),
'labels': ClassLabel(num_classes=3, names=['mask_weared_incorrect', 'with_mask', 'without_mask'], id=None)}
>>> ds["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=180x180>,
'labels': 1}
``` | poolrf2001/mask | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:odbl",
"region:us"
]
| 2023-01-17T17:10:01+00:00 | {"language": ["en"], "license": ["odbl"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "pretty_name": "Face Mask Detection"} | 2023-01-17T22:16:12+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-classification #size_categories-1K<n<10K #source_datasets-original #language-English #license-odbl #region-us
| Dataset Description
-------------------
* Homepage: Face Mask Detection Dataset
* Repository: N/A
* Paper: N/A
* Leaderboard: N/A
* Point of Contact: N/A
Dataset Summary
---------------
A dataset from kaggle. Origin: URL
### Introduction
*
### PROBLEM STATEMENT
*
### About Files
* Train - contains all the images that are to be used for training your model. In this folder you will find 15 folders namely - 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', 'using\_laptop' which contain the images of the respective human activities.
* Test - contains 5400 images of Human Activities. For these images you are required to make predictions as the respective class names - 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', 'using\_laptop'.
* Testing\_set.csv - this is the order of the predictions for each image that is to be submitted on the platform. Make sure the predictions you download have their image's filename in the same order as given in this file.
* sample\_submission: This is a csv file that contains the sample submission for the data sprint.
### Data Fields
The data instances have the following fields:
* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'.
* 'labels': an 'int' classification label. All 'test' data is labeled 0.
### Class Label Mappings:
### Data Splits
### Data Size
* download: 46 MiB
* generated: 46.8 MiB
* total: 92.8 MiB
| [
"### Introduction\n\n\n*",
"### PROBLEM STATEMENT\n\n\n*",
"### About Files\n\n\n* Train - contains all the images that are to be used for training your model. In this folder you will find 15 folders namely - 'calling', โclappingโ, โcyclingโ, โdancingโ, โdrinkingโ, โeatingโ, โfightingโ, โhuggingโ, โlaughingโ, โlisteningtomusicโ, โrunningโ, โsittingโ, โsleepingโ, textingโ, โusing\\_laptopโ which contain the images of the respective human activities.\n* Test - contains 5400 images of Human Activities. For these images you are required to make predictions as the respective class names -'calling', โclappingโ, โcyclingโ, โdancingโ, โdrinkingโ, โeatingโ, โfightingโ, โhuggingโ, โlaughingโ, โlisteningtomusicโ, โrunningโ, โsittingโ, โsleepingโ, textingโ, โusing\\_laptopโ.\n* Testing\\_set.csv - this is the order of the predictions for each image that is to be submitted on the platform. Make sure the predictions you download are with their imageโs filename in the same order as given in this file.\n* sample\\_submission: This is a csv file that contains the sample submission for the data sprint.",
"### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label. All 'test' data is labeled 0.",
"### Class Label Mappings:",
"### Data Splits",
"### Data Size\n\n\n* download: 46 MiB\n* generated: 46.8 MiB\n* total: 92.8 MiB"
]
| [
"TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #source_datasets-original #language-English #license-odbl #region-us \n",
"### Introduction\n\n\n*",
"### PROBLEM STATEMENT\n\n\n*",
"### About Files\n\n\n* Train - contains all the images that are to be used for training your model. In this folder you will find 15 folders namely - 'calling', โclappingโ, โcyclingโ, โdancingโ, โdrinkingโ, โeatingโ, โfightingโ, โhuggingโ, โlaughingโ, โlisteningtomusicโ, โrunningโ, โsittingโ, โsleepingโ, textingโ, โusing\\_laptopโ which contain the images of the respective human activities.\n* Test - contains 5400 images of Human Activities. For these images you are required to make predictions as the respective class names -'calling', โclappingโ, โcyclingโ, โdancingโ, โdrinkingโ, โeatingโ, โfightingโ, โhuggingโ, โlaughingโ, โlisteningtomusicโ, โrunningโ, โsittingโ, โsleepingโ, textingโ, โusing\\_laptopโ.\n* Testing\\_set.csv - this is the order of the predictions for each image that is to be submitted on the platform. Make sure the predictions you download are with their imageโs filename in the same order as given in this file.\n* sample\\_submission: This is a csv file that contains the sample submission for the data sprint.",
"### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label. All 'test' data is labeled 0.",
"### Class Label Mappings:",
"### Data Splits",
"### Data Size\n\n\n* download: 46 MiB\n* generated: 46.8 MiB\n* total: 92.8 MiB"
]
|
0da8dfb24526cd625b8a35d5b1092f710e87420e | # AutoTrain Dataset for project: testttt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project testttt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<113x220 RGB PIL image>",
"target": 2
},
{
"image": "<1280x720 RGB PIL image>",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 184 |
| valid | 58 |
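Since `target` is a `ClassLabel`, the integer ids can be mapped to and from the label strings; a sketch (assuming the repo id from this card's metadata is accessible):

```python
from datasets import load_dataset

ds = load_dataset("AdamOswald1/autotrain-data-testttt", split="train")

target = ds.features["target"]
print(target.int2str(2))       # 'Chara' (index 2 in the names list above)
print(target.str2int("Kris"))  # back to the integer id
```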
| AdamOswald1/autotrain-data-testttt | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-17T17:16:50+00:00 | {"task_categories": ["image-classification"]} | 2023-01-17T17:28:18+00:00 | []
| []
| TAGS
#task_categories-image-classification #region-us
| AutoTrain Dataset for project: testttt
======================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project testttt.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
| [
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
|
f5a3293e2b9a21083fd4f16383be35c49e8f03bf | # Dataset Card for "beautiful_interesting_spectacular_photo_portrait_Marilyn_Monroe_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_portrait_Marilyn_Monroe_25000 | [
"region:us"
]
| 2023-01-17T17:24:22+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 120049326.0, "num_examples": 228}], "download_size": 120049639, "dataset_size": 120049326.0}} | 2023-01-17T17:47:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_portrait_Marilyn_Monroe_25000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_portrait_Marilyn_Monroe_25000\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_portrait_Marilyn_Monroe_25000\"\n\nMore Information needed"
]
|
a3b79592f955871e5444bbcfb1ae72f35804f19d | # AutoTrain Dataset for project: let
## Dataset Description
This dataset has been automatically processed by AutoTrain for project let.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x600 RGB PIL image>",
"target": 1
},
{
"image": "<1024x590 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 242 |
| valid | 242 |
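To see how the 242 training samples are distributed over the 19 classes, a quick sketch (again assuming the repo id from this card's metadata is accessible):

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("AdamOswald1/autotrain-data-let", split="train")

names = ds.features["target"].names
counts = Counter(names[t] for t in ds["target"])
print(counts.most_common(5))
```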
| AdamOswald1/autotrain-data-let | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-17T17:30:42+00:00 | {"task_categories": ["image-classification"]} | 2023-01-17T17:33:00+00:00 | []
| []
| TAGS
#task_categories-image-classification #region-us
| AutoTrain Dataset for project: let
==================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project let.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
| [
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
|
85c101c7d80e7514d5ce1ffc51a5b8faa888ce9a | # Dataset Card for "sm-diffusion-256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | abelc/sm-diffusion-256 | [
"region:us"
]
| 2023-01-17T17:52:54+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 1420346.0, "num_examples": 32}], "download_size": 1420748, "dataset_size": 1420346.0}} | 2023-01-17T17:53:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sm-diffusion-256"
More Information needed | [
"# Dataset Card for \"sm-diffusion-256\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sm-diffusion-256\"\n\nMore Information needed"
]
|
aaf7a293404474a1ca0c154dc223c11db759f57e | # Dataset Card for "italo-diffusion-256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | abelc/italo-diffusion-256 | [
"region:us"
]
| 2023-01-17T17:54:34+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 29319809.0, "num_examples": 658}], "download_size": 29297971, "dataset_size": 29319809.0}} | 2023-01-17T17:56:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "italo-diffusion-256"
More Information needed | [
"# Dataset Card for \"italo-diffusion-256\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"italo-diffusion-256\"\n\nMore Information needed"
]
|
1287aa40dd7809c836854657cc2640ca4b39be71 | # Dataset Card for "telegram_de_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | carexl8/telegram_de_ru | [
"region:us"
]
| 2023-01-17T20:29:31+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "time", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "language tags", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5938949, "num_examples": 10191}], "download_size": 1869587, "dataset_size": 5938949}} | 2023-04-25T21:04:20+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "telegram_de_ru"
More Information needed | [
"# Dataset Card for \"telegram_de_ru\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"telegram_de_ru\"\n\nMore Information needed"
]
|
0bc8f4b30c42ce70ccb2493fd7cabc4b6188626f | # Dataset Card for "olm-wikipedia-20221220-1-percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-wikipedia-20221220-1-percent | [
"region:us"
]
| 2023-01-17T20:47:06+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 209366020.9708762, "num_examples": 65879}], "download_size": 123017868, "dataset_size": 209366020.9708762}} | 2023-01-17T20:47:18+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "olm-wikipedia-20221220-1-percent"
More Information needed | [
"# Dataset Card for \"olm-wikipedia-20221220-1-percent\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"olm-wikipedia-20221220-1-percent\"\n\nMore Information needed"
]
|
90d2c6da950a6168fcee20ec69e194a034f44eef |
<div align="center">
<img width="640" alt="keremberke/protective-equipment-detection" src="https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['glove', 'goggles', 'helmet', 'mask', 'no_glove', 'no_goggles', 'no_helmet', 'no_mask', 'no_shoes', 'shoes']
```
### Number of Images
```json
{'valid': 3570, 'test': 1935, 'train': 6473}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/protective-equipment-detection", name="full")
example = ds['train'][0]
```
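The card does not spell out the annotation fields, but roboflow2huggingface object-detection exports typically expose an `objects` dict with parallel `bbox` and `category` lists; under that assumption, drawing the boxes might look like:

```python
from PIL import ImageDraw
from datasets import load_dataset

ds = load_dataset("keremberke/protective-equipment-detection", name="full")
example = ds['train'][0]

image = example['image'].copy()
draw = ImageDraw.Draw(image)

# Assumed layout: objects = {'bbox': [[x, y, w, h], ...], 'category': [...]}
for (x, y, w, h), cat in zip(example['objects']['bbox'],
                             example['objects']['category']):
    draw.rectangle([x, y, x + w, y + h], outline='red', width=2)
    draw.text((x, y), str(cat), fill='red')

image.save('example_with_boxes.png')
```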
### Roboflow Dataset Page
[https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7](https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7?ref=roboflow2huggingface)
### Citation
```
@misc{ ppes-kaxsi_dataset,
title = { PPEs Dataset },
type = { Open Source Dataset },
author = { Personal Protective Equipment },
howpublished = { \\url{ https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi } },
url = { https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jul },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on July 7, 2022 at 3:49 PM GMT
It includes 11978 images.
PPE equipment is annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| keremberke/protective-equipment-detection | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Manufacturing",
"region:us"
]
| 2023-01-17T20:53:31+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Manufacturing"]} | 2023-01-18T21:21:55+00:00 | []
| []
| TAGS
#task_categories-object-detection #roboflow #roboflow2huggingface #Manufacturing #region-us
|
<div align="center">
<img width="640" alt="keremberke/protective-equipment-detection" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on July 7, 2022 at 3:49 PM GMT
It includes 11978 images.
PPE equipment is annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on July 7, 2022 at 3:49 PM GMT\n\nIt includes 11978 images.\nPpe-equipements are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #Manufacturing #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on July 7, 2022 at 3:49 PM GMT\n\nIt includes 11978 images.\nPpe-equipements are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n\nNo image augmentation techniques were applied."
]
|
3ee36c43c9ce7104d93176747f98fb91861a38e5 | # Dataset Card for "olm-wikipedia-20221220-1-percent-tokenized-568"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-wikipedia-20221220-1-percent-tokenized-568 | [
"region:us"
]
| 2023-01-17T20:56:11+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 300340980, "num_examples": 87819}], "download_size": 100193548, "dataset_size": 300340980}} | 2023-01-17T20:56:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "olm-wikipedia-20221220-1-percent-tokenized-568"
More Information needed | [
"# Dataset Card for \"olm-wikipedia-20221220-1-percent-tokenized-568\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"olm-wikipedia-20221220-1-percent-tokenized-568\"\n\nMore Information needed"
]
|
eea25d1105868f81289af0f1cb500ddf88e484bb | # Dataset Card for "financial_phrasebank_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cyrilzhang/financial_phrasebank_split | [
"region:us"
]
| 2023-01-17T21:26:00+00:00 | {"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 611259.9339661576, "num_examples": 4361}, {"name": "test", "num_bytes": 67980.06603384235, "num_examples": 485}], "download_size": 418548, "dataset_size": 679240.0}} | 2023-01-17T21:26:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "financial_phrasebank_split"
More Information needed | [
"# Dataset Card for \"financial_phrasebank_split\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"financial_phrasebank_split\"\n\nMore Information needed"
]
|
113e1b27260b0b7070e15c7fbe71c812abe8c279 | # Dataset Card for "dreambooth_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/dreambooth_test | [
"region:us"
]
| 2023-01-17T22:49:53+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5590808.0, "num_examples": 5}, {"name": "validation", "num_bytes": 37346797.0, "num_examples": 32}], "download_size": 1169134, "dataset_size": 42937605.0}} | 2023-01-17T23:23:29+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dreambooth_test"
More Information needed | [
"# Dataset Card for \"dreambooth_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth_test\"\n\nMore Information needed"
]
|
10860918d9160a08b6b55ed717fa7a580725052b |
# Dataset Card for ReazonSpeech
## Dataset Description
- **Homepage:** https://research.reazon.jp/projects/ReazonSpeech
- **GitHub:** https://github.com/reazon-research/reazonspeech
## Dataset Summary
This dataset contains a diverse set of natural Japanese speech, collected
from terrestrial television streams. It contains more than 35000 hours of
audio.
Paper: [ReazonSpeech: A Free and Massive Corpus for Japanese ASR](https://research.reazon.jp/_static/reazonspeech_nlp2023.pdf)
### Disclaimer
**TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET
SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.**
## Dataset Format
Audio files are available in FLAC format, sampled at 16000 Hz.
Each audio file is accompanied by a transcription.
```
{
'name': '000/0000000000000.flac',
'audio': {
'path': '/path/to/000/0000000000000.flac',
'array': array([ 0.01000000, ...], dtype=float32),
'sampling_rate': 16000
},
    'transcription': '今日のニュースをお伝えします。'
}
```
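Given this record layout, a clip's duration follows directly from the decoded samples; a self-contained sketch with placeholder values:

```python
# Placeholder record mirroring the fields shown above.
example = {
    "audio": {
        "array": [0.01] * 32000,  # stand-in for the decoded samples
        "sampling_rate": 16000,
    }
}

duration = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
print(f"{duration:.2f} s")  # 2.00 s for this placeholder
```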
We provide 5 different dataset sizes. Here is the list of available
sizes and their approximate recording hours.
| Name | Size | Hours |
| -------- | ----- | ----------- |
| `tiny` | 600MB | 8.5 hours |
| `small` | 6GB | 100 hours |
| `medium` | 65GB | 1000 hours |
| `large` | 330GB | 5000 hours |
| `all` | 2.3TB | 35000 hours |
You can access this dataset through the Hugging Face `datasets` library.
```
from datasets import load_dataset
ds = load_dataset("reazon-research/reazonspeech", "all", trust_remote_code=True)
```
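For the larger configurations it may be preferable to stream rather than download everything up front; a sketch, assuming this loader supports the standard `streaming` flag of `load_dataset`:

```python
from datasets import load_dataset

ds = load_dataset(
    "reazon-research/reazonspeech", "tiny",
    streaming=True, trust_remote_code=True,
)
for example in ds["train"]:  # assumes a 'train' split
    print(example["name"], example["transcription"])
    break
```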
## Access the older versions
If you want to access the older versions of the ReazonSpeech corpus,
you can use the following tags.
| Name | Size | Hours |
| ----------- | ----- | ----------- |
| `small-v1` | 350MB | 5 hours |
| `medium-v1` | 22GB | 300 hours |
| `all-v1` | 1TB | 19000 hours |
## License
[CDLA-Sharing-1.0](https://cdla.dev/sharing-1-0/)
TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET
SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.
| reazon-research/reazonspeech | [
"task_categories:automatic-speech-recognition",
"size_categories:10M<n<100M",
"language:ja",
"license:other",
"region:us"
]
| 2023-01-17T23:03:48+00:00 | {"language": ["ja"], "license": "other", "size_categories": ["10M<n<100M"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "ReazonSpeech"} | 2024-01-21T07:55:59+00:00 | []
| [
"ja"
]
| TAGS
#task_categories-automatic-speech-recognition #size_categories-10M<n<100M #language-Japanese #license-other #region-us
| Dataset Card for ReazonSpeech
=============================
Dataset Description
-------------------
* Homepage: URL
* GitHub: URL
Dataset Summary
---------------
This dataset contains a diverse set of natural Japanese speech, collected
from terrestrial television streams. It contains more than 35000 hours of
audio.
Paper: ReazonSpeech: A Free and Massive Corpus for Japanese ASR
### Disclaimer
TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET
SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.
Dataset Format
--------------
Audio files are available in FLAC format, sampled at 16000 Hz.
Each audio file is accompanied by a transcription.
We provide 5 different dataset sizes. Here is the list of available
sizes and their approximate recording hours.
Name: 'tiny', Size: 600MB, Hours: 8.5 hours
Name: 'small', Size: 6GB, Hours: 100 hours
Name: 'medium', Size: 65GB, Hours: 1000 hours
Name: 'large', Size: 330GB, Hours: 5000 hours
Name: 'all', Size: 2.3TB, Hours: 35000 hours
You can access this dataset through the Hugging Face 'datasets' library.
Access the older versions
-------------------------
If you want to access the older versions of the ReazonSpeech corpus,
you can use the following tags.
Name: 'small-v1', Size: 350MB, Hours: 5 hours
Name: 'medium-v1', Size: 22GB, Hours: 300 hours
Name: 'all-v1', Size: 1TB, Hours: 19000 hours
License
-------
CDLA-Sharing-1.0
TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET
SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.
| [
"### Disclaimer\n\n\nTO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET\nSOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.\n\n\nDataset Format\n--------------\n\n\nAudio files are available in FLAC format, sampled at 16000 hz.\nEach audio file is accompanied with a transcription.\n\n\nWe provide 5 different dataset sizes. Here is the list of available\nsizes and their approximate recording hours.\n\n\nName: 'tiny', Size: 600MB, Hours: 8.5 hours\nName: 'small', Size: 6GB, Hours: 100 hours\nName: 'medium', Size: 65GB, Hours: 1000 hours\nName: 'large', Size: 330GB, Hours: 5000 hours\nName: 'all', Size: 2.3TB, Hours: 35000 hours\n\n\nYou can access this dataset through Hugging Face 'datasets' library.\n\n\nAccess the older versions\n-------------------------\n\n\nIf you want to access the older versions of ReazonSpeech corpus,\nyou can use the following tags.\n\n\nName: 'small-v1', Size: 350MB, Hours: 5 hours\nName: 'medium-v1', Size: 22GB, Hours: 300 hours\nName: 'all-v1', Size: 1TB, Hours: 19000 hours\n\n\nLicense\n-------\n\n\nCDLA-Sharing-1.0\n\n\nTO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET\nSOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4."
]
| [
"TAGS\n#task_categories-automatic-speech-recognition #size_categories-10M<n<100M #language-Japanese #license-other #region-us \n",
"### Disclaimer\n\n\nTO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET\nSOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.\n\n\nDataset Format\n--------------\n\n\nAudio files are available in FLAC format, sampled at 16000 hz.\nEach audio file is accompanied with a transcription.\n\n\nWe provide 5 different dataset sizes. Here is the list of available\nsizes and their approximate recording hours.\n\n\nName: 'tiny', Size: 600MB, Hours: 8.5 hours\nName: 'small', Size: 6GB, Hours: 100 hours\nName: 'medium', Size: 65GB, Hours: 1000 hours\nName: 'large', Size: 330GB, Hours: 5000 hours\nName: 'all', Size: 2.3TB, Hours: 35000 hours\n\n\nYou can access this dataset through Hugging Face 'datasets' library.\n\n\nAccess the older versions\n-------------------------\n\n\nIf you want to access the older versions of ReazonSpeech corpus,\nyou can use the following tags.\n\n\nName: 'small-v1', Size: 350MB, Hours: 5 hours\nName: 'medium-v1', Size: 22GB, Hours: 300 hours\nName: 'all-v1', Size: 1TB, Hours: 19000 hours\n\n\nLicense\n-------\n\n\nCDLA-Sharing-1.0\n\n\nTO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET\nSOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4."
]
|
980f33e8374ad0a3954b9841611644da2547b501 | # Dataset Card for "OxfordPets_test_facebook_opt_350m_Visclues_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/OxfordPets_test_facebook_opt_350m_Visclues_20 | [
"region:us"
]
| 2023-01-17T23:11:19+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_3", "num_bytes": 277693.0, "num_examples": 20}, {"name": "fewshot_5", "num_bytes": 292064.0, "num_examples": 20}, {"name": "fewshot_1", "num_bytes": 263406.0, "num_examples": 20}, {"name": "fewshot_2", "num_bytes": 270668.0, "num_examples": 20}], "download_size": 784934, "dataset_size": 1103831.0}} | 2023-01-17T23:35:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "OxfordPets_test_facebook_opt_350m_Visclues_20"
More Information needed | [
"# Dataset Card for \"OxfordPets_test_facebook_opt_350m_Visclues_20\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"OxfordPets_test_facebook_opt_350m_Visclues_20\"\n\nMore Information needed"
]
|
7f8aa66317b438eeac50d62de5db7870656c6e03 | # Dataset Card for "sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | anjalyjayakrishnan/sample | [
"region:us"
]
| 2023-01-18T03:29:02+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "package_name", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "star", "dtype": "int64"}, {"name": "version_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1508, "num_examples": 5}, {"name": "test", "num_bytes": 956, "num_examples": 5}], "download_size": 7783, "dataset_size": 2464}} | 2023-02-07T00:42:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sample"
More Information needed | [
"# Dataset Card for \"sample\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sample\"\n\nMore Information needed"
]
|
37773c2e6034a85d3581590de7b38abbb2d85e96 | # Dataset Card for GermanRentalAgreements
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/sebischair/Legal-Sentence-Classification-Datasets-and-Models)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
| joelniklaus/german_rental_agreements | [
"region:us"
]
| 2023-01-18T04:02:40+00:00 | {} | 2023-01-18T04:03:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for GermanRentalAgreements
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: GitHub
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @JoelNiklaus for adding this dataset.
| [
"# Dataset Card for GermanRentalAgreements",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: GitHub\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @JoelNiklaus for adding this dataset."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for GermanRentalAgreements",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: GitHub\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @JoelNiklaus for adding this dataset."
]
|
e1385bb979a4d10d5a65350e2bf4b606cbf426b1 | # Dataset Card for "beautiful_interesting_spectacular_photo_dog_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_dog_25000 | [
"region:us"
]
| 2023-01-18T06:36:25+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 361773346.0, "num_examples": 504}], "download_size": 361776700, "dataset_size": 361773346.0}} | 2023-01-18T06:37:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_dog_25000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_dog_25000\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_dog_25000\"\n\nMore Information needed"
]
|
c5742eb7ad92ad0303a94cccbb4003a7da7138f5 | # Dataset Card for "dreambooth_test_with_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/dreambooth_test_with_reg | [
"region:us"
]
| 2023-01-18T06:59:08+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 183792899.0, "num_examples": 200}, {"name": "validation", "num_bytes": 37346753.0, "num_examples": 32}], "download_size": 78739258, "dataset_size": 221139652.0}} | 2023-01-18T08:09:31+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dreambooth_test_with_reg"
More Information needed | [
"# Dataset Card for \"dreambooth_test_with_reg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth_test_with_reg\"\n\nMore Information needed"
]
|
1199a0e08751903da75b67410b654bb092e6e4e8 |
# Dataset Card for Livedoor News Corpus
[](https://github.com/shunk031/huggingface-datasets_livedoor-news-corpus/actions/workflows/ci.yaml)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.rondhuit.com/download.html#ldcc
- **Repository:** https://github.com/shunk031/huggingface-datasets_livedoor-news-corpus
### Dataset Summary
> This corpus was created by collecting news articles from "livedoor News", operated by NHN Japan Corporation, to which the Creative Commons license noted below applies, and removing HTML tags as far as possible.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```python
from datasets import load_dataset
dataset = load_dataset(
"shunk031/livedoor-news-corpus",
train_ratio=0.8,
val_ratio=0.1,
test_ratio=0.1,
random_state=42,
shuffle=True,
)
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['url', 'date', 'title', 'content', 'category'],
# num_rows: 5894
# })
# validation: Dataset({
# features: ['url', 'date', 'title', 'content', 'category'],
# num_rows: 737
# })
# test: Dataset({
# features: ['url', 'date', 'title', 'content', 'category'],
# num_rows: 736
# })
# })
```
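Continuing from the `dataset` loaded above, a quick sketch for inspecting the `category` field listed in the features:

```python
from collections import Counter

train = dataset["train"]
print(Counter(train["category"]).most_common())
print(train[0]["title"])
```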
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
> Each article file is covered by the Creative Commons "Attribution-NoDerivatives" license. The required credit line differs by news category, so please see the LICENSE.txt in each subdirectory created when the downloaded files are extracted. livedoor is a registered trademark of NHN Japan Corporation.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [RONDHUIT Co., Ltd.](https://www.rondhuit.com/) for creating this dataset.
| shunk031/livedoor-news-corpus | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"language_creators:found",
"multilinguality:monolingual",
"language:ja",
"license:cc-by-nd-4.0",
"region:us"
]
| 2023-01-18T08:30:24+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["ja"], "license": ["cc-by-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "livedoor-news-corpus", "tags": []} | 2023-10-28T04:40:17+00:00 | []
| [
"ja"
]
| TAGS
#task_categories-text-classification #task_ids-multi-class-classification #language_creators-found #multilinguality-monolingual #language-Japanese #license-cc-by-nd-4.0 #region-us
|
# Dataset Card for Livedoor News Corpus
 | Poulami/processed_bert_dataset | [
"region:us"
]
| 2023-01-18T08:30:53+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 24027177600.0, "num_examples": 6674216}], "download_size": 5886705553, "dataset_size": 24027177600.0}} | 2023-01-18T09:23:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "processed_bert_dataset"
More Information needed | [
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
]
|
84bc4ab1c9c6399e8d6f01c458bd9ef71fe8d397 | # WEC-Eng
A large-scale dataset for cross-document event coreference extracted from English Wikipedia. <br/>
- **Repository (Code for generating WEC):** https://github.com/AlonEirew/extract-wec
- **Paper:** https://aclanthology.org/2021.naacl-main.198/
### Languages
English
## Load Dataset
You can read in WEC-Eng files as follows (using the **huggingface_hub** library):
```python
from huggingface_hub import hf_hub_url, cached_download
import json
REPO_ID = "datasets/biu-nlp/WEC-Eng"
splits_files = ["Dev_Event_gold_mentions_validated.json",
"Test_Event_gold_mentions_validated.json",
"Train_Event_gold_mentions.json"]
wec_eng = list()
for split_file in splits_files:
wec_eng.append(json.load(open(cached_download(
hf_hub_url(REPO_ID, split_file)), "r")))
```
## Dataset Structure
### Data Splits
- **Final version of the English CD event coreference dataset**<br>
- Train - Train_Event_gold_mentions.json
- Dev - Dev_Event_gold_mentions_validated.json
- Test - Test_Event_gold_mentions_validated.json
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Clusters | 7,042 | 233 | 322 |
| Event Mentions | 40,529 | 1,250 | 1,893 |
- **The non (within clusters) controlled version of the dataset (lexical diversity)**<br>
- All (experimental) - All_Event_gold_mentions_unfiltered.json
### Data Instances
```json
{
"coref_chain": 2293469,
"coref_link": "Family Values Tour 1998",
"doc_id": "House of Pain",
"mention_context": [
"From",
"then",
"on",
",",
"the",
"members",
"continued",
"their"
],
"mention_head": "Tour",
"mention_head_lemma": "Tour",
"mention_head_pos": "PROPN",
"mention_id": "108172",
"mention_index": 1,
"mention_ner": "UNK",
"mention_type": 8,
"predicted_coref_chain": null,
"sent_id": 2,
"tokens_number": [
50,
51,
52,
53
],
"tokens_str": "Family Values Tour 1998",
"topic_id": -1
}
```
### Data Fields
|Field|Value Type|Value|
|---|:---:|---|
|coref_chain|Numeric|Coreference chain/cluster ID|
|coref_link|String|Coreference link Wikipedia page/article title|
|doc_id|String|Mention page/article title|
|mention_context|List[String]|Tokenized mention paragraph (including mention)|
|mention_head|String|Mention span head token|
|mention_head_lemma|String|Mention span head token lemma|
|mention_head_pos|String|Mention span head token POS|
|mention_id|String|Mention id|
|mention_index|Numeric|Mention index in json file|
|mention_ner|String|Mention NER|
|tokens_number|List[Numeric]|Mention token ids within the context|
|tokens_str|String|Mention span text|
|topic_id|Ignore|Ignore|
|mention_type|Ignore|Ignore|
|predicted_coref_chain|Ignore|Ignore|
|sent_id|Ignore|Ignore|
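With these fields, coreference clusters can be reassembled by grouping mentions on `coref_chain`; a minimal sketch continuing from the `wec_eng` list loaded above (dev, test, train, in that order):

```python
from collections import defaultdict

train_mentions = wec_eng[2]  # the Train split, per splits_files above

clusters = defaultdict(list)
for mention in train_mentions:
    clusters[mention["coref_chain"]].append(mention["tokens_str"])

# e.g. surface forms of the cluster from the sample instance above
print(clusters[2293469][:5])
```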
## Citation
```
@inproceedings{eirew-etal-2021-wec,
title = "{WEC}: Deriving a Large-scale Cross-document Event Coreference dataset from {W}ikipedia",
author = "Eirew, Alon and
Cattan, Arie and
Dagan, Ido",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.198",
doi = "10.18653/v1/2021.naacl-main.198",
pages = "2498--2510",
abstract = "Cross-document event coreference resolution is a foundational task for NLP applications involving multi-text processing. However, existing corpora for this task are scarce and relatively small, while annotating only modest-size clusters of documents belonging to the same topic. To complement these resources and enhance future research, we present Wikipedia Event Coreference (WEC), an efficient methodology for gathering a large-scale dataset for cross-document event coreference from Wikipedia, where coreference links are not restricted within predefined topics. We apply this methodology to the English Wikipedia and extract our large-scale WEC-Eng dataset. Notably, our dataset creation method is generic and can be applied with relatively little effort to other Wikipedia languages. To set baseline results, we develop an algorithm that adapts components of state-of-the-art models for within-document coreference resolution to the cross-document setting. Our model is suitably efficient and outperforms previously published state-of-the-art results for the task.",
}
```
## License
We provide the following data sets under a <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en_US">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License
## Contact
If you have any questions please create a Github issue at https://github.com/AlonEirew/extract-wec. | biu-nlp/WEC-Eng | [
"region:us"
]
| 2023-01-18T09:11:52+00:00 | {} | 2023-01-18T13:47:10+00:00 | []
| []
| TAGS
#region-us
| WEC-Eng
=======
A large-scale dataset for cross-document event coreference extracted from English Wikipedia.
* Repository (Code for generating WEC): URL
* Paper: URL
### Languages
English
Load Dataset
------------
You can read in WEC-Eng files as follows (using the huggingface\_hub library):
Dataset Structure
-----------------
### Data Splits
* Final version of the English CD event coreference dataset
+ Train - Train\_Event\_gold\_mentions.json
+ Dev - Dev\_Event\_gold\_mentions\_validated.json
+ Test - Test\_Event\_gold\_mentions\_validated.json
* The non (within clusters) controlled version of the dataset (lexical diversity)
+ All (experimental) - All\_Event\_gold\_mentions\_unfiltered.json
### Data Instances
### Data Fields
License
-------
We provide the following data sets under a <a href="URL Commons Attribution-ShareAlike 3.0 Unported License. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License
Contact
-------
If you have any questions please create a Github issue at URL
| [
"### Languages\n\n\nEnglish\n\n\nLoad Dataset\n------------\n\n\nYou can read in WEC-Eng files as follows (using the huggingface\\_hub library):\n\n\nDataset Structure\n-----------------",
"### Data Splits\n\n\n* Final version of the English CD event coreference dataset \n\n\t+ Train - Train\\_Event\\_gold\\_mentions.json\n\t+ Dev - Dev\\_Event\\_gold\\_mentions\\_validated.json\n\t+ Test - Test\\_Event\\_gold\\_mentions\\_validated.json\n\n\n\n* The non (within clusters) controlled version of the dataset (lexical diversity) \n\n\t+ All (experimental) - All\\_Event\\_gold\\_mentions\\_unfiltered.json",
"### Data Instances",
"### Data Fields\n\n\n\nLicense\n-------\n\n\nWe provide the following data sets under a <a href=\"URL Commons Attribution-ShareAlike 3.0 Unported License. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License\n\n\nContact\n-------\n\n\nIf you have any questions please create a Github issue at URL"
]
| [
"TAGS\n#region-us \n",
"### Languages\n\n\nEnglish\n\n\nLoad Dataset\n------------\n\n\nYou can read in WEC-Eng files as follows (using the huggingface\\_hub library):\n\n\nDataset Structure\n-----------------",
"### Data Splits\n\n\n* Final version of the English CD event coreference dataset \n\n\t+ Train - Train\\_Event\\_gold\\_mentions.json\n\t+ Dev - Dev\\_Event\\_gold\\_mentions\\_validated.json\n\t+ Test - Test\\_Event\\_gold\\_mentions\\_validated.json\n\n\n\n* The non (within clusters) controlled version of the dataset (lexical diversity) \n\n\t+ All (experimental) - All\\_Event\\_gold\\_mentions\\_unfiltered.json",
"### Data Instances",
"### Data Fields\n\n\n\nLicense\n-------\n\n\nWe provide the following data sets under a <a href=\"URL Commons Attribution-ShareAlike 3.0 Unported License. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License\n\n\nContact\n-------\n\n\nIf you have any questions please create a Github issue at URL"
]
|
15439bd777b2fb82f090c80e12d4da40c06522b4 |
<div align="center">
<img width="640" alt="keremberke/chest-xray-classification" src="https://huggingface.co/datasets/keremberke/chest-xray-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['NORMAL', 'PNEUMONIA']
```
### Number of Images
```json
{'train': 4077, 'test': 582, 'valid': 1165}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/chest-xray-classification", name="full")
example = ds['train'][0]
```
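Once loaded, the integer labels can be mapped back to the class names listed above; a short sketch, assuming the export stores the class in a `ClassLabel` feature named `labels` (inspect `ds['train'].features` if the schema differs):

```python
from datasets import load_dataset

ds = load_dataset("keremberke/chest-xray-classification", name="full")
example = ds["train"][0]

# Assumption: "labels" is a ClassLabel feature holding the class index.
label_feature = ds["train"].features["labels"]
print(label_feature.int2str(example["labels"]))  # "NORMAL" or "PNEUMONIA"
```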
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/2?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 31, 2022 at 3:11 PM GMT
It includes 5824 images.
Pneumonia images are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| keremberke/chest-xray-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Biology",
"region:us"
]
| 2023-01-18T09:22:08+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface", "Biology"]} | 2023-01-18T09:25:27+00:00 | []
| []
| TAGS
#task_categories-image-classification #roboflow #roboflow2huggingface #Biology #region-us
|
<div align="center">
<img width="640" alt="keremberke/chest-xray-classification" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on March 31, 2022 at 3:11 PM GMT
It includes 5824 images.
Pneumonia images are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on March 31, 2022 at 3:11 PM GMT\n\nIt includes 5824 images.\nPneumonia are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-image-classification #roboflow #roboflow2huggingface #Biology #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on March 31, 2022 at 3:11 PM GMT\n\nIt includes 5824 images.\nPneumonia are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
]
|
0a9f333828628586dcc023e6e108e0e003ca7f71 | # Dataset Card for "dfg_augmented_mbpp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | reshinthadith/dfg_augmented_mbpp | [
"region:us"
]
| 2023-01-18T09:26:49+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32138, "num_examples": 95}], "download_size": 17897, "dataset_size": 32138}} | 2023-01-18T09:27:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dfg_augmented_mbpp"
More Information needed | [
"# Dataset Card for \"dfg_augmented_mbpp\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dfg_augmented_mbpp\"\n\nMore Information needed"
]
|
27f567c7bdad157df4fc2e3d53b6fd957a9d38a4 |
<div align="center">
<img width="640" alt="keremberke/painting-style-classification" src="https://huggingface.co/datasets/keremberke/painting-style-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Realism', 'Art_Nouveau_Modern', 'Analytical_Cubism', 'Cubism', 'Expressionism', 'Action_painting', 'Synthetic_Cubism', 'Symbolism', 'Ukiyo_e', 'Naive_Art_Primitivism', 'Post_Impressionism', 'Impressionism', 'Fauvism', 'Rococo', 'Minimalism', 'Mannerism_Late_Renaissance', 'Color_Field_Painting', 'High_Renaissance', 'Romanticism', 'Pop_Art', 'Contemporary_Realism', 'Baroque', 'New_Realism', 'Pointillism', 'Northern_Renaissance', 'Early_Renaissance', 'Abstract_Expressionism']
```
### Number of Images
```json
{'valid': 1295, 'train': 4493, 'test': 629}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/painting-style-classification", name="full")
example = ds['train'][0]
```
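To see how the 27 style classes are distributed in the training split, a quick tally along the following lines can help; the integer `labels` column is an assumption about the export schema:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("keremberke/painting-style-classification", name="full")

# Map class indices back to style names and print the five most frequent styles.
names = ds["train"].features["labels"].names
for idx, count in Counter(ds["train"]["labels"]).most_common(5):
    print(names[idx], count)
```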
### Roboflow Dataset Page
[https://universe.roboflow.com/art-dataset/wiki-art/dataset/1](https://universe.roboflow.com/art-dataset/wiki-art/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ wiki-art_dataset,
title = { wiki art Dataset },
type = { Open Source Dataset },
author = { Art Dataset },
howpublished = { \\url{ https://universe.roboflow.com/art-dataset/wiki-art } },
url = { https://universe.roboflow.com/art-dataset/wiki-art },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 9, 2022 at 1:47 AM GMT
It includes 6417 images.
27 classes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| keremberke/painting-style-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"region:us"
]
| 2023-01-18T09:27:05+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface"]} | 2023-01-18T09:30:28+00:00 | []
| []
| TAGS
#task_categories-image-classification #roboflow #roboflow2huggingface #region-us
|
<div align="center">
<img width="640" alt="keremberke/painting-style-classification" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on March 9, 2022 at 1:47 AM GMT
It includes 6417 images.
27 classes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on March 9, 2022 at 1:47 AM GMT\n\nIt includes 6417 images.\n27 are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 416x416 (Stretch)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-image-classification #roboflow #roboflow2huggingface #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on March 9, 2022 at 1:47 AM GMT\n\nIt includes 6417 images.\n27 are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 416x416 (Stretch)\n\nNo image augmentation techniques were applied."
]
|
34b5d5763e73dd7e4ab81acf6518d0acbd893c9c |
<div align="center">
<img width="640" alt="keremberke/table-extraction" src="https://huggingface.co/datasets/keremberke/table-extraction/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['bordered', 'borderless']
```
### Number of Images
```json
{'test': 34, 'train': 238, 'valid': 70}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/table-extraction", name="full")
example = ds['train'][0]
```
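Because the annotations are COCO-style object detections, individual boxes can be read roughly as follows; the `objects`, `bbox`, and `category` field names are assumptions about the export schema, so confirm them against `ds['train'].features`:

```python
from datasets import load_dataset

ds = load_dataset("keremberke/table-extraction", name="full")
example = ds["train"][0]

# Assumed schema: an "objects" dict with parallel "bbox" and "category" lists;
# COCO boxes are [x_min, y_min, width, height].
for bbox, category in zip(example["objects"]["bbox"], example["objects"]["category"]):
    print(category, bbox)
```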
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 18, 2023 at 9:41 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 342 images.
Data-table objects are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| keremberke/table-extraction | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Documents",
"region:us"
]
| 2023-01-18T09:42:19+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Documents"]} | 2023-01-18T09:43:03+00:00 | []
| []
| TAGS
#task_categories-object-detection #roboflow #roboflow2huggingface #Documents #region-us
|
<div align="center">
<img width="640" alt="keremberke/table-extraction" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on January 18, 2023 at 9:41 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 342 images.
Data-table objects are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 18, 2023 at 9:41 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 342 images.\nData-table are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #Documents #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 18, 2023 at 9:41 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 342 images.\nData-table are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n\nNo image augmentation techniques were applied."
]
|
3cde19e1bd95af17f0bd5b24cec75b249814b0f4 |
<div align="center">
<img width="640" alt="keremberke/plane-detection" src="https://huggingface.co/datasets/keremberke/plane-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['planes']
```
### Number of Images
```json
{'test': 25, 'valid': 50, 'train': 175}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/plane-detection", name="full")
example = ds['train'][0]
```
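To confirm a download matches the split counts above, a quick check along these lines should suffice:

```python
from datasets import load_dataset

ds = load_dataset("keremberke/plane-detection", name="full")
for split in ("train", "valid", "test"):
    print(split, len(ds[split]))  # expected 175 / 50 / 25 per the counts above
```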
### Roboflow Dataset Page
[https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4](https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4?ref=roboflow2huggingface)
### Citation
```
@misc{ overhead-plane-detector_dataset,
title = { Overhead Plane Detector Dataset },
type = { Open Source Dataset },
author = { SkyBot Cam },
howpublished = { \\url{ https://universe.roboflow.com/skybot-cam/overhead-plane-detector } },
url = { https://universe.roboflow.com/skybot-cam/overhead-plane-detector },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jan },
note = { visited on 2023-01-27 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 30, 2022 at 3:11 PM GMT
It includes 250 images.
Planes are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| keremberke/plane-detection | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
]
| 2023-01-18T09:43:30+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface"]} | 2023-01-27T13:46:18+00:00 | []
| []
| TAGS
#task_categories-object-detection #roboflow #roboflow2huggingface #region-us
|
<div align="center">
<img width="640" alt="keremberke/plane-detection" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on March 30, 2022 at 3:11 PM GMT
It includes 250 images.
Planes are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on March 30, 2022 at 3:11 PM GMT\n\nIt includes 250 images.\nPlanes are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
]
| [
"TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on March 30, 2022 at 3:11 PM GMT\n\nIt includes 250 images.\nPlanes are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
]
|
972efa5575dfa6c3eef01e935f6a029089b61daa | # The CoreSearch Dataset
A large-scale dataset for cross-document event coreference **search**<br/>
- **Paper:** [Cross-document Event Coreference Search: Task, Dataset and Modeling](https://arxiv.org/abs/2210.12654)
- **<ins>CoreSearchV2:</ins>** A cleaner version of this dataset is now available at [https://huggingface.co/datasets/biu-nlp/CoreSearchV2](https://huggingface.co/datasets/biu-nlp/CoreSearchV2)
### Languages
English
## Load Dataset
You can read/download the dataset files following Hugging Face Hub instructions.<br/>
For example, the code below will load the CoreSearch DPR folder:
```python
from huggingface_hub import hf_hub_url, cached_download
import json

REPO_ID = "datasets/Intel/CoreSearch"
dpr_files = ["dpr/Dev.json", "dpr/Train.json", "dpr/Test.json"]

# Download each DPR split file (cached locally) and parse it as JSON
dpr_jsons = []
for _file in dpr_files:
    dpr_jsons.append(json.load(open(cached_download(
        hf_hub_url(REPO_ID, _file)), "r")))
```
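On recent `huggingface_hub` releases, `cached_download` is deprecated; an equivalent sketch with `hf_hub_download`, keeping the same file layout as above, would be:

```python
from huggingface_hub import hf_hub_download
import json

# Same split files as above; hf_hub_download handles caching itself.
dpr_jsons = [
    json.load(open(hf_hub_download("Intel/CoreSearch", f, repo_type="dataset"), "r"))
    for f in ("dpr/Dev.json", "dpr/Train.json", "dpr/Test.json")
]
```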
### Data Splits
- **Final version of the CD event coreference search dataset**<br>
| | Train | Valid | Test | Total |
| ----- | ------ | ----- | ---- | ---- |
| WEC-Eng Validated Data | | | | |
| # Clusters | 237 | 49 | 236 | 522 |
| # Passages (with Mentions) | 1,503 | 341 | 1,266 | 3,110 |
| # Added Distractor Passages | 922,736 | 923,376 | 923,746 | 2,769,858 |
| # Total Passages | 924,239 | 923,717 | 925,012 | 2,772,968 |
## Citation
```
@inproceedings{eirew-etal-2022-cross,
title = "Cross-document Event Coreference Search: Task, Dataset and Modeling",
author = "Eirew, Alon and
Caciularu, Avi and
Dagan, Ido",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.58",
pages = "900--913",
abstract = "The task of Cross-document Coreference Resolution has been traditionally formulated as requiring to identify all coreference links across a given set of documents. We propose an appealing, and often more applicable, complementary set up for the task {--} Cross-document Coreference Search, focusing in this paper on event coreference. Concretely, given a mention in context of an event of interest, considered as a query, the task is to find all coreferring mentions for the query event in a large document collection. To support research on this task, we create a corresponding dataset, which is derived from Wikipedia while leveraging annotations in the available Wikipedia Event Coreferecene dataset (WEC-Eng). Observing that the coreference search setup is largely analogous to the setting of Open Domain Question Answering, we adapt the prominent Deep Passage Retrieval (DPR) model to our setting, as an appealing baseline. Finally, we present a novel model that integrates a powerful coreference scoring scheme into the DPR architecture, yielding improved performance.",
}
```
## License
We provide the following data sets under a <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en_US">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.
## Contact
If you have any questions, please create a GitHub issue at <a href="https://github.com/AlonEirew/CoreSearch">https://github.com/AlonEirew/CoreSearch</a>. | biu-nlp/CoreSearch | [
"arxiv:2210.12654",
"region:us"
]
| 2023-01-18T09:49:31+00:00 | {} | 2023-03-23T09:39:55+00:00 | [
"2210.12654"
]
| []
| TAGS
#arxiv-2210.12654 #region-us
| # The CoreSearch Dataset
A large-scale dataset for cross-document event coreference search</br>
- Paper: Cross-document Event Coreference Search: Task, Dataset and Modeling
- <ins>CoreSearchV2:</ins> A cleaner version of this dataset is now available at URL
### Languages
English
## Load Dataset
You can read/download the dataset files following Huggingface Hub instructions.</br>
For example, the code below will load the CoreSearch DPR folder:
### Data Splits
- Final version of the CD event coreference search dataset<br>
| | Train | Valid | Test | Total |
| ----- | ------ | ----- | ---- | ---- |
| WEC-Eng Validated Data | | | | |
| # Clusters | 237 | 49 | 236 | 522 |
| # Passages (with Mentions) | 1,503 | 341 | 1,266 | 3,110 |
| # Added Distractor Passages | 922,736 | 923,376 | 923,746 | 2,769,858 |
| # Total Passages | 924,239 | 923,717 | 925,012 | 2,772,968 |
## License
We provide the following data sets under a <a href="URL Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License
## Contact
If you have any questions please create a Github issue at <a href="URL/URL | [
"# The CoreSearch Dataset\nA large-scale dataset for cross-document event coreference search</br>\n\n- Paper: Cross-document Event Coreference Search: Task, Dataset and Modeling\n\n- <ins>CoreSearchV2:</ins> A cleaner version of this dataset is now available at URL",
"### Languages\n\nEnglish",
"## Load Dataset\nYou can read/download the dataset files following Huggingface Hub instructions.</br>\nFor example, below code will load CoreSearch DPR folder:",
"### Data Splits\n- Final version of the CD event coreference search dataset<br>\n| | Train | Valid | Test | Total |\n| ----- | ------ | ----- | ---- | ---- |\n| WEC-Eng Validated Data | | | | |\n| # Clusters | 237 | 49 | 236 | 522 | \n| # Passages (with Mentions) | 1,503 | 341 | 1,266 | 3,110 |\n| # Added Destructor Passages | 922,736 | 923,376 | 923,746 | 2,769,858 |\n| # Total Passages | 924,239 | 923,717 | 925,012 | 2,772,968 |",
"## License\nWe provide the following data sets under a <a href=\"URL Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License",
"## Contact\nIf you have any questions please create a Github issue at <a href=\"URL/URL"
]
| [
"TAGS\n#arxiv-2210.12654 #region-us \n",
"# The CoreSearch Dataset\nA large-scale dataset for cross-document event coreference search</br>\n\n- Paper: Cross-document Event Coreference Search: Task, Dataset and Modeling\n\n- <ins>CoreSearchV2:</ins> A cleaner version of this dataset is now available at URL",
"### Languages\n\nEnglish",
"## Load Dataset\nYou can read/download the dataset files following Huggingface Hub instructions.</br>\nFor example, below code will load CoreSearch DPR folder:",
"### Data Splits\n- Final version of the CD event coreference search dataset<br>\n| | Train | Valid | Test | Total |\n| ----- | ------ | ----- | ---- | ---- |\n| WEC-Eng Validated Data | | | | |\n| # Clusters | 237 | 49 | 236 | 522 | \n| # Passages (with Mentions) | 1,503 | 341 | 1,266 | 3,110 |\n| # Added Destructor Passages | 922,736 | 923,376 | 923,746 | 2,769,858 |\n| # Total Passages | 924,239 | 923,717 | 925,012 | 2,772,968 |",
"## License\nWe provide the following data sets under a <a href=\"URL Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License",
"## Contact\nIf you have any questions please create a Github issue at <a href=\"URL/URL"
]
|
566b806ef764bafa34d823b57aea1cbdc068265c | # AutoTrain Dataset for project: test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "The constitution of Jordan grants its monarch the right to withhold assent to laws passed by its parliament. Article 93 of that document gives the Jordanian sovereign six months to sign or veto any legislation sent to him from the National Assembly; if he vetoes it within that timeframe, the assembly may override his veto by a two-thirds vote of both houses; otherwise, the law does not go into effect (but it may be reconsidered in the next session of the assembly). If the monarch fails to act within six months of the bill being presented to him, it becomes law without his signature.",
"question": "What happens if the soverign doesn't sign the bill within the six-month time frame?",
"answers.text": [
", it becomes law without his signature"
],
"answers.answer_start": [
550
],
"feat_id": [
"572ab241be1ee31400cb818b"
],
"feat_title": [
"Royal_assent"
]
},
{
"context": "The modern Greek theatre was born after the Greek independence, in the early 19th century, and initially was influenced by the Heptanesean theatre and melodrama, such as the Italian opera. The Nobile Teatro di San Giacomo di Corf\u00f9 was the first theatre and opera house of modern Greece and the place where the first Greek opera, Spyridon Xyndas' The Parliamentary Candidate (based on an exclusively Greek libretto) was performed. During the late 19th and early 20th century, the Athenian theatre scene was dominated by revues, musical comedies, operettas and nocturnes and notable playwrights included Spyridon Samaras, Dionysios Lavrangas, Theophrastos Sakellaridis and others.",
"question": "What was the first Greek opera?",
"answers.text": [
"The Parliamentary Candidate"
],
"answers.answer_start": [
346
],
"feat_id": [
"57267a75dd62a815002e8683"
],
"feat_title": [
"Greece"
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat_id": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_title": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 104204 |
| valid | 26051 |
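The answer spans can be sanity-checked against the context; a small sketch using the split and field names from the tables above (everything else is an assumption about this AutoTrain export):

```python
from datasets import load_dataset

ds = load_dataset("96harsh56/autotrain-data-test")
row = ds["train"][0]

# Verify that answer_start indexes the answer text inside the context.
start = row["answers.answer_start"][0]
text = row["answers.text"][0]
assert row["context"][start:start + len(text)] == text
```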
| 96harsh56/autotrain-data-test | [
"region:us"
]
| 2023-01-18T10:02:07+00:00 | {} | 2023-02-15T06:29:58+00:00 | []
| []
| TAGS
#region-us
| AutoTrain Dataset for project: test
===================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project test.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
| [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
]
|
fc9f5f881b814de9f5d73c489a80a32e764579f6 | https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing
```
@article{Kejriwal2020DoFC,
title={Do Fine-tuned Commonsense Language Models Really Generalize?},
author={Mayank Kejriwal and Ke Shen},
journal={ArXiv},
year={2020},
volume={abs/2011.09159}
}
```
added for
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` | tasksource/cycic_multiplechoice | [
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"arxiv:2301.05948",
"region:us"
]
| 2023-01-18T10:59:28+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["multiple-choice"]} | 2023-01-18T12:15:47+00:00 | [
"2301.05948"
]
| [
"en"
]
| TAGS
#task_categories-multiple-choice #language-English #license-apache-2.0 #arxiv-2301.05948 #region-us
| URL
added for
| []
| [
"TAGS\n#task_categories-multiple-choice #language-English #license-apache-2.0 #arxiv-2301.05948 #region-us \n"
]
|
ad7eb0d2022e36d762903851ef3ac1d612da96be | https://storage.googleapis.com/ai2-mosaic/public/cycic/CycIC-train-dev.zip
https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing
```
@article{Kejriwal2020DoFC,
title={Do Fine-tuned Commonsense Language Models Really Generalize?},
author={Mayank Kejriwal and Ke Shen},
journal={ArXiv},
year={2020},
volume={abs/2011.09159}
}
```
added for
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` | tasksource/cycic_classification | [
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"arxiv:2301.05948",
"region:us"
]
| 2023-01-18T11:03:35+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["question-answering", "text-classification"]} | 2023-05-31T07:47:48+00:00 | [
"2301.05948"
]
| [
"en"
]
| TAGS
#task_categories-question-answering #task_categories-text-classification #language-English #license-apache-2.0 #arxiv-2301.05948 #region-us
| URL
URL
added for
| []
| [
"TAGS\n#task_categories-question-answering #task_categories-text-classification #language-English #license-apache-2.0 #arxiv-2301.05948 #region-us \n"
]
|
23087f93ef072d1a828aa2845d08a2f1f0d1bd92 |
# Arithmetic Problems for a Dialogue System

The dataset contains samples with simple math problems of roughly the following kind:

```
- Fedor's flashlight runs on 2 batteries, and Petya's flashlight on 6. How many batteries do Fedor's and Petya's flashlights need in total?
- 2+6=8, that is how many batteries are needed.
- Now add 469 to the result. What did you get?
- 8 plus 469 equals 477
- Divide by 53. What did you get?
- 9
```

Most of the problems involve arithmetic operations. There is also a number of problems on finding the roots of a quadratic equation:

```
- Find the real roots of the quadratic equation a⋅x²+b⋅x+c for a=45, b=225, c=-270
- There are two real roots here: -6 and 1
```

There is also a growing set of problems with a worked-out solution:

```
- 8 chuliks live in the swampy forests. A hunter eats one chulik every 9 days. How many chuliks will remain after 12 days?
- Over 12 days the hunter will dine 1 time. Therefore 8-1=7 chuliks will remain.
```

Some problems are constructed to force the model to pay attention not merely to the presence of numbers, but to the context in which they are used:

```
- Vika brought 5 tangerines to school. Her friends asked her to share the tangerines with them. She gave them 3. How many tangerines did Vika give away?
- 3
```

Sometimes the numbers in a problem have no bearing on its substance, which should push the solving model even harder to take the context into account:

```
- Having multiplied eight by seven, the teacher of secondary school No. 77 got 5084. Did he calculate correctly?
- The teacher of secondary school No. 77 made a mistake, since 8*7=56, not 5084
```

## Data Format

Each sample contains a list of related utterances without the "- " prefix, forming a chain of arithmetic tasks in which the statement of each new problem requires analyzing at least the previous utterance.
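For illustration, a single sample under this description might look as follows; the exact serialization is an assumption, not taken from the data files:

```python
# Hypothetical sample: a chain of utterances where each new problem
# builds on the previous answer.
sample = [
    "What is 2+2?",
    "2+2 equals 4",
    "Now add 10 to the result. What did you get?",
    "14",
]
```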
## Lexical Variability of Answers

For many problems the answer is phrased not simply as a number; accompanying text is added to it:

```
- What is 2+2?
- 2+2 equals 4
```

## Metrics of Generative Models

After fine-tuning (1 epoch, lr=1e-5) on 90% of the dataset, the following metrics are obtained on the test split:

| Model | Mean deviation of the numeric answer from the correct one | Share of correct answers |
| ----- | ----- | ----- |
| sberbank-ai/rugpt3small_based_on_gpt2 | 8.03e+02% | 0.057 |
| sberbank-ai/rugpt3medium_based_on_gpt2 | 2.89e+02% | 0.085 |
| sberbank-ai/rugpt3large_based_on_gpt2 | 1.58e+02% | 0.131 |
| facebook/xglm-2.9B | 8.13e+02% | 0.224 |
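A sketch of how these two metrics can be computed for numeric predictions; the function and variable names here are illustrative, not taken from the evaluation code:

```python
def answer_metrics(predicted, gold):
    """Share of exact matches and mean relative deviation (%) of numeric answers."""
    exact = sum(p == g for p, g in zip(predicted, gold)) / len(gold)
    deviation = sum(
        abs(p - g) / abs(g) for p, g in zip(predicted, gold) if g != 0
    ) / len(gold)
    return exact, 100.0 * deviation

print(answer_metrics([4, 477, 10], [4, 477, 9]))  # (0.666..., ~3.7%)
```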
## Sample Generator

The dataset was generated with the template-based generation engine from this repository: [https://github.com/Koziev/math](https://github.com/Koziev/math).

## Dataset Usage

The dataset is used for training a [chatbot](https://github.com/Koziev/chatbot).
| inkoziev/arithmetic | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:ru",
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-01-18T11:18:15+00:00 | {"language_creators": ["machine-generated"], "language": ["ru"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "source_datasets": [], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "arithmetic", "tags": []} | 2023-02-18T12:40:43+00:00 | []
| [
"ru"
]
| TAGS
#task_categories-question-answering #task_ids-closed-domain-qa #language_creators-machine-generated #multilinguality-monolingual #language-Russian #license-cc-by-nc-4.0 #region-us
|
# Arithmetic Problems for a Dialogue System

The dataset contains samples with simple math problems of roughly the following kind:

Most of the problems involve arithmetic operations. There is also a number of problems on finding the roots of a quadratic equation:

There is also a growing set of problems with a worked-out solution:

Some problems are constructed to force the model to pay attention not merely to the presence of numbers, but to the context in which they are used:

Sometimes the numbers in a problem have no bearing on its substance, which should push the solving model even harder to take the context into account:

## Data Format

Each sample contains a list of related utterances without the "- " prefix, forming a chain of arithmetic tasks in which the statement of each new problem requires analyzing at least the previous utterance.

## Lexical Variability of Answers

For many problems the answer is phrased not simply as a number; accompanying text is added to it:

## Metrics of Generative Models

After fine-tuning (1 epoch, lr=1e-5) on 90% of the dataset, the following metrics are obtained on the test split:

## Sample Generator

The dataset was generated with the template-based generation engine from this repository: URL

## Dataset Usage

The dataset is used for training a chatbot.
| [
"# ะัะธัะผะตัะธัะตัะบะธะต ะทะฐะดะฐัะธ ะดะปั ะดะธะฐะปะพะณะพะฒะพะน ัะธััะตะผั\n\nะะฐัะฐัะตั ัะพะดะตัะถะธั ััะผะฟะปั ั ะฟัะพัััะผะธ ะผะฐัะตะผะฐัะธัะตัะบะธะผะธ ะทะฐะดะฐะฝะธัะผะธ ะฟัะธะผะตัะฝะพ ัะฐะบะพะณะพ ะฒะธะดะฐ:\n\n\n\nะัะฝะพะฒะฝะฐั ะผะฐััะฐ ะทะฐะดะฐั ัะฒัะทะฐะฝะฐ ั ะฐัะธัะผะตัะธัะตัะบะธะผะธ ะดะตะนััะฒะธัะผะธ. ะััั ะฝะตะบะพัะพัะพะต ะบะพะปะธัะตััะฒะพ ะทะฐะดะฐั\nะฝะฐ ะฟะพะธัะบ ะบะพัะฝะตะน ะบะฒะฐะดัะฐัะฝะพะณะพ ััะฐะฒะฝะตะฝะธั:\n\n\n\n\nะขะฐะบะถะต ะตััั ะฟะพะฟะพะปะฝัะตะผัะน ะฝะฐะฑะพั ะทะฐะดะฐั ั ัะฐัะบััััะผ ั
ะพะดะพะผ ัะตัะตะฝะธั:\n\n\n\nะะตะบะพัะพััะต ะทะฐะดะฐัะธ ะฟะพัััะพะตะฝั ัะฐะบ, ััะพะฑั ะทะฐััะฐะฒะธัั ะผะพะดะตะปั ะพะฑัะฐัะฐัั ะฒะฝะธะผะฐะฝะธะต ะฝะต ะฟัะพััะพ ะฝะฐ\nะฝะฐะปะธัะธะต ัะธัะตะป, ะฐ ะฝะฐ ะบะพะฝัะตะบัั ะธั
ัะฟะพััะตะฑะปะตะฝะธั:\n\n\n\nะะฝะพะณะดะฐ ัะธัะปะฐ ะฒ ะทะฐะดะฐัะต ะฝะต ะธะผะตัั ะพัะฝะพัะตะฝะธั ะบ ัััะธ ะทะฐะดะฐัะธ, ััะพ ะดะพะปะถะฝะพ ะตัะต ัะธะปัะฝะตะต ะฟะพะฑัะถะดะฐัั ัะตัะฐัััั ะผะพะดะตะปั ััะธััะฒะฐัั ะบะพะฝัะตะบัั:",
"## ะคะพัะผะฐั ะดะฐะฝะฝัั
\n\nะะฐะถะดัะน ััะผะฟะป ัะพะดะตัะถะธั ัะฟะธัะพะบ ัะฒัะทะฐะฝะฝัั
ัะตะฟะปะธะบ ะฑะตะท ะฟัะตัะธะบัะฐ \"- \", ะพะฑัะฐะทัััะธั
ัะตะฟะพัะบั ะฐัะธัะผะตัะธัะตัะบะธั
ะทะฐะดะฐะฝะธะน, ะฒ ะบะพัะพััั
\nััะปะพะฒะธะต ะฝะพะฒะพะน ะทะฐะดะฐัะธ ััะตะฑัะตั ะฐะฝะฐะปะธะทะฐ ะบะฐะบ ะผะธะฝะธะผัะผ ะฟัะตะดัะดััะตะน ัะตะฟะปะธะบะธ.",
"## ะะตะบัะธัะตัะบะฐั ะฒะฐัะธะฐัะธะฒะฝะพััั ะพัะฒะตัะพะฒ\n\nะะปั ะผะฝะพะณะธั
ะทะฐะดะฐั ะพัะฒะตั ััะพัะผัะปะธัะพะฒะฐะฝ ะฝะต ะฟัะพััะพ ะบะฐะบ ัะธัะปะพ, ะฒ ะฝะตะณะพ ะดะพะฑะฐะฒะปะตะฝ ัะพะฟััััะฒัััะธะน ัะตะบัั:",
"## ะะตััะธะบะธ ะณะตะฝะตัะฐัะธะฒะฝัั
ะผะพะดะตะปะตะน\n\nะะพัะปะต ัะฐะนะฝััะฝะฐ (1 ัะฟะพั
ะฐ, lr=1e-5) ะฝะฐ 90% ะดะฐัะฐัะตัะฐ, ะฟะพะปััะฐัััั ัะฐะบะธะต ะผะตััะธะบะธ ะฝะฐ ัะตััะพะฒะพะน ัะฐััะธ:",
"## ะะตะฝะตัะฐัะพั ััะผะฟะปะพะฒ\n\nะัะธ ัะพัะผะธัะพะฒะฐะฝะธะธ ะดะฐัะฐัะตัะฐ ะธัะฟะพะปัะทะพะฒะฐะปัั ะดะฒะธะถะพะบ ัะฐะฑะปะพะฝะฝะพะน ะณะตะฝะตัะฐัะธะธ ะธะท ััะพะณะพ ัะตะฟะพะทะธัะพัะธั: URL",
"## ะัะฟะพะปัะทะพะฒะฐะฝะธะต ะดะฐัะฐัะตัะฐ\n\nะะฐัะฐัะตั ะธัะฟะพะปัะทัะตััั ะดะปั ััะตะฝะธัะพะฒะบะธ ัะฐัะฑะพัะฐ."
]
| [
"TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #language_creators-machine-generated #multilinguality-monolingual #language-Russian #license-cc-by-nc-4.0 #region-us \n",
"# ะัะธัะผะตัะธัะตัะบะธะต ะทะฐะดะฐัะธ ะดะปั ะดะธะฐะปะพะณะพะฒะพะน ัะธััะตะผั\n\nะะฐัะฐัะตั ัะพะดะตัะถะธั ััะผะฟะปั ั ะฟัะพัััะผะธ ะผะฐัะตะผะฐัะธัะตัะบะธะผะธ ะทะฐะดะฐะฝะธัะผะธ ะฟัะธะผะตัะฝะพ ัะฐะบะพะณะพ ะฒะธะดะฐ:\n\n\n\nะัะฝะพะฒะฝะฐั ะผะฐััะฐ ะทะฐะดะฐั ัะฒัะทะฐะฝะฐ ั ะฐัะธัะผะตัะธัะตัะบะธะผะธ ะดะตะนััะฒะธัะผะธ. ะััั ะฝะตะบะพัะพัะพะต ะบะพะปะธัะตััะฒะพ ะทะฐะดะฐั\nะฝะฐ ะฟะพะธัะบ ะบะพัะฝะตะน ะบะฒะฐะดัะฐัะฝะพะณะพ ััะฐะฒะฝะตะฝะธั:\n\n\n\n\nะขะฐะบะถะต ะตััั ะฟะพะฟะพะปะฝัะตะผัะน ะฝะฐะฑะพั ะทะฐะดะฐั ั ัะฐัะบััััะผ ั
ะพะดะพะผ ัะตัะตะฝะธั:\n\n\n\nะะตะบะพัะพััะต ะทะฐะดะฐัะธ ะฟะพัััะพะตะฝั ัะฐะบ, ััะพะฑั ะทะฐััะฐะฒะธัั ะผะพะดะตะปั ะพะฑัะฐัะฐัั ะฒะฝะธะผะฐะฝะธะต ะฝะต ะฟัะพััะพ ะฝะฐ\nะฝะฐะปะธัะธะต ัะธัะตะป, ะฐ ะฝะฐ ะบะพะฝัะตะบัั ะธั
ัะฟะพััะตะฑะปะตะฝะธั:\n\n\n\nะะฝะพะณะดะฐ ัะธัะปะฐ ะฒ ะทะฐะดะฐัะต ะฝะต ะธะผะตัั ะพัะฝะพัะตะฝะธั ะบ ัััะธ ะทะฐะดะฐัะธ, ััะพ ะดะพะปะถะฝะพ ะตัะต ัะธะปัะฝะตะต ะฟะพะฑัะถะดะฐัั ัะตัะฐัััั ะผะพะดะตะปั ััะธััะฒะฐัั ะบะพะฝัะตะบัั:",
"## ะคะพัะผะฐั ะดะฐะฝะฝัั
\n\nะะฐะถะดัะน ััะผะฟะป ัะพะดะตัะถะธั ัะฟะธัะพะบ ัะฒัะทะฐะฝะฝัั
ัะตะฟะปะธะบ ะฑะตะท ะฟัะตัะธะบัะฐ \"- \", ะพะฑัะฐะทัััะธั
ัะตะฟะพัะบั ะฐัะธัะผะตัะธัะตัะบะธั
ะทะฐะดะฐะฝะธะน, ะฒ ะบะพัะพััั
\nััะปะพะฒะธะต ะฝะพะฒะพะน ะทะฐะดะฐัะธ ััะตะฑัะตั ะฐะฝะฐะปะธะทะฐ ะบะฐะบ ะผะธะฝะธะผัะผ ะฟัะตะดัะดััะตะน ัะตะฟะปะธะบะธ.",
"## ะะตะบัะธัะตัะบะฐั ะฒะฐัะธะฐัะธะฒะฝะพััั ะพัะฒะตัะพะฒ\n\nะะปั ะผะฝะพะณะธั
ะทะฐะดะฐั ะพัะฒะตั ััะพัะผัะปะธัะพะฒะฐะฝ ะฝะต ะฟัะพััะพ ะบะฐะบ ัะธัะปะพ, ะฒ ะฝะตะณะพ ะดะพะฑะฐะฒะปะตะฝ ัะพะฟััััะฒัััะธะน ัะตะบัั:",
"## ะะตััะธะบะธ ะณะตะฝะตัะฐัะธะฒะฝัั
ะผะพะดะตะปะตะน\n\nะะพัะปะต ัะฐะนะฝััะฝะฐ (1 ัะฟะพั
ะฐ, lr=1e-5) ะฝะฐ 90% ะดะฐัะฐัะตัะฐ, ะฟะพะปััะฐัััั ัะฐะบะธะต ะผะตััะธะบะธ ะฝะฐ ัะตััะพะฒะพะน ัะฐััะธ:",
"## ะะตะฝะตัะฐัะพั ััะผะฟะปะพะฒ\n\nะัะธ ัะพัะผะธัะพะฒะฐะฝะธะธ ะดะฐัะฐัะตัะฐ ะธัะฟะพะปัะทะพะฒะฐะปัั ะดะฒะธะถะพะบ ัะฐะฑะปะพะฝะฝะพะน ะณะตะฝะตัะฐัะธะธ ะธะท ััะพะณะพ ัะตะฟะพะทะธัะพัะธั: URL",
"## ะัะฟะพะปัะทะพะฒะฐะฝะธะต ะดะฐัะฐัะตัะฐ\n\nะะฐัะฐัะตั ะธัะฟะพะปัะทัะตััั ะดะปั ััะตะฝะธัะพะฒะบะธ ัะฐัะฑะพัะฐ."
]
|
e19db6759252ca92467b067536ff74ae14e0a5f5 |
# Dataset Card for LILA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Usage](#dataset-usage)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lila.science/
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
LILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.
This data set marks the first time that disparate camera trap data sets have been aggregated into a single training environment with a single [taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
This data set consists only of camera trap image data sets, whereas the broader [LILA](https://lila.science/) website also hosts other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those who want to harness ML for this topic.
See below for information about each specific dataset that LILA contains:
<details>
<summary> Caltech Camera Traps </summary>
This data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.
More information about this data set is available [here](https://beerys.github.io/CaltechCameraTraps/).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [email protected].
If you use this data set, please cite the associated manuscript:
```bibtex
@inproceedings{DBLP:conf/eccv/BeeryHP18,
author = {Sara Beery and
Grant Van Horn and
Pietro Perona},
title = {Recognition in Terra Incognita},
booktitle = {Computer Vision - {ECCV} 2018 - 15th European Conference, Munich,
Germany, September 8-14, 2018, Proceedings, Part {XVI}},
pages = {472--489},
year = {2018},
crossref = {DBLP:conf/eccv/2018-16},
url = {https://doi.org/10.1007/978-3-030-01270-0\_28},
doi = {10.1007/978-3-030-01270-0\_28},
timestamp = {Mon, 08 Oct 2018 17:08:07 +0200},
biburl = {https://dblp.org/rec/bib/conf/eccv/BeeryHP18},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
</details>
<details>
<summary> ENA24 </summary>
This data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are “American Crow”, “American Black Bear”, and “Dog”.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{yousif2019dynamic,
title={Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild},
author={Yousif, Hayder and Kays, Roland and He, Zhihai},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2019},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif]([email protected]).
</details>
<details>
<summary> Missouri Camera Traps </summary>
This data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 × 1080 to 2048 × 1536. Sequence lengths vary from 3 to more than 300 frames.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{zhang2016animal,
title={Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification},
author={Zhang, Zhi and He, Zhihai and Cao, Guitao and Cao, Wenming},
journal={IEEE Transactions on Multimedia},
volume={18},
number={10},
pages={2079--2092},
year={2016},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif]([email protected]) and [Zhi Zhang]([email protected]).
</details>
<details>
<summary> North American Camera Trap Images (NACTI) </summary>
This data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. We have also added bounding box annotations to 8892 images (mostly vehicles and birds).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{tabak2019machine,
title={Machine learning to classify animal species in camera trap images: Applications in ecology},
author={Tabak, Michael A and Norouzzadeh, Mohammad S and Wolfson, David W and Sweeney, Steven J and VerCauteren, Kurt C and Snow, Nathan P and Halseth, Joseph M and Di Salvo, Paul A and Lewis, Jesse S and White, Michael D and others},
journal={Methods in Ecology and Evolution},
volume={10},
number={4},
pages={585--590},
year={2019},
publisher={Wiley Online Library}
}
```
For questions about this data set, contact [[email protected]]([email protected]).
</details>
<details>
<summary> WCS Camera Traps </summary>
This data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the [Wildlife Conservation Society](https://www.wcs.org/). The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.
Sequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so, as is the case with most camera trap data sets, empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set [on the LILA website](https://lila.science/datasets/wcscameratraps).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Wellington Camera Traps </summary>
This data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{anton2018monitoring,
title={Monitoring the mammalian fauna of urban areas using remote cameras and citizen science},
author={Anton, Victor and Hartley, Stephen and Geldenhuis, Andre and Wittmer, Heiko U},
journal={Journal of Urban Ecology},
volume={4},
number={1},
pages={juy002},
year={2018},
publisher={Oxford University Press}
}
```
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [Victor Anton]([email protected]).
</details>
<details>
<summary> Island Conservation Camera Traps </summary>
This data set contains approximately 123,000 camera trap images from 123 camera locations from 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.
The most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented between multiple island sites representing significantly different ecosystems (tropical forest, dry forest, and temperate forests). Additionally, this data set represents data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. We have also included approximately 65,000 bounding box annotations for about 50,000 images.
In general cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.
For questions about this data set, contact [David Will]([email protected]) at Island Conservation.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.
</details>
<details>
<summary> Channel Islands Camera Traps </summary>
This data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.
If you use these data in a publication or report, please use the following citation:
The Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.
For questions about this data set, contact [Nathaniel Rindlaub]([email protected]) at The Nature Conservancy.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.
</details>
<details>
<summary> Idaho Camera Traps </summary>
This data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (“deer”, “elk”, and “cattle” are the most common animal classes), but labels also include some state indicators (e.g. “snow on lens”, “foggy lens”). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.
The metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).
Images were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.
</details>
<details>
<summary> Snapshot Serengeti </summary>
This data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the [Snapshot Serengeti project](https://snapshotserengeti.org/) -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.
Labels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomson’s gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshotserengeti-v-2-0/SnapshotSerengeti_S1-11_v2.1.species_list.csv). We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.
The images and species-level labels are described in more detail in the associated manuscript:
```bibtex
@misc{dryad_5pt92,
title = {Data from: Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna},
author = {Swanson, AB and Kosmala, M and Lintott, CJ and Simpson, RJ and Smith, A and Packer, C},
year = {2015},
journal = {Scientific Data},
URL = {https://doi.org/10.5061/dryad.5pt92},
doi = {10.5061/dryad.5pt92},
publisher = {Dryad Digital Repository}
}
```
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Karoo </summary>
This data set contains 14,889 sequences of camera trap images, totaling 38,074 images, from the [Snapshot Karoo](https://www.zooniverse.org/projects/shuebner729/snapshot-karoo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.
Labels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KAR/SnapshotKaroo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kgalagadi </summary>
This data set contains 3,611 sequences of camera trap images, totaling 10,222 images, from the [Snapshot Kgalagadi](https://www.zooniverse.org/projects/shuebner729/snapshot-kgalagadi/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari – an arid savanna. This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.
Labels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KGA/SnapshotKgalagadi_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Enonkishu </summary>
This data set contains 13,301 sequences of camera trap images, totaling 28,544 images, from the [Snapshot Enonkishu](https://www.zooniverse.org/projects/aguthmann/snapshot-enonkishu) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.
Labels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/ENO/SnapshotEnonkishu_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Camdeboo </summary>
This data set contains 12,132 sequences of camera trap images, totaling 30,227 images, from the [Snapshot Camdeboo](https://www.zooniverse.org/projects/shuebner729/snapshot-camdeboo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa, is crucial habitat for many birds on a global scale, with more than fifty endemic and near-endemic species and many migratory species.
Labels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/CDB/SnapshotCamdeboo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Mountain Zebra </summary>
This data set contains 71,688 sequences of camera trap images, totaling 73,034 images, from the [Snapshot Mountain Zebra](https://www.zooniverse.org/projects/meredithspalmer/snapshot-mountain-zebra/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape mountain zebras, ~700 as of 2019 and increasing steadily every year.
Labels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/MTZ/SnapshotMountainZebra_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kruger </summary>
This data set contains 4,747 sequences of camera trap images, totaling 10,072 images, from the [Snapshot Kruger](https://www.zooniverse.org/projects/shuebner729/snapshot-kruger) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa, has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.
Labels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KRU/SnapshotKruger_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> SWG Camera Traps </summary>
This data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Laos, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are “Eurasian Wild Pig”, “Large-antlered Muntjac”, and “Unidentified Murid”). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available here. 101,659 bounding boxes are provided on 88,135 images.
This data set is provided by the Saola Working Group; providers include:
- IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group (SWG)
- Asian Arks
- Wildlife Conservation Society (Lao)
- WWF Lao
- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)
- Center for Environment and Rural Development, Vinh University, Vietnam
If you use these data in a publication or report, please use the following citation:
SWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group. Dataset.
For questions about this data set, contact [email protected].
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Orinoquia Camera Traps </summary>
This data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km²) and Las Unamas (40 km²), located in the Meta department in the Orinoquía region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart from one another, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.
This data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.
The main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review the model performance of AI-powered platforms – Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2, along with R code for evaluating model performance of these platforms, in the accompanying [GitHub repository](https://github.com/julianavelez1/Processing-Camera-Trap-Data-Using-AI).
If you use these data in a publication or report, please use the following citation:
```bibtex
@article{velez2022choosing,
title={Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence},
author={V{\'e}lez, Juliana and Castiblanco-Camacho, Paula J and Tabak, Michael A and Chalmers, Carl and Fergus, Paul and Fieberg, John},
journal={arXiv preprint arXiv:2202.02283},
year={2022}
}
```
For questions about this data set, contact [Juliana Velez Gomez](mailto:[email protected]).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
### Supported Tasks and Leaderboards
No leaderboards exist for LILA.
### Languages
The [LILA taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/) is provided in English.
## Dataset Structure
### Data Instances
The data annotations are provided in [COCO Camera Traps](https://github.com/Microsoft/CameraTraps/blob/master/data_management/README.md#coco-cameratraps-format) format.
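As a rough orientation, a record in that format has `images`, `annotations`, and `categories` sections. The sketch below is illustrative only: the field names follow the format spec linked above, but the identifiers and values are made up, not taken from LILA.

```python
# Minimal sketch of the COCO Camera Traps layout; illustrative values only.
coco_camera_traps = {
    "images": [{
        "id": "img_0001",
        "file_name": "location_01/img_0001.jpg",
        "width": 2048,
        "height": 1536,
        "location": "location_01",   # camera location identifier
        "seq_id": "seq_0001",        # sequence identifier, where available
    }],
    "annotations": [{
        "id": "ann_0001",
        "image_id": "img_0001",
        "category_id": 1,
        "bbox": [451.0, 233.0, 82.0, 66.0],  # optional [x, y, width, height] in pixels
    }],
    "categories": [{"id": 1, "name": "opossum"}],
}
```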
All of the datasets share a common category taxonomy, which is defined on the [LILA website](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
### Data Fields
Different datasets may have slightly varying fields, which include:
`file_name`: the file name \
`width` and `height`: the dimensions of the image \
`study`: which research study the image was collected as part of \
`location`: the name of the location at which the image was taken \
`annotations`: information about image annotation, which includes the taxonomy information, bounding box/boxes (`bbox`/`bboxes`) if any, as well as any other annotation information. \
`image`: the `path` to download the image and any other information that is available, e.g. its size in `bytes`.
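For example, a single record can be inspected as follows (a minimal sketch; the configuration name is borrowed from the usage examples later in this card, and the exact fields vary by dataset as noted above):

```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
example = dataset[0]
print(sorted(example.keys()))                 # fields available in this configuration
print(example["file_name"], example["width"], example["height"])
print(example["annotations"]["taxonomy"][0])  # taxonomy labels of the first annotation
```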
### Data Splits
This dataset does not have a predefined train/test split.
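A split can instead be derived on the fly; a minimal sketch using the built-in splitter from the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
# Hold out 20% of the images; fix the seed for reproducibility. Note that a
# random split ignores sequence and location structure, so near-duplicate
# frames from one burst can land in both splits -- for a stricter evaluation,
# consider splitting by location instead.
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```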
## Dataset Creation
### Curation Rationale
The datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.
### Source Data
#### Initial data collection and normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
Each dataset has been annotated by the members of the project/organization that provided it.
#### Who are the annotators?
The annotations have been provided by domain experts in fields such as biology and ecology.
### Personal and Sensitive Information
Some of the original data sets included a “human” class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the [LILA maintainers](mailto:[email protected]), since in some cases it will be possible to release those images under an alternative license.
## Considerations for Using the Data
### Social Impact of Dataset
Machine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.
### Discussion of Biases
These datasets do not represent global diversity, but are examples of local ecosystems and animals.
### Other Known Limitations
N/A
## Additional Information
### Working with Taxonomies
All the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed. Strings can likewise be converted to integers as needed, to filter the dataset. In the example below we filter the "Caltech Camera Traps" dataset to find all the entries with "felis catus" as the species for the first annotation.
```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
# Filters to show only cats
cats = dataset.filter(lambda x: x["annotations"]["taxonomy"][0]["species"] == taxonomy["species"].str2int("felis catus"))
```
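To sanity-check the result, a filtered entry's ClassLabel integer can be mapped back to its string (a small follow-up sketch reusing the names from the block above):

```python
first_cat = cats[0]
species_id = first_cat["annotations"]["taxonomy"][0]["species"]
print(taxonomy["species"].int2str(species_id))  # expected: "felis catus"
```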
The original common names have been saved with their taxonomy mappings in this repository in `common_names_to_tax.json`. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there is a small number of duplicate common names with different taxonomy values which you will need to disambiguate.
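A quick way to surface those duplicate common names (a minimal sketch, assuming pandas):

```python
import pandas as pd

names = pd.read_json(
    "https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json",
    lines=True,
)
# Common names that map to more than one taxonomy combination
duplicates = names[names["common_name"].duplicated(keep=False)]
print(duplicates.sort_values("common_name"))
```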
The following example loads the first "sea turtle" in the "Island Conservation Camera Traps" dataset.
```python
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Island Conservation Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
sea_turtle = LILA_COMMON_NAMES_TO_TAXONOMY.loc["sea turtle"].to_dict()
sea_turtle = {k: taxonomy[k].str2int(v) if v is not None else v for k, v in sea_turtle.items()}  # Map to ClassLabel integers
sea_turtle_dataset = dataset.filter(lambda x: x["annotations"]["taxonomy"][0] == sea_turtle)
```
The example below selects a random item from the dataset, and then maps from the taxonomy to a common name:
```python
import numpy as np
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
random_entry = dataset.shuffle()[0]
filter_taxonomy = random_entry["annotations"]["taxonomy"][0]
# Keep only the taxonomy levels that are set, mapped back to their string names
filter_keys = list(map(lambda x: (x[0], taxonomy[x[0]].int2str(x[1])), filter(lambda x: x[1] is not None, list(filter_taxonomy.items()))))
if len(filter_keys) > 0:
print(LILA_COMMON_NAMES_TO_TAXONOMY[np.logical_and.reduce([
LILA_COMMON_NAMES_TO_TAXONOMY[k] == v for k,v in filter_keys
])])
else:
print("No common name found for the item.")
```
### Dataset Curators
LILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.
### Licensing Information
Many, but not all, LILA data sets were released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/). Check the details of the specific dataset you are using in its section above.
### Citation Information
Citations for each dataset (if they exist) are provided in its section above.
### Contributions
Thanks to [@NimaBoscarino](https://github.com/NimaBoscarino/) for adding this dataset.
| polinaeterna/lila_camera_traps | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"biodiversity",
"camera trap data",
"wildlife monitoring",
"region:us"
]
| 2023-01-18T12:10:16+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["image-classification"], "pretty_name": "LILA Camera Traps", "tags": ["biodiversity", "camera trap data", "wildlife monitoring"], "duplicated_from": "society-ethics/lila_camera_traps"} | 2023-01-18T12:10:17+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-other #biodiversity #camera trap data #wildlife monitoring #region-us
|
# Dataset Card for LILA
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Usage
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: https://lila.science/
- Repository: N/A
- Paper: N/A
- Leaderboard: N/A
- Point of Contact: [email protected]
### Dataset Summary
LILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.
This data set is the first time when disparate camera trap data sets have been aggregated into a single training environment with a single taxonomy.
This data set consists of only camera trap image data sets, whereas the broader LILA website also has other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those that want to harness ML for this topic.
See below for information about each specific dataset that LILA contains:
<details>
<summary> Caltech Camera Traps </summary>
This data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.
More information about this data set is available here.
This data set is released under the Community Data License Agreement (permissive variant).
For questions about this data set, contact caltechcameratraps@URL.
If you use this data set, please cite the associated manuscript:
</details>
<details>
<summary> ENA24 </summary>
This data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are โAmerican Crowโ, โAmerican Black Bearโ, and โDogโ.
This data set is released under the Community Data License Agreement (permissive variant).
Please cite this manuscript if you use this data set:
For questions about this data set, contact Hayder Yousif.
</details>
<details>
<summary> Missouri Camera Traps </summary>
This data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 ร 1080 to 2048 ร 1536. Sequence lengths vary from 3 to more than 300 frames.
This data set is released under the Community Data License Agreement (permissive variant).
If you use this data set, please cite the associated manuscript:
For questions about this data set, contact Hayder Yousif and Zhi Zhang.
</details>
<details>
<summary> North American Camera Trap Images (NACTI) </summary>
This data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. We have also added bounding box annotations to 8892 images (mostly vehicles and birds).
This data set is released under the Community Data License Agreement (permissive variant).
Please cite this manuscript if you use this data set:
For questions about this data set, contact northamericancameratrapimages@URL.
</details>
<details>
<summary> WCS Camera Traps </summary>
This data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the Wildlife Conservation Society. The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.
Sequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so โ as is the case with most camera trap data sets โ empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set on the LILA website.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> Wellington Camera Traps </summary>
This data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).
If you use this data set, please cite the associated manuscript:
This data set is released under the Community Data License Agreement (permissive variant).
For questions about this data set, contact Victor Anton.
</details>
<details>
<summary> Island Conservation Camera Traps </summary>
This data set contains approximately 123,000 camera trap images from 123 camera locations from 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.
The most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented between multiple island sites representing significantly different ecosystems (tropical forest, dry forest, and temperate forests). Additionally, this data set represents data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. We have also included approximately 65,000 bounding box annotations for about 50,000 images.
In general cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.
For questions about this data set, contact David Will at Island Conservation.
This data set is released under the Community Data License Agreement (permissive variant).
The original data set included a โhumanโ class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.
</details>
<details>
<summary> Channel Islands Camera Traps </summary>
This data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent1 (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.
If you use these data in a publication or report, please use the following citation:
The Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.
For questions about this data set, contact Nathaniel Rindlaub at The Nature Conservancy.
This data set is released under the Community Data License Agreement (permissive variant).
The original data set included a โhumanโ class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.
</details>
<details>
<summary> Idaho Camera Traps </summary>
This data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (โdeerโ, โelkโ, and โcattleโ are the most common animal classes), but labels also include some state indicators (e.g. โsnow on lensโ, โfoggy lensโ). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.
The metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).
Images were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.
</details>
<details>
<summary> Snapshot Serengeti </summary>
This data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the Snapshot Serengeti project -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.
Labels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomsonโs gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available here. We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.
The images and species-level labels are described in more detail in the associated manuscript:
For questions about this data set, contact Sarah Huebner at the University of Minnesota.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> Snapshot Karoo </summary>
This data set contains 14889 sequences of camera trap images, totaling 38074 images, from the Snapshot Karoo project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.
Labels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available here.
For questions about this data set, contact Sarah Huebner at the University of Minnesota.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> Snapshot Kgalagadi </summary>
This data set contains 3611 sequences of camera trap images, totaling 10222 images, from the Snapshot Kgalagadi project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari โ an arid savanna. This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.
Labels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available here.
For questions about this data set, contact Sarah Huebner at the University of Minnesota.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> Snapshot Enonkishu </summary>
This data set contains 13301 sequences of camera trap images, totaling 28544 images, from the Snapshot Enonkishu project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.
Labels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available here.
For questions about this data set, contact Sarah Huebner at the University of Minnesota.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> Snapshot Camdeboo </summary>
This data set contains 12132 sequences of camera trap images, totaling 30227 images, from the Snapshot Camdeboo project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa is crucial habitat for many birds on a global scale, with greater than fifty endemic and near-endemic species and many migratory species.
Labels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available here.
For questions about this data set, contact Sarah Huebner at the University of Minnesota.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> Snapshot Mountain Zebra </summary>
This data set contains 71688 sequences of camera trap images, totaling 73034 images, from the Snapshot Mountain Zebra project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape Mountain zebras, ~700 as of 2019 and increasing steadily every year.
Labels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available here.
For questions about this data set, contact Sarah Huebner at the University of Minnesota.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> Snapshot Kruger </summary>
This data set contains 4747 sequences of camera trap images, totaling 10072 images, from the Snapshot Kruger project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.
Labels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available here.
For questions about this data set, contact Sarah Huebner at the University of Minnesota.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> SWG Camera Traps </summary>
This data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Lao, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are โEurasian Wild Pigโ, โLarge-antlered Muntjacโ, and โUnidentified Muridโ). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available here. 101,659 bounding boxes are provided on 88,135 images.
This data set is provided by the Saola Working Group; providers include:
- IUCN SSC Asian Wild Cattle Specialist Groupโs Saola Working Group (SWG)
- Asian Arks
- Wildlife Conservation Society (Lao)
- WWF Lao
- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)
- Center for Environment and Rural Development, Vinh University, Vietnam
If you use these data in a publication or report, please use the following citation:
SWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Groupโs Saola Working Group. Dataset.
For questions about this data set, contact saolawg@URL.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
<details>
<summary> Orinoquia Camera Traps </summary>
This data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km2) and Las Unamas (40 km2), located in the Meta department in the Orinoquรญa region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart from one another, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.
This data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.
The main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review model performance of AI-powered platforms โ Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2 and R code for evaluating model performance of these platforms in the accompanying GitHub repository.
If you use these data in a publication or report, please use the following citation:
For questions about this data set, contact Juliana Velez Gomez.
This data set is released under the Community Data License Agreement (permissive variant).
</details>
### Supported Tasks and Leaderboards
No leaderboards exist for LILA.
### Languages
The LILA taxonomy is provided in English.
## Dataset Structure
### Data Instances
The data annotations are provided in COCO Camera Traps format.
All of the datasets share a common category taxonomy, which is defined on the LILA website.
### Data Fields
Different datasets may have slightly varying fields, which include:
'file_name': the file name \
'width' and 'height': the dimensions of the image \
'study': which research study the image was collected as part of \
'location' : the name of the location at which the image was taken \
'annotations': information about image annotation, which includes the taxonomy information, bounding box/boxes ('bbox'/'bboxes') if any, as well as any other annotation information. \
'image' : the 'path' to download the image and any other information that is available, e.g. its size in 'bytes'.
### Data Splits
This dataset does not have a predefined train/test split.
## Dataset Creation
### Curation Rationale
The datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.
### Source Data
#### Initial data collection and normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
Each dataset has been annotated by the members of the project/organization that provided it.
#### Who are the annotators?
The annotations have been provided by domain experts in fields such as biology and ecology.
### Personal and Sensitive Information
Some of the original data sets included a โhumanโ class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the LILA maintainers, since in some cases it will be possible to release those images under an alternative license.
## Considerations for Using the Data
### Social Impact of Dataset
Machine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.
### Discussion of Biases
These datasets do not represent global diversity, but are examples of local ecosystems and animals.
### Other Known Limitations
N/A
## Additional Information
### Working with Taxonomies
All the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed. Strings can likewise be converted to integers as needed, to filter the dataset. In the example below we filter the "Caltech Camera Traps" dataset to find all the entries with a "felis catus" as the species for the first annotation.
The original common names have been saved with their taxonomy mappings in this repository in 'common_names_to_tax.json'. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there is a small number of duplicate common names with different taxonomy values which you will need to disambiguate.
The following example loads the first "sea turtle" in the "Island Conservation Camera Traps" dataset.
The example below selects a random item from the dataset, and then maps from the taxonomy to a common name:
### Dataset Curators
LILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.
### Licensing Information
Many, but not all, LILA data sets were released under the Community Data License Agreement (permissive variant). Check the details of the specific dataset you are using in its section above.
Citations for each dataset (if they exist) are provided in its section above.
### Contributions
Thanks to @NimaBoscarino for adding this dataset.
| [
"# Dataset Card for LILA",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Usage\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: https://lila.science/\n- Repository: N/A\n- Paper: N/A\n- Leaderboard: N/A\n- Point of Contact: [email protected]",
"### Dataset Summary\n\nLILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.\n\nThis data set is the first time when disparate camera trap data sets have been aggregated into a single training environment with a single taxonomy.\n\nThis data set consists of only camera trap image data sets, whereas the broader LILA website also has other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those that want to harness ML for this topic.\n\n\nSee below for information about each specific dataset that LILA contains:\n\n<details>\n<summary> Caltech Camera Traps </summary>\n\nThis data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.\nMore information about this data set is available here.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nFor questions about this data set, contact caltechcameratraps@URL.\n\nIf you use this data set, please cite the associated manuscript:\n\n</details>\n\n<details>\n<summary> ENA24 </summary>\n\nThis data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are โAmerican Crowโ, โAmerican Black Bearโ, and โDogโ.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nPlease cite this manuscript if you use this data set:\n\nFor questions about this data set, contact Hayder Yousif.\n\n</details>\n\n<details>\n<summary> Missouri Camera Traps </summary>\n\nThis data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 ร 1080 to 2048 ร 1536. Sequence lengths vary from 3 to more than 300 frames.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nIf you use this data set, please cite the associated manuscript:\n\nFor questions about this data set, contact Hayder Yousif and Zhi Zhang.\n</details>\n\n<details>\n<summary> North American Camera Trap Images (NACTI) </summary>\n\nThis data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. 
We have also added bounding box annotations to 8892 images (mostly vehicles and birds).\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nPlease cite this manuscript if you use this data set:\n\n\nFor questions about this data set, contact northamericancameratrapimages@URL.\n\n</details>\n\n<details>\n<summary> WCS Camera Traps </summary>\n\nThis data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the Wildlife Conservation Society. The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.\n\nSequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so โ as is the case with most camera trap data sets โ empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set on the LILA website.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n<details>\n<summary> Wellington Camera Traps </summary>\n\nThis data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).\n\nIf you use this data set, please cite the associated manuscript:\n\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nFor questions about this data set, contact Victor Anton.\n</details>\n\n<details>\n<summary> Island Conservation Camera Traps </summary>\n\nThis data set contains approximately 123,000 camera trap images from 123 camera locations from 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.\n\nThe most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented between multiple island sites representing significantly different ecosystems (tropical forest, dry forest, and temperate forests). Additionally, this data set represents data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. 
We have also included approximately 65,000 bounding box annotations for about 50,000 images.\n\nIn general cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.\n\nFor questions about this data set, contact David Will at Island Conservation.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nThe original data set included a โhumanโ class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.\n</details>\n\n<details>\n<summary> Channel Islands Camera Traps </summary>\n\nThis data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent1 (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.\n\nIf you use these data in a publication or report, please use the following citation:\n\nThe Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.\n\nFor questions about this data set, contact Nathaniel Rindlaub at The Nature Conservancy.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nThe original data set included a โhumanโ class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.\n\n</details>\n\n<details>\n<summary> Idaho Camera Traps </summary>\n\nThis data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (โdeerโ, โelkโ, and โcattleโ are the most common animal classes), but labels also include some state indicators (e.g. โsnow on lensโ, โfoggy lensโ). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.\n\nThe metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).\n\nImages were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. 
Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.\n</details>\n\n<details>\n<summary> Snapshot Serengeti </summary>\n\nThis data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the Snapshot Serengeti project -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.\n\nLabels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomson's gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available here. We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.\n\nThe images and species-level labels are described in more detail in the associated manuscript:\n\n\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n<details>\n<summary> Snapshot Karoo </summary>\n\nThis data set contains 14889 sequences of camera trap images, totaling 38074 images, from the Snapshot Karoo project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.\n\nLabels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Kgalagadi </summary>\n\nThis data set contains 3611 sequences of camera trap images, totaling 10222 images, from the Snapshot Kgalagadi project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari, an arid savanna.
This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.\n\nLabels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Enonkishu </summary>\n\nThis data set contains 13301 sequences of camera trap images, totaling 28544 images, from the Snapshot Enonkishu project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.\n\nLabels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Camdeboo </summary>\n\nThis data set contains 12132 sequences of camera trap images, totaling 30227 images, from the Snapshot Camdeboo project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa is crucial habitat for many birds on a global scale, with greater than fifty endemic and near-endemic species and many migratory species.\n\nLabels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Mountain Zebra </summary>\n\nThis data set contains 71688 sequences of camera trap images, totaling 73034 images, from the Snapshot Mountain Zebra project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. 
Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape Mountain zebras, ~700 as of 2019 and increasing steadily every year.\n\nLabels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Kruger </summary>\n\nThis data set contains 4747 sequences of camera trap images, totaling 10072 images, from the Snapshot Kruger project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.\n\nLabels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> SWG Camera Traps </summary>\n\nThis data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Lao, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are \"Eurasian Wild Pig\", \"Large-antlered Muntjac\", and \"Unidentified Murid\"). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available here. 101,659 bounding boxes are provided on 88,135 images.\n\nThis data set is provided by the Saola Working Group; providers include:\n\n- IUCN SSC Asian Wild Cattle Specialist Group's Saola Working Group (SWG)\n- Asian Arks\n- Wildlife Conservation Society (Lao)\n- WWF Lao\n- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)\n- Center for Environment and Rural Development, Vinh University, Vietnam\n\nIf you use these data in a publication or report, please use the following citation:\n\nSWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Group's Saola Working Group.
Dataset.\n\nFor questions about this data set, contact saolawg@URL.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\n</details>\n\n<details>\n<summary> Orinoquia Camera Traps </summary>\n\nThis data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km2) and Las Unamas (40 km2), located in the Meta department in the Orinoquía region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart from one another, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.\n\nThis data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.\n\nThe main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review model performance of AI-powered platforms: Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2 and R code for evaluating model performance of these platforms in the accompanying GitHub repository.\n\nIf you use these data in a publication or report, please use the following citation:\n\nFor questions about this data set, contact Juliana Velez Gomez.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>",
"### Supported Tasks and Leaderboards\n\nNo leaderboards exist for LILA.",
"### Languages\n\nThe LILA taxonomy is provided in English.",
"## Dataset Structure",
"### Data Instances\n\nThe data annotations are provided in COCO Camera Traps format.\n\nAll of the datasets share a common category taxonomy, which is defined on the LILA website.",
"### Data Fields\n\nDifferent datasets may have slightly varying fields, which include:\n\n'file_name': the file name \\\n'width' and 'height': the dimensions of the image \\\n'study': which research study the image was collected as part of \\\n'location' : the name of the location at which the image was taken \\\n 'annotations': information about image annotation, which includes the taxonomy information, bounding box/boxes ('bbox'/'bboxes') if any, as well as any other annotation information. \\\n 'image' : the 'path' to download the image and any other information that is available, e.g. its size in 'bytes'.",
"### Data Splits\n\nThis dataset does not have a predefined train/test split.",
"## Dataset Creation",
"### Curation Rationale\n\nThe datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.",
"### Source Data",
"#### Initial data collection and normalization\n\nN/A",
"#### Who are the source language producers?\n\nN/A",
"### Annotations",
"#### Annotation process\n\nEach dataset has been annotated by the members of the project/organization that provided it.",
"#### Who are the annotators?\n\nThe annotations have been provided by domain experts in fields such as biology and ecology.",
"### Personal and Sensitive Information\n\nSome of the original data sets included a โhumanโ class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the LILA maintainers, since in some cases it will be possible to release those images under an alternative license.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nMachine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.",
"### Discussion of Biases\n\nThese datasets do not represent global diversity, but are examples of local ecosystems and animals.",
"### Other Known Limitations\n\nN/A",
"## Additional Information",
"### Working with Taxonomies\n\nAll the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed. Strings can likewise be converted to integers as needed, to filter the dataset. In the example below we filter the \"Caltech Camera Traps\" dataset to find all the entries with a \"felis catus\" as the species for the first annotation.\n\n\n\nThe original common names have been saved with their taxonomy mappings in this repository in 'common_names_to_tax.json'. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there is a small number of duplicate common names with different taxonomy values which you will need to disambiguate.\n\nThe following example loads the first \"sea turtle\" in the \"Island Conservation Camera Traps\" dataset.\n\n\n\nThe example below selects a random item from the dataset, and then maps from the taxonomy to a common name:",
"### Dataset Curators\n\nLILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.",
"### Licensing Information\n\nMany, but not all, LILA data sets were released under the Community Data License Agreement (permissive variant). Check the details of the specific dataset you are using in its section above.\n\n\n\nCitations for each dataset (if they exist) are provided in its section above.",
"### Contributions\n\nThanks to @NimaBoscarino for adding this dataset."
]
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
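Each task can be loaded through the `datasets` library by passing its configuration name; a minimal sketch (shown for `cola`, but any of the configurations listed below can be substituted):

```python
from datasets import load_dataset

# Load a single GLUE task by passing its configuration name
cola = load_dataset("glue", "cola")
print(cola)  # DatasetDict with train, validation, and test splits
```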
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
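Predictions for `ax` can therefore come from any MNLI-trained classifier; the sketch below is one hedged example, assuming the publicly available `roberta-large-mnli` checkpoint and the `transformers` pipeline API:

```python
from datasets import load_dataset
from transformers import pipeline

# Any MNLI-trained classifier works here; roberta-large-mnli is one
# publicly available checkpoint (an assumption, not a GLUE requirement).
nli = pipeline("text-classification", model="roberta-large-mnli")
ax = load_dataset("glue", "ax", split="test")

example = ax[0]
prediction = nli({"text": example["premise"], "text_pair": example["hypothesis"]})
print(prediction)  # e.g. [{'label': 'CONTRADICTION', 'score': ...}]
```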
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the MNLI authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
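The available configuration names can also be listed programmatically; a small sketch using a `datasets` utility:

```python
from datasets import get_dataset_config_names

# Should list the twelve configurations described above:
# cola, sst2, mrpc, qqp, stsb, mnli, mnli_mismatched, mnli_matched,
# qnli, rte, wnli, and ax.
print(get_dataset_config_names("glue"))
```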
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
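The `label` of `-1` marks withheld test labels; this can be checked directly (a minimal sketch):

```python
from datasets import load_dataset

ax = load_dataset("glue", "ax", split="test")
# ax ships without gold labels, so every label is the placeholder -1
print(set(ax["label"]))  # {-1}
```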
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
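The integer `label` can be mapped back to its name through the split's `ClassLabel` feature, as in this sketch:

```python
from datasets import load_dataset

cola = load_dataset("glue", "cola", split="train")
# Map the integer label back to its name via the ClassLabel feature:
# 0 -> "unacceptable", 1 -> "acceptable"
print(cola.features["label"].int2str(cola[0]["label"]))  # "acceptable"
```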
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
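Because MNLI evaluates on both matched and mismatched domains, the loaded `DatasetDict` carries five splits rather than the usual three; a quick check:

```python
from datasets import load_dataset

mnli = load_dataset("glue", "mnli")
print(list(mnli.keys()))
# ['train', 'validation_matched', 'validation_mismatched',
#  'test_matched', 'test_mismatched']
```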
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
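All of the `label` fields above are `ClassLabel` features, so names and integer ids convert in both directions; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("glue", "mnli", split="validation_matched")
label = ds.features["label"]
print(label.names)               # ['entailment', 'neutral', 'contradiction']
print(label.str2int("neutral"))  # 1
```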
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
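The integer labels listed above come from `ClassLabel` features, so they can be decoded back to strings at runtime. A short sketch, again assuming the canonical `glue` loader:

```python
from datasets import load_dataset

# The label column is a ClassLabel; its .names attribute mirrors the
# value lists documented above.
cola = load_dataset("glue", "cola")
label = cola["train"].features["label"]
print(label.names)                    # ['unacceptable', 'acceptable']
print(label.int2str(1))               # 'acceptable'
print(label.str2int("unacceptable"))  # 0
```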
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
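The split sizes above can also be recovered programmatically rather than read off the tables. A sketch (note that this downloads every config, so the first run is slow):

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate the GLUE configs and print each split's row count.
for config in get_dataset_config_names("glue"):
    ds = load_dataset("glue", config)
    print(config, {split: ds[split].num_rows for split in ds})
```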
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | mariosasko/glue | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"qa-nli",
"coreference-nli",
"paraphrase-identification",
"region:us"
]
| 2023-01-18T12:19:24+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-scoring"], "paperswithcode_id": "glue", "pretty_name": "GLUE (General Language Understanding Evaluation benchmark)", "configs": ["ax", "cola", "mnli", "mnli_matched", "mnli_mismatched", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"], "tags": ["qa-nli", "coreference-nli", "paraphrase-identification"], "dataset_info": [{"config_name": "cola", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "unacceptable", "1": "acceptable"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 61049, "num_examples": 1063}, {"name": "train", "num_bytes": 489149, "num_examples": 8551}, {"name": "validation", "num_bytes": 60850, "num_examples": 1043}], "download_size": 376971, "dataset_size": 611048}, {"config_name": "sst2", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 217556, "num_examples": 1821}, {"name": "train", "num_bytes": 4715283, "num_examples": 67349}, {"name": "validation", "num_bytes": 106692, "num_examples": 872}], "download_size": 7439277, "dataset_size": 5039531}, {"config_name": "mrpc", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_equivalent", "1": "equivalent"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 443498, "num_examples": 1725}, {"name": "train", "num_bytes": 946146, "num_examples": 3668}, {"name": "validation", "num_bytes": 106142, "num_examples": 408}], "download_size": 1494541, "dataset_size": 1495786}, {"config_name": "qqp", "features": [{"name": "question1", "dtype": "string"}, {"name": "question2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_duplicate", "1": "duplicate"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 50901116, "num_examples": 363846}, {"name": "validation", "num_bytes": 5653794, "num_examples": 40430}, {"name": "test", "num_bytes": 55171431, "num_examples": 390965}], "download_size": 41696084, "dataset_size": 111726341}, {"config_name": "stsb", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "float32"}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 170847, "num_examples": 1379}, {"name": "train", "num_bytes": 758394, "num_examples": 5749}, {"name": "validation", "num_bytes": 217012, "num_examples": 1500}], "download_size": 802872, "dataset_size": 1146253}, {"config_name": "mnli", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test_matched", "num_bytes": 1854787, "num_examples": 9796}, {"name": "test_mismatched", "num_bytes": 
1956866, "num_examples": 9847}, {"name": "train", "num_bytes": 74865118, "num_examples": 392702}, {"name": "validation_matched", "num_bytes": 1839926, "num_examples": 9815}, {"name": "validation_mismatched", "num_bytes": 1955384, "num_examples": 9832}], "download_size": 312783507, "dataset_size": 82472081}, {"config_name": "mnli_mismatched", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 1956866, "num_examples": 9847}, {"name": "validation", "num_bytes": 1955384, "num_examples": 9832}], "download_size": 312783507, "dataset_size": 3912250}, {"config_name": "mnli_matched", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 1854787, "num_examples": 9796}, {"name": "validation", "num_bytes": 1839926, "num_examples": 9815}], "download_size": 312783507, "dataset_size": 3694713}, {"config_name": "qnli", "features": [{"name": "question", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 1376516, "num_examples": 5463}, {"name": "train", "num_bytes": 25677924, "num_examples": 104743}, {"name": "validation", "num_bytes": 1371727, "num_examples": 5463}], "download_size": 10627589, "dataset_size": 28426167}, {"config_name": "rte", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 975936, "num_examples": 3000}, {"name": "train", "num_bytes": 848888, "num_examples": 2490}, {"name": "validation", "num_bytes": 90911, "num_examples": 277}], "download_size": 697150, "dataset_size": 1915735}, {"config_name": "wnli", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_entailment", "1": "entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 37992, "num_examples": 146}, {"name": "train", "num_bytes": 107517, "num_examples": 635}, {"name": "validation", "num_bytes": 12215, "num_examples": 71}], "download_size": 28999, "dataset_size": 157724}, {"config_name": "ax", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 238392, "num_examples": 1104}], "download_size": 222257, "dataset_size": 238392}], "train-eval-index": [{"config": "cola", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "sst2", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": 
"validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "mrpc", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "qqp", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question1": "text1", "question2": "text2", "label": "target"}}, {"config": "stsb", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "mnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation_matched"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_mismatched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_matched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "qnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "text1", "sentence": "text2", "label": "target"}}, {"config": "rte", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "wnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]} | 2023-06-08T15:42:25+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #task_ids-acceptability-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-sentiment-classification #task_ids-text-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #qa-nli #coreference-nli #paraphrase-identification #region-us
| Dataset Card for GLUE
=====================
Table of Contents
-----------------
* Dataset Card for GLUE
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Languages
+ Dataset Structure
- Data Instances
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Data Fields
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Data Splits
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper:
* Point of Contact:
* Size of downloaded dataset files: 955.33 MB
* Size of the generated dataset: 229.68 MB
* Total amount of disk used: 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (URL) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli\_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli\_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on the corresponding development set examples. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 'en').
Dataset Structure
-----------------
### Data Instances
#### ax
* Size of downloaded dataset files: 0.21 MB
* Size of the generated dataset: 0.23 MB
* Total amount of disk used: 0.44 MB
An example of 'test' looks as follows.
#### cola
* Size of downloaded dataset files: 0.36 MB
* Size of the generated dataset: 0.58 MB
* Total amount of disk used: 0.94 MB
An example of 'train' looks as follows.
#### mnli
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 78.65 MB
* Total amount of disk used: 376.95 MB
An example of 'train' looks as follows.
#### mnli\_matched
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 3.52 MB
* Total amount of disk used: 301.82 MB
An example of 'test' looks as follows.
#### mnli\_mismatched
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 3.73 MB
* Total amount of disk used: 302.02 MB
An example of 'test' looks as follows.
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
### Data Fields
The data fields are the same among all splits.
#### ax
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### cola
* 'sentence': a 'string' feature.
* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).
* 'idx': a 'int32' feature.
#### mnli
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mnli\_matched
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mnli\_mismatched
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
### Data Splits
#### ax
#### cola
#### mnli
#### mnli\_matched
#### mnli\_mismatched
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset.
| [
"### Dataset Summary\n\n\nGLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems.",
"### Supported Tasks and Leaderboards\n\n\nThe leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:",
"#### ax\n\n\nA manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.",
"#### cola\n\n\nThe Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.",
"#### mnli\n\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.",
"#### mnli\\_matched\n\n\nThe matched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mnli\\_mismatched\n\n\nThe mismatched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mrpc\n\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.",
"#### qnli\n\n\nThe Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.",
"#### qqp\n\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.",
"#### rte\n\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.",
"#### sst2\n\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.",
"#### stsb\n\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.",
"#### wnli\n\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).",
"### Languages\n\n\nThe language data in GLUE is in English (BCP-47 'en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### ax\n\n\n* Size of downloaded dataset files: 0.21 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.44 MB\n\n\nAn example of 'test' looks as follows.",
"#### cola\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.58 MB\n* Total amount of disk used: 0.94 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 78.65 MB\n* Total amount of disk used: 376.95 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\\_matched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.52 MB\n* Total amount of disk used: 301.82 MB\n\n\nAn example of 'test' looks as follows.",
"#### mnli\\_mismatched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.73 MB\n* Total amount of disk used: 302.02 MB\n\n\nAn example of 'test' looks as follows.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### ax\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### cola\n\n\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).\n* 'idx': a 'int32' feature.",
"#### mnli\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_matched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_mismatched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Splits",
"#### ax",
"#### cola",
"#### mnli",
"#### mnli\\_matched",
"#### mnli\\_mismatched",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset."
]
| [
"TAGS\n#task_categories-text-classification #task_ids-acceptability-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-sentiment-classification #task_ids-text-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #qa-nli #coreference-nli #paraphrase-identification #region-us \n",
"### Dataset Summary\n\n\nGLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems.",
"### Supported Tasks and Leaderboards\n\n\nThe leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:",
"#### ax\n\n\nA manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.",
"#### cola\n\n\nThe Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.",
"#### mnli\n\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.",
"#### mnli\\_matched\n\n\nThe matched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mnli\\_mismatched\n\n\nThe mismatched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mrpc\n\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.",
"#### qnli\n\n\nThe Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.",
"#### qqp\n\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.",
"#### rte\n\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.",
"#### sst2\n\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.",
"#### stsb\n\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.",
"#### wnli\n\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).",
"### Languages\n\n\nThe language data in GLUE is in English (BCP-47 'en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### ax\n\n\n* Size of downloaded dataset files: 0.21 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.44 MB\n\n\nAn example of 'test' looks as follows.",
"#### cola\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.58 MB\n* Total amount of disk used: 0.94 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 78.65 MB\n* Total amount of disk used: 376.95 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\\_matched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.52 MB\n* Total amount of disk used: 301.82 MB\n\n\nAn example of 'test' looks as follows.",
"#### mnli\\_mismatched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.73 MB\n* Total amount of disk used: 302.02 MB\n\n\nAn example of 'test' looks as follows.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### ax\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### cola\n\n\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).\n* 'idx': a 'int32' feature.",
"#### mnli\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_matched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_mismatched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Splits",
"#### ax",
"#### cola",
"#### mnli",
"#### mnli\\_matched",
"#### mnli\\_mismatched",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset."
]
|
f25a499240f8653404c89da4f1763c0a75cb0cd0 |
# My Solid Theme
## Description
A copy of the solid theme
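To try the theme in an app, it can be referenced by its Hub repo name. A minimal sketch, assuming a gradio version with theme-sharing support (the string is resolved from the Hub behind the scenes):

```python
import gradio as gr

# Hypothetical demo: apply the Hub-hosted theme by repo name.
with gr.Blocks(theme="freddyaboulton/my-solid-theme") as demo:
    gr.Textbox(label="Hello from a themed app")

demo.launch()
```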
## Preview

## Contributions
Thanks to [@freddyaboulton](https://huggingface.co/freddyaboulton) for adding this gradio theme!
| freddyaboulton/my-solid-theme | [
"license:apache-2.0",
"gradio-theme",
"region:us"
]
| 2023-01-18T12:28:59+00:00 | {"license": "apache-2.0", "tags": ["gradio-theme"], "title": "My Solid Theme", "colorFrom": "orange", "colorTo": "purple", "sdk": "gradio", "sdk_version": "3.16.2", "app_file": "app.py", "pinned": false} | 2023-01-18T21:04:08+00:00 | []
| []
| TAGS
#license-apache-2.0 #gradio-theme #region-us
|
# My Solid Theme
## Description
A copy of the solid theme
## Preview

## Contributions
Thanks to @freddyaboulton for adding this gradio theme!
| [
"# My Solid Theme",
"## Description\n\nA copy of the solid theme",
"## Preview\n\n",
"## Contributions\n\nThanks to @freddyaboulton for adding this gradio theme!"
]
| [
"TAGS\n#license-apache-2.0 #gradio-theme #region-us \n",
"# My Solid Theme",
"## Description\n\nA copy of the solid theme",
"## Preview\n\n",
"## Contributions\n\nThanks to @freddyaboulton for adding this gradio theme!"
]
|
fb7f7d7102fd040c4211002b0c43e3ab727afffc | # UTK Faces
Original paper: [Age Progression/Regression by Conditional Adversarial Autoencoder](https://arxiv.org/abs/1702.08423)
Homepage: https://susanqq.github.io/UTKFace/
Bibtex:
```
@inproceedings{zhifei2017cvpr,
title={Age Progression/Regression by Conditional Adversarial Autoencoder},
author={Zhang, Zhifei and Song, Yang and Qi, Hairong},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2017},
organization={IEEE}
}
``` | nlphuji/utk_faces | [
"arxiv:1702.08423",
"region:us"
]
| 2023-01-18T12:50:13+00:00 | {} | 2023-01-18T13:10:37+00:00 | [
"1702.08423"
]
| []
| TAGS
#arxiv-1702.08423 #region-us
| # UTK Faces
Original paper: Age Progression/Regression by Conditional Adversarial Autoencoder
Homepage: URL
Bibtex:
| [
"# UTK Faces\n\nOriginal paper: Age Progression/Regression by Conditional Adversarial Autoencoder\n\nHomepage: URL\n\nBibtex:"
]
| [
"TAGS\n#arxiv-1702.08423 #region-us \n",
"# UTK Faces\n\nOriginal paper: Age Progression/Regression by Conditional Adversarial Autoencoder\n\nHomepage: URL\n\nBibtex:"
]
|
8e418a32628e853f1ba384c3f3ee6eb26b2a8aa5 |
## Required installation
```bash
pip3 install pypdf2 pdf2image
sudo apt-get install poppler-utils
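# Optional sanity check (hypothetical "sample.pdf"): render the first page of a
# PDF to PNG. convert_from_path comes from pdf2image and is what needs the
# poppler binaries installed above.
python3 - <<'EOF'
from pdf2image import convert_from_path

pages = convert_from_path("sample.pdf", dpi=200)  # placeholder path
pages[0].save("sample_page1.png")
print(f"rendered {len(pages)} page(s)")
EOF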
``` | jordyvl/unit-test_PDFfolder | [
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-01-18T13:25:33+00:00 | {"license": "cc-by-nc-4.0"} | 2023-01-18T19:52:11+00:00 | []
| []
| TAGS
#license-cc-by-nc-4.0 #region-us
|
## Required installation
| [
"## Required installation"
]
| [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
"## Required installation"
]
|
4d0ff18143b5a7e1b1e79beb540c04549d1e59d3 |
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, named **HC3** dataset.
This dataset is introduced in our paper:
- Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597)
Code, models and analysis are available on our GitHub:
- GitHub: [**Chatgpt-Comparison-Detection project** ๐ฌ](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
# Dataset Copyright
If the source datasets used in this corpus have a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow the CC-BY-SA license.
See [dataset copyright](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection#dataset-copyright).
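A minimal loading sketch with the `datasets` library; the `"all"` config name and the `question`/`human_answers`/`chatgpt_answers` fields are assumptions based on the project repo, so check there for the exact schema:

```python
from datasets import load_dataset

# Assumed schema: each row pairs one question with lists of human-written
# and ChatGPT-written answers.
hc3 = load_dataset("Hello-SimpleAI/HC3", "all")
row = hc3["train"][0]
print(row["question"])
print(row["human_answers"][0])
print(row["chatgpt_answers"][0])
```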
# Citation
Check out the paper [arXiv: 2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
journal={arXiv preprint arXiv:2301.07597},
year = "2023",
}
``` | Hello-SimpleAI/HC3 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"ChatGPT",
"SimpleAI",
"Detection",
"OOD",
"arxiv:2301.07597",
"region:us"
]
| 2023-01-18T14:01:20+00:00 | {"language": ["en", "zh"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering", "sentence-similarity", "zero-shot-classification"], "tags": ["ChatGPT", "SimpleAI", "Detection", "OOD"]} | 2023-01-21T13:10:10+00:00 | [
"2301.07597"
]
| [
"en",
"zh"
]
| TAGS
#task_categories-text-classification #task_categories-question-answering #task_categories-sentence-similarity #task_categories-zero-shot-classification #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-sa-4.0 #ChatGPT #SimpleAI #Detection #OOD #arxiv-2301.07597 #region-us
|
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, named HC3 dataset.
This dataset is introduced in our paper:
- Paper: *How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection*
Code, models and analysis are available on our GitHub:
- GitHub: Chatgpt-Comparison-Detection project
# Dataset Copyright
If the source datasets used in this corpus have a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow the CC-BY-SA license.
See dataset copyright.
Check out the paper arXiv: 2301.07597
| [
"# Human ChatGPT Comparison Corpus (HC3)\nWe propose the first human-ChatGPT comparison corpus, named HC3 dataset.\n\nThis dataset is introduced in our paper: \n- Paper: *How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection*\n\nCode, models and analysis are available on our GitHub:\n- GitHub: Chatgpt-Comparison-Detection project",
"# Dataset Copyright\nIf the source datasets used in this corpus has a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow CC-BY-SA license.\nSee dataset copyright.\n\n\nCheckout this papaer arxiv: 2301.07597"
]
| [
"TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-sentence-similarity #task_categories-zero-shot-classification #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-sa-4.0 #ChatGPT #SimpleAI #Detection #OOD #arxiv-2301.07597 #region-us \n",
"# Human ChatGPT Comparison Corpus (HC3)\nWe propose the first human-ChatGPT comparison corpus, named HC3 dataset.\n\nThis dataset is introduced in our paper: \n- Paper: *How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection*\n\nCode, models and analysis are available on our GitHub:\n- GitHub: Chatgpt-Comparison-Detection project",
"# Dataset Copyright\nIf the source datasets used in this corpus has a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow CC-BY-SA license.\nSee dataset copyright.\n\n\nCheckout this papaer arxiv: 2301.07597"
]
|
09a687b8dc164b89e7df95abf15df3b216bc31c2 |
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, named **HC3** dataset.
This dataset is introduced in our paper:
- Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597)
Code, models and analysis are available on our GitHub:
- GitHub: [**Chatgpt-Comparison-Detection project** ๐ฌ](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
# Dataset Copyright
If the source datasets used in this corpus have a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow the CC-BY-SA license.
See [dataset copyright](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection#dataset-copyright).
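For detector-style training, each row can be flattened into labeled (text, source) pairs. A sketch under the same schema assumptions as the English corpus; the `"baike"` config name is likewise an assumption:

```python
from datasets import load_dataset

# Flatten rows into (answer, label) pairs: 0 = human, 1 = ChatGPT.
hc3_zh = load_dataset("Hello-SimpleAI/HC3-Chinese", "baike")

pairs = []
for row in hc3_zh["train"].select(range(100)):
    pairs += [(ans, 0) for ans in row["human_answers"]]
    pairs += [(ans, 1) for ans in row["chatgpt_answers"]]
print(len(pairs))
```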
# Citation
Check out the paper [arXiv: 2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
journal={arXiv preprint arXiv:2301.07597},
year = "2023",
}
``` | Hello-SimpleAI/HC3-Chinese | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"ChatGPT",
"SimpleAI",
"Detection",
"OOD",
"arxiv:2301.07597",
"region:us"
]
| 2023-01-18T14:20:45+00:00 | {"language": ["en", "zh"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering", "sentence-similarity", "zero-shot-classification"], "tags": ["ChatGPT", "SimpleAI", "Detection", "OOD"]} | 2023-01-21T13:11:49+00:00 | [
"2301.07597"
]
| [
"en",
"zh"
]
| TAGS
#task_categories-text-classification #task_categories-question-answering #task_categories-sentence-similarity #task_categories-zero-shot-classification #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-sa-4.0 #ChatGPT #SimpleAI #Detection #OOD #arxiv-2301.07597 #region-us
|
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, named HC3 dataset.
This dataset is introduced in our paper:
- Paper: *How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection*
Code, models and analysis are available on our GitHub:
- GitHub: Chatgpt-Comparison-Detection project
# Dataset Copyright
If the source datasets used in this corpus have a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow the CC-BY-SA license.
See dataset copyright.
Check out the paper arXiv: 2301.07597
| [
"# Human ChatGPT Comparison Corpus (HC3)\nWe propose the first human-ChatGPT comparison corpus, named HC3 dataset.\n\nThis dataset is introduced in our paper: \n- Paper: *How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection*\n\nCode, models and analysis are available on our GitHub:\n- GitHub: Chatgpt-Comparison-Detection project",
"# Dataset Copyright\nIf the source datasets used in this corpus has a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow CC-BY-SA license.\nSee dataset copyright.\n\n\nCheckout this papaer arxiv: 2301.07597"
]
| [
"TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-sentence-similarity #task_categories-zero-shot-classification #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-sa-4.0 #ChatGPT #SimpleAI #Detection #OOD #arxiv-2301.07597 #region-us \n",
"# Human ChatGPT Comparison Corpus (HC3)\nWe propose the first human-ChatGPT comparison corpus, named HC3 dataset.\n\nThis dataset is introduced in our paper: \n- Paper: *How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection*\n\nCode, models and analysis are available on our GitHub:\n- GitHub: Chatgpt-Comparison-Detection project",
"# Dataset Copyright\nIf the source datasets used in this corpus has a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow CC-BY-SA license.\nSee dataset copyright.\n\n\nCheckout this papaer arxiv: 2301.07597"
]
|
1b08362748ebeaa8c330a2ea8a77ec548194b977 | # Dataset Card for "boostcamp-docvqa-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ssunbell/boostcamp-docvqa-v2 | [
"region:us"
]
| 2023-01-18T14:27:39+00:00 | {"dataset_info": {"features": [{"name": "questionId", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "docId", "dtype": "int64"}, {"name": "ucsf_document_id", "dtype": "string"}, {"name": "ucsf_document_page_no", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "data_split", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 6381793673, "num_examples": 39454}, {"name": "val", "num_bytes": 869361798, "num_examples": 5349}], "download_size": 2578867675, "dataset_size": 7251155471}} | 2023-01-18T14:37:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "boostcamp-docvqa-v2"
More Information needed | [
"# Dataset Card for \"boostcamp-docvqa-v2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"boostcamp-docvqa-v2\"\n\nMore Information needed"
]
|
ac39b2d465010fa9973aefa4a4559ffd1fd07fe9 |
# Dataset Card for ruMeme Descriptions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This is a dataset of more than 2500 memes in Russian and their descriptions from parsing https://vk.com/textmeme.
### Supported Tasks and Leaderboards
`text2image` - generate a meme from its textual description
`image2text` - generate a description of a given meme
### Languages
The text in the dataset is only in Russian. The associated BCP-47 code is `ru`.
## Dataset Structure
### Data Fields
- `Image`: Meme itself at 512 by 512px (image)
- `Text`: Description (str)
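A minimal loading sketch using the two fields above (the repo id `foldl/rumeme-desc` comes from this card's tags; the single `train` split is an assumption):
```python
# Hedged sketch: load the meme/description pairs and inspect one example.
# Assumption (not stated in this card): the data is a single "train" split.
from datasets import load_dataset

ds = load_dataset("foldl/rumeme-desc", split="train")
example = ds[0]
print(example["Text"])             # Russian description of the meme
example["Image"].save("meme.png")  # PIL image, 512 by 512 px
```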
### Data Splits
There are not enough examples yet to split them into train/test/val, in my opinion.
## Dataset Creation
As already mentioned, data was gathered from parsing https://vk.com/textmeme. | foldl/rumeme-desc | [
"size_categories:1K<n<10K",
"language:ru",
"license:cc-by-sa-4.0",
"ru",
"memes",
"text2image",
"image2text",
"region:us"
]
| 2023-01-18T14:28:37+00:00 | {"language": ["ru"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "pretty_name": "rumeme-desc", "tags": ["ru", "memes", "text2image", "image2text"]} | 2023-01-18T19:31:38+00:00 | []
| [
"ru"
]
| TAGS
#size_categories-1K<n<10K #language-Russian #license-cc-by-sa-4.0 #ru #memes #text2image #image2text #region-us
|
# Dataset Card for ruMeme Descriptions
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Considerations for Using the Data
- Contributions
## Dataset Description
### Dataset Summary
This is a dataset of more than 2500 memes in Russian and their descriptions from parsing URL
### Supported Tasks and Leaderboards
'text2image' - generate a meme from its textual description
'image2text' - generate a description of a given meme
### Languages
The text in the dataset is only in Russian. The associated BCP-47 code is 'ru'.
## Dataset Structure
### Data Fields
- 'Image': Meme itself at 512 by 512px (image)
- 'Text': Description (str)
### Data Splits
There are not enough examples yet to split them into train/test/val, in my opinion.
## Dataset Creation
As already mentioned, data was gathered from parsing URL | [
"# Dataset Card for ruMeme Descriptions",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n- Contributions",
"## Dataset Description",
"### Dataset Summary\nThis is a dataset of more than 2500 memes in Russian and their descriptions from parsing URL",
"### Supported Tasks and Leaderboards\n\n'text2image' - generate meme from its textual description\n\n'image2text' - generate description of given meme",
"### Languages\n\nThe text in the dataset is in only in Russian. The associated BCP-47 code is 'ru'.",
"## Dataset Structure",
"### Data Fields\n\n- 'Image': Meme itself at 512 by 512px (image)\n- 'Text': Description (str)",
"### Data Splits\n\nThere is not enough examples yet to split it to train/test/val in my opinion.",
"## Dataset Creation\n\nAs already mentioned, data was gathered from parsing URL"
]
| [
"TAGS\n#size_categories-1K<n<10K #language-Russian #license-cc-by-sa-4.0 #ru #memes #text2image #image2text #region-us \n",
"# Dataset Card for ruMeme Descriptions",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n- Contributions",
"## Dataset Description",
"### Dataset Summary\nThis is a dataset of more than 2500 memes in Russian and their descriptions from parsing URL",
"### Supported Tasks and Leaderboards\n\n'text2image' - generate meme from its textual description\n\n'image2text' - generate description of given meme",
"### Languages\n\nThe text in the dataset is in only in Russian. The associated BCP-47 code is 'ru'.",
"## Dataset Structure",
"### Data Fields\n\n- 'Image': Meme itself at 512 by 512px (image)\n- 'Text': Description (str)",
"### Data Splits\n\nThere is not enough examples yet to split it to train/test/val in my opinion.",
"## Dataset Creation\n\nAs already mentioned, data was gathered from parsing URL"
]
|
b5b1adff8fbbcdbb1e781f70132a0475bbdee29e | # Dataset Card for "boostcamp-docvqa-v2-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ssunbell/boostcamp-docvqa-v2-test | [
"region:us"
]
| 2023-01-18T14:40:14+00:00 | {"dataset_info": {"features": [{"name": "questionId", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "docId", "dtype": "int64"}, {"name": "ucsf_document_id", "dtype": "string"}, {"name": "ucsf_document_page_no", "dtype": "string"}, {"name": "data_split", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "test", "num_bytes": 843083964, "num_examples": 5188}], "download_size": 296773802, "dataset_size": 843083964}} | 2023-01-18T14:41:30+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "boostcamp-docvqa-v2-test"
More Information needed | [
"# Dataset Card for \"boostcamp-docvqa-v2-test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"boostcamp-docvqa-v2-test\"\n\nMore Information needed"
]
|
b852e960ac5ed4d775014b497014003a171e3ba3 | # Dataset Card for "pc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | taldarim/pc | [
"region:us"
]
| 2023-01-18T16:12:32+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "Results interpretation", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Frameworks usage", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Algorithms design", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Algorithms implementation", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Launching problem", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Performance issue", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 95159, "num_examples": 58}], "download_size": 50809, "dataset_size": 95159}} | 2023-01-18T16:12:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "pc"
More Information needed | [
"# Dataset Card for \"pc\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"pc\"\n\nMore Information needed"
]
|
fe2ef29cc43f75a4d33430f41e62c319048758a5 | # Dataset Card for "symptoms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | taldarim/symptoms | [
"region:us"
]
| 2023-01-18T16:12:41+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "No results", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of proper plugin choices", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Wrong results", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Inconsistent results between simulators and devices", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of algorithms design for general functionalities", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of algorithms implementation for general functionalities", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Input data importing failure", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of supported devices", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Inconsistent results between different versions of the operating system", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Software hangs", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of frameworks comparison", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of plugins integration", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of software configuration", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Inconsistent results between different versions of the plugin", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Plugin loading failure", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Application running failure", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of algorithms implementation for runtime functionalities", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Software lags", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Inconsistent results between different devices", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Device unrecognizable", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 101655, "num_examples": 58}], "download_size": 58604, "dataset_size": 101655}} | 2023-01-18T16:12:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "symptoms"
More Information needed | [
"# Dataset Card for \"symptoms\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"symptoms\"\n\nMore Information needed"
]
|
4d01eaf83da481d4e77877cb6ba7ed10076b2d22 | # Dataset Card for "dreambooth_prior_reg_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/dreambooth_prior_reg_images | [
"region:us"
]
| 2023-01-18T16:21:48+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44656947.0, "num_examples": 100}], "download_size": 44658302, "dataset_size": 44656947.0}} | 2023-01-18T16:22:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dreambooth_prior_reg_images"
More Information needed | [
"# Dataset Card for \"dreambooth_prior_reg_images\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth_prior_reg_images\"\n\nMore Information needed"
]
|
69bdf4dfd62e3108c06d1d687b16aa28f03d1776 | # Dataset Card for "dreambooth_test_with_prior_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/dreambooth_test_with_prior_reg | [
"region:us"
]
| 2023-01-18T16:26:10+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 156473412.0, "num_examples": 200}, {"name": "validation", "num_bytes": 37346753.0, "num_examples": 32}], "download_size": 51418519, "dataset_size": 193820165.0}} | 2023-01-18T16:27:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dreambooth_test_with_prior_reg"
More Information needed | [
"# Dataset Card for \"dreambooth_test_with_prior_reg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth_test_with_prior_reg\"\n\nMore Information needed"
]
|
219fbc0b34adcbbd711f937fdeb6207798b0927c |
# Dataset Card for The Pile
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://pile.eleuther.ai/
- **Repository:** https://github.com/EleutherAI/the-pile
- **Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
- **Leaderboard:**
- **Point of Contact:** [EleutherAI](mailto:[email protected])
**This version of the Pile relies on `mystic.the-eye.eu`, a mirror of `the-eye.eu`, which is currently down for me.**
### Dataset Summary
The Pile is an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality
datasets combined together.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is in English (`EN`).
## Dataset Structure
### Data Instances
#### all
```
{
'meta': {'pile_set_name': 'Pile-CC'},
'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
#### enron_emails
```
{
'text': 'Name\t\t\tNew Title\t\t\t\tEffective Date\t\t\tMid Year promotion Yes/No\n\nFloyd, Jodie\t\tSr Cust Svc Rep (no change)\t\t7/16/01\t\t\t\tNo\n\nBuehler, Craig\t\tSr Mkt/Sup Analyst (no change)\t\t7/16/01\t\t\t\tNo\n\nWagoner, Mike\t\tTeam Advisor - Gas Control\t\t7/1/01\t\t\t\tNo\n\nClapper, Karen\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nGreaney, Chris\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nWilkens, Jerry\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nMinton, Kevin\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nCox, Don\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nHanagriff, Richard\tSr Accounting Control Spec\t\t8/1/01\t\t\t\tYes\n\n\nThanks,\nMS'
'meta': "{}",
}
```
#### europarl
```
{
'text': 'Uvádění biocidních přípravků na trh - Nový návrh revize týkající se biocidních přípravků (rozprava) \nPředsedající\nDalším bodem je společná rozprava o následujících tématech:\nzpráva paní Sârbuové za Výbor pro životní prostředí, veřejné zdraví a bezpečnost potravin o návrhu...'
'meta': "{'language': 'cs'}",
}
```
#### free_law
```
{
'meta': "{'case_jurisdiction': 'scotus.tar.gz', 'case_ID': '110921.json','date_created': '2010-04-28T17:12:49Z'}",
'text': '\n461 U.S. 238 (1983)\nOLIM ET AL.\nv.\nWAKINEKONA\nNo. 81-1581.\nSupreme Court of United States.\nArgued...'
}
```
#### hacker_news
```
{
'text': "\nChina Deserves Donald Trump - rm2889\nhttps://www.nytimes.com/2019/05/21/opinion/china-trump-trade.html\n======\nNotPaidToPost\n> so heโd be wise to curb his nationalistic โno-one-tells-China-what-to-doโ\n> bluster\n\nThis comment highlights both ignorance of Chinese history and continuing\nAmerican arrogance.\n\nChina has been painfully dictated what to do during the last 200 years. This\nhas had a profound effect on the country and has led to the collapse of\nimperial rule and the drive to 'rejuvenate'...",
'meta': "{'id': '19979654'}",
}
```
#### nih_exporter
```
{
'text': "The National Domestic Violence Hotline (NDVH) and the National Dating Abuse Helpline (NDAH), which are supported by the Division of Family Violence Prevention and Services within the Family and Youth Services Bureau, serve as critical partners in the intervention, prevention, and resource assistance efforts of the network of family violence, domestic violence, and dating violence service providers. They provide crisis intervention and support services; information about resources on domestic...",
'meta': " {'APPLICATION_ID': 100065}",
}
```
#### pubmed
```
{
'meta': {'pmid': 11409574, 'language': 'eng'},
'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age. Systematic review of the published literature. Out-patient clinics, emergency departments and hospitalisation wards in 23 health centres from 10 countries. Cohort studies reporting the frequency of hypoxaemia in children under 5 years of age with ALRI, and the association between hypoxaemia and the risk of dying. Prevalence of hypoxaemia measured in children with ARI and relative risks for the association between the severity of illness and the frequency of hypoxaemia, and between hypoxaemia and the risk of dying. Seventeen published studies were found that included 4,021 children under 5 with acute respiratory infections (ARI) and reported the prevalence of hypoxaemia. Out-patient children and those with a clinical diagnosis of upper ARI had a low risk of hypoxaemia (pooled estimate of 6% to 9%). The prevalence increased to 31% and to 43% in patients in emergency departments and in cases with clinical pneumonia, respectively, and it was even higher among hospitalised children (47%) and in those with radiographically confirmed pneumonia (72%). The cumulated data also suggest that hypoxaemia is more frequent in children living at high altitude. Three papers reported an association between hypoxaemia and death, with relative risks varying between 1.4 and 4.6. Papers describing predictors of hypoxaemia have focused on clinical signs for detecting hypoxaemia rather than on identifying risk factors for developing this complication. Hypoxaemia is a common and potentially lethal complication of ALRI in children under 5, particularly among those with severe disease and those living at high altitude. Given the observed high prevalence of hypoxaemia and its likely association with increased mortality, efforts should be made to improve the detection of hypoxaemia and to provide oxygen earlier to more children with severe ALRI.'
}
```
#### pubmed_central
```
{
'meta': "{id': 'PMC5595690'}",
'text': 'Introduction {#acel12642-sec-0001}\n============\n\nAlzheimer\\\'s disease (AD), the most common cause of...'
}
```
#### ubuntu_irc
```
{
'text': "#ubuntu 2004-07-05\n* Window 3\n* \tServer: [0] <None>\n* \tScreen: 0x817e90c\n* \tGeometry Info: [0 11 0 11 11 11] \n* \tCO, LI are [94 49] \n* \tCurrent channel: #ubuntu\n* \tQuery User: <None> \n*\tPrompt: <None>\n* \tSecond status line is OFF\n* \tSplit line is ON triple is OFF\n* \tLogging is ON\n* \tLogfile is irclogs/ubuntu.log\n* \tNotification is OFF\n* \tHold mode is OFF\n* \tWindow level is NONE\n* \tLastlog level is ALL\n* \tNotify level is ALL\n<mdz> lifeless: using tla effectively for all packages in Warty requ...",
'meta': "{'channel': 'ubuntu', 'month': 7}"
}
```
#### uspto
```
{
'text': "1. Field of the Invention\nIn an extensive plant breeding program, Grant Merrill, originator and now deceased, originated a large number of new and distinct varieties of fruit trees, and which included the herein-claimed variety of peach tree. Such plant breeding program was undertaken in originator's experimental orchard located near Exeter, Tulare County, Calif.\n2. Prior Varieties\nAmong the existent varieties of peach trees which were known to originator, particular reference is made to Gemfree (U.S. Plant Pat. No. 1,409) and June Lady (U.S. Plant Pat. No. 3,022) hereinafter mentioned for the purpose of comparison.",
'meta': "{'bibliographic_information': {'Patent Number': 'PP0049700', 'Series Code': '6', 'Application Number': '2845415', 'Application Type': '6', 'Art unit': '337', 'Application Filing Date': '19810720', 'Title of Invention': 'Peach tree (A3-10)', 'Issue Date': '19830104', 'Number of Claims': '1', 'Exemplary Claim Number(s)': '1', 'Primary Examiner': 'Bagwill; Robert E.', 'Number of Drawing Sheets': '1', 'Number of figures': '1'}, 'source_file': 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/1983/pftaps19830104_wk01.zip', 'abstract': 'A peach tree which is large, vigorous, and spreading; foliated with large, lanceolate leaves having a finely serrate margin, a petiole of medium length and thickness, and medium size, reniform glands; blooms from medium size, conic, plump, pubescent buds; the flowers, medium in blooming period compared with other varieties, being of medium size, and pink; and is a regular and very productive bearer of medium but variable size, round truncate, clingstone fruit having yellow skin substantially overspread with red, yellow flesh mottled with red adjacent the skin, and an amber stone.', 'classifications': [{'OCL': ['Plt', '43'], 'EDF': ['3'], 'ICL': ['A01H', '503'], 'FSC': ['Plt'], 'FSS': ['43']}], 'inventors': [{'inventor name': 'Merrill, deceased; Grant', 'Street': '325 Breese Ave.', 'City': 'late of Red Bluff', 'State': 'CA'}, {'inventor name': 'Merrill, executrix; by Lucile B.', 'Street': '325 Breese Ave.', 'City': 'Red Bluff', 'State': 'CA', 'Zip code': '96080'}]}"
}
```
### Data Fields
#### all
- `text` (str): Text.
- `meta` (dict): Metadata of the data instance with keys:
- pile_set_name: Name of the subset.
#### enron_emails
- `text` (str): Text.
- `meta` (str): Metadata of the data instance.
#### europarl
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: language.
#### free_law
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: case_ID, case_jurisdiction, date_created.
#### hacker_news
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: id.
#### nih_exporter
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: APPLICATION_ID.
#### pubmed
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: pmid, language.
#### pubmed_central
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: ID of the data instance.
#### ubuntu_irc
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: channel, month.
#### uspto
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: bibliographic_information, source_file, abstract, classifications,
inventors.
### Data Splits
The "all" configuration is composed of 3 splits: train, validation and test.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Please refer to the specific license depending on the subset you use:
- PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
```
@misc{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
year={2020},
eprint={2101.00027},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| jonatli/the_pile_mystic | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:2101.00027",
"region:us"
]
| 2023-01-18T16:28:37+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "the-pile", "pretty_name": "The Pile"} | 2023-01-18T16:31:17+00:00 | [
"2101.00027"
]
| [
"en"
]
| TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-other #arxiv-2101.00027 #region-us
|
# Dataset Card for The Pile
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: The Pile: An 800GB Dataset of Diverse Text for Language Modeling
- Leaderboard:
- Point of Contact: EleutherAI
This version of the Pile relies on 'URL', a mirror of 'URL', which is currently down for me.
### Dataset Summary
The Pile is an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality
datasets combined together.
### Supported Tasks and Leaderboards
### Languages
This dataset is in English ('EN').
## Dataset Structure
### Data Instances
#### all
#### enron_emails
#### europarl
#### free_law
#### hacker_news
#### nih_exporter
#### pubmed
#### pubmed_central
#### ubuntu_irc
#### uspto
### Data Fields
#### all
- 'text' (str): Text.
- 'meta' (dict): Metadata of the data instance with keys:
- pile_set_name: Name of the subset.
#### enron_emails
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance.
#### europarl
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance with: language.
#### free_law
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance with: case_ID, case_jurisdiction, date_created.
#### hacker_news
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance with: id.
#### nih_exporter
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance with: APPLICATION_ID.
#### pubmed
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance with: pmid, language.
#### pubmed_central
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance with: ID of the data instance.
#### ubuntu_irc
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance with: channel, month.
#### uspto
- 'text' (str): Text.
- 'meta' (str): Metadata of the data instance with: bibliographic_information, source_file, abstract, classifications,
inventors.
### Data Splits
The "all" configuration is composed of 3 splits: train, validation and test.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Please refer to the specific license depending on the subset you use:
- PubMed Central: MIT License
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for The Pile",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: The Pile: An 800GB Dataset of Diverse Text for Language Modeling\n- Leaderboard:\n- Point of Contact: EleutherAI\n\nThis version of the pile relies on 'URL', a mirror of 'URL' which is currently down for me.",
"### Dataset Summary\n\nThe Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality\ndatasets combined together.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThis dataset is in English ('EN').",
"## Dataset Structure",
"### Data Instances",
"#### all",
"#### enron_emails",
"#### europarl",
"#### free_law",
"#### hacker_news",
"#### nih_exporter",
"#### pubmed",
"#### pubmed_central",
"#### ubuntu_irc",
"#### uspto",
"### Data Fields",
"#### all\n\n- 'text' (str): Text.\n- 'meta' (dict): Metadata of the data instance with keys:\n - pile_set_name: Name of the subset.",
"#### enron_emails\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance.",
"#### europarl\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: language.",
"#### free_law\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: case_ID, case_jurisdiction, date_created.",
"#### hacker_news\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: id.",
"#### nih_exporter\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: APPLICATION_ID.",
"#### pubmed\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: pmid, language.",
"#### pubmed_central\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: ID of the data instance.",
"#### ubuntu_irc\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: channel, month.",
"#### uspto\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: bibliographic_information, source_file, abstract, classifications, \n inventors.",
"### Data Splits\n\nThe \"all\" configuration is composed of 3 splits: train, validation and test.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPlease refer to the specific license depending on the subset you use:\n- PubMed Central: MIT License",
"### Contributions\n\nThanks to @github-username for adding this dataset."
]
| [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-other #arxiv-2101.00027 #region-us \n",
"# Dataset Card for The Pile",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: The Pile: An 800GB Dataset of Diverse Text for Language Modeling\n- Leaderboard:\n- Point of Contact: EleutherAI\n\nThis version of the pile relies on 'URL', a mirror of 'URL' which is currently down for me.",
"### Dataset Summary\n\nThe Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality\ndatasets combined together.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThis dataset is in English ('EN').",
"## Dataset Structure",
"### Data Instances",
"#### all",
"#### enron_emails",
"#### europarl",
"#### free_law",
"#### hacker_news",
"#### nih_exporter",
"#### pubmed",
"#### pubmed_central",
"#### ubuntu_irc",
"#### uspto",
"### Data Fields",
"#### all\n\n- 'text' (str): Text.\n- 'meta' (dict): Metadata of the data instance with keys:\n - pile_set_name: Name of the subset.",
"#### enron_emails\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance.",
"#### europarl\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: language.",
"#### free_law\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: case_ID, case_jurisdiction, date_created.",
"#### hacker_news\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: id.",
"#### nih_exporter\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: APPLICATION_ID.",
"#### pubmed\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: pmid, language.",
"#### pubmed_central\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: ID of the data instance.",
"#### ubuntu_irc\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: channel, month.",
"#### uspto\n\n- 'text' (str): Text.\n- 'meta' (str): Metadata of the data instance with: bibliographic_information, source_file, abstract, classifications, \n inventors.",
"### Data Splits\n\nThe \"all\" configuration is composed of 3 splits: train, validation and test.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPlease refer to the specific license depending on the subset you use:\n- PubMed Central: MIT License",
"### Contributions\n\nThanks to @github-username for adding this dataset."
]
|
316faf8285d7ff4a4fd96c18129d83dfc3f223ab |
# Dawood Theme
## Description
My Theme!
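A minimal usage sketch (an assumption, not from this card: recent Gradio versions with Hub theme sharing accept a `user/repo` theme id string; verify against your Gradio version):
```python
# Hedged sketch: apply this Hub-hosted theme to a demo.
# Assumption: a Gradio version where theme= accepts a Hub theme id string.
import gradio as gr

with gr.Blocks(theme="dawood/dawood-theme") as demo:
    gr.Textbox(label="Hello")

demo.launch()
```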
## Preview
Add an image preview of your theme here!
## Contributions
Thanks to [@dawood](https://huggingface.co/dawood) for adding this gradio theme!
| dawood/dawood-theme | [
"gradio-theme",
"region:us"
]
| 2023-01-18T16:32:44+00:00 | {"tags": ["gradio-theme"]} | 2023-01-18T16:32:45+00:00 | []
| []
| TAGS
#gradio-theme #region-us
|
# Dawood Theme
## Description
My Theme!
## Preview
Add an image preview of your theme here!
## Contributions
Thanks to @dawood for adding this gradio theme!
| [
"# Dawood Theme",
"## Description\n\nMy Theme!",
"## Preview\n\nAdd an image preview of your theme here!",
"## Contributions\n\nThanks to @dawood for adding this gradio theme!"
]
| [
"TAGS\n#gradio-theme #region-us \n",
"# Dawood Theme",
"## Description\n\nMy Theme!",
"## Preview\n\nAdd an image preview of your theme here!",
"## Contributions\n\nThanks to @dawood for adding this gradio theme!"
]
|
920960c8af62a00aa6fefb49adb2904b422353b8 | # Dataset Card for "c4-clusters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ola13/c4-clusters | [
"region:us"
]
| 2023-01-18T17:17:57+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "meta", "struct": [{"name": "perplexity_score", "dtype": "float64"}]}, {"name": "text_length", "dtype": "int64"}, {"name": "domain", "dtype": "null"}, {"name": "perplexity", "dtype": "float64"}, {"name": "dup_ratio", "dtype": "float64"}, {"name": "pairs", "sequence": {"sequence": "int64"}}, {"name": "repetitions", "sequence": "binary"}, {"name": "cluster", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1061375955254, "num_examples": 364868892}], "download_size": 137201241092, "dataset_size": 1061375955254}} | 2023-01-20T13:22:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "c4-clusters"
More Information needed | [
"# Dataset Card for \"c4-clusters\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"c4-clusters\"\n\nMore Information needed"
]
|
e21a65a60de0d1d1ba8ab44c0afc832dd1b48bc2 |
# Dataset Card for [scnclab2023]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
The dataset was created using the GPT-3 API, prompted with some manually created clinical notes.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The annotation has been done using [Argilla](https://github.com/argilla-io)
#### Who are the annotators?
The synthetic clinical notes were annotated by a group of three biomedical experts.
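A minimal sketch for reading the annotations (the label names such as `B-cancer_type` appear in this card's metadata and the repo id in its tags; the single `train` split is an assumption):
```python
# Hedged sketch: load the synthetic notes and map ner_tags ids to names.
# Assumption: a single "train" split; ner_tags is a Sequence(ClassLabel).
from datasets import load_dataset

ds = load_dataset("relevanthint/scnclab2023", split="train")
label_names = ds.features["ner_tags"].feature.names
example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag_id]}")
```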
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Note that this is not a real dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | relevanthint/scnclab2023 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"bio",
"clinic",
"cancer",
"region:us"
]
| 2023-01-18T18:34:17+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "scnclab2023", "pretty_name": "Synthetical Clinical Notes - Clab 2023", "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-allergies", "2": "I-allergies", "3": "B-biomarkers", "4": "I-biomarkers", "5": "B-cancer_symptoms", "6": "I-cancer_symptoms", "7": "B-cancer_type", "8": "I-cancer_type", "9": "B-date", "10": "I-date", "11": "B-diagnosis", "12": "I-diagnosis", "13": "B-gender", "14": "I-gender", "15": "B-imaging_options", "16": "I-imaging_options", "17": "B-test_result", "18": "I-test_result", "19": "B-treatment", "20": "I-treatment"}}}}]}, "tags": ["bio", "clinic", "cancer"]} | 2023-01-19T22:35:17+00:00 | []
| [
"en"
]
| TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #bio #clinic #cancer #region-us
|
# Dataset Card for [scnclab2023]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact: relevanthint@URL
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
The Dataset has been created using the GPT-3 API by providing a prompt with some manually created clinical notes.
#### Who are the source language producers?
### Annotations
#### Annotation process
The annotation has been done using Argilla
#### Who are the annotators?
The synthetic clinical notes were annotated by a group of three biomedical experts.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
Note that this is not a real dataset.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [scnclab2023]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: relevanthint@URL",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nThe Dataset has been created using the GPT-3 API by providing a prompt with some manually created clinical notes.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nThe annotation has been done using Argilla",
"#### Who are the annotators?\n\nThe sinthetical clinical notes have been annotated by a group of three biomedical experts",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nNote that this is not a real dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
]
| [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #bio #clinic #cancer #region-us \n",
"# Dataset Card for [scnclab2023]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: relevanthint@URL",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nThe Dataset has been created using the GPT-3 API by providing a prompt with some manually created clinical notes.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nThe annotation has been done using Argilla",
"#### Who are the annotators?\n\nThe sinthetical clinical notes have been annotated by a group of three biomedical experts",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nNote that this is not a real dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
]
|
acfabcad7a4ad9046bc9494240eba44ff6724916 | # Dataset Card for "rick-and-morty-all-seasons-v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | storia/rick-and-morty-all-seasons | [
"region:us"
]
| 2023-01-18T18:45:36+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "subtitle", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "characters", "dtype": "string"}, {"name": "frame", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1637895252.464, "num_examples": 15264}, {"name": "test", "num_bytes": 5458443.0, "num_examples": 46}], "download_size": 1363032355, "dataset_size": 1643353695.464}} | 2023-01-18T18:46:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "rick-and-morty-all-seasons-v4"
More Information needed | [
"# Dataset Card for \"rick-and-morty-all-seasons-v4\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"rick-and-morty-all-seasons-v4\"\n\nMore Information needed"
]
|
32cef4e92bd2e27a4423d089b9554adb575d9ea6 | # Dataset Card for "illustrated_ads_images_labels_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/illustrated_ads_images_labels_only | [
"size_categories:n<1K",
"region:us"
]
| 2023-01-18T20:42:43+00:00 | {"size_categories": ["n<1K"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "text-only", "1": "illustrations"}}}}], "splits": [{"name": "train", "num_bytes": 47581375, "num_examples": 549}], "download_size": 47599430, "dataset_size": 47581375}} | 2023-01-18T20:49:56+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
| # Dataset Card for "illustrated_ads_images_labels_only"
More Information needed | [
"# Dataset Card for \"illustrated_ads_images_labels_only\"\n\nMore Information needed"
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# Dataset Card for \"illustrated_ads_images_labels_only\"\n\nMore Information needed"
]
|
5007b08f0ba5a4f93bb4f7e1654711b376830cd1 | # Dataset Card for "mls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/mls | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"medium",
"region:us"
]
| 2023-01-18T22:16:12+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2690661, "num_examples": 142}], "download_size": 1117834, "dataset_size": 2690661}, "tags": ["whisper", "whispering", "medium"]} | 2023-01-24T13:51:58+00:00 | []
| []
| TAGS
#task_categories-automatic-speech-recognition #whisper #whispering #medium #region-us
| # Dataset Card for "mls"
More Information needed | [
"# Dataset Card for \"mls\"\n\nMore Information needed"
]
| [
"TAGS\n#task_categories-automatic-speech-recognition #whisper #whispering #medium #region-us \n",
"# Dataset Card for \"mls\"\n\nMore Information needed"
]
|
67f7d06b0e302380e5865c74bfa319dcfeca61e4 | # Dataset Card for "legal_dataset2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marcus2000/legal_dataset2023 | [
"region:us"
]
| 2023-01-18T22:23:23+00:00 | {"dataset_info": {"features": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 110824374, "num_examples": 1723}, {"name": "test", "num_bytes": 21065187, "num_examples": 306}], "download_size": 41312472, "dataset_size": 131889561}} | 2023-01-18T22:31:59+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "legal_dataset2023"
More Information needed | [
"# Dataset Card for \"legal_dataset2023\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"legal_dataset2023\"\n\nMore Information needed"
]
|
772a1acf05ee05d3c38f3e4f173c25b2b11d1b8c | # Dataset Card for "olm-wikipedia-20221220-1-percent-tokenized-766"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-wikipedia-20221220-1-percent-tokenized-766 | [
"region:us"
]
| 2023-01-18T22:33:22+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 300178944, "num_examples": 65143}], "download_size": 93964466, "dataset_size": 300178944}} | 2023-01-18T22:33:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "olm-wikipedia-20221220-1-percent-tokenized-766"
More Information needed | [
"# Dataset Card for \"olm-wikipedia-20221220-1-percent-tokenized-766\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"olm-wikipedia-20221220-1-percent-tokenized-766\"\n\nMore Information needed"
]
|