sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
5d7c462f99263b16b72306f21f3f87b2ecdf83ea | asr files | ebrigham/asr_files | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-03T11:29:38+00:00 |
fbeeed5fdfe4f226299f5fa26fda176cb260f333 | echarlaix/gqa-lxmert | [
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "apache-2.0"} | 2022-02-09T23:39:45+00:00 |
|
5a76297440a02f78d9b6dbd0fea87d62d132676b | echarlaix/gqa | [
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "apache-2.0"} | 2022-02-01T10:44:11+00:00 |
|
54f2ecab65bd61d27cc66597f7abb8305cfe9a28 | echarlaix/vqa-lxmert | [
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "apache-2.0"} | 2022-02-09T23:41:22+00:00 |
|
28994091cc52fbeb166d4bd5eb870e9642b5baef | echarlaix/vqa | [
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "apache-2.0"} | 2022-02-01T10:45:13+00:00 |
|
4441c97718b1f7e03d05f430226b57f658cc156d | # Dataset Card for D4RL-gym
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/view/d4rl/home/
- **Repository:** https://github.com/rail-berkeley/d4rl
- **Paper:** D4RL: Datasets for Deep Data-Driven Reinforcement Learning https://arxiv.org/abs/2004.07219
### Dataset Summary
D4RL is an open-source benchmark for offline reinforcement learning. It provides standardized environments and datasets for training and benchmarking algorithms.
We host here a subset of the dataset, used for the training of Decision Transformers: https://github.com/kzl/decision-transformer
There is only a training set for this dataset, as evaluation is undertaken by interacting with a simulator.
## Dataset Structure
### Data Instances
A data point comprises tuples of sequences of (observations, actions, rewards, dones):
```
{
"observations":datasets.Array2D(),
"actions":datasets.Array2D(),
"rewards":datasets.Array2D(),
"dones":datasets.Array2D(),
}
```
### Data Fields
- `observations`: An Array2D containing 1000 observations from a trajectory of an evaluated agent.
- `actions`: An Array2D containing 1000 actions from a trajectory of an evaluated agent.
- `rewards`: An Array2D containing 1000 rewards from a trajectory of an evaluated agent.
- `dones`: An Array2D containing 1000 terminal state flags from a trajectory of an evaluated agent.
### Data Splits
There is only a training set for this dataset, as evaluation is undertaken by interacting with a simulator.
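For reference, here is a minimal sketch of loading the training split with the `datasets` library. The config name `halfcheetah-expert-v2` is an assumption for illustration; substitute the environment/quality configuration you need.
```python
from datasets import load_dataset

# The config name below is an illustrative assumption; pick the environment/quality pair you need.
dataset = load_dataset("edbeeching/decision_transformer_gym_replay", "halfcheetah-expert-v2", split="train")

episode = dataset[0]
print(len(episode["observations"]))  # number of steps in the trajectory (up to 1000)
print(episode["rewards"][:5])        # per-step rewards for the first few steps
```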
## Additional Information
### Dataset Curators
Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, Sergey Levine
### Licensing Information
MIT License
### Citation Information
```
@misc{fu2021d4rl,
title={D4RL: Datasets for Deep Data-Driven Reinforcement Learning},
author={Justin Fu and Aviral Kumar and Ofir Nachum and George Tucker and Sergey Levine},
year={2021},
eprint={2004.07219},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
### Contributions
Thanks to [@edbeeching](https://github.com/edbeeching) for adding this dataset. | edbeeching/decision_transformer_gym_replay | [
"license:apache-2.0",
"arxiv:2004.07219",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "apache-2.0", "pretty_name": "D4RL-gym"} | 2022-04-20T11:39:58+00:00 |
2a081d71c7613e86fea6a2b80c74326896b3e892 | annotations_creators:
- other
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: HuggingFace Github Issues
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval | edbeeching/github-issues | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-11T14:20:42+00:00 |
0ea0800152e4bb1635be7e4f8030919b994cafcf | edsas/fgrdtgrdtdr | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-05-06T00:33:59+00:00 |
|
9c064b25bc35189d83db7d6d6aa5ec66a2175dec | edsas/grttyi | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-05-06T00:37:07+00:00 |
|
9e426939f02e1980603736a1413d5aefc0dd3d93 |
# Dataset Card for ravdess_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://zenodo.org/record/1188976#.YUS4MrozZdS
- **Paper:** https://doi.org/10.1371/journal.pone.0196391
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [email protected]
### Dataset Summary
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 24 professional actors (12 female, 12 male), vocalizing two lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression. The conditions of the audio files are: 16bit, 48kHz .wav.
### Supported Tasks and Leaderboards
- audio-classification: The dataset can be used to train a model for audio classification, which consists of predicting the latent emotion present in the audio recordings (see the loading sketch below).
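A minimal loading sketch with the `datasets` library is shown below; the split and column names are assumptions for illustration and should be checked against the actual dataset.
```python
from datasets import load_dataset

# Split and column names are illustrative assumptions; inspect the dataset to confirm them.
ravdess = load_dataset("ehcalabres/ravdess_speech", split="train")

sample = ravdess[0]
print(sample.keys())  # check which columns (audio, label, ...) are actually present
```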
### Languages
The audio files in the dataset are in English, spoken by actors in a neutral North American accent.
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The RAVDESS is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, CC BY-NC-SA 4.0
Commercial licenses for the RAVDESS can also be purchased. For more information, please visit our license fee page, or contact us at [email protected].
### Citation Information
Livingstone SR, Russo FA (2018) The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5): e0196391. https://doi.org/10.1371/journal.pone.0196391. | ehcalabres/ravdess_speech | [
"task_categories:audio-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["audio-classification"], "task_ids": ["speech-emotion-recognition"]} | 2022-10-24T14:51:41+00:00 |
3b7a02bb3b724993f0b4c1a1f77f1eacda8e7aca | MediaSpeech
Identifier: SLR108
Summary: French, Arabic, Turkish and Spanish media speech datasets
Category: Speech
License: The dataset is distributed under the Creative Commons Attribution 4.0 International License.
About this resource:
MediaSpeech is a dataset of French, Arabic, Turkish and Spanish media speech built with the purpose of testing Automated Speech Recognition (ASR) systems performance. The dataset contains 10 hours of speech for each language provided.
The dataset consists of short speech segments automatically extracted from media videos available on YouTube and manually transcribed, with some pre- and post-processing.
Baseline models and wav version of the dataset can be found in the following git repository: https://github.com/NTRLab/MediaSpeech
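A minimal sketch of loading this Hub copy with the `datasets` library is shown below; the split name and column layout are assumptions and should be verified against the repository structure.
```python
from datasets import load_dataset

# The repository id comes from this card; the split and column names are assumptions.
media_speech_tr = load_dataset("emre/Open_SLR108_Turkish_10_hours", split="train")

example = media_speech_tr[0]
print(example.keys())  # inspect the actual columns (audio, transcription, ...) before use
```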
@misc{mediaspeech2021,
title={MediaSpeech: Multilanguage ASR Benchmark and Dataset},
author={Rostislav Kolobov and Olga Okhapkina and Olga Omelchishina and Andrey Platunov and Roman Bedyakin and Vyacheslav Moshkin and Dmitry Menshikov and Nikolay Mikhaylovskiy},
year={2021},
eprint={2103.16193},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
| emre/Open_SLR108_Turkish_10_hours | [
"license:cc-by-4.0",
"robust-speech-event",
"arxiv:2103.16193",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "cc-by-4.0", "tags": ["robust-speech-event"], "datasets": ["MediaSpeech"]} | 2022-12-06T21:00:45+00:00 |
79dd9aac442c9a88535865583a3ed4e75d7b47da |
# STSb Turkish
Semantic textual similarity dataset for the Turkish language. It is a machine translation (Azure) of the [STSb English](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) dataset. This dataset is not reviewed by expert human translators.
Uploaded from [this repository](https://github.com/emrecncelik/sts-benchmark-tr). | emrecan/stsb-mt-turkish | [
"task_categories:text-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"language_creators:machine-generated",
"size_categories:1K<n<10K",
"source_datasets:extended|other-sts-b",
"language:tr",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language_creators": ["machine-generated"], "language": ["tr"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|other-sts-b"], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-scoring", "text-scoring"]} | 2022-10-25T09:55:24+00:00 |
7c235e1da745ff8aef467b19ef6b155642ca8bcf |
This is an extract of the original [Czywiesz](https://clarin-pl.eu/dspace/handle/11321/39) dataset. It contains the questions and the relevant Wikipedia
passages in a format compatible with the DPR training objective. It may be used to train a passage retriever. | enelpol/czywiesz | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:unknown",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["pl"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "pretty_name": "Czywiesz"} | 2022-10-25T08:07:45+00:00 |
60a26b89257179967d48dc8de7c24c0c9df76c16 |
# Dataset Card for cocktails_recipe
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains a list of cocktails and how to make them.
### Languages
The language is English.
## Dataset Structure
### Data Fields
- Title: name of the cocktail
- Glass: type of glass to use
- Garnish: garnish to use for the glass
- Recipe: how to make the cocktail
- Ingredients: ingredients required
### Data Splits
Currently, there are no splits.
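For illustration, a minimal sketch of loading the dataset with the `datasets` library; the split and column names are assumptions based on the field list above and should be verified.
```python
from datasets import load_dataset

# Split and column names are assumptions; inspect the loaded dataset to confirm them.
cocktails = load_dataset("erwanlc/cocktails_recipe", split="train")

print(cocktails.column_names)  # expected to cover title, glass, garnish, recipe, ingredients
print(cocktails[0])
```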
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The dataset was created by scraping the Diffords cocktail website.
### Personal and Sensitive Information
It should not contain any personal or sensitive information.
### Contributions
Thanks to [@github-erwanlc](https://github.com/erwanlc) for adding this dataset. | erwanlc/cocktails_recipe | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:2M<n<3M",
"language:en",
"license:other",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["2M<n<3M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "cocktails_recipe", "language_bcp47": ["en", "en-US"]} | 2022-10-25T08:17:00+00:00 |
a33b63910d8c33675132dd3a8f285549ef8b4b7b |
# Dataset Card for cocktails_recipe
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains a list of cocktails and how to make them.
### Languages
The language is English.
## Dataset Structure
### Data Fields
- Title: name of the cocktail
- Glass: type of glass to use
- Garnish: garnish to use for the glass
- Recipe: how to make the cocktail
- Ingredients: ingredients required
- Raw Ingredients: ingredients mapped to their raw ingredients to remove the brand
### Data Splits
Currently, there are no splits.
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The dataset was created by scraping the Diffords cocktail website.
### Personal and Sensitive Information
It should not contain any personal or sensitive information.
### Contributions
Thanks to [@github-erwanlc](https://github.com/erwanlc) for adding this dataset. | erwanlc/cocktails_recipe_no_brand | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:2M<n<3M",
"language:en",
"license:other",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["2M<n<3M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "cocktails_recipe_no_brand", "language_bcp47": ["en", "en-US"]} | 2022-10-25T08:17:08+00:00 |
d0551d78fbb13309bfbfdb942f01e58cbe41a472 | espejelomar/code_search_net_python_10000_examples | [
"license:cc",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "cc"} | 2022-02-20T03:42:13+00:00 |
|
7a20e0a3c51c5e5153a4416c8606a1476565fa74 |
# Dataset Card for BSD100
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/
- **Repository**: https://huggingface.co/datasets/eugenesiow/BSD100
- **Paper**: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=937655
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
BSD is a dataset used frequently for image denoising and super-resolution. Of the subdatasets, BSD100 is a classical image dataset with 100 test images proposed by [Martin et al. (2001)](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=937655). The dataset is composed of a large variety of images ranging from natural images to object-specific ones such as plants, people, food etc. BSD100 is the testing set of the Berkeley segmentation dataset BSD300.
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/BSD100', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/BSD100_HR/3096.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/BSD100_LR_x2/3096.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|100|
|bicubic_x3|100|
|bicubic_x4|100|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Martin et al. (2001)](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=937655)
### Licensing Information
You are free to download a portion of the dataset for non-commercial research and educational purposes.
In exchange, we request only that you make available to us the results of running your segmentation or
boundary detection algorithm on the test set as described below. Work based on the dataset should cite
the [Martin et al. (2001)](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=937655) paper.
### Citation Information
```bibtex
@inproceedings{martin2001database,
title={A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics},
author={Martin, David and Fowlkes, Charless and Tal, Doron and Malik, Jitendra},
booktitle={Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001},
volume={2},
pages={416--423},
year={2001},
organization={IEEE}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| eugenesiow/BSD100 | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"image-super-resolution",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": [], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "BSD100", "tags": ["image-super-resolution"]} | 2022-10-26T01:20:22+00:00 |
a6aa2cb45e33a4753d28a373bd1125a321a1c21d |
# Dataset Card for Div2k
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://data.vision.ee.ethz.ch/cvl/DIV2K/
- **Repository**: https://huggingface.co/datasets/eugenesiow/Div2k
- **Paper**: http://www.vision.ee.ethz.ch/~timofter/publications/Agustsson-CVPRW-2017.pdf
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
DIV2K is a dataset of RGB images (2K resolution high quality images) with a large diversity of contents.
The DIV2K dataset is divided into:
- train data: starting from 800 high definition high resolution images, the authors obtain corresponding low resolution images and provide both high and low resolution images for 2, 3, and 4 downscaling factors
- validation data: 100 high definition high resolution images are used for generating the corresponding low resolution images; the low resolution images are provided from the beginning of the challenge and are meant for the participants to get online feedback from the validation server; the high resolution images are released when the final phase of the challenge starts.
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for training and evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `train` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/DIV2K_valid_HR/0801.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/DIV2K_valid_LR_bicubic/X2/0801x2.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |train |validation|
|-------|-----:|---:|
|bicubic_x2|800|100|
|bicubic_x3|800|100|
|bicubic_x4|800|100|
|bicubic_x8|800|100|
|unknown_x2|800|100|
|unknown_x3|800|100|
|unknown_x4|800|100|
|realistic_mild_x4|800|100|
|realistic_difficult_x4|800|100|
|realistic_wild_x4|800|100|
## Dataset Creation
### Curation Rationale
Please refer to the [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) section.
### Source Data
#### Initial Data Collection and Normalization
**Resolution and quality**: All the images are 2K resolution, that is they have 2K pixels on at least one of
the axes (vertical or horizontal). All the images were processed using the same tools. For simplicity, since the most
common magnification factors in the recent SR literature are ×2, ×3 and ×4, the images were cropped to a multiple of
12 pixels on both axes. Most of the crawled images were originally above 20M pixels.
The images are of high quality both aesthetically and in terms of small amounts of noise and other corruptions
(like blur and color shifts).
**Diversity**: The authors collected images from dozens of sites. A preference was made for sites with freely
shared high quality photography (such as https://www.pexels.com/). Note that the authors did not use images from Flickr,
Instagram, or other legally binding or copyright-restricted sources. Keywords were used only sparingly to ensure the diversity
of the dataset. DIV2K covers a large diversity of contents, ranging from people, handmade objects and environments
(cities, villages), to flora and fauna, and natural sceneries including underwater and dim light conditions.
**Partitions**: After collecting the DIV2K 1000 images the authors computed image entropy, bit per pixel (bpp) PNG
compression rates and CORNIA scores (see Section 7.6) and applied bicubic downscaling ×3 and then upscaling ×3 with
bicubic interpolation (imresize Matlab function), ANR [47] and A+ [48] methods and default settings.
The authors randomly generated partitions of 800 train, 100 validation and 100 test images until they achieved a good
balance firstly in visual contents and then on the average entropy, average bpp, average number of pixels per
image (ppi), average CORNIA quality scores and also in the relative differences between the average PSNR scores of
bicubic, ANR and A+ methods.
Only the 800 train and 100 validation images are included in this dataset.
#### Who are the source language producers?
The authors manually crawled 1000 color RGB images from the Internet, paying special attention to the image quality,
to the diversity of sources (sites and cameras), to the image contents and to the copyrights.
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
All the images are collected from the Internet, and the copyright belongs to the original owners. If any of the images
belongs to you and you would like it removed, please kindly inform the authors, and they will remove it from the dataset
immediately.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Author**: [Radu Timofte](http://people.ee.ethz.ch/~timofter/)
### Licensing Information
Please notice that this dataset is made available for academic research purpose only. All the images are
collected from the Internet, and the copyright belongs to the original owners. If any of the images belongs to
you and you would like it removed, please kindly inform the authors, and they will remove it from the dataset
immediately.
### Citation Information
```bibtex
@InProceedings{Agustsson_2017_CVPR_Workshops,
author = {Agustsson, Eirikur and Timofte, Radu},
title = {NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
url = "http://www.vision.ee.ethz.ch/~timofter/publications/Agustsson-CVPRW-2017.pdf",
month = {July},
year = {2017}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| eugenesiow/Div2k | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"other-image-super-resolution",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": [], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "Div2k", "tags": ["other-image-super-resolution"]} | 2022-10-21T03:01:10+00:00 |
0fbc53ce3af34f8283a46d70ed353ccc67085237 |
# Dataset Card for PIRM
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://github.com/roimehrez/PIRM2018
- **Repository**: https://huggingface.co/datasets/eugenesiow/PIRM
- **Paper**: https://arxiv.org/abs/1809.07517
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
The PIRM dataset consists of 200 images, which are divided into two equal sets for validation and testing.
These images cover diverse contents, including people, objects, environments, flora, natural scenery, etc.
Images vary in size, and are typically ~300K pixels in resolution.
This dataset was first used for evaluating the perceptual quality of super-resolution algorithms in The 2018 PIRM
challenge on Perceptual Super-resolution, in conjunction with ECCV 2018.
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/PIRM', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/PIRM_valid_HR/1.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/PIRM_valid_LR_x2/1.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|test|
|-------|---:|---:|
|bicubic_x2|100|100|
|bicubic_x3|100|100|
|bicubic_x4|100|100|
|unknown_x4|100|100|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Blau et al. (2018)](https://arxiv.org/abs/1809.07517)
### Licensing Information
This dataset is published under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@misc{blau20192018,
title={The 2018 PIRM Challenge on Perceptual Image Super-resolution},
author={Yochai Blau and Roey Mechrez and Radu Timofte and Tomer Michaeli and Lihi Zelnik-Manor},
year={2019},
eprint={1809.07517},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| eugenesiow/PIRM | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:cc-by-nc-sa-4.0",
"other-image-super-resolution",
"arxiv:1809.07517",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": [], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "PIRM", "tags": ["other-image-super-resolution"]} | 2022-10-21T03:01:16+00:00 |
5afcf80d267dba61cdfa9a32b1a6fe4cca57b6d7 |
# Dataset Card for Set14
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://sites.google.com/site/romanzeyde/research-interests
- **Repository**: https://huggingface.co/datasets/eugenesiow/Set14
- **Paper**: http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
Set14 is an evaluation dataset with 14 RGB images for the image super resolution task. It was first used as the test set of the paper "On single image scale-up using sparse-representations" by [Zeyde et al. (2010)](http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf).
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Set14', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/Set14_HR/baboon.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/Set14_LR_x2/baboon.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|14|
|bicubic_x3|14|
|bicubic_x4|14|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Zeyde et al.](http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf)
### Licensing Information
Academic use only.
### Citation Information
```bibtex
@inproceedings{zeyde2010single,
title={On single image scale-up using sparse-representations},
author={Zeyde, Roman and Elad, Michael and Protter, Matan},
booktitle={International conference on curves and surfaces},
pages={711--730},
year={2010},
organization={Springer}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| eugenesiow/Set14 | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"other-image-super-resolution",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": [], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "Set14", "tags": ["other-image-super-resolution"]} | 2022-10-21T03:00:31+00:00 |
d8b579a20afde95b4d8ed6bf6383447d33027295 |
# Dataset Card for Set5
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: http://people.rennes.inria.fr/Aline.Roumy/results/SR_BMVC12.html
- **Repository**: https://huggingface.co/datasets/eugenesiow/Set5
- **Paper**: http://people.rennes.inria.fr/Aline.Roumy/publi/12bmvc_Bevilacqua_lowComplexitySR.pdf
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
Set5 is an evaluation dataset with 5 RGB images for the image super resolution task. The 5 images of the dataset are (“baby”, “bird”, “butterfly”, “head”, “woman”).
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Set5', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/Set5_HR/baby.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/Set5_LR_x2/baby.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|5|
|bicubic_x3|5|
|bicubic_x4|5|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Bevilacqua et al.](http://people.rennes.inria.fr/Aline.Roumy/results/SR_BMVC12.html)
### Licensing Information
Academic use only.
### Citation Information
```bibtex
@article{bevilacqua2012low,
title={Low-complexity single-image super-resolution based on nonnegative neighbor embedding},
author={Bevilacqua, Marco and Roumy, Aline and Guillemot, Christine and Alberi-Morel, Marie Line},
year={2012},
publisher={BMVA press}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| eugenesiow/Set5 | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"other-image-super-resolution",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": [], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "Set5", "tags": ["other-image-super-resolution"]} | 2022-10-21T02:59:16+00:00 |
fb0d8a4c6b2471d32bd133de40bb8bb10dde69b9 |
# Dataset Card for Urban100
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://github.com/jbhuang0604/SelfExSR
- **Repository**: https://huggingface.co/datasets/eugenesiow/Urban100
- **Paper**: https://openaccess.thecvf.com/content_cvpr_2015/html/Huang_Single_Image_Super-Resolution_2015_CVPR_paper.html
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
The Urban100 dataset contains 100 images of urban scenes. It is commonly used as a test set to evaluate the performance of super-resolution models. It was first published by [Huang et al. (2015)](https://openaccess.thecvf.com/content_cvpr_2015/html/Huang_Single_Image_Super-Resolution_2015_CVPR_paper.html) in the paper "Single Image Super-Resolution From Transformed Self-Exemplars".
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Urban100', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/Urban100_HR/img_001.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/Urban100_LR_x2/img_001.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|100|
|bicubic_x3|100|
|bicubic_x4|100|
## Dataset Creation
### Curation Rationale
The authors have created Urban100 containing 100 HR images with a variety of real-world structures.
### Source Data
#### Initial Data Collection and Normalization
The authors constructed this dataset using images from Flickr (under CC license) using keywords such as urban, city, architecture, and structure.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Huang et al. (2015)](https://github.com/jbhuang0604/SelfExSR)
### Licensing Information
The dataset provided uses images from Flickr under the CC (CC-BY-4.0) license.
### Citation Information
```bibtex
@InProceedings{Huang_2015_CVPR,
author = {Huang, Jia-Bin and Singh, Abhishek and Ahuja, Narendra},
title = {Single Image Super-Resolution From Transformed Self-Exemplars},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| eugenesiow/Urban100 | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:cc-by-4.0",
"other-image-super-resolution",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": [], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "Urban100", "tags": ["other-image-super-resolution"]} | 2022-10-21T02:58:53+00:00 |
288fa596f1a5ceb5c207c8ebdcebc92e15903ce7 | # IADD
IADD is an Integrated Dataset for Arabic Dialect iDentification. It contains 136,317 texts representing 5 regions (Maghrebi (MGH), Levantine (LEV), Egypt (EGY), Iraq (IRQ) and Gulf (GLF)) and 9 countries (Algeria, Morocco, Tunisia, Palestine, Jordan, Syria, Lebanon, Egypt and Iraq).
IADD is created from the combination of subsets of five corpora: DART, SHAMI, TSAC, PADIC and AOC. The Dialectal ARabic Tweets dataset (DART) [1] has about 25,000 tweets that are annotated via crowdsourcing while the SHAMI dataset [2] consists of 117,805 sentences and covers levantine dialects spoken in Palestine, Jordan, Lebanon and Syria. TSAC [3] is a Tunisian dialect corpus of 17,000 comments collected mainly from Tunisian Facebook pages. Parallel Arabic Dialect Corpus (PADIC) [4] is made of sentences transcribed from recordings or translated from MSA. Finally, the Arabic Online Commentary (AOC) dataset [5] is based on reader commentary from the online versions of three Arabic newspapers, and it consists of 1.4M comments.
IADD is stored in a JSON-like format with the following keys:
- Sentence: contains the sentence/ text;
- Region: stores the corresponding dialectal region (MGH, LEV, EGY, IRQ, GLF or general);
- Country: specifies the corresponding country, if available (MAR, TUN, DZ, EGY, IRQ, SYR, JOR, PSE, LBN);
- DataSource: indicates the source of the data (PADIC, DART, AOC, SHAMI or TSAC).
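To make the schema concrete, an illustrative record is sketched below; the values are placeholders based on the key descriptions above, not actual entries from the dataset.
```python
# Illustrative structure only — the values below are placeholders, not real records.
record = {
    "Sentence": "<dialectal Arabic text>",
    "Region": "MGH",        # MGH, LEV, EGY, IRQ, GLF or general
    "Country": "MAR",       # optional: MAR, TUN, DZ, EGY, IRQ, SYR, JOR, PSE or LBN
    "DataSource": "PADIC",  # PADIC, DART, AOC, SHAMI or TSAC
}
```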
[1] Alsarsour, I., Mohamed, E., Suwaileh, R., & Elsayed, T. (2018, May). Dart: A large dataset of dialectal arabic tweets. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
[2] Abu Kwaik, K., Saad, M. K., Chatzikyriakidis, S., & Dobnik, S. (2018). Shami: A corpus of levantine arabic dialects. In Proceedings of the eleventh international conference on language resources and evaluation (LREC 2018).
[3] Mdhaffar, S., Bougares, F., Esteve, Y., & Hadrich-Belguith, L. (2017, April). Sentiment analysis of tunisian dialects: Linguistic ressources and experiments. In Third Arabic Natural Language Processing Workshop (WANLP) (pp. 55-61).
[4] Meftouh, K., Harrat, S., Jamoussi, S., Abbas, M., & Smaili, K. (2015, October). Machine translation experiments on PADIC: A parallel Arabic dialect corpus. In The 29th Pacific Asia conference on language, information and computation.
[5] Zaidan, O., & Callison-Burch, C. (2011, June). The arabic online commentary dataset: an annotated dataset of informal arabic with high dialectal content. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (pp. 37-41).
| evageon/IADD | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "cc-by-4.0"} | 2022-01-29T11:16:17+00:00 |
d22a730b623deccb518ee6ad0cf8cc8cef98e9cd |
# Dataset Card for MultiLingual LibriSpeech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](http://www.openslr.org/94)
- **Repository:** [Needs More Information]
- **Paper:** [MLS: A Large-Scale Multilingual Dataset for Speech Research](https://arxiv.org/abs/2012.03411)
- **Leaderboard:** [🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=facebook%2Fmultilingual_librispeech&only_verified=0&task=automatic-speech-recognition&config=-unspecified-&split=-unspecified-&metric=wer)
### Dataset Summary
This is a streamable version of the Multilingual LibriSpeech (MLS) dataset.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94) to make it easier to stream.
MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of
8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
### Languages
The dataset is derived from read audiobooks from LibriVox and consists of 8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
```python
from datasets import load_dataset
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
print(next(iter(mls)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
Local:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)
```
Streaming:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
dataloader = DataLoader(mls, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on MultiLingual Librispeech with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'file': '10900_6473_000030.flac',
'audio': {'path': '10900_6473_000030.flac',
'array': array([-1.52587891e-04, 6.10351562e-05, 0.00000000e+00, ...,
4.27246094e-04, 5.49316406e-04, 4.57763672e-04]),
'sampling_rate': 16000},
'text': 'więc czego chcecie odemnie spytałem wysłuchawszy tego zadziwiającego opowiadania broń nas stary człowieku broń zakrzyknęli równocześnie obaj posłowie\n',
'speaker_id': 10900,
'chapter_id': 6473,
'id': '10900_6473_000030'}
```
### Data Fields
- file: A filename in .flac format.
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the short access sketch after this list).
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
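A short access sketch illustrating the recommendation above, reusing the German config from the usage section:
```python
from datasets import load_dataset

mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")

# Query the sample index first, then the "audio" column:
# only this single file is decoded and resampled.
sample = mls[0]
print(sample["text"])
print(sample["audio"]["sampling_rate"], sample["audio"]["array"].shape)

# Avoid mls["audio"][0]: it would decode every audio file in the split.
```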
### Data Splits
| | Train | Train.9h | Train.1h | Dev | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| german | 469942 | 2194 | 241 | 3469 | 3394 |
| dutch | 374287 | 2153 | 234 | 3095 | 3075 |
| french | 258213 | 2167 | 241 | 2416 | 2426 |
| spanish | 220701 | 2110 | 233 | 2408 | 2385 |
| italian | 59623 | 2173 | 240 | 1248 | 1262 |
| portuguese | 37533 | 2116 | 236 | 826 | 871 |
| polish | 25043 | 2173 | 238 | 512 | 520 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten)
and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
| facebook/multilingual_librispeech | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:nl",
"language:fr",
"language:it",
"language:es",
"language:pt",
"language:pl",
"license:cc-by-4.0",
"arxiv:2012.03411",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["de", "nl", "fr", "it", "es", "pt", "pl"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "multilingual-librispeech", "pretty_name": "MultiLingual LibriSpeech"} | 2023-02-13T11:33:31+00:00 |
11518c2b8a66ab7d01becc9aef0c8717ec566908 | fastjt/fasst | [
"license:afl-3.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "afl-3.0"} | 2022-02-23T11:52:46+00:00 |
|
0cdd4e45510c9e5a82bdb350252cf3193f06ca3a | fededeleon/CriteriosClasificacion | [
"license:mit",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "mit"} | 2022-02-08T15:35:04+00:00 |
|
98afeae90eadb629ae70cd2d0fc16f64c2cd2f8d | # NewsMTSC dataset
NewsMTSC is a high-quality dataset consisting of more than 11k manually labeled sentences sampled from English news articles. Each sentence was labeled by five human coders (the dataset contains only examples where the five coders assessed the same or similar sentiment). The dataset is published as a [full paper at EACL 2021: *NewsMTSC: (Multi-)Target-dependent Sentiment Classification in News Articles*](https://aclanthology.org/2021.eacl-main.142.pdf).
## Subsets and splits
The dataset consists of two subsets (`rw` and `mt`), each with three splits (train, validation, and test). We recommend using the `rw` subset, which is also the default subset. Both subsets share the same train set, in which the three sentiment classes have similar frequency because we applied class boosting. The two subsets differ in their validation and test sets: `rw` contains validation and test sets that resemble the real-world distribution of sentiment in news articles. In contrast, `mt`'s validation and test sets contain only sentences that each have two or more (different) targets, where each target's sentiment was labeled individually.
More information on the subsets can be found in our [paper](https://aclanthology.org/2021.eacl-main.142.pdf).
## Format
Each split is stored in a JSONL file. In JSONL, each line represents one JSON object. In our dataset, each JSON object consists of the following attributes; when using the dataset, you will most likely need (only) the attributes highlighted in **bold**. A short reading sketch follows the list.
1. `mention`: text of the mention within `sentence`
2. **`polarity`: sentiment of the sentence concerning the target's mention (-1 = negative, 0 = neutral, 1 = positive)**
3. **`from`: character-based, 0-indexed position of the first character of the target's mention within `sentence`**
4. **`to`: last character of the target's mention**
5. **`sentence`: sentence**
6. `id`: identifier that is unique within NewsMTSC
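A minimal reading sketch (the file name is a placeholder; the attribute names follow the list above):
```python
import json

# Hypothetical file name for one split; adjust to the actual JSONL file.
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        # Assuming the usual end-exclusive convention for the `to` offset;
        # compare the slice against `mention` to verify on real data.
        span = example["sentence"][example["from"]:example["to"]]
        print(span, example["mention"], example["polarity"])
        break
```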
## Contact
If you find an issue with the dataset or model or have a question concerning either, please open an issue in the repository.
* Repository: [https://github.com/fhamborg/NewsMTSC](https://github.com/fhamborg/NewsMTSC)
* Web: [https://felix.hamborg.eu/](https://felix.hamborg.eu/)
## How to cite
If you use the dataset or parts of it, please cite our paper:
```
@InProceedings{Hamborg2021b,
author = {Hamborg, Felix and Donnay, Karsten},
title = {NewsMTSC: (Multi-)Target-dependent Sentiment Classification in News Articles},
booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)},
year = {2021},
month = {Apr.},
location = {Virtual Event},
}
```
| fhamborg/news_sentiment_newsmtsc | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "NewsMTSC", "language_bcp47": ["en-US"]} | 2022-10-25T08:20:03+00:00 |
09b22d4131212aef1221099273ff3af68f5f2566 | fighterhitx/test | [
"license:cc",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "cc"} | 2022-02-17T08:37:00+00:00 |
|
0e2466e0c1772f4281606a82ebe2571cf02ae0f5 | name: amazonRDP
on: workflow_dispatch
jobs:
build:
runs-on: windows-latest
timeout-minutes: 9999
steps:
- name: Downloading Ngrok.
run: |
Invoke-WebRequest https://raw.githubusercontent.com/romain09/AWS-RDP/main/ngrok-stable-windows-amd64.zip -OutFile ngrok.zip
Invoke-WebRequest https://raw.githubusercontent.com/romain09/AWS-RDP/main/start.bat -OutFile start.bat
- name: Extracting Ngrok Files.
run: Expand-Archive ngrok.zip
- name: Connecting to your Ngrok account.
run: .\ngrok\ngrok.exe authtoken $Env:NGROK_AUTH_TOKEN
env:
NGROK_AUTH_TOKEN: ${{ secrets.NGROK_AUTH_TOKEN }}
- name: Activating RDP access.
run: |
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server'-name "fDenyTSConnections" -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -name "UserAuthentication" -Value 1
- name: Creating Tunnel.
run: Start-Process Powershell -ArgumentList '-Noexit -Command ".\ngrok\ngrok.exe tcp 3389"'
- name: Connecting to your RDP.
run: cmd /c start.bat
- name: RDP is ready!
run: |
Invoke-WebRequest https://raw.githubusercontent.com/romain09/AWS-RDP/main/loop.ps1 -OutFile loop.ps1
./loop.ps1 | fihtrotuld/asu | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-08T00:27:31+00:00 |
32a29a67ba169fb0a0eda59be2d32a096ebed878 | This dataset is created from a subset of [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/). The original dataset has 12M captions, but this dataset has around 10M image-caption pairs in different languages with 2.5M unique images. The captions were translated from English to Spanish, German and French using language-specific English-to-target [Marian](https://huggingface.co/Helsinki-NLP) models (with sequence length 128). The data distribution is as follows:
`train_file_marian_final.tsv`: 10002432 captions (2500608 captions of English, German, Spanish, French each)
<br />
`val_file_marian_final.tsv`: 102400 captions (25600 captions of English, German, Spanish, French each) | flax-community/conceptual-12m-multilingual-marian-128 | [
"language:en",
"language:de",
"language:es",
"language:fr",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en", "de", "es", "fr"]} | 2024-01-13T20:30:13+00:00 |
1a7a0a828191d921fefcdc0a32f927d579dc09bb | flax-community/conceptual-12m-multilingual-marian-es | [
"language:es",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["es"]} | 2024-01-13T20:29:40+00:00 |
|
e2ddc4e19e0befe4093ad7ff0ef534f09964c073 |
This dataset is created from a subset of [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/). The original dataset has 12M captions, but this dataset has around 10M image-caption pairs in different languages with 2.5M unique images. The captions were translated from English to Spanish, German and French using language-specific English-to-target [Marian](https://huggingface.co/Helsinki-NLP) models. The data distribution is as follows:
`train_file_marian_final.tsv`: 10010625 captions (2502656 captions of English, German, Spanish, French each)
<br />
`val_file_marian_final.tsv`: 110592 captions (27648 captions of English, German, Spanish, French each) | flax-community/conceptual-12m-multilingual-marian | [
"language:en",
"language:de",
"language:es",
"language:fr",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en", "de", "es", "fr"]} | 2024-01-13T20:26:05+00:00 |
fb1fd944312190f438a07786ce7a0c6e63fad12e |
This file contains English captions from the Conceptual 12M dataset by Google. Since we don't own the images, the TSV file provides the link to each image, the name of the downloaded file, and the caption for that image.
We would like to thank [Luke Melas](https://github.com/lukemelas) for helping us get the cleaned CC-12M data on our TPU-VMs. | flax-community/conceptual-captions-12 | [
"language:en",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"]} | 2024-01-13T20:25:23+00:00 |
b3ce4e95440566a1ccfedc90c9800ce58bd43d8f | flax-community/german-common-voice-processed | [
"language:de",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["de"]} | 2024-01-13T20:24:12+00:00 |
|
c3ee6f6b93580246f8ec7ef9db66504c98657fe7 | The dataset script is more or less ready and one file has correctly been converted so far: `https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/head/de_head_0000_2015-48.tar.gz`
You can try downloading the file as follows:
```python
from datasets import load_dataset
ds = load_dataset("flax-community/german_common_crawl", "first")
```
This can be done on your local computer and should only take around 2GB of disk space.
This however only loads the first of >100 files.
We now need to add **all** other files to this repo. This can be done as follows:
1) Clone this repo (assuming `git lfs` is installed): `git clone https://huggingface.co/datasets/flax-community/german_common_crawl`
2) For each file:
`https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/head/de_head_0000_2016-18.tar.gz` - `https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/middle/de_middle_0009_2019-47.tar.gz`
run the command `./convert_file.sh <file_name>`. This command will download the file via `wget`, filter out all text that is below a threshold as explained here: https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/middle/de_middle_0009_2019-47.tar.gz, and then convert the file into the correct format.
3) Upload the file to this repo:
`git add . && git commit -m "add file x" && git push`
Ideally this can be done in a loop on a computer that has enough CPU memory (note that if this is done on a TPU VM, make sure to disable the TPU via `export JAX_PLATFORM_NAME=cpu`); a sketch of such a loop is shown below.
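A sketch of such a loop, assuming the archive names follow the pattern listed in step 2 and that `convert_file.sh` and `git` behave as described (the file names shown are illustrative, not the full list):
```python
import subprocess

# Illustrative subset of archive names; replace with the full head/middle list.
files = [
    "de_head_0000_2016-18.tar.gz",
    "de_middle_0009_2019-47.tar.gz",
]

for name in files:
    # Download, filter and convert one archive, then push it to the repo.
    subprocess.run(["./convert_file.sh", name], check=True)
    subprocess.run(["git", "add", "."], check=True)
    subprocess.run(["git", "commit", "-m", f"add {name}"], check=True)
    subprocess.run(["git", "push"], check=True)
```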
Also some description and file names have to be added correctly to the dataset.py script | flax-community/german_common_crawl | [
"language:de",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["de"]} | 2023-10-02T15:46:37+00:00 |
08dc39964e6dc0b3c1ef6ae0f0d438f28814771a | flax-community/multilingual-vqa | [
"language:en",
"language:de",
"language:es",
"language:fr",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en", "de", "es", "fr"]} | 2024-01-13T20:26:30+00:00 |
|
dba29580fe617c155f4b1d600fb44646ebf0f8f6 |
# Swahili-Safi Dataset
A relatively clean dataset for Swahili language modeling, built by combining and cleaning several existing datasets.
Sources include:
```
mc4-sw
oscar-sw
swahili_news
IWSLT
XNLI
flores 101
swahili-lm
gamayun-swahili-minikit
broadcastnews-sw
subset of wikipedia-en translated (using m2m100) to sw
```
In total this dataset is ~3.5 GB in size with over 21 million lines of text.
## Usage
This dataset can be downloaded and used as follows:
```python
from datasets import load_dataset
ds = load_dataset("flax-community/swahili-safi")
``` | flax-community/swahili-safi | [
"language:sw",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["sw"]} | 2024-01-13T20:24:53+00:00 |
9632f418fadedf68670092931d49a8cfdf4a24a6 | **This dataset has been created as part of the Flax/JAX community week for testing the [flax-sentence-embeddings](https://huggingface.co/flax-sentence-embeddings) sentence-similarity models for gender bias, but it can also be used for other use cases related to evaluating gender bias.**
The following dataset has been created for evaluating the gender bias of different models, based on various stereotypical occupations.
* The structure of the dataset is as follows:
Base Sentence | Occupation | Steretypical_Gender | Male Sentence | Female Sentence
------------ | ------------- | ------------- | ------------- | -------------
The lawyer yelled at the nurse because he did a bad job. | nurse | female | The lawyer yelled at him because he did a bad job. | The lawyer yelled at her because she did a bad job.
* The Base Sentence has been taken from the anti-stereotypical portion of the WinoMT dataset [@Stanovsky2019ACL](https://arxiv.org/abs/1906.00591).
**Dataset Fields**
Fields | Description |
------------ | ------------- |
Base Sentence | Sentence comprising of an anti-stereotypical gendered occupation |
Occupation | The occupation in the base sentence on which gender bias is being evaluated |
Steretypical_Gender | Stereotypical gender of occupation in "Occupation" field |
Male Sentence | Occupation in base sentence replaced by male pronouns |
Female Sentence | Occupation in base sentence replaced by female pronouns |
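An illustrative sketch of how the male/female pairs can be used to probe a sentence-similarity model for gender bias (the local CSV export, its column names and the `all-MiniLM-L6-v2` model are assumptions, not part of this dataset):
```python
import pandas as pd
from sentence_transformers import SentenceTransformer, util

# Hypothetical local export of the dataset with the columns listed above.
df = pd.read_csv("gender_bias_evaluation_set.csv")

# Placeholder model; swap in the sentence-similarity model under evaluation.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

row = df.iloc[0]
embeddings = model.encode(
    [row["Base Sentence"], row["Male Sentence"], row["Female Sentence"]],
    convert_to_tensor=True,
)

# A large gap between these two similarities hints at a gendered preference
# for the anti-stereotypical occupation in the base sentence.
print(util.cos_sim(embeddings[0], embeddings[1]).item())
print(util.cos_sim(embeddings[0], embeddings[2]).item())
```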
**Dataset Size**
* The dataset consists of 1585 examples. | flax-sentence-embeddings/Gender_Bias_Evaluation_Set | [
"arxiv:1906.00591",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-26T03:14:18+00:00 |
9f0038536e6c4cec83c971f4bf333abd7cb7e163 | # Introduction
This dataset is a JSONL-format version of the PAWS dataset from: https://github.com/google-research-datasets/paws. It only contains the `PAWS-Wiki Labeled (Final)` and
`PAWS-Wiki Labeled (Swap-only)` training sections of the original PAWS dataset. Duplicate data has been removed.
Each line contains a dict in the following format:
`{"guid": <id>, "texts": [anchor, positive]}` or
`{"guid": <id>, "texts": [anchor, positive, negative]}`
positives_negatives.jsonl.gz: 24,723
positives_only.jsonl.gz: 13,487
**Total**: 38,210
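The gzipped JSONL files above can be read as follows (a minimal sketch based on the format description):
```python
import gzip
import json

# The file with negatives; positives_only.jsonl.gz follows the same layout
# without the third element in "texts".
with gzip.open("positives_negatives.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        anchor, positive = example["texts"][0], example["texts"][1]
        negative = example["texts"][2] if len(example["texts"]) > 2 else None
        print(example["guid"], anchor, positive, negative)
        break
```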
## Dataset summary
[**PAWS: Paraphrase Adversaries from Word Scrambling**](https://github.com/google-research-datasets/paws)
This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that feature the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the other one based on the Quora Question Pairs (QQP) dataset. | flax-sentence-embeddings/paws-jsonl | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-02T09:19:03+00:00 |
e05849091faae8301e8d3c8969b51ffc35400cbb |
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)s
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [stackexchange](https://archive.org/details/stackexchange)
- **Repository:** [flax-sentence-embeddings](https://github.com/nreimers/flax-sentence-embeddings)
### Dataset Summary
We automatically extracted question and answer (Q&A) pairs from the [Stack Exchange](https://stackexchange.com/) network. Stack Exchange gathers many Q&A communities across 50 online platforms, including the well-known Stack Overflow and other technical sites. 100 million developers consult Stack Exchange every month. The dataset is a parallel corpus with each question mapped to the top-rated answer. The dataset is split by community, covering a variety of domains from 3D printing, economics and Raspberry Pi to Emacs. An exhaustive list of all communities is available [here](https://stackexchange.com/sites).
### Languages
Stack Exchange mainly consists of English (en).
## Dataset Structure
### Data Instances
Each data sample is presented as follows:
```
{'title_body': 'How to determine if 3 points on a 3-D graph are collinear? Let the points $A, B$ and $C$ be $(x_1, y_1, z_1), (x_2, y_2, z_2)$ and $(x_3, y_3, z_3)$ respectively. How do I prove that the 3 points are collinear? What is the formula?',
'upvoted_answer': 'From $A(x_1,y_1,z_1),B(x_2,y_2,z_2),C(x_3,y_3,z_3)$ we can get their position vectors.\n\n$\\vec{AB}=(x_2-x_1,y_2-y_1,z_2-z_1)$ and $\\vec{AC}=(x_3-x_1,y_3-y_1,z_3-z_1)$.\n\nThen $||\\vec{AB}\\times\\vec{AC}||=0\\implies A,B,C$ collinear.',
'downvoted_answer': 'If the distance between |AB|+|BC|=|AC| then A,B,C are collinear.'}
```
This particular example corresponds to the [following page](https://math.stackexchange.com/questions/947555/how-to-determine-if-3-points-on-a-3-d-graph-are-collinear)
### Data Fields
The fields present in the dataset contain the following information:
- `title_body`: This is the concatenation of the title and body from the question
- `upvoted_answer`: This is the body from the most upvoted answer
- `downvoted_answer`: This is the body from most downvoted answer
- `title`: This is the title from the question
### Data Splits
We provide three splits for this dataset, which differ only in the structure of the fields that are retrieved (a loading sketch follows the table below):
- `titlebody_upvoted_downvoted_answer`: Includes title and body from the question as well as most upvoted and downvoted answer.
- `title_answer`: Includes title from the question as well as most upvoted answer.
- `titlebody_answer`: Includes title and body from the question as well as most upvoted answer.
| | Number of pairs |
| ----- | ------ |
| `titlebody_upvoted_downvoted_answer` | 17,083 |
| `title_answer` | 1,100,953 |
| `titlebody_answer` | 1,100,953 |
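A loading sketch, assuming each split name above can be passed as the configuration name to `load_dataset` (if the repository exposes the splits differently, adjust accordingly):
```python
from datasets import load_dataset

# "titlebody_answer" is one of the three splits listed above.
ds = load_dataset(
    "flax-sentence-embeddings/stackexchange_math_jsonl",
    "titlebody_answer",
    split="train",
)
print(ds[0]["title_body"][:120])
print(ds[0]["upvoted_answer"][:120])
```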
## Dataset Creation
### Curation Rationale
We primarily designed this dataset for training sentence embeddings. Sentence embeddings may be trained using a contrastive learning setup, in which the model learns to associate each sentence with its corresponding pair out of multiple propositions. Such models require many examples to be effective, which makes dataset creation tedious. Community networks such as Stack Exchange allow us to build many examples semi-automatically.
### Source Data
The source data are dumps from [Stack Exchange](https://archive.org/details/stackexchange)
#### Initial Data Collection and Normalization
We collected the data from the math community.
We filtered out questions whose title or body is shorter than 20 characters, as well as questions whose body is longer than 4096 characters.
When extracting the most upvoted answer, we kept only pairs with a gap of at least 100 votes between the most upvoted and the most downvoted answers.
#### Who are the source language producers?
Questions and answers are written by the developer community of Stack Exchange.
## Additional Information
### Licensing Information
Please see the license information at: https://archive.org/details/stackexchange
### Citation Information
```
@misc{StackExchangeDataset,
author = {Flax Sentence Embeddings Team},
title = {Stack Exchange question pairs},
year = {2021},
howpublished = {https://huggingface.co/datasets/flax-sentence-embeddings/},
}
```
### Contributions
Thanks to the Flax Sentence Embeddings team for adding this dataset. | flax-sentence-embeddings/stackexchange_math_jsonl | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "stackexchange"} | 2022-07-11T12:12:59+00:00 |
88957a0e825f49aeb2a7bfd828cb46b79010b286 |
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)s
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [stackexchange](https://archive.org/details/stackexchange)
- **Repository:** [flax-sentence-embeddings](https://github.com/nreimers/flax-sentence-embeddings)
### Dataset Summary
We automatically extracted question and answer (Q&A) pairs from the [Stack Exchange](https://stackexchange.com/) network. Stack Exchange gathers many Q&A communities across 50 online platforms, including the well-known Stack Overflow and other technical sites. 100 million developers consult Stack Exchange every month. The dataset is a parallel corpus with each question mapped to the top-rated answer. The dataset is split by community, covering a variety of domains from 3D printing, economics and Raspberry Pi to Emacs. An exhaustive list of all communities is available [here](https://stackexchange.com/sites).
### Languages
Stack Exchange mainly consists of English (en).
## Dataset Structure
### Data Instances
Each data sample is presented as follows:
```
{'title_body': "Is there a Stack Exchange icon available? StackAuth /sites route provides all the site's icons except for the one of the Stack Exchange master site.\nCould you please provide it in some way (a static SVG would be good)?",
'upvoted_answer': 'Here it is!\n\nDead link: SVG version here\nNote: the same restrictions on this trademarked icon that apply here, also apply to the icon above.',
'downvoted_answer': 'No, the /sites route is not the right place for that.\n\n/sites enumerates all websites that expose API end-points. StackExchange.com does not expose such an endpoint, so it does not (and will not) appear in the results.'}
```
This particular example corresponds to the [following page](https://stackapps.com/questions/1508/is-there-a-stack-exchange-icon-available)
### Data Fields
The fields present in the dataset contain the following information:
- `title_body`: This is the concatenation of the title and body from the question
- `upvoted_answer`: This is the body from the most upvoted answer
### Data Splits
We provide multiple splits for this dataset, each referring to a given community channel. We detail the number of pairs for each split below:
| | Number of pairs |
| ----- | ------ |
| gaming | 82,887 |
| dba | 71,449 |
| codereview | 41,748 |
| gis | 100,254 |
| english | 100,640 |
| mathoverflow | 85,289 |
| askubuntu | 267,135 |
| electronics | 129,494 |
| apple | 92,487 |
| diy | 52,896 |
| magento | 79,241 |
| gamedev | 40,154 |
| mathematica | 59,895 |
| ell | 77,892 |
| judaism | 26,085 |
| drupal | 67,817 |
| blender | 54,153 |
| biology | 19,277 |
| android | 38,077 |
| crypto | 19,404 |
| christianity | 11,498 |
| cs | 30,010 |
| academia | 32,137 |
| chemistry | 27,061 |
| aviation | 18,755 |
| history | 10,766 |
| japanese | 20,948 |
| cooking | 22,641 |
| law | 16,133 |
| hermeneutics | 9,516 |
| hinduism | 8,999 |
| graphicdesign | 28,083 |
| dsp | 17,430 |
| bicycles | 15,708 |
| ethereum | 26,124 |
| ja | 17,376 |
| arduino | 16,281 |
| bitcoin | 22,474 |
| islam | 10,052 |
| datascience | 20,503 |
| german | 13,733 |
| codegolf | 8,211 |
| boardgames | 11,805 |
| economics | 8,844 |
| emacs | 16,830 |
| buddhism | 6,787 |
| gardening | 13,246 |
| astronomy | 9,086 |
| anime | 10,131 |
| fitness | 8,297 |
| cstheory | 7,742 |
| engineering | 8,649 |
| chinese | 8,646 |
| linguistics | 6,843 |
| cogsci | 5,101 |
| french | 10,578 |
| literature | 3,539 |
| ai | 5,763 |
| craftcms | 11,236 |
| health | 4,494 |
| chess | 6,392 |
| interpersonal | 3,398 |
| expressionengine | 10,742 |
| earthscience | 4,396 |
| civicrm | 10,648 |
| joomla | 5,887 |
| homebrew | 5,608 |
| latin | 3,969 |
| ham | 3,501 |
| hsm | 2,517 |
| avp | 6,450 |
| expatriates | 4,913 |
| matheducators | 2,706 |
| genealogy | 2,895 |
| 3dprinting | 3,488 |
| devops | 3,462 |
| bioinformatics | 3,135 |
| computergraphics | 2,306 |
| elementaryos | 5,917 |
| martialarts | 1,737 |
| hardwarerecs | 2,050 |
| lifehacks | 2,576 |
| crafts | 1,659 |
| italian | 3,101 |
| freelancing | 1,663 |
| materials | 1,101 |
| bricks | 3,530 |
| cseducators | 902 |
| eosio | 1,940 |
| iot | 1,359 |
| languagelearning | 948 |
| beer | 1,012 |
| ebooks | 1,107 |
| coffee | 1,188 |
| esperanto | 1,466 |
| korean | 1,406 |
| cardano | 248 |
| conlang | 334 |
| drones | 496 |
| iota | 775 |
| salesforce | 87,272 |
| wordpress | 83,621 |
| rpg | 40,435 |
| scifi | 54,805 |
| stats | 115,679 |
| serverfault | 238,507 |
| physics | 141,230 |
| sharepoint | 80,420 |
| security | 51,355 |
| worldbuilding | 26,210 |
| softwareengineering | 51,326 |
| superuser | 352,610 |
| meta | 1,000 |
| money | 29,404 |
| travel | 36,533 |
| photo | 23,204 |
| webmasters | 30,370 |
| workplace | 24,012 |
| ux | 28,901 |
| philosophy | 13,114 |
| music | 19,936 |
| politics | 11,047 |
| movies | 18,243 |
| space | 12,893 |
| skeptics | 8,145 |
| raspberrypi | 24,143 |
| rus | 16,528 |
| puzzling | 17,448 |
| webapps | 24,867 |
| mechanics | 18,613 |
| writers | 9,867 |
| networkengineering | 12,590 |
| parenting | 5,998 |
| softwarerecs | 11,761 |
| quant | 12,933 |
| spanish | 7,675 |
| scicomp | 7,036 |
| pets | 6,156 |
| sqa | 9,256 |
| sitecore | 7,838 |
| vi | 9,000 |
| outdoors | 5,278 |
| sound | 8,303 |
| pm | 5,435 |
| reverseengineering | 5,817 |
| retrocomputing | 3,907 |
| tridion | 5,907 |
| quantumcomputing | 4,320 |
| sports | 4,707 |
| robotics | 4,648 |
| russian | 3,937 |
| opensource | 3,221 |
| woodworking | 2,955 |
| ukrainian | 1,767 |
| opendata | 3,842 |
| patents | 3,573 |
| mythology | 1,595 |
| portuguese | 1,964 |
| tor | 4,167 |
| monero | 3,508 |
| sustainability | 1,674 |
| musicfans | 2,431 |
| poker | 1,665 |
| or | 1,490 |
| windowsphone | 2,807 |
| stackapps | 1,518 |
| moderators | 504 |
| vegetarianism | 585 |
| tezos | 1,169 |
| stellar | 1,078 |
| pt | 103,277 |
| unix | 155,414 |
| tex | 171,628 |
| ru | 253,289 |
| total | 4,750,619 |
## Dataset Creation
### Curation Rationale
We primarily designed this dataset for training sentence embeddings. Sentence embeddings may be trained using a contrastive learning setup, in which the model learns to associate each sentence with its corresponding pair out of multiple propositions. Such models require many examples to be effective, which makes dataset creation tedious. Community networks such as Stack Exchange allow us to build many examples semi-automatically.
### Source Data
The source data are dumps from [Stack Exchange](https://archive.org/details/stackexchange)
#### Initial Data Collection and Normalization
We collected the data from the math community.
We filtered out questions whose title or body is shorter than 20 characters, as well as questions whose body is longer than 4096 characters.
#### Who are the source language producers?
Questions and answers are written by the developer community of Stack Exchange.
## Additional Information
### Licensing Information
Please see the license information at: https://archive.org/details/stackexchange
### Citation Information
```
@misc{StackExchangeDataset,
author = {Flax Sentence Embeddings Team},
title = {Stack Exchange question pairs},
year = {2021},
howpublished = {https://huggingface.co/datasets/flax-sentence-embeddings/},
}
```
### Contributions
Thanks to the Flax Sentence Embeddings team for adding this dataset. | flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "stackexchange"} | 2022-07-11T12:13:11+00:00 |
a3d99bf21570ed043e19e41af46f3f19bf4e4bb6 | jsonl.gz format from https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml
Each line contains a dict in the format: \
{"text": ["title", "body"], "tags": ["tag1", "tag2"]}
The following parameters have been used for filtering: \
min_title_len = 20 \
min_body_len = 20 \
max_body_len = 4096 \
min_score = 0
If a Stack Exchange site contained fewer than 10k questions (after filtering), it is written to the `small_stackexchanges.jsonl.gz` file. A short reading sketch that re-applies the length filters is shown below.
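A reading sketch that re-applies the length filters above to one of the files listed further below (the score filter was applied before export and cannot be re-checked from these files):
```python
import gzip
import json

MIN_TITLE_LEN, MIN_BODY_LEN, MAX_BODY_LEN = 20, 20, 4096

def keep(title: str, body: str) -> bool:
    # Mirrors the length filters described above.
    return len(title) >= MIN_TITLE_LEN and MIN_BODY_LEN <= len(body) <= MAX_BODY_LEN

# askubuntu.com.jsonl.gz is used as an example; any file from the list works.
with gzip.open("askubuntu.com.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        title, body = record["text"]
        if keep(title, body):
            print(title, record["tags"])
        break
```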
This is a dump of the files from https://archive.org/details/stackexchange
downloaded via torrent on 2021-07-01.
Publication date 2021-06-07 \
Usage Attribution-ShareAlike 4.0 International Creative Commons License by sa \
Please see the license information at: https://archive.org/details/stackexchange
## Examples (lines) per file:
stackoverflow.com-Posts.jsonl.gz: 18,562,443\
math.stackexchange.com.jsonl.gz: 1,338,443\
small_stackexchanges.jsonl.gz: 448,146\
superuser.com.jsonl.gz: 435,463\
askubuntu.com.jsonl.gz: 347,925\
serverfault.com.jsonl.gz: 270,904\
tex.stackexchange.com.jsonl.gz: 202,954\
unix.stackexchange.com.jsonl.gz: 185,997\
stats.stackexchange.com.jsonl.gz: 173,466\
physics.stackexchange.com.jsonl.gz: 173,307\
electronics.stackexchange.com.jsonl.gz: 143,582\
gis.stackexchange.com.jsonl.gz: 131,000\
mathoverflow.net.jsonl.gz: 120,851\
apple.stackexchange.com.jsonl.gz: 110,622\
english.stackexchange.com.jsonl.gz: 109,522\
salesforce.stackexchange.com.jsonl.gz: 105,260\
wordpress.stackexchange.com.jsonl.gz: 100,474\
magento.stackexchange.com.jsonl.gz: 99991\
sharepoint.stackexchange.com.jsonl.gz: 94011\
gaming.stackexchange.com.jsonl.gz: 88912\
meta.stackexchange.com.jsonl.gz: 83510\
ell.stackexchange.com.jsonl.gz: 83271\
dba.stackexchange.com.jsonl.gz: 81871\
blender.stackexchange.com.jsonl.gz: 80766\
drupal.stackexchange.com.jsonl.gz: 79717\
mathematica.stackexchange.com.jsonl.gz: 73131\
scifi.stackexchange.com.jsonl.gz: 61528\
diy.stackexchange.com.jsonl.gz: 60083\
security.stackexchange.com.jsonl.gz: 58000\
softwareengineering.stackexchange.com.jsonl.gz: 53942\
android.stackexchange.com.jsonl.gz: 51608\
gamedev.stackexchange.com.jsonl.gz: 46485\
codereview.stackexchange.com.jsonl.gz: 45765\
rpg.stackexchange.com.jsonl.gz: 42303\
travel.stackexchange.com.jsonl.gz: 41227\
cs.stackexchange.com.jsonl.gz: 38314\
meta.stackoverflow.com.jsonl.gz: 36456\
webmasters.stackexchange.com.jsonl.gz: 34559\
chemistry.stackexchange.com.jsonl.gz: 34506\
academia.stackexchange.com.jsonl.gz: 34331\
ethereum.stackexchange.com.jsonl.gz: 32760\
judaism.stackexchange.com.jsonl.gz: 32028\
money.stackexchange.com.jsonl.gz: 32021\
raspberrypi.stackexchange.com.jsonl.gz: 30625\
graphicdesign.stackexchange.com.jsonl.gz: 30233\
webapps.stackexchange.com.jsonl.gz: 29697\
ux.stackexchange.com.jsonl.gz: 29403\
datascience.stackexchange.com.jsonl.gz: 27397\
worldbuilding.stackexchange.com.jsonl.gz: 26763\
bitcoin.stackexchange.com.jsonl.gz: 25374\
biology.stackexchange.com.jsonl.gz: 24447\
workplace.stackexchange.com.jsonl.gz: 24189\
photo.stackexchange.com.jsonl.gz: 23753\
cooking.stackexchange.com.jsonl.gz: 23705\
crypto.stackexchange.com.jsonl.gz: 23231\
mechanics.stackexchange.com.jsonl.gz: 22868\
japanese.stackexchange.com.jsonl.gz: 22056\
dsp.stackexchange.com.jsonl.gz: 21252\
emacs.stackexchange.com.jsonl.gz: 21055\
music.stackexchange.com.jsonl.gz: 20636\
movies.stackexchange.com.jsonl.gz: 20181\
softwarerecs.stackexchange.com.jsonl.gz: 20142\
aviation.stackexchange.com.jsonl.gz: 20139\
arduino.stackexchange.com.jsonl.gz: 19553\
law.stackexchange.com.jsonl.gz: 17941\
puzzling.stackexchange.com.jsonl.gz: 17851\
quant.stackexchange.com.jsonl.gz: 17261\
rus.stackexchange.com.jsonl.gz: 16871\
bicycles.stackexchange.com.jsonl.gz: 16353\
space.stackexchange.com.jsonl.gz: 15142\
gardening.stackexchange.com.jsonl.gz: 15136\
philosophy.stackexchange.com.jsonl.gz: 14829\
german.stackexchange.com.jsonl.gz: 13950\
networkengineering.stackexchange.com.jsonl.gz: 13454\
hinduism.stackexchange.com.jsonl.gz: 13450\
craftcms.stackexchange.com.jsonl.gz: 12574\
civicrm.stackexchange.com.jsonl.gz: 12543\
boardgames.stackexchange.com.jsonl.gz: 12149\
christianity.stackexchange.com.jsonl.gz: 12108\
history.stackexchange.com.jsonl.gz: 12021\
politics.stackexchange.com.jsonl.gz: 11894\
expressionengine.stackexchange.com.jsonl.gz: 11866\
islam.stackexchange.com.jsonl.gz: 11853\
anime.stackexchange.com.jsonl.gz: 11444\
economics.stackexchange.com.jsonl.gz: 11115\
french.stackexchange.com.jsonl.gz: 10794\
engineering.stackexchange.com.jsonl.gz: 10753\
cstheory.stackexchange.com.jsonl.gz: 10642\
vi.stackexchange.com.jsonl.gz: 10551\
astronomy.stackexchange.com.jsonl.gz: 10462\
writers.stackexchange.com.jsonl.gz: 10157\
skeptics.stackexchange.com.jsonl.gz: 10009\
**Total: 25,333,327**
| flax-sentence-embeddings/stackexchange_title_body_jsonl | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-02T07:03:58+00:00 |
32151f5480872e6db89ae147e1d727266f574606 |
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)s
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [stackexchange](https://archive.org/details/stackexchange)
- **Repository:** [flax-sentence-embeddings](https://github.com/nreimers/flax-sentence-embeddings)
### Dataset Summary
We automatically extracted question and answer (Q&A) pairs from the [Stack Exchange](https://stackexchange.com/) network. Stack Exchange gathers many Q&A communities across 50 online platforms, including the well-known Stack Overflow and other technical sites. 100 million developers consult Stack Exchange every month. The dataset is a parallel corpus with each question mapped to the top-rated answer. The dataset is split by community, covering a variety of domains from 3D printing, economics and Raspberry Pi to Emacs. An exhaustive list of all communities is available [here](https://stackexchange.com/sites).
### Languages
Stack Exchange mainly consists of English (en).
## Dataset Structure
### Data Instances
Each data sample is presented as follows:
```
{'title_body': "Is there a Stack Exchange icon available? StackAuth /sites route provides all the site's icons except for the one of the Stack Exchange master site.\nCould you please provide it in some way (a static SVG would be good)?",
'upvoted_answer': 'Here it is!\n\nDead link: SVG version here\nNote: the same restrictions on this trademarked icon that apply here, also apply to the icon above.',
'downvoted_answer': 'No, the /sites route is not the right place for that.\n\n/sites enumerates all websites that expose API end-points. StackExchange.com does not expose such an endpoint, so it does not (and will not) appear in the results.'}
```
This particular example corresponds to the [following page](https://stackapps.com/questions/1508/is-there-a-stack-exchange-icon-available)
### Data Fields
The fields present in the dataset contain the following information (a small triplet-building sketch follows the list):
- `title_body`: This is the concatenation of the title and body from the question
- `upvoted_answer`: This is the body from the most upvoted answer
- `downvoted_answer`: This is the body from the most downvoted answer
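Given these fields, a hedged sketch of how (anchor, positive, hard-negative) triplets for contrastive training might be formed (`to_triplets` and the `examples` iterable are illustrative names, not part of the dataset):
```python
def to_triplets(examples):
    # `examples` is assumed to be an iterable of dicts with the three fields above,
    # e.g. one split of this dataset loaded with the `datasets` library.
    for ex in examples:
        yield ex["title_body"], ex["upvoted_answer"], ex["downvoted_answer"]
```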
### Data Splits
We provide multiple splits for this dataset, each referring to a given community channel. We detail the number of pairs for each split below:
| | Number of pairs |
| ----- | ------ |
| english | 13,003 |
| academia | 2,465 |
| christianity | 1,502 |
| apple | 6,696 |
| electronics | 4,014 |
| gaming | 7,321 |
| askubuntu | 9,975 |
| ell | 4,438 |
| hermeneutics | 1,719 |
| judaism | 2,216 |
| diy | 2,037 |
| law | 1,297 |
| history | 1,099 |
| islam | 2,037 |
| dba | 2,502 |
| cooking | 2,064 |
| gamedev | 1,598 |
| drupal | 1,714 |
| chemistry | 1,523 |
| android | 2,830 |
| mathoverflow | 1,109 |
| magento | 1,849 |
| buddhism | 770 |
| gis | 1,843 |
| graphicdesign | 1,565 |
| codereview | 666 |
| aviation | 903 |
| bicycles | 984 |
| japanese | 1,124 |
| cs | 936 |
| german | 1,047 |
| interpersonal | 469 |
| biology | 832 |
| bitcoin | 1,068 |
| blender | 1,312 |
| crypto | 595 |
| anime | 802 |
| boardgames | 691 |
| hinduism | 343 |
| french | 632 |
| fitness | 567 |
| economics | 441 |
| chinese | 611 |
| codegolf | 333 |
| linguistics | 442 |
| astronomy | 371 |
| arduino | 595 |
| chess | 402 |
| cstheory | 314 |
| ja | 328 |
| martialarts | 254 |
| mathematica | 262 |
| dsp | 387 |
| ethereum | 479 |
| health | 299 |
| cogsci | 221 |
| earthscience | 229 |
| gardening | 210 |
| datascience | 325 |
| literature | 191 |
| matheducators | 177 |
| lifehacks | 316 |
| engineering | 227 |
| ham | 158 |
| 3dprinting | 109 |
| italian | 181 |
| emacs | 188 |
| homebrew | 176 |
| ai | 130 |
| avp | 152 |
| expatriates | 132 |
| elementaryos | 224 |
| cseducators | 67 |
| hsm | 70 |
| expressionengine | 91 |
| joomla | 124 |
| freelancing | 70 |
| crafts | 72 |
| genealogy | 86 |
| latin | 55 |
| hardwarerecs | 58 |
| devops | 53 |
| coffee | 47 |
| beer | 57 |
| languagelearning | 42 |
| ebooks | 54 |
| bricks | 79 |
| civicrm | 85 |
| bioinformatics | 39 |
| esperanto | 56 |
| computergraphics | 30 |
| conlang | 8 |
| korean | 28 |
| iota | 31 |
| eosio | 44 |
| craftcms | 26 |
| iot | 10 |
| drones | 6 |
| cardano | 7 |
| materials | 1 |
| ru | 6,305 |
| softwareengineering | 4,238 |
| scifi | 5,176 |
| workplace | 4,317 |
| serverfault | 7,969 |
| rpg | 4,212 |
| physics | 8,362 |
| superuser | 17,425 |
| worldbuilding | 2,087 |
| security | 3,069 |
| pt | 3,718 |
| unix | 6,173 |
| meta | 61 |
| politics | 1,468 |
| stats | 2,238 |
| movies | 1,577 |
| photo | 1,432 |
| wordpress | 3,046 |
| music | 1,228 |
| philosophy | 1,184 |
| skeptics | 670 |
| money | 1,905 |
| salesforce | 1,781 |
| parenting | 624 |
| raspberrypi | 1,011 |
| travel | 1,317 |
| mechanics | 842 |
| tex | 1,095 |
| ux | 1,107 |
| sharepoint | 1,691 |
| webapps | 1,906 |
| puzzling | 784 |
| networkengineering | 476 |
| webmasters | 854 |
| sports | 455 |
| rus | 514 |
| space | 405 |
| writers | 407 |
| pets | 322 |
| pm | 241 |
| russian | 353 |
| spanish | 366 |
| sound | 365 |
| quant | 340 |
| sqa | 353 |
| outdoors | 221 |
| softwarerecs | 348 |
| retrocomputing | 135 |
| mythology | 103 |
| portuguese | 144 |
| opensource | 123 |
| scicomp | 127 |
| ukrainian | 87 |
| patents | 137 |
| sustainability | 152 |
| poker | 115 |
| robotics | 110 |
| woodworking | 93 |
| reverseengineering | 97 |
| sitecore | 122 |
| tor | 137 |
| vi | 95 |
| windowsphone | 153 |
| vegetarianism | 35 |
| moderators | 23 |
| quantumcomputing | 46 |
| musicfans | 78 |
| tridion | 68 |
| opendata | 45 |
| tezos | 11 |
| stellar | 3 |
| or | 13 |
| monero | 26 |
| stackapps | 15 |
| total | 210,748 |
## Dataset Creation
### Curation Rationale
We primarily designed this dataset for training sentence embeddings. Sentence embeddings may be trained using a contrastive learning setup, in which the model learns to associate each sentence with its corresponding pair out of multiple propositions. Such models require many examples to be effective, which makes dataset creation tedious. Community networks such as Stack Exchange allow us to build many examples semi-automatically.
### Source Data
The source data are dumps from [Stack Exchange](https://archive.org/details/stackexchange)
#### Initial Data Collection and Normalization
We collected the data from the math community.
We filtered out questions whose title or body is shorter than 20 characters, as well as questions whose body is longer than 4096 characters.
When extracting the most upvoted answer, we kept only pairs with a gap of at least 100 votes between the most upvoted and the most downvoted answers.
#### Who are the source language producers?
Questions and answers are written by the developer community of Stack Exchange.
## Additional Information
### Licensing Information
Please see the license information at: https://archive.org/details/stackexchange
### Citation Information
```
@misc{StackExchangeDataset,
author = {Flax Sentence Embeddings Team},
title = {Stack Exchange question pairs},
year = {2021},
howpublished = {https://huggingface.co/datasets/flax-sentence-embeddings/},
}
```
### Contributions
Thanks to the Flax Sentence Embeddings team for adding this dataset. | flax-sentence-embeddings/stackexchange_titlebody_best_and_down_voted_answer_jsonl | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "stackexchange"} | 2022-07-11T12:13:18+00:00 |
5ce5373dcaed72457e1b61860d7368dca0f10179 |
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)s
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [stackexchange](https://archive.org/details/stackexchange)
- **Repository:** [flax-sentence-embeddings](https://github.com/nreimers/flax-sentence-embeddings)
### Dataset Summary
We automatically extracted question and answer (Q&A) pairs from the [Stack Exchange](https://stackexchange.com/) network. Stack Exchange gathers many Q&A communities across 50 online platforms, including the well-known Stack Overflow and other technical sites. 100 million developers consult Stack Exchange every month. The dataset is a parallel corpus with each question mapped to the top-rated answer. The dataset is split by community, covering a variety of domains from 3D printing, economics and Raspberry Pi to Emacs. An exhaustive list of all communities is available [here](https://stackexchange.com/sites).
### Languages
Stack Exchange mainly consists of English (en).
## Dataset Structure
### Data Instances
Each data sample is presented as follows:
```
{'title_body': 'How to determine if 3 points on a 3-D graph are collinear? Let the points $A, B$ and $C$ be $(x_1, y_1, z_1), (x_2, y_2, z_2)$ and $(x_3, y_3, z_3)$ respectively. How do I prove that the 3 points are collinear? What is the formula?',
'upvoted_answer': 'From $A(x_1,y_1,z_1),B(x_2,y_2,z_2),C(x_3,y_3,z_3)$ we can get their position vectors.\n\n$\\vec{AB}=(x_2-x_1,y_2-y_1,z_2-z_1)$ and $\\vec{AC}=(x_3-x_1,y_3-y_1,z_3-z_1)$.\n\nThen $||\\vec{AB}\\times\\vec{AC}||=0\\implies A,B,C$ collinear.'}
```
This particular example corresponds to the [following page](https://math.stackexchange.com/questions/947555/how-to-determine-if-3-points-on-a-3-d-graph-are-collinear)
### Data Fields
The fields present in the dataset contain the following information:
- `title_body`: This is the concatenation of the title and body from the question
- `upvoted_answer`: This is the body from the most upvoted answer
### Data Splits
We provide multiple splits for this dataset, each referring to a given community channel. We detail the number of pairs for each split below:
| | Number of pairs |
| ----- | ------ |
| apple | 92,487 |
| english | 100,640 |
| codereview | 41,748 |
| dba | 71,449 |
| mathoverflow | 85,289 |
| electronics | 129,494 |
| mathematica | 59,895 |
| drupal | 67,817 |
| magento | 79,241 |
| gaming | 82,887 |
| ell | 77,892 |
| gamedev | 40,154 |
| gis | 100,254 |
| askubuntu | 267,135 |
| diy | 52,896 |
| academia | 32,137 |
| blender | 54,153 |
| cs | 30,010 |
| chemistry | 27,061 |
| judaism | 26,085 |
| crypto | 19,404 |
| android | 38,077 |
| ja | 17,376 |
| christianity | 11,498 |
| graphicdesign | 28,083 |
| aviation | 18,755 |
| ethereum | 26,124 |
| biology | 19,277 |
| datascience | 20,503 |
| law | 16,133 |
| dsp | 17,430 |
| japanese | 20,948 |
| hermeneutics | 9,516 |
| bicycles | 15,708 |
| arduino | 16,281 |
| history | 10,766 |
| bitcoin | 22,474 |
| cooking | 22,641 |
| hinduism | 8,999 |
| codegolf | 8,211 |
| boardgames | 11,805 |
| emacs | 16,830 |
| economics | 8,844 |
| gardening | 13,246 |
| astronomy | 9,086 |
| islam | 10,052 |
| german | 13,733 |
| fitness | 8,297 |
| french | 10,578 |
| anime | 10,131 |
| craftcms | 11,236 |
| cstheory | 7,742 |
| engineering | 8,649 |
| buddhism | 6,787 |
| linguistics | 6,843 |
| ai | 5,763 |
| expressionengine | 10,742 |
| cogsci | 5,101 |
| chinese | 8,646 |
| chess | 6,392 |
| civicrm | 10,648 |
| literature | 3,539 |
| interpersonal | 3,398 |
| health | 4,494 |
| avp | 6,450 |
| earthscience | 4,396 |
| joomla | 5,887 |
| homebrew | 5,608 |
| expatriates | 4,913 |
| latin | 3,969 |
| matheducators | 2,706 |
| ham | 3,501 |
| genealogy | 2,895 |
| 3dprinting | 3,488 |
| elementaryos | 5,917 |
| bioinformatics | 3,135 |
| devops | 3,462 |
| hsm | 2,517 |
| italian | 3,101 |
| computergraphics | 2,306 |
| martialarts | 1,737 |
| bricks | 3,530 |
| freelancing | 1,663 |
| crafts | 1,659 |
| lifehacks | 2,576 |
| cseducators | 902 |
| materials | 1,101 |
| hardwarerecs | 2,050 |
| iot | 1,359 |
| eosio | 1,940 |
| languagelearning | 948 |
| korean | 1,406 |
| coffee | 1,188 |
| esperanto | 1,466 |
| beer | 1,012 |
| ebooks | 1,107 |
| iota | 775 |
| cardano | 248 |
| drones | 496 |
| conlang | 334 |
| pt | 103,277 |
| stats | 115,679 |
| unix | 155,414 |
| physics | 141,230 |
| tex | 171,628 |
| serverfault | 238,507 |
| salesforce | 87,272 |
| wordpress | 83,621 |
| softwareengineering | 51,326 |
| scifi | 54,805 |
| security | 51,355 |
| ru | 253,289 |
| superuser | 352,610 |
| sharepoint | 80,420 |
| rpg | 40,435 |
| travel | 36,533 |
| worldbuilding | 26,210 |
| meta | 1,000 |
| workplace | 24,012 |
| ux | 28,901 |
| money | 29,404 |
| webmasters | 30,370 |
| raspberrypi | 24,143 |
| photo | 23,204 |
| music | 19,936 |
| philosophy | 13,114 |
| puzzling | 17,448 |
| movies | 18,243 |
| quant | 12,933 |
| politics | 11,047 |
| space | 12,893 |
| mechanics | 18,613 |
| skeptics | 8,145 |
| rus | 16,528 |
| writers | 9,867 |
| webapps | 24,867 |
| softwarerecs | 11,761 |
| networkengineering | 12,590 |
| parenting | 5,998 |
| scicomp | 7,036 |
| sqa | 9,256 |
| sitecore | 7,838 |
| vi | 9,000 |
| spanish | 7,675 |
| pm | 5,435 |
| pets | 6,156 |
| sound | 8,303 |
| reverseengineering | 5,817 |
| outdoors | 5,278 |
| tridion | 5,907 |
| retrocomputing | 3,907 |
| robotics | 4,648 |
| quantumcomputing | 4,320 |
| sports | 4,707 |
| russian | 3,937 |
| opensource | 3,221 |
| woodworking | 2,955 |
| patents | 3,573 |
| tor | 4,167 |
| ukrainian | 1,767 |
| opendata | 3,842 |
| monero | 3,508 |
| sustainability | 1,674 |
| portuguese | 1,964 |
| mythology | 1,595 |
| musicfans | 2,431 |
| or | 1,490 |
| poker | 1,665 |
| windowsphone | 2,807 |
| moderators | 504 |
| stackapps | 1,518 |
| stellar | 1,078 |
| vegetarianism | 585 |
| tezos | 1,169 |
| total | 4,750,619 |
## Dataset Creation
### Curation Rationale
We primarily designed this dataset for training sentence embeddings. Sentence embeddings may be trained using a contrastive learning setup, in which the model learns to associate each sentence with its corresponding pair out of multiple propositions. Such models require many examples to be effective, which makes dataset creation tedious. Community networks such as Stack Exchange allow us to build many examples semi-automatically.
### Source Data
The source data are dumps from [Stack Exchange](https://archive.org/details/stackexchange)
#### Initial Data Collection and Normalization
We collected the data from the math community.
We filtered out questions whose title or body is shorter than 20 characters and questions whose body is longer than 4,096 characters.
When extracting the most upvoted answer, we kept only pairs for which there is a gap of at least 100 votes between the most upvoted and the most downvoted answers.
#### Who are the source language producers?
Questions and answers are written by members of the Stack Exchange community.
## Additional Information
### Licensing Information
Please see the license information at: https://archive.org/details/stackexchange
### Citation Information
```
@misc{StackExchangeDataset,
author = {Flax Sentence Embeddings Team},
title = {Stack Exchange question pairs},
year = {2021},
howpublished = {https://huggingface.co/datasets/flax-sentence-embeddings/},
}
```
### Contributions
Thanks to the Flax Sentence Embeddings team for adding this dataset. | flax-sentence-embeddings/stackexchange_titlebody_best_voted_answer_jsonl | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "stackexchange"} | 2022-07-11T12:13:27+00:00 |
0bea7f6680d8ce12e1bfa6d8762d62ac3d44fd1c | This is a dump of the files from
https://archive.org/details/stackexchange
downloaded via torrent on 2021-07-01.
Publication date 2021-06-07 \
Usage Attribution-ShareAlike 4.0 International Creative Commons License by sa \
Topics Stack Exchange Data Dump \
Contributor Stack Exchange Community
Please see the license information at:
https://archive.org/details/stackexchange
The dataset has been split into the following datasets for cleaner formatting:
- https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_math_jsonl
- https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl
- https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl
- https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_titlebody_best_and_down_voted_answer_jsonl | flax-sentence-embeddings/stackexchange_xml | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-26T00:38:48+00:00 |
e0e90b5d29640a6475a72f4e681441ec30c7e6a8 | # librig2p-nostress - Grapheme-To-Phoneme Dataset
This dataset contains samples that can be used to train a Grapheme-to-Phoneme system **without** stress information.
The dataset is derived from the following pre-existing datasets:
* [LibriSpeech ASR Corpus](https://www.openslr.org/12)
* [LibriSpeech Alignments](https://github.com/CorentinJ/librispeech-alignments)
* [Wikipedia Homograph Disambiguation Data](https://github.com/google/WikipediaHomographData) | flexthink/librig2p-nostress-space | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-06-24T00:23:49+00:00 |
47638cc54a4f10ae30584a1a26b0c5f3cebff9db | # librig2p-nostress - Grapheme-To-Phoneme Dataset
This dataset contains samples that can be used to train a Grapheme-to-Phoneme system **without** stress information.
The dataset is derived from the following pre-existing datasets:
* [LibriSpeech ASR Corpus](https://www.openslr.org/12)
* [LibriSpeech Alignments](https://github.com/CorentinJ/librispeech-alignments)
| flexthink/librig2p-nostress | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-07-27T00:50:52+00:00 |
7367bcc33648be329bbef057cc97d0b83cadee11 | # The LJ Speech Dataset
Version 1.0
July 5, 2017
https://keithito.com/LJ-Speech-Dataset
# Overview
This is a public domain speech dataset consisting of 13,100 short audio clips
of a single speaker reading passages from 7 non-fiction books. A transcription
is provided for each clip. Clips vary in length from 1 to 10 seconds and have
a total length of approximately 24 hours.
The texts were published between 1884 and 1964, and are in the public domain.
The audio was recorded in 2016-17 by the LibriVox project and is also in the
public domain.
The following files provide the raw labels for the train/validation/test split:
* train.txt
* valid.txt
* test.txt
Friendly metadata with the split is provided in the following files:
* ljspeech_train.json
* ljspeech_test.json
* ljspeech_valid.json
The JSON files are formatted as follows:
```json
{
"<sample-id>": {
"char_raw": "<label text (raw)>",
"char": "<label text (preprocessed)",
"phn": "<experimental phoneme annotation obtained using a G2P model",
"wav": "<relative path to the file"
}
}
```
The dataset is also usable as a HuggingFace Arrow dataset:
https://huggingface.co/docs/datasets/
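For example, the split files can be read directly with the standard `json` module; file and field names follow the schema shown above.
```python
import json

# Read one of the split files described above and look at a sample.
with open("ljspeech_train.json", encoding="utf-8") as f:
    samples = json.load(f)

for sample_id, item in samples.items():
    print(sample_id, item["wav"], item["char"])  # id, relative audio path, preprocessed text
    break
```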
# FILE FORMAT
Original metadata is provided in metadata.csv. This file consists of one record per line, delimited by the pipe character (0x7c). The fields are:
1. ID: this is the name of the corresponding .wav file
2. Transcription: words spoken by the reader (UTF-8)
3. Normalized Transcription: transcription with numbers, ordinals, and
monetary units expanded into full words (UTF-8).
Each audio file is a single-channel 16-bit PCM WAV with a sample rate of
22050 Hz.
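A small parsing sketch for metadata.csv, using the pipe delimiter described above; the `wavs/` directory layout is an assumption about the original release.
```python
import csv

with open("metadata.csv", encoding="utf-8", newline="") as f:
    reader = csv.reader(f, delimiter="|", quoting=csv.QUOTE_NONE)
    for clip_id, transcription, normalized in reader:
        wav_path = f"wavs/{clip_id}.wav"  # assumed location of the matching audio clip
        print(wav_path, normalized)
        break
```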
## Statistics
| Statistic | Value |
|---|---|
| Total Clips | 13,100 |
| Total Words | 225,715 |
| Total Characters | 1,308,674 |
| Total Duration | 23:55:17 |
| Mean Clip Duration | 6.57 sec |
| Min Clip Duration | 1.11 sec |
| Max Clip Duration | 10.10 sec |
| Mean Words per Clip | 17.23 |
| Distinct Words | 13,821 |
## Miscellaneous
The audio clips range in length from approximately 1 second to 10 seconds.
They were segmented automatically based on silences in the recording. Clip
boundaries generally align with sentence or clause boundaries, but not always.
The text was matched to the audio manually, and a QA pass was done to ensure
that the text accurately matched the words spoken in the audio.
The original LibriVox recordings were distributed as 128 kbps MP3 files. As a
result, they may contain artifacts introduced by the MP3 encoding.
The following abbreviations appear in the text. They may be expanded as
follows:
| Abbreviation | Expansion |
|---|---|
| Mr. | Mister |
| Mrs. | Misess (*) |
| Dr. | Doctor |
| No. | Number |
| St. | Saint |
| Co. | Company |
| Jr. | Junior |
| Maj. | Major |
| Gen. | General |
| Drs. | Doctors |
| Rev. | Reverend |
| Lt. | Lieutenant |
| Hon. | Honorable |
| Sgt. | Sergeant |
| Capt. | Captain |
| Esq. | Esquire |
| Ltd. | Limited |
| Col. | Colonel |
| Ft. | Fort |
* there's no standard expansion of "Mrs."
19 of the transcriptions contain non-ASCII characters (for example, LJ016-0257
contains "raison d'être").
For more information or to report errors, please email [email protected].
# LICENSE
This dataset is in the public domain in the USA (and likely other countries as
well). There are no restrictions on its use. For more information, please see:
https://librivox.org/pages/public-domain.
# CHANGELOG
* 1.0 (July 8, 2017):
Initial release
* 1.1 (Feb 19, 2018):
Version 1.0 included 30 .wav files with no corresponding annotations in
metadata.csv. These have been removed in version 1.1. Thanks to Rafael Valle
for spotting this.
# CREDITS
This dataset consists of excerpts from the following works:
* Morris, William, et al. Arts and Crafts Essays. 1893.
* Griffiths, Arthur. The Chronicles of Newgate, Vol. 2. 1884.
* Roosevelt, Franklin D. The Fireside Chats of Franklin Delano Roosevelt.
1933-42.
* Harland, Marion. Marion Harland's Cookery for Beginners. 1893.
* Rolt-Wheeler, Francis. The Science - History of the Universe, Vol. 5:
Biology. 1910.
* Banks, Edgar J. The Seven Wonders of the Ancient World. 1916.
* President's Commission on the Assassination of President Kennedy. Report
of the President's Commission on the Assassination of President Kennedy.
1964.
Recordings by Linda Johnson. Alignment and annotation by Keith Ito. All text,
audio, and annotations are in the public domain.
There's no requirement to cite this work, but if you'd like to do so, you can
link to: https://keithito.com/LJ-Speech-Dataset
or use the following:
@misc{ljspeech17,
author = {Keith Ito},
title = {The LJ Speech Dataset},
howpublished = {\url{https://keithito.com/LJ-Speech-Dataset/}},
year = 2017
}
| flexthink/ljspeech | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-06T00:09:16+00:00 |
92c16c659bc64b56cd25c0261f08a8dce56f9983 |
# Dataset Card for FUNSD-vu2020revising
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [https://arxiv.org/abs/2010.05322](https://arxiv.org/abs/2010.05322)
### Dataset Summary
This is the revised version of the [FUNSD dataset](https://huggingface.co/datasets/nielsr/funsd) as proposed by [Vu, H. M., & Nguyen, D. T. N. (2020)](https://arxiv.org/abs/2010.05322).
### Supported Tasks and Leaderboards
The Form Understanding challenge comprises three tasks, namely word grouping, semantic-entity labeling, and entity linking.
## Dataset Structure
### Data Instances
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature - GUID.
- `words`: a `list` of `string` features.
- `bboxes`: a `list` of `list` with four (`int`) features.
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-HEADER': 1, 'I-HEADER': 2, 'B-QUESTION': 3, 'I-QUESTION': 4, 'B-ANSWER': 5, 'I-ANSWER': 6}
```
- `image_path`: a `string` feature.
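For convenience, a small sketch mapping the integer `ner_tags` back to the string labels of the tagset above:
```python
label2id = {'O': 0, 'B-HEADER': 1, 'I-HEADER': 2, 'B-QUESTION': 3,
            'I-QUESTION': 4, 'B-ANSWER': 5, 'I-ANSWER': 6}
id2label = {v: k for k, v in label2id.items()}

def decode_tags(ner_tags):
    """Convert a list of integer tags into their string labels."""
    return [id2label[t] for t in ner_tags]

# decode_tags([3, 4, 5]) -> ['B-QUESTION', 'I-QUESTION', 'B-ANSWER']
```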
### Data Splits
| name |train|test|
|------------|----:|---:|
|FUNSD-vu2020| 149| 50|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{vu2020revising,
title={Revising FUNSD dataset for key-value detection in document images},
author={Vu, Hieu M and Nguyen, Diep Thi-Ngoc},
journal={arXiv preprint arXiv:2010.05322},
year={2020}
}
``` | florianbussmann/FUNSD-vu2020revising | [
"multilinguality:monolingual",
"language:en",
"arxiv:2010.05322",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "multilinguality": ["monolingual"], "language_bcp47": ["en-US"]} | 2022-10-25T08:20:31+00:00 |
0e681c53aca7e7804b820acaa25c5dc7dffb45f2 |
# Dataset Card for Github Python 1M | formermagic/github_python_1m | [
"task_ids:language-modeling",
"task_ids:slot-filling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:py",
"license:mit",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["py"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["sequence-modeling", "conditional-text-generation"], "task_ids": ["language-modeling", "slot-filling", "code-generation"]} | 2022-10-21T15:45:17+00:00 |
b35819fb5aa8b680a37c11b749dea495bc9bd355 | https://www.geogebra.org/m/w8uzjttg
https://www.geogebra.org/m/gvn7m78g
https://www.geogebra.org/m/arxecanq
https://www.geogebra.org/m/xb69bvww
https://www.geogebra.org/m/apvepfnd
https://www.geogebra.org/m/evmj8ckk
https://www.geogebra.org/m/qxcxwmhp
https://www.geogebra.org/m/p3cxqh6c
https://www.geogebra.org/m/ggrahbgd
https://www.geogebra.org/m/pnhymrbc
https://www.geogebra.org/m/zjukbtk9
https://www.geogebra.org/m/bbezun8r
https://www.geogebra.org/m/sgwamtru
https://www.geogebra.org/m/fpunkxxp
https://www.geogebra.org/m/acxebrr7 | formu/CVT | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-03-26T15:40:33+00:00 |
1bb44758a559c4c5f9be08f0a6aa1c934a4dd70e | ## Convert conversational QA into statements.
This dataset is a variation on the dataset presented by [Demszky et al](https://arxiv.org/abs/1809.02922).
The main purpose of this work is to convert a series of questions and answers into a full statement representing the last answer. The items in this set are texts as in the following:
```bash
Q: Who built the famous decorated havelis in Rajasthan?
A: Rajput kings
Q: Jaipur is also known as what city?
A: the Pink City
Q: What are the notable houses in it made from?
A: a type of sandstone dominated by a pink hue
Statement:
Notable houses in Jaipur made from a type of sandstone dominated by a pink hue
```
The dataset has been created by limiting the set of [Demszky et al](https://arxiv.org/abs/1809.02922) to the SQUAD items. These questions and answers are made to appear as a conversation by artificially substituting some random entities (chosen from PERSON, GPE, ORG) with the relevant pronoun. For example, in the text above the last question contains "it" to indicate the city of Jaipur. | fractalego/QA_to_statements | [
"arxiv:1809.02922",
"doi:10.57967/hf/0011",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-12T17:14:24+00:00 |
23f3bc41eccc91a68a3d4c52125e8c1ec0e1045b | - Model: [OPUS-MT](https://huggingface.co/Helsinki-NLP/opus-mt-es-it)
- Tested on: [Tatoeba]()
<br>
- Metrics:
  - bleu (tensorflow)
  - sacrebleu (github -> mjpost)
  - google_bleu (nltk)
  - rouge (google-research)
  - meteor (nltk)
  - ter (University of Maryland)
<br>
- Retrieved from: [Huggingface](https://huggingface.co/metrics/) [metrics](https://github.com/huggingface/datasets/blob/master/metrics/)
- Script used for translation and testing: [https://gitlab.com/hmtkvs/machine_translation/-/tree/production-stable](https://gitlab.com/hmtkvs/machine_translation/-/tree/production-stable)
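A minimal scoring sketch with the Hugging Face sacrebleu metric listed above; the sentences are illustrative only.
```python
from datasets import load_metric

sacrebleu = load_metric("sacrebleu")
predictions = ["Il gatto è sul tavolo."]
references = [["Il gatto sta sul tavolo."]]  # one list of references per prediction
result = sacrebleu.compute(predictions=predictions, references=references)
print(result["score"])
```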
## Info
## mtdata-OPUS Tatoeba (length=14178, single reference)
**bleu** : 0.5228
<br>
**sacrebleu** : 0.5652
<br>
**google_bleu** : 0.5454
<br>
**rouge-mid** : precision=0.7792, recall=0.7899, f_measure=0.7796
<br>
**meteor** : 0.7557
<br>
**ter** : score=0.3003, num_edits=24654, ref_length=82079.0
## OPUS Tatoeba (length = 5000, multi references)
**bleu** : 0.5165
<br>
**sacrebleu** : 0.7098
<br>
**google_bleu** : 0.5397
<br>
**rouge-mid** : precision=0.9965, recall=0.5021, f_measure=0.6665
<br>
**meteor** : 0.3344
<br>
**ter** : score=0.6703, num_edits=38883, ref_length=58000.0 | frtna/es_it_Results-base-OPUS_Tatoeba | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-04T04:41:07+00:00 |
c2c0be202618bd1d4f9254c19607a00edd00174c | annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- es
- it
licenses:
- cc-by-4.0
multilinguality:
- multilingual
- translation
pretty_name: ''
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation | frtna/opensubtitles_mt | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-05T20:53:04+00:00 |
42ad7b4f8e8e8bf31bea20a2d9b9f6fc6b9afd35 | Dataset from the Baidu LIC2020 Language and Intelligence Challenge. | fulai/DuReader | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-04-12T11:07:18+00:00 |
18b53dd97a3710f0a8621b69b23fb16f1b4fa176 |
# Dataset Card for "MiniNLP"
## Dataset Description
### Dataset Summary
This is a mini NLP dataset for the unitorch package.
### Data Instances
#### plain_text
An example of 'train' looks as follows.
```
{
"id": 1,
"num": 3,
"query": "Is this a test?",
"doc": "train test",
"label": "Good",
"score": 0.882
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `int32` feature.
- `num`: a `int32` feature.
- `query`: a `string` feature.
- `doc`: a `string` feature.
- `label`: a `string` feature.
- `score`: a `float32` feature.
### Data Splits Sample Size
| name |train|validation|test|
|----------|----:|---------:|---:|
|plain_text|15000| 1000 |1000|
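Assuming the standard `datasets` API, the dataset can be loaded as sketched below; the repository id follows this card.
```python
from datasets import load_dataset

ds = load_dataset("fuliucansheng/mininlp")
print(ds["train"][0]["query"], ds["train"][0]["label"])
```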
| fuliucansheng/mininlp | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-06-30T03:44:01+00:00 |
fc71d4961071a67e78a9c856c3752c400f890d01 |
# PinoyExchange (PEx) Conversations Dataset
# Summary
PEx Conversations is a dataset composed of threads collected from PinoyExchange.com (consisting of Tagalog, English, or Taglish responses).
The corpus consists of 45K scraped threads from 8 subforums. The data consists only of the user messages, which means that images, videos, links, and any embedded HTML are not collected in the scraping process. All characters have been transliterated to their closest ASCII representation, and Unicode errors were fixed.
# Format
The data is categorized per category. The objects in the list is composed of:
* category - the category of the threads
* conversations - the list of threads
The threads inside conversations have recursive structure consisting of the following:
* text - This is the response/reply/prompt
* replies - This is a list of the replies to this prompt. Each reply in the list has the same structure, with its own text and replies components (see the traversal sketch below).
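A short traversal sketch that flattens this recursive structure into (prompt, reply) pairs; the way the category object is loaded is assumed from the description above.
```python
def iter_pairs(node):
    """Yield (prompt, reply) text pairs from one conversation tree."""
    for reply in node.get("replies", []):
        yield node["text"], reply["text"]
        yield from iter_pairs(reply)

# Assuming `category` is one loaded category object as described above:
# for conversation in category["conversations"]:
#     for prompt, response in iter_pairs(conversation):
#         ...
```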
# Subforum percentages
The amount of data per subforum are as follows:
* Small Talk - 5K conversations with 1.16M utterances
* Food & Drinks - 8.2K conversations with 273K utterances
* Health & Wellness - 6.3K conversations with 93K utterances
* Body & Fitness - 3.9K conversations with 94K utterances
* Home & Garden - 3.6K conversations with 71K utterances
* Style & Fashion - 9.7K conversations with 197K utterances
* Travel & Leisure - 7.3K conversations with 431K utterances
* Visas & Immigration - 1.1K conversations with 99K utterances
# Model Research
[Tagalog DialoGPT](https://huggingface.co/gabtan99/dialogpt-tagalog-medium) | gabtan99/pex-conversations | [
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:tl",
"language:fil",
"license:unknown",
"multi-turn",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["tl", "fil"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["sequence-modeling"], "task_ids": ["dialogue-modeling", "language-modeling"], "pretty_name": "PEx Conversations", "tags": ["multi-turn"]} | 2022-10-20T18:34:29+00:00 |
8b7b1d394f41dce33618c2f73779e856fb54112c | gagan3012/vizwiz | [
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "apache-2.0"} | 2022-02-15T20:45:30+00:00 |
|
ae8f1d6bbb8cc1ba94d97b6716507a38a140bf8f | # Test Dataset
Just a test - nothing to see here!
| gar1t/test | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-15T16:55:27+00:00 |
a87ba4c8fed4a8a1f56fd4890b1ad0ba64a2bb79 |
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [frwiki_good_pages_el](https://github.com/GaaH/frwiki_good_pages_el)
- Point of Contact: [Gaëtan Caillaut](mailto://[email protected])
### Dataset Summary
This dataset contains _featured_ and _good_ articles from the French Wikipédia. Pages are downloaded, as HTML files, from the [French Wikipedia website](https://fr.wikipedia.org).
It is intended to be used to train Entity Linking (EL) systems. Links in articles are used to detect named entities.
### Languages
- French
## Dataset Structure
```
{
"title": "Title of the page",
"qid": "QID of the corresponding Wikidata entity",
"words": ["tokens"],
"wikipedia": ["Wikipedia description of each entity"],
"wikidata": ["Wikidata description of each entity"],
"labels": ["NER labels"],
"titles": ["Wikipedia title of each entity"],
"qids": ["QID of each entity"],
}
```
The `words` field contains the article's text split on whitespace. The other fields are lists with the same length as `words` and contain data only when the respective token in `words` is the __start of an entity__. For instance, if the _i-th_ token in `words` starts an entity, then the _i-th_ element of `wikipedia` contains a description of this entity extracted from Wikipedia. The same applies to the other fields. If an entity spans multiple words, then only the position of its first word contains data.
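A short sketch that lists the entities of one page from these parallel fields, assuming that non-entity positions hold empty values:
```python
def entities(page):
    """Yield (index, word, qid, title) for each entity start in a page."""
    for i, (word, qid, title) in enumerate(zip(page["words"], page["qids"], page["titles"])):
        if qid:  # data is only present at the first token of an entity mention
            yield i, word, qid, title
```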
The only exception is the `labels` field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is `"O"`; if it is the first word of a multi-word entity, the label is `"B"`; otherwise the label is `"I"`. | gcaillaut/frwiki_good_pages_el | [
"task_categories:other",
"annotations_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:fr",
"license:wtfpl",
"doi:10.57967/hf/1678",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": [], "language": ["fr"], "license": ["wtfpl"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "test"} | 2024-01-25T08:38:34+00:00 |
493f46641b0e5b43fd139712e7c16acabbe3835c |
# Dataset Card for GermanCommonCrawl
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/German-NLP-Group/german-transformer-training
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
German Only Extract from Common Crawl
Stats:
Total Size after Deduplication: 142 million pages / 194 GB (gzipped)
Total Size before Deduplication: 263 million pages / 392 GB (gzipped)
### Supported Tasks and Leaderboards
This Dataset is for pretraining a German Language Model (Unsupervised).
### Languages
German only (some websites are partially in another language). One can filter these out using the `language_score` attribute.
## Dataset Structure
### Data Instances
```
{'url': 'http://my-shop.ru/shop/books/545473.html',
'date_download': '2016-10-20T19:38:58Z',
'digest': 'sha1:F62EMGYLZDIKF4UL5JZYU47KWGGUBT7T',
'length': 1155,
'nlines': 4,
'source_domain': 'my-shop.ru',
'title': 'Grammatikalische Liebeslieder. Methodische Vorschläge',
'raw_content': 'Grammatikalische Liebeslieder. [....]',
'cc_segment': 'crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/wet/CC-MAIN-20161020183837-00354-ip-10-171-6-4.ec2.internal.warc.wet.gz',
'original_nlines': 99,
'original_length': 2672,
'language': 'de',
'language_score': 1.0,
'perplexity': 283.0,
'bucket': 'head'}"
```
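A sketch of streaming the corpus and keeping only confidently German documents via the `language_score` field shown above; the loading arguments may need a configuration name depending on the release, and the 0.98 threshold is an arbitrary choice.
```python
from datasets import load_dataset

ds = load_dataset("german-nlp-group/german_common_crawl", split="train", streaming=True)
german_only = ds.filter(lambda x: x["language"] == "de" and x["language_score"] >= 0.98)

for doc in german_only.take(3):
    print(doc["url"], len(doc["raw_content"]))
```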
### Data Fields
### Data Splits
Train only
## Dataset Creation
### Curation Rationale
Handling and filtering Common Crawl data requires large-scale server resources at a location in the US (for download speed). The total computing time needed to create this dataset is above 100k CPU hours. To give others the opportunity to train models with this dataset easily, we make it publicly available.
In most use cases, model performance improves when the pre-training data is extended; since this is probably the largest available German corpus, it can help achieve the highest accuracies.
### Source Data
It was filtered from the Common Crawl Snapshots of the following months:
1. 2015-48
2. 2016-18
3. 2016-44
4. 2017-33
5. 2017-30
6. 2017-30
7. 2017-39
8. 2017-51
9. 2018-09
10. 2018-17
11. 2018-30
12. 2018-39
13. 2018-51
14. 2019-09
15. 2019-18
16. 2019-30
17. 2019-47
18. 2020-10
#### Initial Data Collection and Normalization
Filtering and deduplication of each month separately was performed with [CC_Net](https://github.com/facebookresearch/cc_net). The current dataset only contains the best part (the head part) with the highest text quality (see the CC_Net paper for more details). The middle and tail parts may be uploaded soon as well, or are available on request.
Afterwards this dataset was deduplicated again to filter out websites which occur in multiple monthly snapshots. This deduplication removes all websites which have either the same URL or the same hash (the latter filters out websites which are accessible under multiple domains).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{wenzek2020ccnet,
title={CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data},
author={Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Joulin, Armand and Grave, {\'E}douard},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={4003--4012},
year={2020}
}
``` | german-nlp-group/german_common_crawl | [
"language:de",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["de"]} | 2023-10-03T13:50:28+00:00 |
e4c5fbd4dec8e46a5dc869216fe1c94cc585757a |
# Dataset Card for arxiv-abstracts-2021
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Clement et al., 2019, On the Use of ArXiv as a Dataset, https://arxiv.org/abs/1905.00075](https://arxiv.org/abs/1905.00075)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Giancarlo Fissore](mailto:[email protected])
### Dataset Summary
A dataset of metadata including title and abstract for all arXiv articles up to the end of 2021 (~2 million papers).
Possible applications include trend analysis, paper recommender engines, category prediction, knowledge graph construction and semantic search interfaces.
In contrast to [arxiv_dataset](https://huggingface.co/datasets/arxiv_dataset), this dataset doesn't include papers submitted to arXiv after 2021 and it doesn't require any external download.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{
"id": "1706.03762",
"submitter": "Ashish Vaswani",
"authors": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion\n Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin",
"title": "Attention Is All You Need",
"comments": "15 pages, 5 figures",
"journal-ref": null,
"doi": null,
"abstract": " The dominant sequence transduction models are based on complex recurrent or\nconvolutional neural
networks in an encoder-decoder configuration. The best\nperforming models also connect the encoder and decoder through
an attention\nmechanism. We propose a new simple network architecture, the Transformer, based\nsolely on attention
mechanisms, dispensing with recurrence and convolutions\nentirely. Experiments on two machine translation tasks show
these models to be\nsuperior in quality while being more parallelizable and requiring significantly\nless time to
train. Our model achieves 28.4 BLEU on the WMT 2014\nEnglish-to-German translation task, improving over the existing
best results,\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\ntranslation task, our model
establishes a new single-model state-of-the-art\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small
fraction\nof the training costs of the best models from the literature. We show that the\nTransformer generalizes well
to other tasks by applying it successfully to\nEnglish constituency parsing both with large and limited training
data.\n",
"report-no": null,
"categories": [
"cs.CL cs.LG"
],
"versions": [
"v1",
"v2",
"v3",
"v4",
"v5"
]
}
```
### Data Fields
These fields are detailed on the [arXiv](https://arxiv.org/help/prep):
- `id`: ArXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report Number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the ArXiv system
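A sketch of loading the metadata and filtering by arXiv category; the repository id follows this card and the category string is only an example.
```python
from datasets import load_dataset

ds = load_dataset("gfissore/arxiv-abstracts-2021", split="train")
cs_cl = ds.filter(lambda x: any("cs.CL" in c for c in x["categories"]))
print(len(cs_cl), "papers tagged cs.CL")
```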
### Data Splits
No splits
## Dataset Creation
### Curation Rationale
For about 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming, depth. In these times of unique global challenges, efficient extraction of insights from data is essential. The `arxiv-abstracts-2021` dataset aims at making the arXiv more easily accessible for machine learning applications, by providing important metadata (including title and abstract) for ~2 million papers.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The language producers are members of the scientific community at large, but not necessarily affiliated to any institution.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The full names of the papers' authors are included in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original data is maintained by [ArXiv](https://arxiv.org/)
### Licensing Information
The data is under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` | gfissore/arxiv-abstracts-2021 | [
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"task_ids:explanation-generation",
"task_ids:text-simplification",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"arxiv:1905.00075",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["summarization", "text-retrieval", "text2text-generation"], "task_ids": ["explanation-generation", "text-simplification", "document-retrieval", "entity-linking-retrieval", "fact-checking-retrieval"], "pretty_name": "arxiv-abstracts-2021"} | 2022-10-27T16:08:00+00:00 |
8f4deb948be91a72eefc1fff64f5e70d1c7dc1de | annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
paperswithcode_id: bc4chemd
pretty_name: BC4CHEMD
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
# Dataset Card for BC4CHEMD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
https://biocreative.bioinformatics.udel.edu/tasks/biocreative-v/track-3-cdr/
- **Repository:** https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BC4CHEMD
- **Paper:** BioCreative V CDR task corpus: a resource for chemical disease relation extraction
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Zhiyong Lu](mailto:[email protected])
### Dataset Summary
A corpus for both named entity recognition and chemical-disease relations in the literature. A total of 1500 articles have been annotated with automated assistance from PubTator. Jaccard agreement results and corpus statistics verified the reliability of the corpus.
### Supported Tasks and Leaderboards
named-entity-recognition
### Languages
en
## Dataset Structure
### Data Instances
Instances of the dataset contain an array of `tokens`, `ner_tags` and an `id`. An example of an instance of the dataset:
{
  'tokens': ['DPP6', 'as', 'a', 'candidate', 'gene', 'for', 'neuroleptic', '-', 'induced', 'tardive', 'dyskinesia', '.'],
  'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  'id': '0'
}
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` marks a token outside any chemical mention, `1` marks the first token of a chemical mention and `2` the subsequent tokens of that mention (see the sketch below).
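A small sketch that groups tokens into chemical mentions using this tag scheme:
```python
def mentions(tokens, ner_tags):
    """Group tokens into mentions (1 = start, 2 = continuation, 0 = outside)."""
    spans, current = [], []
    for tok, tag in zip(tokens, ner_tags):
        if tag == 1:
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == 2 and current:
            current.append(tok)
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans
```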
### Data Splits
The data is split into a train (3500 instances), validation (3500 instances) and test set (3000 instances).
## Dataset Creation
### Curation Rationale
The goal of the dataset is to improve the state of the art in chemical name recognition and normalization research by providing a high-quality gold standard, thus enabling the development of machine-learning-based approaches for such tasks.
### Source Data
#### Initial Data Collection and Normalization
The dataset consists of abstracts extracted from PubMed.
#### Who are the source language producers?
The source language producers are the authors of publication abstracts hosted in PubMed.
### Annotations
#### Annotation process
The curators were trained to mark up the text according to the labels specified in the guidelines. The raw text was not tokenized prior to the annotation and only the title was distinguished from the PubMed abstract. The selection of text spans was done at the character level, they did not allow nested annotations and distinct entity mentions should not overlap. Each text span was selected according to the annotation guidelines and classified manually into one of the CEM classes.
#### Who are the annotators?
The group of curators used for preparing the annotations was composed mainly of organic chemistry postgraduates with an average experience of 3-4 years in the annotation of chemical names and chemical structures.
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
To avoid annotator bias, pairs of annotators were chosen randomly for each set, so that each pair of annotators overlapped for at most two sets.
### Discussion of Biases
The used CHEMDNER document set had to be representative and balanced in order to reflect the kind of documents that might mention the entity of interest.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | ghadeermobasher/BC5CDR-Chemical-Disease | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-25T10:31:51+00:00 |
448370989f17daccc03447dfe16cf588a0075e57 | # AO3 Style Change
A Style Change detection dataset in the style of the PAN21 challenge but on much longer data (>10,000 tokens).
Warning: Due to the fanfiction source, this does contain some NSFW language. | ghomasHudson/ao3_style_change | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-09T20:37:28+00:00 |
b8d98fb25c8aeda712dfc382c5875aee2c2da458 | # HotpotQA-extended
> Version of the HotpotQA dataset with full Wikipedia articles.
The HotpotQA dataset consists of questions from crowd workers which require information from multiple Wikipedia articles in order to answer, thus testing the ability of models to perform multi-hop question answering. The data is commonly presented as a list of paragraphs containing relevant information, plus a setting where the addition of 'distractor paragraphs' fully tests the ability of the model to comprehend which information is relevant to the question asked.
In this dataset, we increase the length of the inputs by expanding each paragraph with its full Wikipedia page as well as adding additional distractor articles from similar topics in order to meet the 10,000 token minimum length requirement for this benchmark. | ghomasHudson/hotpotExtended | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-13T21:45:03+00:00 |
41ad346644ee5f4284a280a6c001716b5e3d881b | Filtered ContraPro dataset for long document translation. | ghomasHudson/long_contra_pro | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-07-07T11:26:30+00:00 |
eb92b66ad9d8b6a59cad50beccfc170346a013c8 |
# MuLD
> The Multitask Long Document Benchmark

MuLD (Multitask Long Document Benchmark) is a set of 6 NLP tasks where the inputs consist of at least 10,000 words. The benchmark covers a wide variety of task types including translation, summarization, question answering, and classification. Additionally there is a range of output lengths from a single word classification label all the way up to an output longer than the input text.
- **Repository:** https://github.com/ghomasHudson/muld
- **Paper:** https://arxiv.org/abs/2202.07362
### Supported Tasks and Leaderboards
The 6 MuLD tasks consist of:
- **NarrativeQA** - A question answering dataset requiring an understanding of the plot of books and films.
- **HotpotQA** - An expanded version of HotpotQA requiring multihop reasoning between multiple wikipedia pages. This expanded version includes the full Wikipedia pages.
- **OpenSubtitles** - A translation dataset based on the OpenSubtitles 2018 dataset. The entire subtitles for each tv show is provided, one subtitle per line in both English and German.
- **VLSP (Very Long Scientific Papers)** - An expanded version of the Scientific Papers summarization dataset. Instead of removing very long papers (e.g. thesis), we explicitly include them removing any short papers.
- **AO3 Style Change Detection** - Consists of documents formed from the work of multiple [Archive of Our Own](ao3.org) authors, where the task is to predict the author for each paragraph.
- **Movie Character Types** - Predicting whether a named character is the Hero/Villain given a movie script.
### Dataset Structure
The data is presented in a text-to-text format where each instance contains a input string, output string and (optionally) json encoded metadata.
```
{'input: 'Who was wearing the blue shirt? The beginning...', 'output': ['John'], 'metadata': ''}
```
### Data Fields
- `input`: a string which has a differing structure per task but is presented in a unified format
- `output`: a list of strings where each is a possible answer. Most instances only have a single answer, but some such as narrativeQA and VLSP may have multiple.
- `metadata`: Additional metadata which may be helpful for evaluation. In this version, only the OpenSubtitles task contains metadata (for the ContraPro annotations).
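A loading sketch for one task; the configuration name below is assumed to match the task names above and may differ in the actual loader.
```python
import json
from datasets import load_dataset

ds = load_dataset("ghomasHudson/muld", "NarrativeQA", split="validation")
example = ds[0]
meta = json.loads(example["metadata"]) if example["metadata"] else {}
print(example["input"][:200])
print(example["output"][0], meta)
```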
### Data Splits
Each tasks contains different splits depending what was available in the source datasets:
| Task Name | Train | Validation | Test |
|----------------------------|----|----|-----|
| NarrativeQA | ✔️ | ✔️ | ✔️ |
| HotpotQA | ✔️ | ✔️ | |
| AO3 Style Change Detection | ✔️ | ✔️ | ✔️ |
| Movie Character Types | ✔️ | ✔️ | ✔️ |
| VLSP | | | ✔️ |
| OpenSubtitles | ✔️ | | ✔️ |
### Citation Information
```
@misc{hudson2022muld,
title={MuLD: The Multitask Long Document Benchmark},
author={G Thomas Hudson and Noura Al Moubayed},
year={2022},
eprint={2202.07362},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Please also cite the papers directly used in this benchmark. | ghomasHudson/muld | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:translation",
"task_ids:abstractive-qa",
"annotations_creators:found",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:translation",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"source_datasets:extended|hotpot_qa",
"source_datasets:extended|open_subtitles",
"language:en",
"language:de",
"conditional-text-generation",
"arxiv:2202.07362",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found", "crowdsourced"], "language_creators": ["found"], "language": ["en", "de"], "license": [], "multilinguality": ["translation", "monolingual"], "size_categories": ["unknown"], "source_datasets": ["original", "extended|hotpot_qa", "extended|open_subtitles"], "task_categories": ["question-answering", "summarization", "text-generation", "translation"], "task_ids": ["abstractive-qa"], "pretty_name": "The Multitask Long Document Benchmark", "tags": ["conditional-text-generation"]} | 2022-11-02T12:55:17+00:00 |
0458b63225091d3bf55d72492c3aa60419fd6f4b |
# Dataset Card for vlsp
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/ghomasHudson/very_long_scientific_papers
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Dataset following the methodology of the scientific_papers dataset, but specifically designed for very long documents (>10,000 words). This is gathered from arxiv.org by searching for theses.
The dataset has 2 features:
- article: the body of the document.
- abstract: the abstract of the document.
### Supported Tasks and Leaderboards
Summarization
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
Only a test set is provided.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
| ghomasHudson/vlsp | [
"language:en",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"]} | 2022-10-25T08:20:37+00:00 |
643cc6391a43781f688022acd18b872d0789c309 |
## Dataset Description
- **Homepage:** http://www.openslr.org/57/
### Dataset Summary
This corpus consists of approximately 22 hours of speech recordings. Transcripts are provided for all the recordings. The corpus can be divided into 3 parts:
1. Yaounde
Collected by a team from the U.S. Military Academy's Center for Technology Enhanced Language Learning (CTELL) in 2003 in Yaoundé, Cameroon. It has recordings from 84 speakers, 48 male and 36 female.
2. CA16
This part was collected by a RDECOM Science Team who participated in the United Nations exercise Central Accord 16 (CA16) in Libreville, Gabon in June 2016. The Science Team included DARPA's Dr. Boyan Onyshkevich and Dr. Aaron Lawson (SRI International), as well as RDECOM scientists. It has recordings from 125 speakers from Cameroon, Chad, Congo and Gabon.
3. Niger
This part was collected from 23 speakers in Niamey, Niger, Oct. 26-30 2015. These speakers were students in a course for officers and sergeants presented by Army trainers assigned to U.S. Army Africa. The data was collected by RDECOM Science & Technology Advisors Major Eddie Strimel and Mr. Bill Bergen.
### Languages
French
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called audio and its sentence.
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- sentence: The sentence the user was prompted to speak
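A loading sketch that resamples the audio to 16 kHz, as commonly required by ASR models; the repository id follows this card.
```python
from datasets import load_dataset, Audio

ds = load_dataset("gigant/african_accented_french", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]
print(sample["sentence"], sample["audio"]["array"].shape)
```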
### Data Splits
The speech material has been subdivided into portions for train and test.
The train split consists of 9401 audio clips and the related sentences.
The test split consists of 1985 audio clips and the related sentences.
### Contributions
[@gigant](https://huggingface.co/gigant) added this dataset. | gigant/african_accented_french | [
"task_categories:automatic-speech-recognition",
"language:fr",
"license:cc",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["fr"], "license": "cc", "size_categories": {"fr": ["10K<n<100K"]}, "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "African Accented French"} | 2022-10-24T16:39:03+00:00 |
71ec8b9e1b5351ea514cdf748c92592b13b14175 |
## Dataset Description
- **Homepage:** https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/
### Dataset Summary
The M-AILABS Speech Dataset is the first large dataset that we are providing free-of-charge, freely usable as training data for speech recognition and speech synthesis.
Most of the data is based on LibriVox and Project Gutenberg. The training data consist of nearly a thousand hours of audio and the text files in prepared format.
A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds and have a total length of approximately shown in the list (and in the respective info.txt-files) below.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain – except for Ukrainian.
Ukrainian audio was kindly provided either by Nash Format or Gwara Media for machine learning purposes only (please check the data info.txt files for details).
### Languages
French
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called audio and its sentence.
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- sentence: The sentence the user was prompted to speak
### Data Splits
The speech material has not been subdivided into portions, everything is in the "train" split.
The train split consists of 82825 audio clips and the related sentences.
### Contributions
[@gigant](https://huggingface.co/gigant) added this dataset. | gigant/m-ailabs_speech_dataset_fr | [
"task_categories:automatic-speech-recognition",
"language:fr",
"license:cc",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["fr"], "license": "cc", "size_categories": {"fr": ["10K<n<100K"]}, "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "M-AILABS Speech Dataset (French)"} | 2022-10-24T16:38:45+00:00 |
863b81ce584d8e6b20fc8ce509dd53d85f2cb4d7 | gigant/ro_corpora_parliament_processed | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-02T15:29:18+00:00 |
|
b4dd8109d62276134bdc035cb274018825428582 |
## Dataset Description
- **Homepage:** https://romaniantts.com/rssdb/
- **Paper:** https://www.sciencedirect.com/science/article/abs/pii/S0167639310002074
### Dataset Summary
The Romanian speech synthesis (RSS) corpus was recorded in a hemianechoic chamber (anechoic walls and ceiling; floor partially anechoic) at the University of Edinburgh. We used three high quality studio microphones: a Neumann u89i (large diaphragm condenser), a Sennheiser MKH 800 (small diaphragm condenser with very wide bandwidth) and a DPA 4035 (headset-mounted condenser). Although the current release includes only speech data recorded via Sennheiser MKH 800, we may release speech data recorded via other microphones in the future. All recordings were made at 96 kHz sampling frequency and 24 bits per sample, then downsampled to 48 kHz sampling frequency. For recording, downsampling and bit rate conversion, we used ProTools HD hardware and software. We conducted 8 sessions over the course of a month, recording about 500 sentences in each session. At the start of each session, the speaker listened to a previously recorded sample, in order to attain a similar voice quality and intonation.
### Languages
Romanian
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called audio and its sentence.
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- sentence: The sentence the user was prompted to speak
### Data Splits
The speech material has been subdivided into portions for train and test.
The train split consists of 3180 audio clips and the related sentences.
The test split consists of 536 audio clips and the related sentences.
### Citation Information
```
@article{Stan2011442,
author = {Adriana Stan and Junichi Yamagishi and Simon King and
Matthew Aylett},
title = {The {R}omanian speech synthesis ({RSS}) corpus:
Building a high quality {HMM}-based speech synthesis
system using a high sampling rate},
journal = {Speech Communication},
volume = {53},
number = {3},
pages = {442--450},
note = {},
abstract = {This paper first introduces a newly-recorded high
quality Romanian speech corpus designed for speech
synthesis, called ''RSS'', along with Romanian
front-end text processing modules and HMM-based
synthetic voices built from the corpus. All of these
are now freely available for academic use in order to
promote Romanian speech technology research. The RSS
corpus comprises 3500 training sentences and 500 test
sentences uttered by a female speaker and was recorded
using multiple microphones at 96 kHz sampling
frequency in a hemianechoic chamber. The details of the
new Romanian text processor we have developed are also
given. Using the database, we then revisit some basic
configuration choices of speech synthesis, such as
waveform sampling frequency and auditory frequency
warping scale, with the aim of improving speaker
similarity, which is an acknowledged weakness of
current HMM-based speech synthesisers. As we
demonstrate using perceptual tests, these configuration
choices can make substantial differences to the quality
of the synthetic speech. Contrary to common practice in
automatic speech recognition, higher waveform sampling
frequencies can offer enhanced feature extraction and
improved speaker similarity for HMM-based speech
synthesis.},
doi = {10.1016/j.specom.2010.12.002},
issn = {0167-6393},
keywords = {Speech synthesis, HTS, Romanian, HMMs, Sampling
frequency, Auditory scale},
url = {http://www.sciencedirect.com/science/article/pii/S0167639310002074},
year = 2011
}
```
### Contributions
[@gigant](https://huggingface.co/gigant) added this dataset. | gigant/romanian_speech_synthesis_0_8_1 | [
"task_categories:automatic-speech-recognition",
"language:ro",
"license:unknown",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["ro"], "license": ["unknown"], "size_categories": {"ro": ["1K<n<10K"]}, "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Romanian Speech Synthesis"} | 2022-10-24T16:38:35+00:00 |
47ca07324dea12a571fa09411bba27e4ede64fa9 | giganticode/java-cmpx-v1 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"size_categories:unknown",
"license:mit",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["java"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": ["java-cmpx"]} | 2022-07-01T19:32:52+00:00 |
|
0375da233f178717aa85164da93ebd223ba2dda0 | giganticode/java-cmpx | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"size_categories:unknown",
"license:mit",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["java"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": ["java-cmpx"]} | 2022-07-01T19:33:03+00:00 |
|
55d70dc0b1d1d0b2151c5e22815d823fedac3f2f | The TICO-19 evaluation set provides:
* Predefined dev and test splits. We provide English-XX translation files under both the `dev` and `test` directories.
* The dev set includes 971 sentences, and the test set includes 2100 sentences.
* The corresponding IDs are listed in the `dev.ids` and `test.ids` files.
The format of the files is:
~~~
{sourceLang}\t{targetLang}\t{sourceString}\t{targetString}\t{stringID}\t{sourceURL}\t{license}\t{translator_ID}
~~~
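For illustration, a line from one of these tab-separated files could be parsed as follows (a minimal sketch; the key names simply mirror the template above):
```python
def parse_tico_line(line: str) -> dict:
    keys = ["sourceLang", "targetLang", "sourceString", "targetString",
            "stringID", "sourceURL", "license", "translator_ID"]
    # Fields are tab-separated, in the order shown in the template above
    return dict(zip(keys, line.rstrip("\n").split("\t")))
```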
Currently available languages:
* Amharic (am)
* Arabic (ar)
* Bengali (bn)
* Kurdish Sorani (ckb)
* Latin American Spanish (es-LA)
* Farsi (fa)
* French (fr)
* Nigerian Fulfulde (fuv)
* Hausa (ha)
* Hindi (hi)
* Indonesian (id)
* Kurdish Kurmanji (ku)
* Lingala (ln)
* Luganda (lg)
* Marathi (mr)
* Malay (ms)
* Myanmar (my)
* Nepali (ne)
* Oromo (om)
* Dari (prs)
* Pashto (ps)
* Brazilian Portuguese (pt-BR)
* Russian (ru)
* Kinyarwanda (rw)
* Somali (so)
* kiSwahili (sw)
* Ethiopian Tigrinya (ti)
* Tagalog (tl)
* Urdu (ur)
* Chinese (Simplified) (zh)
* Zulu (zu)
All translations are released under a CC-0 license. | gmnlp/tico19 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-10-03T18:00:13+00:00 |
79987d1537e8f14b28d69214ec5f14704a9edc64 |
# Turkish TED talk translations
## Created from the ted_multi dataset
The processing steps are included below in case you want to build the same dataset for another target language.
```python
#using Turkish as target
target_lang="tr" # change to your target lang
from datasets import load_dataset
#ted_multi is a multilingual translated dataset
#it fits our case: not too big, already curated, and it only needs simple processing
dataset = load_dataset("ted_multi")
dataset.cleanup_cache_files()
#the original regex from Patrick's example:
#chars_to_ignore_regex = '[,?.!\-\;\:\"“%‘”�—’…–]' # change to the ignored characters of your fine-tuned model
#we will use cahya/wav2vec2-base-turkish-artificial-cv,
#so we checked inside that model repository to find which characters were removed (no run.sh available)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
import re
def extract_target_lang_entries(batch):
    #specific mapping for the ted_multi dataset:
    #we need to find the index of the target language in each translation list, as its position can change
    try:
        target_index_for_lang = batch["translations"]["language"].index(target_lang)
    except ValueError:
        #target language not in the list: set text to None and filter it out later
        batch["text"] = None
        return batch
    text = batch["translations"]["translation"][target_index_for_lang]
    batch["text"] = re.sub(chars_to_ignore_regex, "", text.lower())
    return batch
#this dataset has additional columns that we need to remove explicitly
cols_to_remove = ['translations', 'talk_name']
dataset = dataset.map(extract_target_lang_entries, remove_columns=cols_to_remove)
#during preprocessing we tagged missing target translations with None, so we filter them out here
dataset_cleaned = dataset.filter(lambda x: x['text'] is not None)
dataset_cleaned
from huggingface_hub import notebook_login
notebook_login()
dataset_cleaned.push_to_hub(f"{target_lang}_ted_talk_translated")
``` | gorkemgoknar/tr_ted_talk_translated | [
"language:tr",
"license:apache-2.0",
"dataset",
"turkish",
"ted-multi",
"cleaned",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["tr"], "license": "apache-2.0", "tags": ["dataset", "turkish", "ted-multi", "cleaned"], "datasets": ["ted-multi"]} | 2022-01-13T09:14:54+00:00 |
ceb0129e499ea5344dba1391c0a046222ddba631 |
# Dataset Card for CHANGE-IT
## Table of Contents
- [Dataset Card for CHANGE-IT](#dataset-card-for-change-it)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Style Transfer](#style-transfer)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://live.european-language-grid.eu/catalogue/corpus/7373](https://live.european-language-grid.eu/catalogue/corpus/7373)
- **Repository:** [Github](https://github.com/michelecafagna26/CHANGE-IT)
- **Paper:** [CEUR-ws.org](http://ceur-ws.org/Vol-2765/paper169.pdf)
- **Video** [Vimeo](https://vimeo.com/484098874)
- **Point of Contact:** [Lorenzo De Mattei]([email protected])
- **Size of downloaded dataset files:** 168.7 MB
- **Size of the generated dataset:** 411 MB
- **Total amount of disk used:** 579.7 MB
### Dataset Summary
The CHANGE-IT dataset contains approximately 152,000 article-headline pairs, collected from two Italian newspapers situated at opposite ends of the political spectrum, namely la Repubblica (left) and Il Giornale (right), with the two newspapers equally represented. The dataset has been used in the context
of the [CHANGE-IT task](https://sites.google.com/view/change-it) during the [Evalita 2020 evaluation campaign](http://www.evalita.it/2020). CHANGE-IT is a generation task for Italian – more specifically, a style transfer task for headlines of Italian newspapers. Given a (collection of) headlines from one newspaper, namely Il Giornale (G) or La Repubblica (R), it challenges automatic systems to change all G-headlines to headlines in style R, and all R-headlines to headlines in style G. Although the task only concerns headline change, the dataset comprehends both the headlines as well as their respective full articles.
**Disclaimer**: *The CHANGE-IT dataset is hosted by the [European Language Grid](https://live.european-language-grid.eu/) and licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). To use the dataset using* 🤗 *Datasets, download and unzip the folder from its [ELG page](https://live.european-language-grid.eu/catalogue/corpus/7373) and pass it to the* `load_dataset` *method as:* `datasets.load_dataset('gsarti/change_it', data_dir='path/to/unzipped/folder')`
### Supported Tasks and Leaderboards
#### Style Transfer
The following table is taken from Table 4 of the original paper, where a *pointer-network* architecture is used as a baseline to perform style transfer in two settings. In the **rep2gio** variant the system is trained to summarize Repubblica headlines from full texts (vice versa for **gio2rep**), and the style transfer is performed by summarizing full texts of the other newspaper in the source newspaper's headline style. **avg** is the average of the two settings.
| | HH| AH|Main|Compliancy|
|--------:|---:|---:|---:|---------:|
|`rep2gio`|.649|.876|.799| .449|
|`gio2rep`|.639|.871|.435| .240|
| `avg`|.644|.874|.616| .345|
Here **Main**, **HH** and **AH** are all BERT-base models trained to evaluate the quality of style transfer as follows:
- **Main**: the model is trained to classify a generated headline either as `ilgiornale` or `repubblica`, achieving ~80% F1 score on gold data. Tests whether the transfer has been successful.
- **Headline-Headline (HH)**: the model is trained to check the compatibility between original and generated headlines. Tests whether the generation is coherent with the reference.
- **Article-Headline (AH)**: the model is trained to check the compatibility between original fulltext article and generated headlines. Tests whether the generation is coherent with the source article.
The final metric, **Overall compliancy**, is a binary metric that is positive if the other three metrics match (**Main** decision is reversed, **HH** and **AH** predict match), and negative otherwise. Refer to Section 3 of the original paper for more details.
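As an illustrative sketch only (not the official evaluation code), the compliancy decision for a single generated headline can be thought of as:
```python
def overall_compliancy(main_predicts_target_style: bool,
                       hh_predicts_match: bool,
                       ah_predicts_match: bool) -> bool:
    # Positive only when the style classifier assigns the generated headline to the
    # target newspaper (i.e. the original label is "reversed") and both the
    # headline-headline and article-headline classifiers predict a match.
    return main_predicts_target_style and hh_predicts_match and ah_predicts_match
```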
### Languages
The language data in CHANGE-IT is in Italian (BCP-47 `it`)
## Dataset Structure
### Data Instances
A sample from the `test` split of the `ilgiornale` config is provided below. The other configuration, `repubblica`, has the same structure.
```json
{
"id": 0,
"headline": "Ucraina, coalizione della Timoshenko denuncia irruzione nella sede",
"full_text": "Rimane alta la tensione in Ucraina , dove da giorni i manifestanti scendono in piazza per protestare contro la decisione del presidente Viktor Yanukovich, che ha deciso di congelare l'accordo di associazione con l'Unione Europea. Il momento è molto delicato. L'opposizione teme una repressione violenza della protesta, con le forze speciali che hanno costretto i manifestanti a Kiev ad allontanarsi dalla sede del governo, per ripiegare su piazza Indipendenza. Il leader d'opposizione Vitaly Klitschko ha invitato il presidente a non utilizzare la forza, se non vuole avere il sangue dei manifestanti sulle sue mani. Nel frattempo il presidente Yanukovich ha aperto alla possibilità di un dialogo, annunciando per domani un incontro con i suoi due predecessori, Leonid Kuchma e Viktor Yushchenko. Ieri un milioni di persone sono scese in piazza, scaduti i due giorni di ultimatum dati al governo per indire nuove elezioni, I manifestanti hanno rovesciato la grande statua di Lenin posta sul boulevard Shevchenko. Piazza Indipendenza (Maidan Nezalezhnosti) resta il punto più caldo della capitale. Qui sono state erette barricate davanti agli ingressi della metropolitana, nel tentativo di preparsi a un'azione della polizia, che al momento non ha però preso iniziative contro i dimostranti. In serata Batkivshcyna, la coalizione dell'ex premier Yulia Timoshenko , ha denunciato l'irruzione di almeno venti agenti della polizia antisommossa nel proprio quartier generale. Il portavoce della polizia, Olga Bilyk, ha smentito: \"Né la polizia di Kiev, né la Berkut - ha dichiarato - hanno condotto operazioni nella sede\".",
"alignment": "A2"
}
```
The text is provided as-is, without further preprocessing or tokenization.
### Data Fields
- `headline`: The original headline for the newspaper.
- `full_text`: The article full text associated to the respective headline.
- `alignment`: The alignment value used for the style transfer experiments. Values:
- `A1`: Top 5K pairs, highly aligned.
- `A2`: Test set, highly aligned.
- `A3`: 10K to 20K pairs, fairly aligned.
- `R`: Bottom ~50K pairs, weakly/not aligned.
### Data Splits
| config| train| test|
|---------:|-------------------------------------:|-----------:|
|`ilgiornale`|5'000 (A1) + 10'000 (A3) + 48'701 (R) | 5'000 (A2) |
|`repubblica`|5'000 (A1) + 10'000 (A3) + 48'701 (R) | 5'000 (A2) |
### Dataset Creation
Please refer to the original article [CHANGE-IT @ EVALITA 2020: Change Headlines, Adapt News, GEnerate](http://ceur-ws.org/Vol-2765/paper169.pdf) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The organizers of the CHANGE-IT shared tasks are the curators of the original dataset. For problems or updates on the 🤗 Datasets version, please contact [[email protected]](mailto:[email protected]).
### Licensing Information
Licensed with Creative Commons Attribution Non Commercial Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```
@inproceedings{demattei-etal-2020-changeit,
author = {De Mattei, Lorenzo and Cafagna, Michele and Dell'Orletta, Felice and Nissim, Malvina and Gatt, Albert},
title = {{CHANGE-IT @ EVALITA 2020}: Change Headlines, Adapt News, GEnerate},
booktitle = {Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020)},
editor = {Basile, Valerio and Croce, Danilo and Di Maro, Maria, and Passaro, Lucia C.},
publisher = {CEUR.org},
year = {2020},
address = {Online}
}
```
| gsarti/change_it | [
"task_categories:summarization",
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:it",
"license:cc-by-nc-sa-4.0",
"conditional-text-generation",
"style-transfer",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["it"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["summarization", "text-generation"], "task_ids": [], "pretty_name": "change-it", "tags": ["conditional-text-generation", "style-transfer"]} | 2022-10-27T07:37:09+00:00 |
8281df3f5a2e765a5cc30e4feacac61e94ffdce4 |
# Dataset Card for Clean Italian mC4 🇮🇹
## Table of Contents
- [Dataset Card for Clean Italian mC4](#dataset-card-for-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Preprocessing](#preprocessing)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A thoroughly cleaned version of the Italian split of the multilingual colossal, cleaned version of Common Crawl's web crawl corpus (mC4). Based on the [Common Crawl dataset](https://commoncrawl.org). The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4), with subsequent preprocessing performed by [Gabriele Sarti](https://gsarti.com) following a standard procedure for all dataset shards.
### Preprocessing
The preprocessing of the dataset follows the procedure used by Yeb Havinga for training the model [`t5-base-dutch`](https://huggingface.co/flax-community/t5-base-dutch) on a portion of the cleaned Dutch split of mC4. The original code, that was adapted for Italian in this case, is available on [GitLab](https://gitlab.com/yhavinga/c4nlpreproc). In summary, the preprocessing procedure includes:
- Removing documents containing words from a selection of the [Italian and English List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words).
- Removing sentences containing:
- Less than 3 words.
- A word longer than 1000 characters.
- An end symbol not matching end-of-sentence punctuation.
- Strings associated to javascript code (e.g. `{`), lorem ipsum, policy information in Italian or English.
- Removing documents (after sentence filtering):
- Containing less than 5 sentences.
- Containing less than 500 or more than 50'000 characters.
- Not identified as prevalently Italian by the `LangDetect` package.
Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Italian shards of mC4 (1024 of ~220Mb train, 8 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence tokenization and language detection. The total size of compressed `.json.gz` files is roughly halved after the procedure.
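For illustration only, a simplified sketch of the sentence-level filters listed above (the actual preprocessing code, with additional checks and the full list of filtered strings, is in the GitLab repository):
```python
def keep_sentence(sentence: str) -> bool:
    """Simplified sketch of the sentence-level filters; not the exact c4nlpreproc rules."""
    sentence = sentence.strip()
    words = sentence.split()
    if len(words) < 3:                                        # less than 3 words
        return False
    if any(len(w) > 1000 for w in words):                     # a word longer than 1000 characters
        return False
    if not sentence or sentence[-1] not in '.!?…"»':          # end symbol is not sentence punctuation
        return False
    if "{" in sentence or "lorem ipsum" in sentence.lower():  # javascript code / lorem ipsum
        return False
    return True
```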
## Dataset Structure
### Data Instances
An example from the dataset:
```
{
'timestamp': '2020-02-22T22:24:31Z',
'url': 'https://altreconomia.it/una-rotonda-sul-pane/',
'text': 'Per raggiungere il campo attraversiamo la striscia d’asfalto che porta verso la provinciale numero 13. Mettiamo a rischio la nostra incolumità in un territorio di auto e camion. Sullo sfondo, i profili della Grigna e del Resegone. Più vicini, quelli del solito ipermercato di provincia, e delle villette a schiera che avanzano tra le coltivazioni. È lo sprawling, l’avanzata del cemento.\\nDa questo lato dalla strada, invece, è ancora regno contadino. Almeno per ora. Torniamo a Caponago (Mb), Brianza pura, dove ha avuto i natali il progetto “Spiga e madia”. Ne parlammo su Ae nel gennaio 2009: in un territorio “spaesato”, il Comitato “verso il Distretto di economia solidale della Brianza” (Desbri) e la “Retina” dei gruppi di acquisto locali danno vita a un progetto di produzione di frumento, molitura, panificazione e distribuzione in un raggio di 20 chilometri. Si comincia da zero, nel 2007, senza alcun di finanziamento, quando una famiglia del [...]. Il giochino vale almeno 3 miliardi di euro all’anno. La misura, introdotta in via straordinaria con la finanziaria 2005, è stata prorogata anche con l’ultimo decreto “milleproroghe”.'
}
```
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
### Data Splits
To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. For Italian, the whole corpus of scraped text was divided in `1032` jsonl files, `1024` for training following the naming style `c4-it.tfrecord-0XXXX-of-01024.json.gz` and 8 for validation following the naming style `c4-it-validation.tfrecord-0000X-of-00008.json.gz`. The full set of preprocessed files takes roughly 215GB of disk space to download with Git LFS.
For ease of use under different storage capacities, the following incremental splits are available (sizes are estimates). **Important**: The two GB figures given for each train split represent, respectively, the estimated download size of the compressed files and the disk space required after preprocessing:
|split |train size (docs, words, download + preproc disk space)|validation size|
|:-----|------------------------------------------------------:|--------------:|
|tiny | 10M docs, 4B words (9 GB + 27 GB) | 12k docs |
|small | 20M docs, 8B words (18 GB + 54 GB) | 24k docs |
|medium| 50M docs, 20B words (47 GB + 135 GB) | 48k docs |
|large | 75M docs, 30B words (71 GB + 203 GB) | 72k docs |
|full | 103M docs, 41B words (109 GB + 279 GB) | 96k docs |
You can load any subset like this:
```python
from datasets import load_dataset
mc4_it_tiny = load_dataset("gsarti/clean_mc4_it", "tiny")
```
Since splits are quite large, you may want to traverse them using the streaming mode available starting from 🤗 Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_it_full_stream = load_dataset("gsarti/clean_mc4_it", "full", split='train', streaming=True)
print(next(iter(mc4_it_full_stream))) # Prints the example presented above
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Social Impact of Dataset
With more than 200GB of cleaned Italian text and more than 41B estimated words, this is by far the largest available corpus for the Italian language. The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 69GB in size for its deduplicated variant. Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language. This can in turn have important repercussions for the development of commercial language technology applications for the Italian language.
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.
## Additional Information
### Dataset Curators
Authors at AllenAI are the original curators for the `mc4` corpus. For inquiries or requests regarding the Italian cleaned portion contained in this repository, please contact me at [[email protected]](mailto:[email protected])
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
If you use this dataset in your work, please cite us and the original mC4 authors as:
```
@article{sarti-nissim-2022-it5,
title={IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
@inproceedings{xue-etal-2021-mt5,
title = "m{T}5: A Massively Multilingual Pre-trained Text-to-Text Transformer",
author = "Xue, Linting and
Constant, Noah and
Roberts, Adam and
Kale, Mihir and
Al-Rfou, Rami and
Siddhant, Aditya and
Barua, Aditya and
Raffel, Colin",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.41",
doi = "10.18653/v1/2021.naacl-main.41",
pages = "483--498",
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| gsarti/clean_mc4_it | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended",
"language:it",
"license:odc-by",
"arxiv:1910.10683",
"arxiv:2203.03759",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["it"], "license": ["odc-by"], "multilinguality": ["monolingual"], "size_categories": {"tiny": ["1M<n<10M"], "small": ["10M<n<100M"], "medium": ["10M<n<100M"], "large": ["10M<n<100M"], "full": ["100M<n<1B"]}, "source_datasets": ["extended"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "mc4", "pretty_name": "mC4_it"} | 2022-10-23T08:01:21+00:00 |
bc58ae43b22607b3e1e2bf3ae1bc5cb053495abb |
# Dataset Card for Flores 101
## Table of Contents
- [Dataset Card for Flores 101](#dataset-card-for-flores-101)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Home:** [WMT](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html)
- **Repository:** [Github](https://github.com/facebookresearch/flores)
- **Blogpost:** [FAIR](https://ai.facebook.com/blog/the-flores-101-data-set-helping-build-better-translation-systems-around-the-world)
- **Paper:** [Arxiv](https://arxiv.org/abs/2106.03193)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Leaderboard** [Dynabench](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL))
### Dataset Summary
FLORES is a benchmark dataset for machine translation between English and low-resource languages.
Abstract from the original paper:
> One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
**Disclaimer**: *The Flores-101 dataset is hosted by Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).*
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html).
### Languages
The dataset contains parallel sentences for 101 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) as in the original dataset.
**New:** Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command.
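For example (a minimal sketch using the 🤗 Datasets loader):
```python
from datasets import load_dataset

flores_rus = load_dataset("gsarti/flores_101", "rus")  # a single language
flores_all = load_dataset("gsarti/flores_101", "all")  # all languages at once
```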
## Dataset Structure
### Data Instances
A sample from the `dev` split for the Russian language (`rus` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
'id': 1,
'sentence': 'В понедельник ученые из Медицинской школы Стэнфордского университета объявили об изобретении нового диагностического инструмента, который может сортировать клетки по их типу; это маленький чип, который можно напечатать, используя стандартный струйный принтер примерно за 1 цент США.',
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0
}
```
The text is provided as-is from the original dataset, without further preprocessing or tokenization.
### Data Fields
- `id`: Row number for the data entry, starting at 1.
- `sentence`: The full sentence in the specific language.
- `URL`: The URL for the English article from which the sentence was extracted.
- `domain`: The domain of the sentence.
- `topic`: The topic of the sentence.
- `has_image`: Whether the original article contains an image.
- `has_hyperlink`: Whether the sentence contains a hyperlink.
### Data Splits
| config| `dev`| `devtest`|
|-----------------:|-----:|---------:|
|all configurations| 997| 1012|
### Dataset Creation
Please refer to the original article [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of FLORES-101 are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [[email protected]](mailto:[email protected]).
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@inproceedings{flores101,
title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
journal={arXiv preprint arXiv:2106.03193},
year={2021}
}
``` | gsarti/flores_101 | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|flores",
"language:af",
"language:am",
"language:ar",
"language:hy",
"language:as",
"language:ast",
"language:az",
"language:be",
"language:bn",
"language:bs",
"language:bg",
"language:my",
"language:ca",
"language:ceb",
"language:zho",
"language:hr",
"language:cs",
"language:da",
"language:nl",
"language:en",
"language:et",
"language:tl",
"language:fi",
"language:fr",
"language:ff",
"language:gl",
"language:lg",
"language:ka",
"language:de",
"language:el",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hu",
"language:is",
"language:ig",
"language:id",
"language:ga",
"language:it",
"language:ja",
"language:jv",
"language:kea",
"language:kam",
"language:kn",
"language:kk",
"language:km",
"language:ko",
"language:ky",
"language:lo",
"language:lv",
"language:ln",
"language:lt",
"language:luo",
"language:lb",
"language:mk",
"language:ms",
"language:ml",
"language:mt",
"language:mi",
"language:mr",
"language:mn",
"language:ne",
"language:ns",
"language:no",
"language:ny",
"language:oc",
"language:or",
"language:om",
"language:ps",
"language:fa",
"language:pl",
"language:pt",
"language:pa",
"language:ro",
"language:ru",
"language:sr",
"language:sn",
"language:sd",
"language:sk",
"language:sl",
"language:so",
"language:ku",
"language:es",
"language:sw",
"language:sv",
"language:tg",
"language:ta",
"language:te",
"language:th",
"language:tr",
"language:uk",
"language:umb",
"language:ur",
"language:uz",
"language:vi",
"language:cy",
"language:wo",
"language:xh",
"language:yo",
"language:zu",
"license:cc-by-sa-4.0",
"conditional-text-generation",
"arxiv:2106.03193",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["expert-generated"], "language": ["af", "am", "ar", "hy", "as", "ast", "az", "be", "bn", "bs", "bg", "my", "ca", "ceb", "zho", "hr", "cs", "da", "nl", "en", "et", "tl", "fi", "fr", "ff", "gl", "lg", "ka", "de", "el", "gu", "ha", "he", "hi", "hu", "is", "ig", "id", "ga", "it", "ja", "jv", "kea", "kam", "kn", "kk", "km", "ko", "ky", "lo", "lv", "ln", "lt", "luo", "lb", "mk", "ms", "ml", "mt", "mi", "mr", "mn", "ne", "ns", "no", "ny", "oc", "or", "om", "ps", "fa", "pl", "pt", "pa", "ro", "ru", "sr", "sn", "sd", "sk", "sl", "so", "ku", "es", "sw", "sv", "tg", "ta", "te", "th", "tr", "uk", "umb", "ur", "uz", "vi", "cy", "wo", "xh", "yo", "zu"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual", "translation"], "size_categories": ["unknown"], "source_datasets": ["extended|flores"], "task_categories": ["text-generation", "translation"], "task_ids": [], "paperswithcode_id": "flores", "pretty_name": "flores101", "tags": ["conditional-text-generation"]} | 2022-10-27T07:37:36+00:00 |
f8f98e5c4d3059cf1a00c8eb3d70aa271423f636 |
# Dataset Card for ItaCoLA
## Table of Contents
- [Dataset Card for ItaCoLA](#dataset-card-for-itacola)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Acceptability Classification](#acceptability-classification)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Scores Configuration](#scores-configuration)
- [Phenomena Configuration](#phenomena-configuration)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/dhfbk/ItaCoLA-dataset)
- **Paper:** [Arxiv](https://arxiv.org/abs/2109.12053)
- **Point of Contact:** [Daniela Trotta]([email protected])
### Dataset Summary
The Italian Corpus of Linguistic Acceptability includes almost 10k sentences taken from linguistic literature with a binary annotation made by the original authors themselves. The work is inspired by the English [Corpus of Linguistic Acceptability](https://nyu-mll.github.io/CoLA/).
**Disclaimer**: *The ItaCoLA corpus is hosted on Github by the [Digital Humanities group at FBK](https://dh.fbk.eu/)*. It was introduced in the article [Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus](https://arxiv.org/abs/2109.12053) by [Daniela Trotta](https://dh.fbk.eu/author/daniela/), [Raffaele Guarasci](https://www.icar.cnr.it/persone/guarasci/), [Elisa Leonardelli](https://dh.fbk.eu/author/elisa/), [Sara Tonelli](https://dh.fbk.eu/author/sara/)
### Supported Tasks and Leaderboards
#### Acceptability Classification
The following table is taken from Table 4 of the original paper, where an LSTM and a BERT model pretrained on the Italian language are fine-tuned on the `train` split of the corpus and evaluated respectively on the `test` split (*In-domain*, `in`) and on the acceptability portion of the [AcCompl-it] corpus (*Out-of-domain*, `out`). Models are evaluated with accuracy (*Acc.*) and Matthews Correlation Coefficient (*MCC*) in both settings. Results are averaged over 10 runs with ±stdev. error bounds.
| | `in`, Acc.| `in`, MCC| `out`, Acc.|`out`, MCC|
|---------:|-----------:|----------:|-----------:|---------:|
|`LSTM` | 0.794 | 0.278 ± 0.029 | 0.605 | 0.147 ± 0.066 |
|`ITA-BERT`| 0.904 | 0.603 ± 0.022 | 0.683 | 0.198 ± 0.036 |
### Languages
The language data in ItaCoLA is in Italian (BCP-47 `it`)
## Dataset Structure
### Data Instances
#### Scores Configuration
The `scores` configuration contains sentences with acceptability judgments. An example from the `train` split of the `scores` config (default) is provided below.
```json
{
"unique_id": 1,
"source": "Graffi_1994",
"acceptability": 1,
"sentence": "Quest'uomo mi ha colpito."
}
```
The text is provided as-is, without further preprocessing or tokenization.
The fields are the following:
- `unique_id`: Unique identifier for the sentence across configurations.
- `source`: Original source for the sentence.
- `acceptability`: Binary score, 1 = acceptable, 0 = not acceptable.
- `sentence`: The evaluated sentence.
#### Phenomena Configuration
The `phenomena` configuration contains a sample of sentences from `scores` that has been manually annotated to denote the presence of 9 linguistic phenomena. An example from the `train` split is provided below:
```json
{
"unique_id": 1,
"source": "Graffi_1994",
"acceptability": 1,
"sentence": "Quest'uomo mi ha colpito.",
"cleft_construction": 0,
"copular_construction": 0,
"subject_verb_agreement": 1,
"wh_islands_violations": 0,
"simple": 0,
"question": 0,
"auxiliary": 1,
"bind": 0,
"indefinite_pronouns": 0
}
```
For each one of the new fields, the value of the binary score denotes the presence (1) or the absence (0) of the respective phenomenon. Refer to the original paper for a detailed description of each phenomenon.
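Both configurations can be loaded as follows (a minimal sketch):
```python
from datasets import load_dataset

itacola_scores = load_dataset("gsarti/itacola", "scores")        # default configuration
itacola_phenomena = load_dataset("gsarti/itacola", "phenomena")  # annotated subset
```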
### Data Splits
| config| train| test|
|----------:|-----:|----:|
|`scores` | 7801 | 975 |
|`phenomena`| 2088 | - |
### Dataset Creation
Please refer to the original article [Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus](https://arxiv.org/abs/2109.12053) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The authors are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [[email protected]](mailto:[email protected]).
### Licensing Information
No licensing information available.
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@inproceedings{trotta-etal-2021-monolingual-cross,
title = "Monolingual and Cross-Lingual Acceptability Judgments with the {I}talian {C}o{LA} corpus",
author = "Trotta, Daniela and
Guarasci, Raffaele and
Leonardelli, Elisa and
Tonelli, Sara",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.250",
doi = "10.18653/v1/2021.findings-emnlp.250",
pages = "2929--2940"
}
```
| gsarti/itacola | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:it",
"license:unknown",
"arxiv:2109.12053",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["it"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification"], "pretty_name": "itacola"} | 2022-07-01T14:38:55+00:00 |
986c40d9c5a10d748051440873fffa65f37e82d9 |
# Dataset Card for Variance-Aware MT Test Sets
## Table of Contents
- [Dataset Card for Variance-Aware MT Test Sets](#dataset-card-for-variance-aware-mt-test-sets)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Machine Translation](#machine-translation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/NLP2CT/Variance-Aware-MT-Test-Sets)
- **Paper:** [NeurIPS](https://openreview.net/forum?id=hhKA5k0oVy5)
- **Point of Contact:** [Runzhe Zhan](mailto:[email protected])
### Dataset Summary
This dataset comprises 70 small and discriminative test sets for machine translation (MT) evaluation called variance-aware test sets (VAT), covering 35 translation directions from WMT16 to WMT20 competitions. VAT is automatically created by a novel variance-aware filtering method that filters the indiscriminative test instances of the current MT benchmark without any human labor. Experimental results show that VAT outperforms the original WMT benchmark in terms of the correlation with human judgment across mainstream language pairs and test sets. Further analysis on the properties of VAT reveals the challenging linguistic features (e.g., translation of low-frequency words and proper nouns) for the competitive MT systems, providing guidance for constructing future MT test sets.
**Disclaimer**: *The VAT test sets are hosted through Github by the [Natural Language Processing & Portuguese-Chinese Machine Translation Laboratory (NLP2CT Lab)](http://nlp2ct.cis.um.edu.mo/) of the University of Macau. They were introduced by the paper [Variance-Aware Machine Translation Test Sets](https://openreview.net/forum?id=hhKA5k0oVy5) by [Runzhe Zhan](https://runzhe.me/), [Xuebo Liu](https://sunbowliu.github.io/), [Derek F. Wong](https://www.fst.um.edu.mo/personal/derek-wong/), [Lidia S. Chao](https://aclanthology.org/people/l/lidia-s-chao/) and follow the original licensing for WMT test sets.
### Supported Tasks and Leaderboards
#### Machine Translation
Refer to the [original paper](https://openreview.net/forum?id=hhKA5k0oVy5) for additional details on model evaluation on VAT.
### Languages
The following table taken from the original paper lists the languages supported by the VAT test sets, for a total of 70 language pairs:
| ↔️ | `wmt16` | `wmt17` | `wmt18` | `wmt19` | `wmt20` |
|----------:|:--------|:--------|:--------|--------:|--------:|
| `xx_en` | `cs`,`de`,`fi`, <br /> `ro`,`ru`,`tr` | `cs`,`de`,`fi`,`lv`, <br /> `ru`,`tr`,`zh` | `cs`,`de`,`et`,`fi`, <br /> `ru`,`tr`,`zh` | `de`,`fi`,`gu`, <br /> `kk`,`lt`,`ru`,`zh` | `cs`,`de`,`iu`,`ja`,`km`, <br /> `pl`,`ps`,`ru`,`ta`,`zh`|
| `en_xx` | `ru` | `cs`,`de`,`fi`, <br /> `lv`,`ru`,`tr`,`zh` | `cs`,`de`,`et`,`fi`, <br /> `ru`,`tr`,`zh` | `cs`,`de`,`fi`,`gu`, <br /> `kk`,`lt`,`ru`,`zh` | `cs`,`de`,`ja`,`pl`, <br /> `ru`,`ta`,`zh`|
| `xx_yy` | / | / | / | `de_cs`,`de_fr`, <br /> `fr_de` | / |
To use any one of the test sets, pass `wmtXX_src_tgt` as the configuration name to the `load_dataset` command. E.g. to load the English-Russian test set from `wmt16`, use `load_dataset('gsarti/wmt_vat', 'wmt16_en_ru')`.
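A minimal sketch (as noted below, only a `test` split is available for every configuration):
```python
from datasets import load_dataset

vat_wmt16_en_ru = load_dataset("gsarti/wmt_vat", "wmt16_en_ru", split="test")
print(vat_wmt16_en_ru[0])
```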
## Dataset Structure
### Data Instances
A sample from the `test` split (the only available split) for the WMT16 English-Russian language (`wmt16_en_ru` config) is provided below. All configurations have the same structure.
```python
{
'orig_id': 0,
'source': 'The social card of residents of Ivanovo region is to be recognised as an electronic payment instrument.',
'reference': 'Социальная карта жителя Ивановской области признается электронным средством платежа.'
}
```
The text is provided as-is from the original dataset, without further preprocessing or tokenization.
### Data Fields
- `orig_id`: Id corresponding to the row id in the original dataset, before variance-aware filtering.
- `source`: The source sentence.
- `reference`: The reference sentence in the target language.
### Data Splits
Taken from the original repository:
| Configuration | # Sentences | # Words | # Vocabulary |
| :-----------: | :--------: | :-----: | :--------------: |
| `wmt20_km_en` | 928 | 17170 | 3645 |
| `wmt20_cs_en` | 266 | 12568 | 3502 |
| `wmt20_en_de` | 567 | 21336 | 5945 |
| `wmt20_ja_en` | 397 | 10526 | 3063 |
| `wmt20_ps_en` | 1088 | 20296 | 4303 |
| `wmt20_en_zh` | 567 | 18224 | 5019 |
| `wmt20_en_ta` | 400 | 7809 | 4028 |
| `wmt20_de_en` | 314 | 16083 | 4046 |
| `wmt20_zh_en` | 800 | 35132 | 6457 |
| `wmt20_en_ja` | 400 | 12718 | 2969 |
| `wmt20_en_cs` | 567 | 16579 | 6391 |
| `wmt20_en_pl` | 400 | 8423 | 3834 |
| `wmt20_en_ru` | 801 | 17446 | 6877 |
| `wmt20_pl_en` | 400 | 7394 | 2399 |
| `wmt20_iu_en` | 1188 | 23494 | 3876 |
| `wmt20_ru_en` | 396 | 6966 | 2330 |
| `wmt20_ta_en` | 399 | 7427 | 2148 |
| `wmt19_zh_en` | 800 | 36739 | 6168 |
| `wmt19_en_cs` | 799 | 15433 | 6111 |
| `wmt19_de_en` | 800 | 15219 | 4222 |
| `wmt19_en_gu` | 399 | 8494 | 3548 |
| `wmt19_fr_de` | 680 | 12616 | 3698 |
| `wmt19_en_zh` | 799 | 20230 | 5547 |
| `wmt19_fi_en` | 798 | 13759 | 3555 |
| `wmt19_en_fi` | 799 | 13303 | 6149 |
| `wmt19_kk_en` | 400 | 9283 | 2584 |
| `wmt19_de_cs` | 799 | 15080 | 6166 |
| `wmt19_lt_en` | 400 | 10474 | 2874 |
| `wmt19_en_lt` | 399 | 7251 | 3364 |
| `wmt19_ru_en` | 800 | 14693 | 3817 |
| `wmt19_en_kk` | 399 | 6411 | 3252 |
| `wmt19_en_ru` | 799 | 16393 | 6125 |
| `wmt19_gu_en` | 406 | 8061 | 2434 |
| `wmt19_de_fr` | 680 | 16181 | 3517 |
| `wmt19_en_de` | 799 | 18946 | 5340 |
| `wmt18_en_cs` | 1193 | 19552 | 7926 |
| `wmt18_cs_en` | 1193 | 23439 | 5453 |
| `wmt18_en_fi` | 1200 | 16239 | 7696 |
| `wmt18_en_tr` | 1200 | 19621 | 8613 |
| `wmt18_en_et` | 800 | 13034 | 6001 |
| `wmt18_ru_en` | 1200 | 26747 | 6045 |
| `wmt18_et_en` | 800 | 20045 | 5045 |
| `wmt18_tr_en` | 1200 | 25689 | 5955 |
| `wmt18_fi_en` | 1200 | 24912 | 5834 |
| `wmt18_zh_en` | 1592 | 42983 | 7985 |
| `wmt18_en_zh` | 1592 | 34796 | 8579 |
| `wmt18_en_ru` | 1200 | 22830 | 8679 |
| `wmt18_de_en` | 1199 | 28275 | 6487 |
| `wmt18_en_de` | 1199 | 25473 | 7130 |
| `wmt17_en_lv` | 800 | 14453 | 6161 |
| `wmt17_zh_en` | 800 | 20590 | 5149 |
| `wmt17_en_tr` | 1203 | 17612 | 7714 |
| `wmt17_lv_en` | 800 | 18653 | 4747 |
| `wmt17_en_de` | 1202 | 22055 | 6463 |
| `wmt17_ru_en` | 1200 | 24807 | 5790 |
| `wmt17_en_fi` | 1201 | 17284 | 7763 |
| `wmt17_tr_en` | 1203 | 23037 | 5387 |
| `wmt17_en_zh` | 800 | 18001 | 5629 |
| `wmt17_en_ru` | 1200 | 22251 | 8761 |
| `wmt17_fi_en` | 1201 | 23791 | 5300 |
| `wmt17_en_cs` | 1202 | 21278 | 8256 |
| `wmt17_de_en` | 1202 | 23838 | 5487 |
| `wmt17_cs_en` | 1202 | 22707 | 5310 |
| `wmt16_tr_en` | 1200 | 19225 | 4823 |
| `wmt16_ru_en` | 1199 | 23010 | 5442 |
| `wmt16_ro_en` | 800 | 16200 | 3968 |
| `wmt16_de_en` | 1200 | 22612 | 5511 |
| `wmt16_en_ru` | 1199 | 20233 | 7872 |
| `wmt16_fi_en` | 1200 | 20744 | 5176 |
| `wmt16_cs_en` | 1200 | 23235 | 5324 |
### Dataset Creation
The dataset was created by retaining a subset of the top 40% instances from various WMT test sets for which the variance between automatic scores (BLEU, BLEURT, COMET, BERTScore) was the highest. Please refer to the original article [Variance-Aware Machine Translation Test Sets](https://openreview.net/forum?id=hhKA5k0oVy5) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of VAT are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [[email protected]](mailto:[email protected]).
### Licensing Information
The variance-aware test sets were created based on the original WMT test sets. Thus, the [original data licensing plan](http://www.statmt.org/wmt20/translation-task.html) already stated by the WMT organizers is still applicable:
> The data released for the WMT news translation task can be freely used for research purposes, we just ask that you cite the WMT shared task overview paper, and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult with original owners of the data sets.
### Citation Information
Please cite the authors if you use these corpora in your work. It is also advised to cite the original WMT shared task paper for the specific test sets that were used.
```bibtex
@inproceedings{
zhan2021varianceaware,
title={Variance-Aware Machine Translation Test Sets},
author={Runzhe Zhan and Xuebo Liu and Derek F. Wong and Lidia S. Chao},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems, Datasets and Benchmarks Track},
year={2021},
url={https://openreview.net/forum?id=hhKA5k0oVy5}
}
``` | gsarti/wmt_vat | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|wmt16",
"source_datasets:extended|wmt17",
"source_datasets:extended|wmt18",
"source_datasets:extended|wmt19",
"source_datasets:extended|wmt20",
"language:cs",
"language:de",
"language:en",
"language:et",
"language:fi",
"language:fr",
"language:gu",
"language:iu",
"language:ja",
"language:kk",
"language:km",
"language:lt",
"language:lv",
"language:pl",
"language:ps",
"language:ro",
"language:ru",
"language:ta",
"language:tr",
"language:zh",
"license:unknown",
"conditional-text-generation",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["expert-generated"], "language": ["cs", "de", "en", "et", "fi", "fr", "gu", "iu", "ja", "kk", "km", "lt", "lv", "pl", "ps", "ro", "ru", "ta", "tr", "zh"], "license": ["unknown"], "multilinguality": ["multilingual", "translation"], "size_categories": ["unknown"], "source_datasets": ["extended|wmt16", "extended|wmt17", "extended|wmt18", "extended|wmt19", "extended|wmt20"], "task_categories": ["text-generation", "translation"], "task_ids": [], "pretty_name": "wmt_vat", "tags": ["conditional-text-generation"]} | 2022-10-27T07:37:41+00:00 |
03ec56387c1aaf87b9db106a1389074390b9cb84 | ### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines.
The dataset currently consists of 7,335 validated hours in 60 languages, but we're always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called path, and its sentence. Additional fields include accent, age, client_id, up_votes, down_votes, gender, locale and segment.
{'accent': 'netherlands', 'age': 'fourties', 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54', 'down_votes': 0, 'gender': 'male', 'locale': 'nl', 'path': 'nl/clips/common_voice_nl_23522441.mp3', 'segment': "''", 'sentence': 'Ik vind dat een dubieuze procedure.', 'up_votes': 2, 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}}
### Data Fields
- client_id: An id for which client (voice) made the recording
- path: The path to the audio file
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: dataset[0]["audio"] the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. dataset[0]["audio"] should always be preferred over dataset["audio"][0].
- sentence: The sentence the user was prompted to speak
- up_votes: How many upvotes the audio file has received from reviewers
- down_votes: How many downvotes the audio file has received from reviewers
- age: The age of the speaker.
- gender: The gender of the speaker
- accent: Accent of the speaker
- locale: The locale of the speaker
- segment: Usually empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, CC-0
### Citation Information
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
| guoqiang/cuge | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-25T05:30:29+00:00 |
ec57bf8c8b1653a209c13f6e9ee66b12df0fc2db | This dataset includes 2 document images of the [DocVQA](https://docvqa.org/) dataset.
They are used for testing the LayoutLMv2FeatureExtractor + LayoutLMv2Processor inside the HuggingFace Transformers library.
More specifically, they are used in `tests/test_feature_extraction_layoutlmv2.py` and `tests/test_processor_layoutlmv2.py`. | hf-internal-testing/fixtures_docvqa | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2023-09-18T16:39:07+00:00 |
8665b8ad25d24519d073c267af0765cb43578523 | This dataset includes 5 images for testing.
It includes 4 different kinds of images (RGBA, LA, L, Rotated Image) as well as an original cats image of the COCO dataset.
This dataset is used for testing in the HuggingFace Transformers library. You can see [here](https://github.com/huggingface/transformers/search?q=fixtures_image_utils) where this dataset is used. | hf-internal-testing/fixtures_image_utils | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-07T08:06:37+00:00 |
fbeeabc448702f972cfa1c708c04cbcddf1bac81 | This dataset includes 2 images: one of the [IAM Handwriting Database](https://fki.tic.heia-fr.ch/databases/iam-handwriting-database) and one of the [SROIE](https://rrc.cvc.uab.es/?ch=13) dataset.
They are used for testing OCR models that are part of the HuggingFace Transformers library. See [here](https://github.com/huggingface/transformers/search?q=fixtures_ocr) for details.
More specifically, they are used inside `test_modeling_vision_encoder_decoder_model.py`, for testing the TrOCR models. | hf-internal-testing/fixtures_ocr | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-07T08:07:29+00:00 |
b02c58310ffb9713f6579afc2e4c73de016c3f3d | Swedish text corpus created by extracting the `"text"` from `dataset = load_dataset("europarl_bilingual", lang1="en", lang2="sv", split="train")` and processing it with:
```python
import re

# `chars_to_ignore_regex` was not defined in the original snippet; this pattern is an
# assumed example of punctuation to strip, not necessarily the exact one used.
chars_to_ignore_regex = '[,?.!;:"“%‘”…–-]'

def extract_text(batch):
    # Keep the Swedish side of each translation pair, lowercased and cleaned.
    text = batch["translation"]["sv"]
    batch["text"] = re.sub(chars_to_ignore_regex, "", text.lower())
    return batch
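
# A minimal usage sketch (assumption, not from the original card): apply the
# function over the loaded dataset and keep only the cleaned "text" column.
dataset = dataset.map(extract_text, remove_columns=dataset.column_names)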
``` | hf-test/sv_corpora_parliament_processed | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-10T10:17:51+00:00 |
f9653921a210c090fdde832534d5ac9cf3930330 | # ❤️🩹 Sensai: Toxic Chat Dataset
Sensai is a toxic chat dataset consisting of live chats from Virtual YouTubers' live streams.
Download the dataset from [Kaggle Datasets](https://www.kaggle.com/uetchy/sensai) and join `#livechat-dataset` channel on [holodata Discord](https://holodata.org/discord) for discussions.
## Provenance
- **Source:** YouTube Live Chat events (all streams covered by [Holodex](https://holodex.net), including Hololive, Nijisanji, 774inc, etc)
- **Temporal Coverage:** From 2021-01-15T05:15:33Z
- **Update Frequency:** At least once per month
## Research Ideas
- Toxic Chat Classification
- Spam Detection
- Sentence Transformer for Live Chats
See [public notebooks](https://www.kaggle.com/uetchy/sensai/code) for ideas.
## Files
| filename | summary | size |
| ------------------------- | -------------------------------------------------------------- | -------- |
| `chats_flagged_%Y-%m.csv` | Chats flagged as either deleted or banned by mods (3,100,000+) | ~ 400 MB |
| `chats_nonflag_%Y-%m.csv` | Non-flagged chats (3,100,000+) | ~ 300 MB |
To keep the dataset balanced, the number of non-flagged chats (`chats_nonflag`) is randomly downsampled to match the number of flagged chats (`chats_flagged`).
Ban and deletion are equivalent to `markChatItemsByAuthorAsDeletedAction` and `markChatItemAsDeletedAction` respectively.
## Dataset Breakdown
### Chats (`chats_%Y-%m.csv`)
| column | type | description |
| --------------- | ------ | ---------------------------- |
| body | string | chat message |
| membership | string | membership status |
| authorChannelId | string | anonymized author channel id |
| channelId | string | source channel id |
#### Membership status
| value | duration |
| ----------------- | ------------------------- |
| unknown | Indistinguishable |
| non-member | 0 |
| less than 1 month | < 1 month |
| 1 month | >= 1 month, < 2 months |
| 2 months | >= 2 months, < 6 months |
| 6 months | >= 6 months, < 12 months |
| 1 year | >= 12 months, < 24 months |
| 2 years | >= 24 months |
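
Since the membership tiers above are ordered, it can be convenient to encode them as ordinal integers for modeling. A minimal sketch (the numeric ranks are an illustrative choice, not part of the dataset):

```python
# Ordered from least to most committed, following the table above.
MEMBERSHIP_ORDER = [
    "unknown", "non-member", "less than 1 month", "1 month",
    "2 months", "6 months", "1 year", "2 years",
]
MEMBERSHIP_RANK = {name: rank for rank, name in enumerate(MEMBERSHIP_ORDER)}

def encode_membership(value: str) -> int:
    # Fall back to 0 ("unknown") for anything unexpected.
    return MEMBERSHIP_RANK.get(value, 0)
```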
#### Pandas usage
Set `keep_default_na` to `False` and `na_values` to `''` in `read_csv`. Otherwise, a chat message like `NA` would incorrectly be treated as a NaN value.
```python
import pandas as pd
from glob import iglob

flagged = pd.concat(
    [
        pd.read_csv(f, na_values='', keep_default_na=False)
        for f in iglob('../input/sensai/chats_flagged_*.csv')
    ],
    ignore_index=True,
)
```
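
For the toxic chat classification idea listed above, the flagged and non-flagged files can be combined into a single labeled frame. A sketch under the same path and `read_csv` assumptions as the snippet above (the 0/1 label convention is illustrative):

```python
import pandas as pd
from glob import iglob

def load_chats(pattern: str, label: int) -> pd.DataFrame:
    # Same read_csv settings as above so messages like "NA" stay strings.
    frames = [
        pd.read_csv(f, na_values='', keep_default_na=False)
        for f in iglob(pattern)
    ]
    df = pd.concat(frames, ignore_index=True)
    df["label"] = label  # 1 = flagged (deleted/banned), 0 = non-flagged
    return df

labeled = pd.concat(
    [
        load_chats('../input/sensai/chats_flagged_*.csv', 1),
        load_chats('../input/sensai/chats_nonflag_*.csv', 0),
    ],
    ignore_index=True,
)
```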
## Consideration
### Anonymization
`authorChannelId` values are anonymized with the SHA-1 hashing algorithm and a pinch of undisclosed salt.
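
The exact salt is not published, so the original hashes cannot be reproduced; the sketch below only illustrates the general salted-hash scheme with a placeholder salt.

```python
import hashlib

SALT = b"<placeholder>"  # hypothetical; the real salt is intentionally undisclosed

def anonymize_channel_id(channel_id: str) -> str:
    # Hex-encoded SHA-1 digest of salt + channel id, mirroring the described scheme.
    return hashlib.sha1(SALT + channel_id.encode("utf-8")).hexdigest()
```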
### Handling Custom Emojis
All custom emojis are replaced with a Unicode replacement character `U+FFFD`.
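
If the placeholders are not useful for a given task, they can simply be stripped. A one-line sketch, assuming `body` holds a chat message string:

```python
# Remove the U+FFFD placeholders left in place of custom emojis.
clean_body = body.replace("\ufffd", "")
```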
## Citation
```latex
@misc{sensai-dataset,
author={Yasuaki Uechi},
title={Sensai: Toxic Chat Dataset},
year={2021},
month={8},
version={31},
url={https://github.com/holodata/sensai-dataset}
}
```
## License
- Code: [MIT License](https://github.com/holodata/sensai-dataset/blob/master/LICENSE)
- Dataset: [ODC Public Domain Dedication and Licence (PDDL)](https://opendatacommons.org/licenses/pddl/1-0/index.html)
| holodata/sensai | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-01T05:16:32+00:00 |
1061da9ff8290ae64d2ab4659eccd5c78407ff13 |
# ReCAM: Reading Comprehension of Abstract Meaning
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
This dataset is from SemEval 2021 Task 4: Reading Comprehension of Abstract Meaning. [Original repository for the dataset and baseline code can be accessed here.](https://github.com/boyuanzheng010/SemEval2021-Reading-Comprehension-of-Abstract-Meaning)
- **Paper:** [SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning in ACL](https://aclanthology.org/2021.semeval-1.4.pdf)
- **Leaderboard:** [CodaLab](https://competitions.codalab.org/competitions/26153#learn_the_details)
### Dataset Summary
Refer to [this page](https://competitions.codalab.org/competitions/26153#learn_the_details).
## Dataset Structure
Refer to [the GitHub](https://github.com/boyuanzheng010/SemEval2021-Reading-Comprehension-of-Abstract-Meaning).
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{zheng-etal-2021-semeval,
title = "{S}em{E}val-2021 Task 4: Reading Comprehension of Abstract Meaning",
author = "Zheng, Boyuan and
Yang, Xiaoyu and
Ruan, Yu-Ping and
Ling, Zhenhua and
Liu, Quan and
Wei, Si and
Zhu, Xiaodan",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.4",
doi = "10.18653/v1/2021.semeval-1.4",
pages = "37--50",
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | holylovenia/recam | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]} | 2021-10-18T02:28:53+00:00 |
9afb7128141a75b91efe36a2bc29e5a8a5072c04 |
# Dataset Card for "huggingartists/100-gecs"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.182347 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9fd98af9a817af8cd78636f71895b6ad.500x500x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/100-gecs">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">100 gecs</div>
<a href="https://genius.com/artists/100-gecs">
<div style="text-align: center; font-size: 14px;">@100-gecs</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/100-gecs).
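
A quick way to try the companion model linked above is the `text-generation` pipeline. A minimal sketch, assuming the model is a causal language model compatible with that pipeline (prompt and generation settings are illustrative):

```python
from transformers import pipeline

# Model id taken from the link above; generation settings are only an example.
generator = pipeline("text-generation", model="huggingartists/100-gecs")
output = generator("I am", max_length=50, num_return_sequences=1)
print(output[0]["generated_text"])
```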
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/100-gecs")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|140| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/100-gecs")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/100-gecs | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:20:43+00:00 |
a57512972a8db01d96eacae0d9d01645d961e831 |
# Dataset Card for "huggingartists/21-savage"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.073984 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/aa32202cc20d1dde62e57940a8b278b2.770x770x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/21-savage">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">21 Savage</div>
<a href="https://genius.com/artists/21-savage">
<div style="text-align: center; font-size: 14px;">@21-savage</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/21-savage).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/21-savage")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|435| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/21-savage")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/21-savage | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:20:49+00:00 |
9c72440451a339fc53722f5c1d8c36679af02c66 |
# Dataset Card for "huggingartists/25-17"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.678946 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/4fedc5dd2830a874a5274bf1cac62002.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/25-17">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">25/17</div>
<a href="https://genius.com/artists/25-17">
<div style="text-align: center; font-size: 14px;">@25-17</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/25-17).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/25-17")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|195| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/25-17")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/25-17 | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:20:55+00:00 |
921cd4a4253b5e18fd1c78ad97b52762574a1375 |
# Dataset Card for "huggingartists/50-cent"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 2.267733 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2aa85f8fdffe5d0552ff319221fc63e4.959x959x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/50-cent">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">50 Cent</div>
<a href="https://genius.com/artists/50-cent">
<div style="text-align: center; font-size: 14px;">@50-cent</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/50-cent).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/50-cent")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|840| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/50-cent")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/50-cent | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:21:02+00:00 |
70a16443bb1235ae7cda785b235ad01b8be152d6 |
# Dataset Card for "huggingartists/5nizza"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.13617 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/289ded19d51d41798be99217d6059eb3.458x458x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/5nizza">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">5’Nizza</div>
<a href="https://genius.com/artists/5nizza">
<div style="text-align: center; font-size: 14px;">@5nizza</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/5nizza).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/5nizza")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|51| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/5nizza")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/5nizza | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:22:00+00:00 |
c11e5903c0eeb4af4e08a2776591d0bd7751b7e7 |
# Dataset Card for "huggingartists/5opka"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.110132 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c56dce03a151e17a9626e55e6c295bb1.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/5opka">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">5opka</div>
<a href="https://genius.com/artists/5opka">
<div style="text-align: center; font-size: 14px;">@5opka</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/5opka).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/5opka")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|35| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/5opka")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/5opka | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:22:06+00:00 |
a23afc79eb3bf6a714dbf13e818404c3e90dd4da |
# Dataset Card for "huggingartists/6ix9ine"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.350166 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b2b164a7c6c02dd0843ad597df5dbf4b.1000x1000x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/6ix9ine">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">6ix9ine</div>
<a href="https://genius.com/artists/6ix9ine">
<div style="text-align: center; font-size: 14px;">@6ix9ine</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/6ix9ine).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/6ix9ine")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|173| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/6ix9ine")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/6ix9ine | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:22:13+00:00 |
f3f921e36fd178eb85ecc777acaa6df65b24dee0 |
# Dataset Card for "huggingartists/aaron-watson"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.266584 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/894021d09a748eef8c6d63ad898b814b.650x430x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/aaron-watson">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Aaron Watson</div>
<a href="https://genius.com/artists/aaron-watson">
<div style="text-align: center; font-size: 14px;">@aaron-watson</div>
</a>
</div>
### Dataset Summary
This lyrics dataset was parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/aaron-watson).
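As a quick illustration of the intended use, the sketch below generates lyrics with the companion model via the Transformers `pipeline` API. This is a minimal sketch, not part of the original card: it assumes the `transformers` library is installed and that the linked checkpoint is a GPT-2-style causal language model; the prompt and generation settings are placeholders.
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="huggingartists/aaron-watson")

# Illustrative prompt and sampling settings; tune these for your use case.
outputs = generator("I am", do_sample=True, max_length=100, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```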
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/aaron-watson")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|   181 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/aaron-watson")
# Target proportions: 90% train, 7% validation, 3% test
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
# Split at the 90% and 97% marks; the remaining 3% becomes the test split
train, validation, test = np.split(
    datasets['train']['text'],
    [int(len(datasets['train']['text']) * train_percentage),
     int(len(datasets['train']['text']) * (train_percentage + validation_percentage))]
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/aaron-watson | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:22:20+00:00 |
aab13ac8c61a1eb65e78d32884fcc37513d7e099 |
# Dataset Card for "huggingartists/abba"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.309428 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2fa03267661cbc8112b4ef31685e2721.220x220x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/abba">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">ABBA</div>
<a href="https://genius.com/artists/abba">
<div style="text-align: center; font-size: 14px;">@abba</div>
</a>
</div>
### Dataset Summary
This lyrics dataset was parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/abba).
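As a quick illustration of the intended use, the sketch below generates lyrics with the companion model via the Transformers `pipeline` API. This is a minimal sketch, not part of the original card: it assumes the `transformers` library is installed and that the linked checkpoint is a GPT-2-style causal language model; the prompt and generation settings are placeholders.
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="huggingartists/abba")

# Illustrative prompt and sampling settings; tune these for your use case.
outputs = generator("I am", do_sample=True, max_length=100, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```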
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/abba")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|   202 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/abba")
# Target proportions: 90% train, 7% validation, 3% test
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
# Split at the 90% and 97% marks; the remaining 3% becomes the test split
train, validation, test = np.split(
    datasets['train']['text'],
    [int(len(datasets['train']['text']) * train_percentage),
     int(len(datasets['train']['text']) * (train_percentage + validation_percentage))]
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/abba | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:22:26+00:00 |
98a09d23a1203e7d8591575cf5ef866fbca54470 |
# Dataset Card for "huggingartists/adele"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.304292 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/45ccf22bba4c1f80989e645c2fd4ec44.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/adele">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Adele</div>
<a href="https://genius.com/artists/adele">
<div style="text-align: center; font-size: 14px;">@adele</div>
</a>
</div>
### Dataset Summary
This lyrics dataset was parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/adele).
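As a quick illustration of the intended use, the sketch below generates lyrics with the companion model via the Transformers `pipeline` API. This is a minimal sketch, not part of the original card: it assumes the `transformers` library is installed and that the linked checkpoint is a GPT-2-style causal language model; the prompt and generation settings are placeholders.
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="huggingartists/adele")

# Illustrative prompt and sampling settings; tune these for your use case.
outputs = generator("I am", do_sample=True, max_length=100, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```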
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/adele")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|   203 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/adele")
# Target proportions: 90% train, 7% validation, 3% test
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
# Split at the 90% and 97% marks; the remaining 3% becomes the test split
train, validation, test = np.split(
    datasets['train']['text'],
    [int(len(datasets['train']['text']) * train_percentage),
     int(len(datasets['train']['text']) * (train_percentage + validation_percentage))]
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/adele | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:22:32+00:00 |
71d7d9f05be661df1e40e3f9c69e908934531c7b |
# Dataset Card for "huggingartists/agata-christie"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.143508 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/61b6b0a0b7f6587d1b33542d5c18ad3c.489x489x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/agata-christie">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Агата Кристи (Agata Christie)</div>
<a href="https://genius.com/artists/agata-christie">
<div style="text-align: center; font-size: 14px;">@agata-christie</div>
</a>
</div>
### Dataset Summary
This lyrics dataset was parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/agata-christie).
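As a quick illustration of the intended use, the sketch below generates lyrics with the companion model via the Transformers `pipeline` API. This is a minimal sketch, not part of the original card: it assumes the `transformers` library is installed and that the linked checkpoint is a GPT-2-style causal language model; the prompt and generation settings are placeholders.
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="huggingartists/agata-christie")

# Illustrative prompt and sampling settings; tune these for your use case.
outputs = generator("I am", do_sample=True, max_length=100, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```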
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/agata-christie")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|    78 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/agata-christie")
# Target proportions: 90% train, 7% validation, 3% test
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
# Split at the 90% and 97% marks; the remaining 3% becomes the test split
train, validation, test = np.split(
    datasets['train']['text'],
    [int(len(datasets['train']['text']) * train_percentage),
     int(len(datasets['train']['text']) * (train_percentage + validation_percentage))]
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/agata-christie | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:22:38+00:00 |
4de2290c08655371befbf808d6c9a83b3dcc7333 |
# Dataset Card for "huggingartists/aikko"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.029888 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a1a40316d1405fa83df2a21923d64168.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/aikko">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">aikko</div>
<a href="https://genius.com/artists/aikko">
<div style="text-align: center; font-size: 14px;">@aikko</div>
</a>
</div>
### Dataset Summary
This lyrics dataset was parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/aikko).
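As a quick illustration of the intended use, the sketch below generates lyrics with the companion model via the Transformers `pipeline` API. This is a minimal sketch, not part of the original card: it assumes the `transformers` library is installed and that the linked checkpoint is a GPT-2-style causal language model; the prompt and generation settings are placeholders.
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="huggingartists/aikko")

# Illustrative prompt and sampling settings; tune these for your use case.
outputs = generator("I am", do_sample=True, max_length=100, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```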
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/aikko")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|   305 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/aikko")
# Target proportions: 90% train, 7% validation, 3% test
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
# Split at the 90% and 97% marks; the remaining 3% becomes the test split
train, validation, test = np.split(
    datasets['train']['text'],
    [int(len(datasets['train']['text']) * train_percentage),
     int(len(datasets['train']['text']) * (train_percentage + validation_percentage))]
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/aikko | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:22:45+00:00 |