# Dataset Card for "huggingartists/selena-gomez"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.587236 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/622e6858ce207990b4eb25cd9cdf8f8c.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/selena-gomez">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Selena Gomez</div>
<a href="https://genius.com/artists/selena-gomez">
<div style="text-align: center; font-size: 14px;">@selena-gomez</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/selena-gomez).
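A minimal sketch of generating lyrics with that model, assuming the checkpoint is a standard causal language model compatible with the `transformers` text-generation pipeline (the prompt and generation settings below are purely illustrative):
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint (assumed to be a causal LM).
generator = pipeline("text-generation", model="huggingartists/selena-gomez")

# Generate a short lyric continuation from an illustrative prompt.
result = generator("I walked into the room", max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```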
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/selena-gomez")
```
## Dataset Structure
An example from the 'train' split looks as follows:
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|397| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/selena-gomez")

# Proportions for the three splits; the test share is whatever remains.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))])

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2021
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
# Dataset Card for "huggingartists/sergei-letov"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.035123 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a5717aec4301e2adfb464d3b85701f74.300x300x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/sergei-letov">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Сергей Летов (Sergei Letov)</div>
<a href="https://genius.com/artists/sergei-letov">
<div style="text-align: center; font-size: 14px;">@sergei-letov</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/sergei-letov).
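A minimal sketch of generating lyrics with that model, assuming the checkpoint is a standard causal language model compatible with the `transformers` text-generation pipeline (the prompt and generation settings below are purely illustrative):
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint (assumed to be a causal LM).
generator = pipeline("text-generation", model="huggingartists/sergei-letov")

# Generate a short lyric continuation from an illustrative prompt.
result = generator("I walked into the room", max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```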
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sergei-letov")
```
## Dataset Structure
An example from the 'train' split looks as follows:
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|7| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/sergei-letov")

# Proportions for the three splits; the test share is whatever remains.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))])

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2021
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
# Dataset Card for "huggingartists/shadowraze"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.063932 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e2576b95c2049862de20cbd0f1a4e0d7.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/shadowraze">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">shadowraze</div>
<a href="https://genius.com/artists/shadowraze">
<div style="text-align: center; font-size: 14px;">@shadowraze</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/shadowraze).
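A minimal sketch of generating lyrics with that model, assuming the checkpoint is a standard causal language model compatible with the `transformers` text-generation pipeline (the prompt and generation settings below are purely illustrative):
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint (assumed to be a causal LM).
generator = pipeline("text-generation", model="huggingartists/shadowraze")

# Generate a short lyric continuation from an illustrative prompt.
result = generator("I walked into the room", max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```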
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/shadowraze")
```
## Dataset Structure
An example from the 'train' split looks as follows:
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|14| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/shadowraze")

# Proportions for the three splits; the test share is whatever remains.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))])

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2021
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
# Dataset Card for "huggingartists/sia"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.038296 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/aba931aaf48b7728f3f4869b13eb9741.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/sia">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sia</div>
<a href="https://genius.com/artists/sia">
<div style="text-align: center; font-size: 14px;">@sia</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/sia).
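A minimal sketch of generating lyrics with that model, assuming the checkpoint is a standard causal language model compatible with the `transformers` text-generation pipeline (the prompt and generation settings below are purely illustrative):
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint (assumed to be a causal LM).
generator = pipeline("text-generation", model="huggingartists/sia")

# Generate a short lyric continuation from an illustrative prompt.
result = generator("I walked into the room", max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```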
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sia")
```
## Dataset Structure
An example from the 'train' split looks as follows:
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|742| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/sia")

# Proportions for the three splits; the test share is whatever remains.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))])

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2021
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
# Dataset Card for "huggingartists/sid-sriram"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.088515 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://assets.genius.com/images/default_avatar_300.png?1629387721')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/sid-sriram">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sid Sriram</div>
<a href="https://genius.com/artists/sid-sriram">
<div style="text-align: center; font-size: 14px;">@sid-sriram</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/sid-sriram).
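A minimal sketch of generating lyrics with that model, assuming the checkpoint is a standard causal language model compatible with the `transformers` text-generation pipeline (the prompt and generation settings below are purely illustrative):
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint (assumed to be a causal LM).
generator = pipeline("text-generation", model="huggingartists/sid-sriram")

# Generate a short lyric continuation from an illustrative prompt.
result = generator("I walked into the room", max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```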
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sid-sriram")
```
## Dataset Structure
An example from the 'train' split looks as follows:
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|36| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/sid-sriram")

# Proportions for the three splits; the test share is whatever remains.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))])

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2021
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
# Dataset Card for "huggingartists/skillet"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.283317 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c42b7baa88dae01013eebc53c0aed177.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/skillet">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Skillet</div>
<a href="https://genius.com/artists/skillet">
<div style="text-align: center; font-size: 14px;">@skillet</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/skillet).
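A minimal sketch of generating lyrics with that model, assuming the checkpoint is a standard causal language model compatible with the `transformers` text-generation pipeline (the prompt and generation settings below are purely illustrative):
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint (assumed to be a causal LM).
generator = pipeline("text-generation", model="huggingartists/skillet")

# Generate a short lyric continuation from an illustrative prompt.
result = generator("I walked into the room", max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```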
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/skillet")
```
## Dataset Structure
An example from the 'train' split looks as follows:
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|189| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/skillet")

# Proportions for the three splits; the test share is whatever remains.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))])

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2021
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
# Dataset Card for "huggingartists/slava-kpss"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 3.88329 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e63e3a804916ed71bf2941ac4e190063.847x847x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/slava-kpss">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Слава КПСС (Slava KPSS)</div>
<a href="https://genius.com/artists/slava-kpss">
<div style="text-align: center; font-size: 14px;">@slava-kpss</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/slava-kpss).
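A minimal sketch of generating lyrics with that model, assuming the checkpoint is a standard causal language model compatible with the `transformers` text-generation pipeline (the prompt and generation settings below are purely illustrative):
```python
from transformers import pipeline

# Load the companion HuggingArtists checkpoint (assumed to be a causal LM).
generator = pipeline("text-generation", model="huggingartists/slava-kpss")

# Generate a short lyric continuation from an illustrative prompt.
result = generator("I walked into the room", max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```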
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/slava-kpss")
```
## Dataset Structure
An example from the 'train' split looks as follows:
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|897| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/slava-kpss")

# Proportions for the three splits; the test share is whatever remains.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))])

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/slava-kpss | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:44:55+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/slava-kpss"
============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 3.88329 MB
HuggingArtists Model
Слава КПСС (Slava KPSS)
[@slava-kpss](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.471147 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e308b1bc9eeb159ecfa9d807d715f095.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/slava-marlow">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">SLAVA MARLOW</div>
<a href="https://genius.com/artists/slava-marlow">
<div style="text-align: center; font-size: 14px;">@slava-marlow</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/slava-marlow).
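A minimal sketch of generating lyrics with the companion model is shown below; it assumes the `huggingartists/slava-marlow` checkpoint is a standard causal language model that works with the `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Load the companion lyrics model from the Hugging Face Hub
generator = pipeline("text-generation", model="huggingartists/slava-marlow")

# Generate a short lyric continuation from a prompt
outputs = generator("I am", max_new_tokens=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```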
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/slava-marlow")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|249| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/slava-marlow")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/slava-marlow | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:03+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/slava-marlow"
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.471147 MB
HuggingArtists Model
SLAVA MARLOW
[@slava-marlow](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 4.603835 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/91bd22f5e53a3ea3cb1436de8f4a3722.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/snoop-dogg">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Snoop Dogg</div>
<a href="https://genius.com/artists/snoop-dogg">
<div style="text-align: center; font-size: 14px;">@snoop-dogg</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/snoop-dogg).
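A minimal sketch of generating lyrics with the companion model is shown below; it assumes the `huggingartists/snoop-dogg` checkpoint is a standard causal language model that works with the `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Load the companion lyrics model from the Hugging Face Hub
generator = pipeline("text-generation", model="huggingartists/snoop-dogg")

# Generate a short lyric continuation from a prompt
outputs = generator("I am", max_new_tokens=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```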
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/snoop-dogg")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|1773| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/snoop-dogg")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/snoop-dogg | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:10+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/snoop-dogg"
============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 4.603835 MB
HuggingArtists Model
Snoop Dogg
[@snoop-dogg](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.067786 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/3557a234d4c5912569afbea078a23eff.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/sqwore">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sqwore</div>
<a href="https://genius.com/artists/sqwore">
<div style="text-align: center; font-size: 14px;">@sqwore</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/sqwore).
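A minimal sketch of generating lyrics with the companion model is shown below; it assumes the `huggingartists/sqwore` checkpoint is a standard causal language model that works with the `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Load the companion lyrics model from the Hugging Face Hub
generator = pipeline("text-generation", model="huggingartists/sqwore")

# Generate a short lyric continuation from a prompt
outputs = generator("I am", max_new_tokens=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```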
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sqwore")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|19| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/sqwore")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/sqwore | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:16+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/sqwore"
========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.067786 MB
HuggingArtists Model
Sqwore
[@sqwore](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.164888 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/8b5c8fe74f6176047b2b5681e0e0e2d4.273x273x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/sugar-ray">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sugar Ray</div>
<a href="https://genius.com/artists/sugar-ray">
<div style="text-align: center; font-size: 14px;">@sugar-ray</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/sugar-ray).
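A minimal sketch of generating lyrics with the companion model is shown below; it assumes the `huggingartists/sugar-ray` checkpoint is a standard causal language model that works with the `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Load the companion lyrics model from the Hugging Face Hub
generator = pipeline("text-generation", model="huggingartists/sugar-ray")

# Generate a short lyric continuation from a prompt
outputs = generator("I am", max_new_tokens=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```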
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sugar-ray")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|117| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/sugar-ray")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/sugar-ray | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:22+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/sugar-ray"
===========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.164888 MB
HuggingArtists Model
Sugar Ray
[@sugar-ray](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.052767 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/86b0ba099a6797bab3deeba685f3dbc2.800x800x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/suicideoscope">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Suicideoscope</div>
<a href="https://genius.com/artists/suicideoscope">
<div style="text-align: center; font-size: 14px;">@suicideoscope</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/suicideoscope).
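A minimal sketch of generating lyrics with the companion model is shown below; it assumes the `huggingartists/suicideoscope` checkpoint is a standard causal language model that works with the `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Load the companion lyrics model from the Hugging Face Hub
generator = pipeline("text-generation", model="huggingartists/suicideoscope")

# Generate a short lyric continuation from a prompt
outputs = generator("I am", max_new_tokens=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```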
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/suicideoscope")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|11| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/suicideoscope")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/suicideoscope | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:28+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/suicideoscope"
===============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.052767 MB
HuggingArtists Model
Suicideoscope
[@suicideoscope](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.196472 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/7cf5f61ac4ffe9a0fd1f6a4b235b95eb.320x320x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/sum-41">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sum 41</div>
<a href="https://genius.com/artists/sum-41">
<div style="text-align: center; font-size: 14px;">@sum-41</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/sum-41).
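A minimal sketch of generating lyrics with the companion model is shown below; it assumes the `huggingartists/sum-41` checkpoint is a standard causal language model that works with the `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Load the companion lyrics model from the Hugging Face Hub
generator = pipeline("text-generation", model="huggingartists/sum-41")

# Generate a short lyric continuation from a prompt
outputs = generator("I am", max_new_tokens=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```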
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sum-41")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|134| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/sum-41")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/sum-41 | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:34+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/sum-41"
========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.196472 MB
HuggingArtists Model
Sum 41
[@sum-41](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.081864 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/afbb51b0dc0e4618f79565e67991a9fd.360x360x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/sundara-karma">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sundara Karma</div>
<a href="https://genius.com/artists/sundara-karma">
<div style="text-align: center; font-size: 14px;">@sundara-karma</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/sundara-karma).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sundara-karma")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|46| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/sundara-karma")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/sundara-karma | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:40+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/sundara-karma"
===============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.081864 MB
HuggingArtists Model
Sundara Karma
@sundara-karma
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
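For reference, the one-line loading snippet from the markdown card above:

```python
from datasets import load_dataset

# Load the lyrics dataset from the Hugging Face Hub
dataset = load_dataset("huggingartists/sundara-karma")
```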
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
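A condensed version of the split recipe shown in the markdown card above (90% train, 7% validation, 3% test, split by position):

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/sundara-karma")
texts = datasets['train']['text']

# Cut the single train split at the 90% and 97% marks
train, validation, test = np.split(texts, [int(len(texts) * 0.9), int(len(texts) * 0.97)])
datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```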
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*
For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.178799 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5688d59e74bfc07b0531636114f56c1e.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/system-of-a-down">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">System of a Down</div>
<a href="https://genius.com/artists/system-of-a-down">
<div style="text-align: center; font-size: 14px;">@system-of-a-down</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/system-of-a-down).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/system-of-a-down")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|129| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/system-of-a-down")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/system-of-a-down | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:46+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/system-of-a-down"
==================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.178799 MB
HuggingArtists Model
System of a Down
@system-of-a-down
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
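For reference, the same loading snippet given in the markdown card above:

```python
from datasets import load_dataset

# Load the lyrics dataset from the Hugging Face Hub
dataset = load_dataset("huggingartists/system-of-a-down")
```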
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
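A condensed sketch of the 90/7/3 positional split from the markdown card above:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/system-of-a-down")
texts = datasets['train']['text']

# Cut the single train split at the 90% and 97% marks
train, validation, test = np.split(texts, [int(len(texts) * 0.9), int(len(texts) * 0.97)])
datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```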
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*
For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.339224 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a97f2d2c76c51779fb5cbd7362b06789.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/t-fest">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">T-Fest</div>
<a href="https://genius.com/artists/t-fest">
<div style="text-align: center; font-size: 14px;">@t-fest</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/t-fest).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/t-fest")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|127| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/t-fest")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/t-fest | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:52+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/t-fest"
========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.339224 MB
HuggingArtists Model
T-Fest
@t-fest
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
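For reference, the one-line loading snippet from the markdown card above:

```python
from datasets import load_dataset

# Load the lyrics dataset from the Hugging Face Hub
dataset = load_dataset("huggingartists/t-fest")
```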
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
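A condensed version of the 90/7/3 positional split used in the markdown card above:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/t-fest")
texts = datasets['train']['text']

# Cut the single train split at the 90% and 97% marks
train, validation, test = np.split(texts, [int(len(texts) * 0.9), int(len(texts) * 0.97)])
datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```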
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*
For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.036726 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/73716ad8dca0ea2fd5f02924ffcbcdad.639x639x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tanzy-minus">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Танцы Минус (Tanzy Minus)</div>
<a href="https://genius.com/artists/tanzy-minus">
<div style="text-align: center; font-size: 14px;">@tanzy-minus</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/tanzy-minus).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tanzy-minus")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|5| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tanzy-minus")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/tanzy-minus | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:45:59+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/tanzy-minus"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.036726 MB
HuggingArtists Model
Танцы Минус (Tanzy Minus)
@tanzy-minus
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
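For reference, the same loading snippet shown in the markdown card above:

```python
from datasets import load_dataset

# Load the lyrics dataset from the Hugging Face Hub
dataset = load_dataset("huggingartists/tanzy-minus")
```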
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
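A condensed sketch of the 90/7/3 positional split from the markdown card above (note this dataset has only 5 rows, so the validation and test shares may end up empty):

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/tanzy-minus")
texts = datasets['train']['text']

# Cut the single train split at the 90% and 97% marks
train, validation, test = np.split(texts, [int(len(texts) * 0.9), int(len(texts) * 0.97)])
datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```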
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*
For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.469581 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/3c1f124fcbbc2857a95e513fb34cc5a8.400x400x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/taylor-swift">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Taylor Swift</div>
<a href="https://genius.com/artists/taylor-swift">
<div style="text-align: center; font-size: 14px;">@taylor-swift</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/taylor-swift).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/taylor-swift")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|762| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/taylor-swift")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/taylor-swift | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:46:05+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/taylor-swift"
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 1.469581 MB
HuggingArtists Model
Taylor Swift
@taylor-swift
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
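For reference, the one-line loading snippet from the markdown card above:

```python
from datasets import load_dataset

# Load the lyrics dataset from the Hugging Face Hub
dataset = load_dataset("huggingartists/taylor-swift")
```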
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
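A condensed version of the 90/7/3 positional split shown in the markdown card above:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/taylor-swift")
texts = datasets['train']['text']

# Cut the single train split at the 90% and 97% marks
train, validation, test = np.split(texts, [int(len(texts) * 0.9), int(len(texts) * 0.97)])
datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```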
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*
For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.115986 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c99108a2e14512dcbe143ccb53dd2319.564x564x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tedeschi-trucks-band">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tedeschi Trucks Band</div>
<a href="https://genius.com/artists/tedeschi-trucks-band">
<div style="text-align: center; font-size: 14px;">@tedeschi-trucks-band</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/tedeschi-trucks-band).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tedeschi-trucks-band")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|87| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tedeschi-trucks-band")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/tedeschi-trucks-band | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:46:11+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/tedeschi-trucks-band"
======================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.115986 MB
HuggingArtists Model
Tedeschi Trucks Band
@tedeschi-trucks-band
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
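For reference, the same loading snippet given in the markdown card above:

```python
from datasets import load_dataset

# Load the lyrics dataset from the Hugging Face Hub
dataset = load_dataset("huggingartists/tedeschi-trucks-band")
```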
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
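A condensed sketch of the 90/7/3 positional split from the markdown card above:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/tedeschi-trucks-band")
texts = datasets['train']['text']

# Cut the single train split at the 90% and 97% marks
train, validation, test = np.split(texts, [int(len(texts) * 0.9), int(len(texts) * 0.97)])
datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```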
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*
For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.162381 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9e0451fa9d3f8cf38aa11994dbd934a8.600x600x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-69-eyes">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The 69 Eyes</div>
<a href="https://genius.com/artists/the-69-eyes">
<div style="text-align: center; font-size: 14px;">@the-69-eyes</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-69-eyes).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-69-eyes")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|168| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-69-eyes")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-69-eyes | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:46:18+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-69-eyes"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.162381 MB
HuggingArtists Model
The 69 Eyes
@the-69-eyes
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/the-avalanches"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.123553 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e22f7806a402c82b09336cb5cf79a618.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-avalanches">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Avalanches</div>
<a href="https://genius.com/artists/the-avalanches">
<div style="text-align: center; font-size: 14px;">@the-avalanches</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-avalanches).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-avalanches")
```
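For a quick look at the raw lyrics themselves — a minimal sketch assuming the dataset has been loaded as above — you can count the songs and peek at the first entry:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/the-avalanches")
print("number of songs:", len(dataset["train"]))
print(dataset["train"][0]["text"][:200])  # first 200 characters of the first lyric
```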
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|111| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-avalanches")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-avalanches | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:46:25+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-avalanches"
================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.123553 MB
HuggingArtists Model
The Avalanches
@the-avalanches
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/the-beatles"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.07072 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/df75ede64ffcf049727bfbb01d323081.400x400x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-beatles">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Beatles</div>
<a href="https://genius.com/artists/the-beatles">
<div style="text-align: center; font-size: 14px;">@the-beatles</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-beatles).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-beatles")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|878| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-beatles")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
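If the splits above are meant for fine-tuning a causal language model, they can be tokenized with the `datasets` `map` API. This is only a hedged sketch: the `gpt2` tokenizer and the 512-token cutoff are illustrative assumptions, not a statement about how any official model was trained.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative choice of tokenizer

def tokenize(batch):
    # Truncate long lyrics so every example fits the (assumed) 512-token context.
    return tokenizer(batch["text"], truncation=True, max_length=512)

# `datasets` is the DatasetDict created by the splitting code above.
tokenized = datasets.map(tokenize, batched=True, remove_columns=["text"])
print(tokenized)
```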
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-beatles | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:46:31+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-beatles"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 1.07072 MB
HuggingArtists Model
The Beatles
@the-beatles
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/the-gazette"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.121064 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9793a6d598f68414ca37eb1135e6b0c1.686x686x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-gazette">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Gazette</div>
<a href="https://genius.com/artists/the-gazette">
<div style="text-align: center; font-size: 14px;">@the-gazette</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-gazette).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-gazette")
```
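To get a rough feel for how long the lyrics are — a minimal sketch assuming the load call above — you can compute simple length statistics over the `text` field:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("huggingartists/the-gazette")
lengths = [len(text) for text in dataset["train"]["text"]]
print("songs:", len(lengths))
print(f"mean length: {np.mean(lengths):.0f} characters, longest: {max(lengths)} characters")
```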
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|98| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-gazette")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-gazette | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:47:15+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-gazette"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.121064 MB
HuggingArtists Model
The Gazette
@the-gazette
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/the-grateful-dead"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 2.732505 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/18f21c424e2f02f0c9a59c15bac56406.736x736x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-grateful-dead">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Grateful Dead</div>
<a href="https://genius.com/artists/the-grateful-dead">
<div style="text-align: center; font-size: 14px;">@the-grateful-dead</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-grateful-dead).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-grateful-dead")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|2266| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-grateful-dead")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
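To reuse the exact same partition across runs, the splits can be saved to disk. A minimal sketch, assuming the splitting code above has just been run; the `the-grateful-dead-splits` directory name is an arbitrary choice:

```python
from datasets import load_from_disk

# Save the freshly created splits so the partition can be reused later.
datasets.save_to_disk("the-grateful-dead-splits")
reloaded = load_from_disk("the-grateful-dead-splits")
print(reloaded)
```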
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-grateful-dead | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:47:22+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-grateful-dead"
===================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 2.732505 MB
HuggingArtists Model
The Grateful Dead
@the-grateful-dead
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/the-king-and-the-jester"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.189886 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/eab8847b08e686561c3593f987917434.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-king-and-the-jester">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Король и Шут (The King and the Jester)</div>
<a href="https://genius.com/artists/the-king-and-the-jester">
<div style="text-align: center; font-size: 14px;">@the-king-and-the-jester</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-king-and-the-jester).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-king-and-the-jester")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|94| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-king-and-the-jester")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
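An alternative to the manual NumPy split above is the library's built-in `train_test_split`, which also lets you shuffle with a fixed seed. This is only a sketch — the seed and the exact 90/7/3 arithmetic are illustrative assumptions:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/the-king-and-the-jester")
# Hold out 10% first, then split that holdout into 7% validation and 3% test.
first = dataset["train"].train_test_split(test_size=0.1, seed=42)
second = first["test"].train_test_split(test_size=0.3, seed=42)
train, validation, test = first["train"], second["train"], second["test"]
print(len(train), len(validation), len(test))
```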
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-king-and-the-jester | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:47:30+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-king-and-the-jester"
=========================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.189886 MB
HuggingArtists Model
Король и Шут (The King and the Jester)
@the-king-and-the-jester
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/the-notorious-big"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.676645 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/664976b54a605d6ac0df2415a8ccac16.564x564x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-notorious-big">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Notorious B.I.G.</div>
<a href="https://genius.com/artists/the-notorious-big">
<div style="text-align: center; font-size: 14px;">@the-notorious-big</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-notorious-big).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-notorious-big")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|592| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-notorious-big")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
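Before training it can help to drop near-empty entries such as skits or interludes. A minimal sketch over the splits created above; the 50-word threshold is an arbitrary assumption:

```python
# Keep only songs with more than 50 words of lyrics (threshold chosen arbitrarily).
filtered = datasets.filter(lambda example: len(example["text"].split()) > 50)
print({name: ds.num_rows for name, ds in filtered.items()})
```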
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-notorious-big | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:47:38+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-notorious-big"
===================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 1.676645 MB
HuggingArtists Model
The Notorious B.I.G.
@the-notorious-big
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/the-sugarcubes"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.077715 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/da10eeb7730741736a4f7ac4cc998c4e.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-sugarcubes">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Sugarcubes</div>
<a href="https://genius.com/artists/the-sugarcubes">
<div style="text-align: center; font-size: 14px;">@the-sugarcubes</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-sugarcubes).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-sugarcubes")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|52| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-sugarcubes")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-sugarcubes | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:47:46+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-sugarcubes"
================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.077715 MB
HuggingArtists Model
The Sugarcubes
@the-sugarcubes
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
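A minimal sketch of those few lines (mirroring the full card above, with the same 0.9 / 0.07 / 0.03 proportions) could look like this:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

# Load the single published 'train' split and keep only the lyric texts
texts = load_dataset("huggingartists/the-sugarcubes")["train"]["text"]

# Cut at 90% and 97% of the examples -> roughly 90/7/3 train/validation/test
cuts = [int(len(texts) * 0.9), int(len(texts) * 0.97)]
train, validation, test = np.split(np.array(texts, dtype=object), cuts)

datasets = DatasetDict({
    "train": Dataset.from_dict({"text": list(train)}),
    "validation": Dataset.from_dict({"text": list(validation)}),
    "test": Dataset.from_dict({"text": list(test)}),
})
# print({name: len(split) for name, split in datasets.items()})  # sanity-check sizes
```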
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
# Dataset Card for "huggingartists/the-the-pigs"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.077582 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2f1fd1b951237ad3387096f392d41fa5.720x720x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-the-pigs">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The ‘’Вепри’’ (The Pigs)</div>
<a href="https://genius.com/artists/the-the-pigs">
<div style="text-align: center; font-size: 14px;">@the-the-pigs</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-the-pigs).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-the-pigs")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|28| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-the-pigs")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-the-pigs | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:47:52+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-the-pigs"
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.077582 MB
HuggingArtists Model
The ‘’Вепри’’ (The Pigs)
@the-the-pigs
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
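A minimal sketch of those few lines (mirroring the full card above, with the same 0.9 / 0.07 / 0.03 proportions) could look like this:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

# Load the single published 'train' split and keep only the lyric texts
texts = load_dataset("huggingartists/the-the-pigs")["train"]["text"]

# Cut at 90% and 97% of the examples -> roughly 90/7/3 train/validation/test
cuts = [int(len(texts) * 0.9), int(len(texts) * 0.97)]
train, validation, test = np.split(np.array(texts, dtype=object), cuts)

datasets = DatasetDict({
    "train": Dataset.from_dict({"text": list(train)}),
    "validation": Dataset.from_dict({"text": list(validation)}),
    "test": Dataset.from_dict({"text": list(test)}),
})
# print({name: len(split) for name, split in datasets.items()})  # sanity-check sizes
```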
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
# Dataset Card for "huggingartists/the-velvet-underground"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.327672 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://s3.amazonaws.com/rapgenius/vu.jpeg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-velvet-underground">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Velvet Underground</div>
<a href="https://genius.com/artists/the-velvet-underground">
<div style="text-align: center; font-size: 14px;">@the-velvet-underground</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-velvet-underground).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-velvet-underground")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|241| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-velvet-underground")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/the-velvet-underground | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:47:58+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-velvet-underground"
========================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.327672 MB
HuggingArtists Model
The Velvet Underground
@the-velvet-underground
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
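A minimal sketch of those few lines (mirroring the full card above, with the same 0.9 / 0.07 / 0.03 proportions) could look like this:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

# Load the single published 'train' split and keep only the lyric texts
texts = load_dataset("huggingartists/the-velvet-underground")["train"]["text"]

# Cut at 90% and 97% of the examples -> roughly 90/7/3 train/validation/test
cuts = [int(len(texts) * 0.9), int(len(texts) * 0.97)]
train, validation, test = np.split(np.array(texts, dtype=object), cuts)

datasets = DatasetDict({
    "train": Dataset.from_dict({"text": list(train)}),
    "validation": Dataset.from_dict({"text": list(validation)}),
    "test": Dataset.from_dict({"text": list(test)}),
})
# print({name: len(split) for name, split in datasets.items()})  # sanity-check sizes
```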
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
# Dataset Card for "huggingartists/the-weeknd"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.849373 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f0813e600d43b8b43c94e8ba1dde880a.640x640x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-weeknd">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Weeknd</div>
<a href="https://genius.com/artists/the-weeknd">
<div style="text-align: center; font-size: 14px;">@the-weeknd</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-weeknd).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-weeknd")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|TRAIN_1.849373| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-weeknd")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
| huggingartists/the-weeknd | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:04+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/the-weeknd"
============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 1.849373 MB
HuggingArtists Model
The Weeknd
@the-weeknd
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
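A minimal sketch of those few lines (mirroring the full card above, with the same 0.9 / 0.07 / 0.03 proportions) could look like this:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

# Load the single published 'train' split and keep only the lyric texts
texts = load_dataset("huggingartists/the-weeknd")["train"]["text"]

# Cut at 90% and 97% of the examples -> roughly 90/7/3 train/validation/test
cuts = [int(len(texts) * 0.9), int(len(texts) * 0.97)]
train, validation, test = np.split(np.array(texts, dtype=object), cuts)

datasets = DatasetDict({
    "train": Dataset.from_dict({"text": list(train)}),
    "validation": Dataset.from_dict({"text": list(validation)}),
    "test": Dataset.from_dict({"text": list(test)}),
})
# print({name: len(split) for name, split in datasets.items()})  # sanity-check sizes
```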
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
| [
"### Dataset Summary\n\n\nThe Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.\nModel is available here.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nen\n\n\nHow to use\n----------\n\n\nHow to load this dataset directly with the datasets library:\n\n\nDataset Structure\n-----------------\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'text': a 'string' feature.",
"### Data Splits\n\n\n\n'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#language-English #huggingartists #lyrics #region-us \n",
"### Dataset Summary\n\n\nThe Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.\nModel is available here.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nen\n\n\nHow to use\n----------\n\n\nHow to load this dataset directly with the datasets library:\n\n\nDataset Structure\n-----------------\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'text': a 'string' feature.",
"### Data Splits\n\n\n\n'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information"
] |
b223aafe94011d352a2acc4b654a79eff9e01c54 |
# Dataset Card for "huggingartists/tiamat"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.115111 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9ca13ed308504f6f9ac7c3cabdb54138.556x556x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tiamat">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tiamat</div>
<a href="https://genius.com/artists/tiamat">
<div style="text-align: center; font-size: 14px;">@tiamat</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/tiamat).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tiamat")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|122| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tiamat")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/tiamat | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:11+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/tiamat"
========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.115111 MB
HuggingArtists Model
Tiamat
@tiamat
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
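A minimal sketch of those few lines (mirroring the full card above, with the same 0.9 / 0.07 / 0.03 proportions) could look like this:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

# Load the single published 'train' split and keep only the lyric texts
texts = load_dataset("huggingartists/tiamat")["train"]["text"]

# Cut at 90% and 97% of the examples -> roughly 90/7/3 train/validation/test
cuts = [int(len(texts) * 0.9), int(len(texts) * 0.97)]
train, validation, test = np.split(np.array(texts, dtype=object), cuts)

datasets = DatasetDict({
    "train": Dataset.from_dict({"text": list(train)}),
    "validation": Dataset.from_dict({"text": list(validation)}),
    "test": Dataset.from_dict({"text": list(test)}),
})
# print({name: len(split) for name, split in datasets.items()})  # sanity-check sizes
```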
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
# Dataset Card for "huggingartists/till-lindemann"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.275488 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/48d6ca7ca17a9dfc9ad3034e71533a89.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/till-lindemann">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Till Lindemann</div>
<a href="https://genius.com/artists/till-lindemann">
<div style="text-align: center; font-size: 14px;">@till-lindemann</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/till-lindemann).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/till-lindemann")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|257| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/till-lindemann")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/till-lindemann | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:17+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/till-lindemann"
================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.275488 MB
HuggingArtists Model
Till Lindemann
@till-lindemann
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
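A minimal sketch of those few lines (mirroring the full card above, with the same 0.9 / 0.07 / 0.03 proportions) could look like this:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

# Load the single published 'train' split and keep only the lyric texts
texts = load_dataset("huggingartists/till-lindemann")["train"]["text"]

# Cut at 90% and 97% of the examples -> roughly 90/7/3 train/validation/test
cuts = [int(len(texts) * 0.9), int(len(texts) * 0.97)]
train, validation, test = np.split(np.array(texts, dtype=object), cuts)

datasets = DatasetDict({
    "train": Dataset.from_dict({"text": list(train)}),
    "validation": Dataset.from_dict({"text": list(validation)}),
    "test": Dataset.from_dict({"text": list(test)}),
})
# print({name: len(split) for name, split in datasets.items()})  # sanity-check sizes
```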
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
# Dataset Card for "huggingartists/tom-waits"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.818237 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/505d2d5d1d43304dca446fd2e788a0f8.750x750x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tom-waits">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Waits</div>
<a href="https://genius.com/artists/tom-waits">
<div style="text-align: center; font-size: 14px;">@tom-waits</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/tom-waits).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tom-waits")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|681| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tom-waits")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/tom-waits | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:23+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/tom-waits"
===========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.818237 MB
HuggingArtists Model
Tom Waits
[@tom-waits](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
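The loading code was dropped from this plain-text rendering; the snippet from the card above is simply:
```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/tom-waits")
```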
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.083901 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/7249d6785a5c87095850bd4048595e08.989x989x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tony-raut-and-garry-topor">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Тони Раут (Tony Raut) & Гарри Топор (Garry Topor)</div>
<a href="https://genius.com/artists/tony-raut-and-garry-topor">
<div style="text-align: center; font-size: 14px;">@tony-raut-and-garry-topor</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/tony-raut-and-garry-topor).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tony-raut-and-garry-topor")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|15| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tony-raut-and-garry-topor")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
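Because this dataset only has 15 songs, the 90/7/3 percentages above leave very little evaluation data: int(15 * 0.9) = 13 and int(15 * 0.97) = 14, so the splits end up with 13/1/1 examples. A quick check, assuming the `datasets` dict built in the snippet above:
```python
# Expected output: train 13, validation 1, test 1
for name, split in datasets.items():
    print(name, len(split))
```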
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/tony-raut-and-garry-topor | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:30+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/tony-raut-and-garry-topor"
===========================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.083901 MB
HuggingArtists Model
Тони Раут (Tony Raut) & Гарри Топор (Garry Topor)
[@tony-raut-and-garry-topor](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
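The code block was stripped from this rendering; the card above loads the data with:
```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/tony-raut-and-garry-topor")
```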
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/tool"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.129846 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/acf1d51a2d729391074dc51a6dd26857.1000x1000x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tool">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tool</div>
<a href="https://genius.com/artists/tool">
<div style="text-align: center; font-size: 14px;">@tool</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/tool).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tool")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|101| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tool")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
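Before fine-tuning it can help to eyeball a training example. A small sketch, assuming the `datasets` dict from the splitting snippet above:
```python
# Print the first 200 characters of the first training lyric.
example = datasets["train"][0]["text"]
print(example[:200])
```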
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/tool | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:37+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/tool"
======================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.129846 MB
HuggingArtists Model
Tool
[@tool](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
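As in the card above, loading reduces to:
```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/tool")
```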
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/totpoc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.245029 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/ea3dc2eb7b35254ae3764df28bc02500.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/totpoc">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">totpoc</div>
<a href="https://genius.com/artists/totpoc">
<div style="text-align: center; font-size: 14px;">@totpoc</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/totpoc).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/totpoc")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|78| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/totpoc")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
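The slicing above is positional (songs keep their original scrape order), so you may want to shuffle once with a fixed seed before splitting. A sketch; the seed value is arbitrary:
```python
import numpy as np
from datasets import load_dataset

datasets = load_dataset("huggingartists/totpoc")

# Shuffle deterministically, then reuse the same percentage boundaries.
texts = datasets["train"].shuffle(seed=42)["text"]
train, validation, test = np.split(
    texts, [int(len(texts) * 0.9), int(len(texts) * 0.97)]
)
```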
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/totpoc | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:43+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/totpoc"
========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.245029 MB
HuggingArtists Model
totpoc
[@totpoc](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
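The loading snippet shown earlier in this card is simply:
```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/totpoc")
```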
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/travis-scott"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.483549 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5d19fecdb3828ca9ec89dda588e2eb7d.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/travis-scott">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Travis Scott</div>
<a href="https://genius.com/artists/travis-scott">
<div style="text-align: center; font-size: 14px;">@travis-scott</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/travis-scott).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/travis-scott")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|761| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/travis-scott")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
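If the splits are meant for fine-tuning a causal language model, they can be tokenized in one pass with `map`. A sketch that assumes a GPT-2 style tokenizer (the plain `gpt2` checkpoint here is only an example, not part of this card) and the `datasets` dict from the snippet above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = datasets.map(tokenize, batched=True, remove_columns=["text"])
```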
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/travis-scott | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:52+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/travis-scott"
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 1.483549 MB
HuggingArtists Model
Travis Scott
[@travis-scott](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
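The code block was stripped from this rendering; the card above uses:
```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/travis-scott")
```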
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/twenty-one-pilots"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.348302 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5ab9e38cf86aa170734fea1731610abc.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/twenty-one-pilots">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">twenty one pilots</div>
<a href="https://genius.com/artists/twenty-one-pilots">
<div style="text-align: center; font-size: 14px;">@twenty-one-pilots</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/twenty-one-pilots).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/twenty-one-pilots")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|197| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/twenty-one-pilots")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
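A few corpus statistics are handy when choosing a maximum sequence length; for example, over the train split produced above:
```python
import numpy as np

lengths = [len(t) for t in datasets["train"]["text"]]
print("songs:", len(lengths))
print("mean characters per song:", int(np.mean(lengths)))
print("longest song (characters):", max(lengths))
```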
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/twenty-one-pilots | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:48:59+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/twenty-one-pilots"
===================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.348302 MB
HuggingArtists Model
twenty one pilots
[@twenty-one-pilots](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
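The loading snippet from the card above is simply:
```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/twenty-one-pilots")
```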
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/tyler-the-creator"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.072102 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/80c9c64ebed6a29681aaeaebe57edf91.984x984x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tyler-the-creator">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tyler, The Creator</div>
<a href="https://genius.com/artists/tyler-the-creator">
<div style="text-align: center; font-size: 14px;">@tyler-the-creator</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/tyler-the-creator).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tyler-the-creator")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|529| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tyler-the-creator")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
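The card links a companion model fine-tuned on this data. Assuming it exposes the standard text-generation interface (HuggingArtists checkpoints are GPT-2 based), it can be sampled with the `transformers` pipeline:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="huggingartists/tyler-the-creator")
print(generator("I am", max_length=60, num_return_sequences=1)[0]["generated_text"])
```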
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/tyler-the-creator | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:07+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/tyler-the-creator"
===================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 1.072102 MB
HuggingArtists Model
Tyler, The Creator
[@tyler-the-creator](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
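As shown earlier in this card, loading reduces to:
```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/tyler-the-creator")
```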
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.

# Dataset Card for "huggingartists/upsahl"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.168635 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e0fa9b5bdd037ab75031dd7372d05cd6.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/upsahl">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">UPSAHL</div>
<a href="https://genius.com/artists/upsahl">
<div style="text-align: center; font-size: 14px;">@upsahl</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/upsahl).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/upsahl")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|107| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/upsahl")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/upsahl | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:14+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/upsahl"
========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.168635 MB
HuggingArtists Model
UPSAHL
[@upsahl](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
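For example (the same call shown in the full card above):

```python
from datasets import load_dataset

# Download the lyrics corpus for this artist from the Hugging Face Hub
dataset = load_dataset('huggingartists/upsahl')
print(dataset['train'][0]['text'][:100])  # peek at the first song
```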
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
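A sketch of one way to do that, mirroring the split code shown in the full card above (ratios are illustrative):

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset('huggingartists/upsahl')
texts = datasets['train']['text']

train_percentage = 0.9        # share of songs used for training
validation_percentage = 0.07  # share used for validation; the remainder becomes the test split

# Cut the single 'train' list into three contiguous chunks by index
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```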
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.198634 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/08ad78acc3e91c45a426390e7524d4e9.853x853x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/v-x-v-prince">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">V $ X V PRiNCE</div>
<a href="https://genius.com/artists/v-x-v-prince">
<div style="text-align: center; font-size: 14px;">@v-x-v-prince</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/v-x-v-prince).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/v-x-v-prince")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|77| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/v-x-v-prince")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/v-x-v-prince | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:21+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/v-x-v-prince"
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.198634 MB
HuggingArtists Model
V $ X V PRiNCE
[@v-x-v-prince](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
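For example (the same call shown in the full card above):

```python
from datasets import load_dataset

# Download the lyrics corpus for this artist from the Hugging Face Hub
dataset = load_dataset('huggingartists/v-x-v-prince')
print(dataset['train'][0]['text'][:100])  # peek at the first song
```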
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
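A sketch of one way to do that, mirroring the split code shown in the full card above (ratios are illustrative):

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset('huggingartists/v-x-v-prince')
texts = datasets['train']['text']

train_percentage = 0.9        # share of songs used for training
validation_percentage = 0.07  # share used for validation; the remainder becomes the test split

# Cut the single 'train' list into three contiguous chunks by index
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```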
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.062718 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2f97270cc1d1420867052a6c331d5820.1000x667x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/van-morrison">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Van Morrison</div>
<a href="https://genius.com/artists/van-morrison">
<div style="text-align: center; font-size: 14px;">@van-morrison</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/van-morrison).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/van-morrison")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|929| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/van-morrison")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/van-morrison | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:27+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/van-morrison"
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 1.062718 MB
HuggingArtists Model
Van Morrison
[@van-morrison](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
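For example (the same call shown in the full card above):

```python
from datasets import load_dataset

# Download the lyrics corpus for this artist from the Hugging Face Hub
dataset = load_dataset('huggingartists/van-morrison')
print(dataset['train'][0]['text'][:100])  # peek at the first song
```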
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
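A sketch of one way to do that, mirroring the split code shown in the full card above (ratios are illustrative):

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset('huggingartists/van-morrison')
texts = datasets['train']['text']

train_percentage = 0.9        # share of songs used for training
validation_percentage = 0.07  # share used for validation; the remainder becomes the test split

# Cut the single 'train' list into three contiguous chunks by index
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```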
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.220878 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d14c9e27b39f0e250784a2dce037a03d.720x720x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/veggietales">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">VeggieTales</div>
<a href="https://genius.com/artists/veggietales">
<div style="text-align: center; font-size: 14px;">@veggietales</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/veggietales).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/veggietales")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|163| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/veggietales")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/veggietales | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:47+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/veggietales"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.220878 MB
HuggingArtists Model
VeggieTales
[@veggietales](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
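For example (the same call shown in the full card above):

```python
from datasets import load_dataset

# Download the lyrics corpus for this artist from the Hugging Face Hub
dataset = load_dataset('huggingartists/veggietales')
print(dataset['train'][0]['text'][:100])  # peek at the first song
```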
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
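A sketch of one way to do that, mirroring the split code shown in the full card above (ratios are illustrative):

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset('huggingartists/veggietales')
texts = datasets['train']['text']

train_percentage = 0.9        # share of songs used for training
validation_percentage = 0.07  # share used for validation; the remainder becomes the test split

# Cut the single 'train' list into three contiguous chunks by index
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```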
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.189002 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f9d03b2a6c45897724e74fab6a1aa86c.500x500x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/viktor-tsoi">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Виктор Цой (Viktor Tsoi)</div>
<a href="https://genius.com/artists/viktor-tsoi">
<div style="text-align: center; font-size: 14px;">@viktor-tsoi</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/viktor-tsoi).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/viktor-tsoi")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|118| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/viktor-tsoi")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/viktor-tsoi | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:55+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/viktor-tsoi"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.189002 MB
HuggingArtists Model
Виктор Цой (Viktor Tsoi)
[@viktor-tsoi](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
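For example (the same call shown in the full card above):

```python
from datasets import load_dataset

# Download the lyrics corpus for this artist from the Hugging Face Hub
dataset = load_dataset('huggingartists/viktor-tsoi')
print(dataset['train'][0]['text'][:100])  # peek at the first song
```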
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
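A sketch of one way to do that, mirroring the split code shown in the full card above (ratios are illustrative):

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset('huggingartists/viktor-tsoi')
texts = datasets['train']['text']

train_percentage = 0.9        # share of songs used for training
validation_percentage = 0.07  # share used for validation; the remainder becomes the test split

# Cut the single 'train' list into three contiguous chunks by index
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```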
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.124261 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/18735fe10bace7b3f615b2da9c95ac73.938x938x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/vladimir-vysotsky">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Владимир Высоцкий (Vladimir Vysotsky)</div>
<a href="https://genius.com/artists/vladimir-vysotsky">
<div style="text-align: center; font-size: 14px;">@vladimir-vysotsky</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/vladimir-vysotsky).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/vladimir-vysotsky")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|47| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/vladimir-vysotsky")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/vladimir-vysotsky | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:03+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/vladimir-vysotsky"
===================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.124261 MB
HuggingArtists Model
Владимир Высоцкий (Vladimir Vysotsky)
[@vladimir-vysotsky](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
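For example (the same call shown in the full card above):

```python
from datasets import load_dataset

# Download the lyrics corpus for this artist from the Hugging Face Hub
dataset = load_dataset('huggingartists/vladimir-vysotsky')
print(dataset['train'][0]['text'][:100])  # peek at the first song
```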
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
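A sketch of one way to do that, mirroring the split code shown in the full card above (ratios are illustrative):

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset('huggingartists/vladimir-vysotsky')
texts = datasets['train']['text']

train_percentage = 0.9        # share of songs used for training
validation_percentage = 0.07  # share used for validation; the remainder becomes the test split

# Cut the single 'train' list into three contiguous chunks by index
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
```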
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.957186 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f72572986d8187cf35f0fc9f9d06afb2.900x900x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/xxxtentacion">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">XXXTENTACION</div>
<a href="https://genius.com/artists/xxxtentacion">
<div style="text-align: center; font-size: 14px;">@xxxtentacion</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/xxxtentacion).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/xxxtentacion")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|784| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/xxxtentacion")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/xxxtentacion | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:12+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/xxxtentacion"
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.957186 MB
HuggingArtists Model
XXXTENTACION
[@xxxtentacion](URL
<div style=)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 4.254273 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b08755976e2dcad78a75ee47059adcbc.777x777x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/young-thug">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Young Thug</div>
<a href="https://genius.com/artists/young-thug">
<div style="text-align: center; font-size: 14px;">@young-thug</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/young-thug).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/young-thug")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|1656| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/young-thug")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/young-thug | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:19+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/young-thug"
============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 4.254273 MB
HuggingArtists Model
Young Thug
[@young-thug](URL
<div style=)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\n\nFor more details, visit the project repository.\n\n\n\n\n\nFor more details, visit the project repository.\n\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.441891 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/8c898f8c39dbd271b3ccfd5303d423c7.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/yung-lean">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Yung Lean</div>
<a href="https://genius.com/artists/yung-lean">
<div style="text-align: center; font-size: 14px;">@yung-lean</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/yung-lean).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/yung-lean")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|269| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/yung-lean")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/yung-lean | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:26+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/yung-lean"
===========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.441891 MB
HuggingArtists Model
Yung Lean
[@yung-lean](URL
<div style=)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\n\nFor more details, visit the project repository.\n\n\n\n\n\nFor more details, visit the project repository.\n\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.109415 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/6c0f8e02f467c694379f242ea2897efd.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/yung-plague">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Yung Plague</div>
<a href="https://genius.com/artists/yung-plague">
<div style="text-align: center; font-size: 14px;">@yung-plague</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/yung-plague).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/yung-plague")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|38| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/yung-plague")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/yung-plague | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:33+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/yung-plague"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.109415 MB
HuggingArtists Model
Yung Plague
[@yung-plague](URL
<div style=)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\n\nFor more details, visit the project repository.\n\n\n\n\n\nFor more details, visit the project repository.\n\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.226796 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/df440220b2dd0a34a119db791da90e59.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/zemfira">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Земфира (Zemfira)</div>
<a href="https://genius.com/artists/zemfira">
<div style="text-align: center; font-size: 14px;">@zemfira</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/zemfira).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/zemfira")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|165| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/zemfira")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/zemfira | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:39+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/zemfira"
=========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.226796 MB
HuggingArtists Model
Земфира (Zemfira)
[@zemfira](URL
<div style=)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\n\nFor more details, visit the project repository.\n\n\n\n\n\nFor more details, visit the project repository.\n\n\n for several datasets.
Current datasets include:
- ImageNet-1k
- ImageNet-22k (also called ImageNet-21k as there are 21,843 classes)
- COCO detection 2017
- COCO panoptic 2017
- ADE20k (actually, the [MIT Scene Parsing benchmark](http://sceneparsing.csail.mit.edu/), which is a subset of ADE20k)
- Cityscapes
- VQAv2
- Kinetics-700
- RVL-CDIP
- PASCAL VOC
- Kinetics-400
- ...
You can read in a label file as follows (using the `huggingface_hub` library):
```
from huggingface_hub import hf_hub_download
import json
repo_id = "huggingface/label-files"
filename = "imagenet-22k-id2label.json"
id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r"))
id2label = {int(k):v for k,v in id2label.items()}
```
To add an `id2label` mapping for a new dataset, simply define a Python dictionary, and then save that dictionary as a JSON file, like so:
```
import json
# simple example
id2label = {0: 'cat', 1: 'dog'}
with open('cats-and-dogs-id2label.json', 'w') as fp:
json.dump(id2label, fp)
```
You can then upload it to this repository (assuming you have write access).
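For example, the upload could be done with the `huggingface_hub` client as in the sketch below (the file name matches the example above, and write access to the repository is required):
```
from huggingface_hub import upload_file

# Upload the JSON file created above to this dataset repository.
upload_file(
    path_or_fileobj="cats-and-dogs-id2label.json",
    path_in_repo="cats-and-dogs-id2label.json",
    repo_id="huggingface/label-files",
    repo_type="dataset",
)
```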
| huggingface/label-files | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2023-03-15T06:51:19+00:00 | [] | [] | TAGS
#region-us
| This repository contains the mapping from integer id's to actual label names (in HuggingFace Transformers typically called 'id2label') for several datasets.
Current datasets include:
- ImageNet-1k
- ImageNet-22k (also called ImageNet-21k as there are 21,843 classes)
- COCO detection 2017
- COCO panoptic 2017
- ADE20k (actually, the MIT Scene Parsing benchmark, which is a subset of ADE20k)
- Cityscapes
- VQAv2
- Kinetics-700
- RVL-CDIP
- PASCAL VOC
- Kinetics-400
- ...
You can read in a label file as follows (using the 'huggingface_hub' library):
To add an 'id2label' mapping for a new dataset, simply define a Python dictionary, and then save that dictionary as a JSON file, like so:
You can then upload it to this repository (assuming you have write access).
| [] | [
"TAGS\n#region-us \n"
] |
cba9be1dee92bc1e663bae387587859d02435cdf | d2 | huyongquan/d2 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-10-19T10:37:27+00:00 | [] | [] | TAGS
#region-us
| d2 | [] | [
"TAGS\n#region-us \n"
] |
7e9a0fb84fd6c61d81fab5718bdb235f93625600 | This is the same dataset as the question_generator dataset but with the context removed and the question and answer in separate fields. This is intended to be used with the [question_generator](https://github.com/AMontgomerie/question_generator) repo to train the qa_evaluator model which predicts whether a question and answer pair makes sense. | iarfmoose/qa_evaluator | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-29T05:20:10+00:00 | [] | [] | TAGS
#region-us
| This is the same dataset as the question_generator dataset but with the context removed and the question and answer in separate fields. This is intended to be used with the question_generator repo to train the qa_evaluator model which predicts whether a question and answer pair makes sense. | [] | [
"TAGS\n#region-us \n"
] |
107f93838cc2fe938b5cc5d21d70f0e288040c60 | This dataset is made up of data taken from SQuAD v2.0, RACE, CoQA, and MSMARCO. Some examples have been filtered out of the original datasets and others have been modified.
There are two fields: question and text. The question field contains the question, and the text field contains both the answer and the context in the following format:
"\<answer> (answer text) \<context> (context text)"
The <answer> and <context> are included as special tokens in the question generator's tokenizer.
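For illustration, one record in this format could be assembled as in the sketch below (the question, answer, and context strings here are made up for the example, not taken from the dataset):
```python
# Hypothetical strings, for illustration only.
question = "Who wrote the novel?"
answer = "Jane Austen"
context = "The novel was written by Jane Austen and published in 1813."

record = {
    "question": question,
    "text": f"<answer> {answer} <context> {context}",
}
print(record["text"])
# <answer> Jane Austen <context> The novel was written by Jane Austen and published in 1813.
```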
This dataset is intended to be used with the [question_generator repo](https://github.com/AMontgomerie/question_generator) to train the question generator model.
| iarfmoose/question_generator | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-29T05:22:03+00:00 | [] | [] | TAGS
#region-us
| This dataset is made up of data taken from SQuAD v2.0, RACE, CoQA, and MSMARCO. Some examples have been filtered out of the original datasets and others have been modified.
There are two fields: question and text. The question field contains the question, and the text field contains both the answer and the context in the following format:
"\<answer> (answer text) \<context> (context text)"
The <answer> and <context> are included as special tokens in the question generator's tokenizer.
This dataset is intended to be used with the question_generator repo to train the question generator model.
| [] | [
"TAGS\n#region-us \n"
] |
c9ab866576b08dd92819e413fc0b3853757da304 | # The Unsplash Dataset

The Unsplash Dataset is built from contributions by over 250,000 global photographers and from data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning.
The Unsplash Dataset is offered in two versions:
- the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches
- the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches
As the Unsplash library continues to grow, we’ll release updates to the dataset with new fields and new images, with each subsequent release being [semantically versioned](https://semver.org/).
We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can [open an issue](https://github.com/unsplash/datasets/issues/new/choose) to report a problem or to let us know what you would like to see in the next release of the datasets.
For more on the Unsplash Dataset, see [our announcement](https://unsplash.com/blog/the-unsplash-dataset/) and [site](https://unsplash.com/data).
## Download
### Lite Dataset
The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md).
[⬇️ Download the Lite dataset](https://unsplash.com/data/lite/latest) [~650MB compressed, ~1.4GB raw]
### Full Dataset
The Full dataset is available for non-commercial usage and all uses must abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). To access, please go to [unsplash.com/data](https://unsplash.com/data) and request access. The dataset weighs ~20 GB compressed (~43 GB raw).
## Documentation
See the [documentation for a complete list of tables and fields](https://github.com/unsplash/datasets/blob/master/DOCS.md).
## Usage
You can follow these examples to load the dataset in these common formats:
- [Load the dataset in a PostgreSQL database](https://github.com/unsplash/datasets/tree/master/how-to/psql)
- [Load the dataset in a Python environment](https://github.com/unsplash/datasets/tree/master/how-to/python)
- [Submit an example doc](https://github.com/unsplash/datasets/blob/master/how-to/README.md#submit-an-example)
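For quick exploration in Python without a database, the TSV files can also be read directly with pandas. A minimal sketch follows; the file names are an assumption based on the Lite archive layout, where each table ships as one or more shards such as `photos.tsv000`, `photos.tsv001`, and so on:
```python
import glob
import pandas as pd

def load_table(name, path="."):
    """Concatenate all shards of one table, e.g. photos.tsv000, photos.tsv001, ..."""
    shards = sorted(glob.glob(f"{path}/{name}.tsv*"))
    frames = [pd.read_csv(shard, sep="\t", header=0) for shard in shards]
    return pd.concat(frames, ignore_index=True)

photos = load_table("photos")
print(photos.shape)
print(list(photos.columns[:5]))
```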
## Share your work
We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.
We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at [[email protected]](mailto:[email protected]).
If you're using the dataset in a research paper, you can attribute the dataset as `Unsplash Lite Dataset 1.2.0` or `Unsplash Full Dataset 1.2.0` and link to the permalink [`unsplash.com/data`](https://unsplash.com/data).
----
The Unsplash Dataset is made available for research purposes. [It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md). To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers).

| image-search-2/unsplash_lite_image_dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-19T12:44:46+00:00 | [] | [] | TAGS
#region-us
| # The Unsplash Dataset
).
## Documentation
See the documentation for a complete list of tables and fields.
## Usage
You can follow these examples to load the dataset in these common formats:
- Load the dataset in a PostgreSQL database
- Load the dataset in a Python environment
- Submit an example doc
## Share your work
We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.
We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at data@URL.
If you're using the dataset in a research paper, you can attribute the dataset as 'Unsplash Lite Dataset 1.2.0' or 'Unsplash Full Dataset 1.2.0' and link to the permalink 'URL
----
The Unsplash Dataset is made available for research purposes. It cannot be used to redistribute the images contained within. To use the Unsplash library in a product, see the Unsplash API.
).",
"## Documentation\n\nSee the documentation for a complete list of tables and fields.",
"## Usage\n\nYou can follow these examples to load the dataset in these common formats:\n\n- Load the dataset in a PostgreSQL database\n- Load the dataset in a Python environment\n- Submit an example doc",
"## Share your work\n\nWe're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.\n\nWe'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at data@URL.\n\nIf you're using the dataset in a research paper, you can attribute the dataset as 'Unsplash Lite Dataset 1.2.0' or 'Unsplash Full Dataset 1.2.0' and link to the permalink 'URL\n\n----\n\nThe Unsplash Dataset is made available for research purposes. It cannot be used to redistribute the images contained within. To use the Unsplash library in a product, see the Unsplash API.\n\n).",
"## Documentation\n\nSee the documentation for a complete list of tables and fields.",
"## Usage\n\nYou can follow these examples to load the dataset in these common formats:\n\n- Load the dataset in a PostgreSQL database\n- Load the dataset in a Python environment\n- Submit an example doc",
"## Share your work\n\nWe're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.\n\nWe'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at data@URL.\n\nIf you're using the dataset in a research paper, you can attribute the dataset as 'Unsplash Lite Dataset 1.2.0' or 'Unsplash Full Dataset 1.2.0' and link to the permalink 'URL\n\n----\n\nThe Unsplash Dataset is made available for research purposes. It cannot be used to redistribute the images contained within. To use the Unsplash library in a product, see the Unsplash API.\n\n
# BinhVQ dedup
Important: Please install `lm_dataformat` by `pip install lm_dataformat` before using this dataset
## How to use
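The original loading snippet is not preserved in this copy of the card. A minimal sketch of one way to stream records with `lm_dataformat` is shown below; the archive path is a placeholder and it assumes the data files are packaged as `lm_dataformat`-compatible archives:
```python
from lm_dataformat import Reader

# Placeholder path to a downloaded archive from this repository.
reader = Reader("data/train.jsonl.zst")

# stream_data() yields one document (a single string) at a time.
for i, doc in enumerate(reader.stream_data()):
    print(doc[:200])  # each record is title + "\n" + sapo + "\n" + content
    if i == 2:
        break
```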
## Dataset information
This dataset was created from the `https://github.com/binhvq/news-corpus` dump dated 21/05/2021. I applied some simple preprocessing:
- Using BeautifulSoup to clean the content
- Each record is a concatenation of (title + "\n" + sapo + "\n" + content)
- Then shuffling, a train/validation split, and deduplication (exact match using sha256) were performed | imthanhlv/binhvq_dedup | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-01T16:42:00+00:00 | [] | [] | TAGS
#region-us
| # BinhVQ dedup
Important: Please install 'lm_dataformat' by 'pip install lm_dataformat' before using this dataset
## How to use
## Dataset information
This dataset was created from 'URL dump with date 21/05/2021. I applied some simple preprocessing:
- Using BeautifulSoup to clean content
- Each record is concatenate of (title + "\n" + sapo + "\n" + content)
- Then perform shuffling + split train & validation + deduplicate (exact match using sha256) | [
"# BinhVQ dedup\n\nImportant: Please install 'lm_dataformat' by 'pip install lm_dataformat' before using this dataset",
"## How to use",
"## Dataset information\n\nThis dataset was created from 'URL dump with date 21/05/2021. I applied some simple preprocessing:\n- Using BeautifulSoup to clean content\n- Each record is concatenate of (title + \"\\n\" + sapo + \"\\n\" + content)\n- Then perform shuffling + split train & validation + deduplicate (exact match using sha256)"
] | [
"TAGS\n#region-us \n",
"# BinhVQ dedup\n\nImportant: Please install 'lm_dataformat' by 'pip install lm_dataformat' before using this dataset",
"## How to use",
"## Dataset information\n\nThis dataset was created from 'URL dump with date 21/05/2021. I applied some simple preprocessing:\n- Using BeautifulSoup to clean content\n- Each record is concatenate of (title + \"\\n\" + sapo + \"\\n\" + content)\n- Then perform shuffling + split train & validation + deduplicate (exact match using sha256)"
] |
6b85e353e04d9235d004a7fc2b3357e7f46217bd |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/89efd3a0fa3ead3f0b8e432e8796697a738d4561b24ff91f4fb2cc25d86e9fb0/train/ccef55189b7843d49110228cb0a71bfa115.wav',
'array': array([-0.01217651, -0.04351807, -0.06278992, ..., -0.00018311,
-0.00146484, -0.00349426]),
'sampling_rate': 16000},
'sentence': 'מצד אחד ובתנועה הציונית הצעירה'}
```
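Records like the one above can be loaded directly with the `datasets` library; a minimal sketch (the audio column is decoded lazily on access):
```python
from datasets import load_dataset

dataset = load_dataset("imvladikon/hebrew_speech_coursera", split="train")

sample = dataset[0]
print(sample["sentence"])                # transcription text
print(sample["audio"]["sampling_rate"])  # 16000
```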
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 20306 | 5076 |
| hours | 28.88 | 7.23 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_coursera,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Coursera},
year = {2022},
howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera}},
}
```
### Contributions
[More Information Needed] | imvladikon/hebrew_speech_coursera | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:he",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["he"], "size_categories": ["1K<n<10K"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6670706136.352, "num_examples": 20306}, {"name": "validation", "num_bytes": 1648062261.28, "num_examples": 5076}], "download_size": 7726933856, "dataset_size": 8318768397.632}} | 2023-05-05T08:05:00+00:00 | [] | [
"he"
] | TAGS
#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-Hebrew #region-us
| Dataset Card for Dataset Name
=============================
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
Dataset Structure
-----------------
### Data Instances
### Data Fields
### Data Splits
train: number of samples, validation: 20306
train: hours, validation: 28.88
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\ntrain: number of samples, validation: 20306\ntrain: hours, validation: 28.88\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-Hebrew #region-us \n",
"### Dataset Summary\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\ntrain: number of samples, validation: 20306\ntrain: hours, validation: 28.88\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
e0e3988bc3c78be1f697b21c8feb5b49d55d9faa |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Hebrew Dataset for ASR
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/8ce7402f6482c6053251d7f3000eec88668c994beb48b7ca7352e77ef810a0b6/train/e429593fede945c185897e378a5839f4198.wav',
'array': array([-0.00265503, -0.0018158 , -0.00149536, ..., -0.00135803,
-0.00231934, -0.00190735]),
'sampling_rate': 16000},
'sentence': 'היא מבינה אותי יותר מכל אחד אחר'}
```
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 8000 | 2000 |
| hours | 6.92 | 1.73 |
## Dataset Creation
### Curation Rationale
Data scraped from YouTube (the כאן channel), with outliers removed (based on length and on the ratio between audio length and sentence length)
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_kan,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Kan},
year = {2022},
howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_kan}},
}
```
### Contributions
[More Information Needed] | imvladikon/hebrew_speech_kan | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:he",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["he"], "size_categories": ["1K<n<10K"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1569850175.0, "num_examples": 8000}, {"name": "validation", "num_bytes": 394275049.0, "num_examples": 2000}], "download_size": 1989406585, "dataset_size": 1964125224.0}} | 2023-05-05T08:12:15+00:00 | [] | [
"he"
] | TAGS
#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-Hebrew #region-us
| Dataset Card for Dataset Name
=============================
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
Hebrew Dataset for ASR
### Supported Tasks and Leaderboards
### Languages
Dataset Structure
-----------------
### Data Instances
### Data Fields
### Data Splits
train: number of samples, validation: 8000
train: hours, validation: 6.92
Dataset Creation
----------------
### Curation Rationale
scraped data from youtube (channel כאן) with removing outliers (by length and ratio between length of the audio and sentences)
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nHebrew Dataset for ASR",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\ntrain: number of samples, validation: 8000\ntrain: hours, validation: 6.92\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nscraped data from youtube (channel כאן) with removing outliers (by length and ratio between length of the audio and sentences)",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-Hebrew #region-us \n",
"### Dataset Summary\n\n\nHebrew Dataset for ASR",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\ntrain: number of samples, validation: 8000\ntrain: hours, validation: 6.92\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nscraped data from youtube (channel כאן) with removing outliers (by length and ratio between length of the audio and sentences)",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
3319e7f6e629f7f2dfaa381ef318b95b96399af4 | # Dataset Card
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://zenodo.org/record/2707356](https://zenodo.org/record/2707356)
- **Repository:** [https://github.com/NLPH/knesset-2004-2005](https://github.com/NLPH/knesset-2004-2005)
- **Paper:**
- **Point of Contact:**
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
An example of a sample:
```
{
"text": <text content of given document>,
"path": <file path to docx>
}
```
Dataset usage
Available configurations: "kneset16", "kneset17" and "knesset_tagged"; each has only a train split.
```python
train_ds = load_dataset("imvladikon/knesset_meetings_corpus", "kneset16", split="train")
```
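Each sample exposes the `text` and `path` fields described above; continuing from the snippet above (a minimal sketch):
```python
sample = train_ds[0]
print(sample["path"])        # path to the source docx file
print(sample["text"][:200])  # beginning of the document text
```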
The Knesset Meetings Corpus 2004-2005 is made up of two components:
* Raw texts - 282 files made up of 867,725 lines together. These can be downloaded in two formats:
* As ``doc`` files, encoded using ``windows-1255`` encoding:
* ``kneset16.zip`` - Contains 164 text files made up of 543,228 lines together. `[MILA host] <http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/docs/kneset16.zip>`_ `[Github Mirror] <https://github.com/NLPH/knesset-2004-2005/blob/master/kneset16.zip?raw=true>`_
* ``kneset17.zip`` - Contains 118 text files made up of 324,497 lines together. `[MILA host] <http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/docs/kneset17.zip>`_ `[Github Mirror] <https://github.com/NLPH/knesset-2004-2005/blob/master/kneset17.zip?raw=true>`_
* As ``txt`` files, encoded using ``utf8`` encoding:
* ``kneset.tar.gz`` - An archive of all the raw text files, divided into two folders: `[Github mirror] <https://github.com/NLPH/knesset-2004-2005/blob/master/kneset.tar.gz>`_
* ``16`` - Contains 164 text files made up of 543,228 lines together.
* ``17`` - Contains 118 text files made up of 324,497 lines together.
* ``knesset_txt_16.tar.gz``- Contains 164 text files made up of 543,228 lines together. `[MILA host] <http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/utf8/knesset_txt_16.tar.gz>`_ `[Github Mirror] <https://github.com/NLPH/knesset-2004-2005/blob/master/knesset_txt_16.tar.gz?raw=true>`_
* ``knesset_txt_17.zip`` - Contains 118 text files made up of 324,497 lines together. `[MILA host] <http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/utf8/knesset_txt_17.zip>`_ `[Github Mirror] <https://github.com/NLPH/knesset-2004-2005/blob/master/knesset_txt_17.zip?raw=true>`_
* Tokenized and morphologically tagged texts - Tagged versions exist only for the files in the ``16`` folder. The texts are encoded using `MILA's XML schema for corpora <http://www.mila.cs.technion.ac.il/eng/resources_standards.html>`_. These can be downloaded in two ways:
* ``knesset_tagged_16.tar.gz`` - An archive of all tokenized and tagged files. `[MILA host] <http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/tagged/knesset_tagged_16.tar.gz>`_ `[Archive.org mirror] <https://archive.org/details/knesset_transcripts_2004_2005>`_
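For readers who prefer the raw files over the `datasets` loader, the following is a minimal sketch for the utf-8 text archive described in the list above; the local filename and the assumption that the two folders contain plain-text files are taken from that description:
```python
import tarfile
from pathlib import Path

# extract the raw utf-8 archive (kneset.tar.gz) into a local folder
with tarfile.open("kneset.tar.gz", "r:gz") as tar:
    tar.extractall("kneset")

# the archive is described as holding two folders, 16 and 17, of raw text files
for path in sorted(Path("kneset/16").iterdir()):
    if path.is_file():
        text = path.read_text(encoding="utf-8")
        # ... process one protocol here
```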
Mirrors
-------
This repository is a mirror of this dataset `found on MILA's website <http://www.mila.cs.technion.ac.il/eng/resources_corpora_haknesset.html>`_.
Zenodo mirror: `https://zenodo.org/record/2707356 <https://zenodo.org/record/2707356>`_
License
-------
All Knesset meeting protocols are in the `public domain <https://en.wikipedia.org/wiki/Public_domain>`_ (`רשות הציבור <https://he.wikipedia.org/wiki/%D7%A8%D7%A9%D7%95%D7%AA_%D7%94%D7%A6%D7%99%D7%91%D7%95%D7%A8>`_) by law. These files are thus in the public domain and do not require any license or public domain dedication to set their status.
.. |DOI| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.2707356.svg
:target: https://doi.org/10.5281/zenodo.2707356
.. |LICENCE| image:: https://github.com/NLPH/knesset-2004-2005/blob/master/public_domain_shield.svg
:target: https://en.wikipedia.org/wiki/Public_domain
.. |PUBDOM| image:: https://github.com/NLPH/knesset-2004-2005/blob/master/public_domain.png
:target: https://en.wikipedia.org/wiki/Public_domain
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is available under the [ Open Data Commons Public Domain Dedication & License 1.0](https://opendatacommons.org/licenses/pddl/).
### Citation Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Contributions
| imvladikon/knesset_meetings_corpus | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:he",
"license:pddl",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["he"], "license": ["pddl"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Knesset Meetings Corpus"} | 2022-10-23T10:45:02+00:00 | [] | [
"he"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Hebrew #license-pddl #region-us
| # Dataset Card
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
- Size of downloaded dataset files:
- Size of the generated dataset:
- Total amount of disk used:
### Dataset Summary
An example of a sample:
Dataset usage
Available "kneset16","kneset17","knesset_tagged" configurations
And only train set.
The Knesset Meetings Corpus 2004-2005 is made up of two components:
* Raw texts - 282 files made up of 867,725 lines together. These can be downloaded in two formats:
* As ''doc'' files, encoded using ''windows-1255'' encoding:
* ''URL'' - Contains 164 text files made up of 543,228 lines together. '[MILA host] <URL '[Github Mirror] <URL
* ''URL'' - Contains 118 text files made up of 324,497 lines together. '[MILA host] <URL '[Github Mirror] <URL
* As ''txt'' files, encoded using ''utf8'' encoding:
* ''URL'' - An archive of all the raw text files, divided into two folders: '[Github mirror] <URL
* ''16'' - Contains 164 text files made up of 543,228 lines together.
* ''17'' - Contains 118 text files made up of 324,497 lines together.
* ''knesset_txt_16.URL''- Contains 164 text files made up of 543,228 lines together. '[MILA host] <URL '[Github Mirror] <URL
* ''knesset_txt_17.zip'' - Contains 118 text files made up of 324,497 lines together. '[MILA host] <URL '[Github Mirror] <URL
* Tokenized and morphologically tagged texts - Tagged versions exist only for the files in the ''16'' folder. The texts are encoded using 'MILA's XML schema for corpora <URL These can be downloaded in two ways:
* ''knesset_tagged_16.URL'' - An archive of all tokenized and tagged files. '[MILA host] <URL '[URL mirror] <URL
Mirrors
-------
This repository is a mirror of this dataset 'found on MILA's website <URL
Zenodo mirror: 'URL <URL
License
-------
All Knesset meeting protocols are in the 'public domain <URL ('רשות הציבור <URL by law. These files are thus in the public domain and do not require any license or public domain dedication to set their status.
.. |DOI| image:: URL
:target: URL
.. |LICENCE| image:: URL
:target: URL
.. |PUBDOM| image:: URL
:target: URL
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
The dataset is available under the Open Data Commons Public Domain Dedication & License 1.0.
### Contributions
| [
"# Dataset Card",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact: \n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used:",
"### Dataset Summary\n\nAn example of a sample:\n\n\nDataset usage\nAvailable \"kneset16\",\"kneset17\",\"knesset_tagged\" configurations\nAnd only train set.\n\n\n\nThe Knesset Meetings Corpus 2004-2005 is made up of two components:\n\n* Raw texts - 282 files made up of 867,725 lines together. These can be downloaded in two formats:\n\n * As ''doc'' files, encoded using ''windows-1255'' encoding:\n\n * ''URL'' - Contains 164 text files made up of 543,228 lines together. '[MILA host] <URL '[Github Mirror] <URL\n \n * ''URL'' - Contains 118 text files made up of 324,497 lines together. '[MILA host] <URL '[Github Mirror] <URL\n \n * As ''txt'' files, encoded using ''utf8'' encoding:\n\n * ''URL'' - An archive of all the raw text files, divided into two folders: '[Github mirror] <URL\n\n * ''16'' - Contains 164 text files made up of 543,228 lines together.\n \n * ''17'' - Contains 118 text files made up of 324,497 lines together.\n \n * ''knesset_txt_16.URL''- Contains 164 text files made up of 543,228 lines together. '[MILA host] <URL '[Github Mirror] <URL\n \n * ''knesset_txt_17.zip'' - Contains 118 text files made up of 324,497 lines together. '[MILA host] <URL '[Github Mirror] <URL\n \n* Tokenized and morphologically tagged texts - Tagged versions exist only for the files in the ''16'' folder. The texts are encoded using 'MILA's XML schema for corpora <URL These can be downloaded in two ways:\n\n * ''knesset_tagged_16.URL'' - An archive of all tokenized and tagged files. '[MILA host] <URL '[URL mirror] <URL\n \n \nMirrors\n-------\n\nThis repository is a mirror of this dataset 'found on MILA's website <URL\n\nZenodo mirror: 'URL <URL\n \n \nLicense\n-------\n\nAll Knesset meeting protocols are in the 'public domain <URL ('רשות הציבור <URL by law. These files are thus in the public doamin and do not require any license or public domain dedication to set their status.\n\n.. |DOI| image:: URL\n :target: URL\n\n.. |LICENCE| image:: URL\n :target: URL\n\n.. |PUBDOM| image:: URL\n :target: URL",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\nThe dataset is available under the Open Data Commons Public Domain Dedication & License 1.0.",
"### Contributions"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Hebrew #license-pddl #region-us \n",
"# Dataset Card",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact: \n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used:",
"### Dataset Summary\n\nAn example of a sample:\n\n\nDataset usage\nAvailable \"kneset16\",\"kneset17\",\"knesset_tagged\" configurations\nAnd only train set.\n\n\n\nThe Knesset Meetings Corpus 2004-2005 is made up of two components:\n\n* Raw texts - 282 files made up of 867,725 lines together. These can be downloaded in two formats:\n\n * As ''doc'' files, encoded using ''windows-1255'' encoding:\n\n * ''URL'' - Contains 164 text files made up of 543,228 lines together. '[MILA host] <URL '[Github Mirror] <URL\n \n * ''URL'' - Contains 118 text files made up of 324,497 lines together. '[MILA host] <URL '[Github Mirror] <URL\n \n * As ''txt'' files, encoded using ''utf8'' encoding:\n\n * ''URL'' - An archive of all the raw text files, divided into two folders: '[Github mirror] <URL\n\n * ''16'' - Contains 164 text files made up of 543,228 lines together.\n \n * ''17'' - Contains 118 text files made up of 324,497 lines together.\n \n * ''knesset_txt_16.URL''- Contains 164 text files made up of 543,228 lines together. '[MILA host] <URL '[Github Mirror] <URL\n \n * ''knesset_txt_17.zip'' - Contains 118 text files made up of 324,497 lines together. '[MILA host] <URL '[Github Mirror] <URL\n \n* Tokenized and morphologically tagged texts - Tagged versions exist only for the files in the ''16'' folder. The texts are encoded using 'MILA's XML schema for corpora <URL These can be downloaded in two ways:\n\n * ''knesset_tagged_16.URL'' - An archive of all tokenized and tagged files. '[MILA host] <URL '[URL mirror] <URL\n \n \nMirrors\n-------\n\nThis repository is a mirror of this dataset 'found on MILA's website <URL\n\nZenodo mirror: 'URL <URL\n \n \nLicense\n-------\n\nAll Knesset meeting protocols are in the 'public domain <URL ('רשות הציבור <URL by law. These files are thus in the public doamin and do not require any license or public domain dedication to set their status.\n\n.. |DOI| image:: URL\n :target: URL\n\n.. |LICENCE| image:: URL\n :target: URL\n\n.. |PUBDOM| image:: URL\n :target: URL",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\nThe dataset is available under the Open Data Commons Public Domain Dedication & License 1.0.",
"### Contributions"
] |
f2a2a1344cd41ec9574181b324f4d800061cb05a |
# Dataset of Indonesian Online Newspaper
This is a copy of the dataset created by **Feryandi Nurdiantoro** (<https://github.com/feryandi/Dataset-Artikel>). The original dataset in json format is stored uncompressed in Google Drive in more than 500K files, one file per article. Unfortunately, due to its size, it is impossible to download the whole dataset as one big compressed file (it takes forever to compress it online). Therefore I provide here a copy and its cleaned version as compressed files.
The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The 500K json files (newspapers-json.tgz) are around 2.2GB uncompressed, and the cleaned version, dumped into one big text file (newspapers.txt.gz), is about 1GB. The original source in Google Drive also contains a dataset in html format which includes raw data (pictures, css, javascript, ...) from the online news websites. I don't copy it here since it is about 60GB and, for NLP research, we mostly only need the text content.
The following are the compressed files:
* newspaper-json.gz: the compressed original 500K json files.
* newspaper.txt.gz: a dump of all json files in one big cleaned text file, which is usually all that is needed for language model training (see the sketch below).
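As an illustration, the cleaned text dump can be streamed line by line with Python's gzip module. This is a minimal sketch; the local filename matches the compressed file listed above, and utf-8 encoding is assumed:
```python
import gzip

# minimal sketch: stream the cleaned text dump without fully decompressing it
with gzip.open("newspaper.txt.gz", "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        print(line.strip()[:80])
        if i >= 2:  # only peek at the first few lines
            break
```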
The license has been copied from the source:
## License
Proyek ini dilisensikan dibawah lisensi **Creative Commons Attribution-ShareAlike 4.0 International License**\*. Kumpulan data yang dibagikan bertujuan untuk ilmu pengetahuan, pembelajaran, dan penelitian Bahasa Indonesia (komputasi maupun lingusitik), dan hanya dapat digunakan untuk hal tersebut. Kepemilikan data untuk setiap artikel dimiliki oleh media yang bersangkutan dimana data tersebut diambil; dan pemilik repository ini tidak melakukan klaim kepemilikan atas konten tersebut. Jika Anda mendapati bahwa data ini telah melanggar suatu hak cipta; mohon kontak pengelola repository ini.
This work is licensed under a **Creative Commons Attribution-ShareAlike 4.0 International License**. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer.
| indonesian-nlp/id_newspapers_2018 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Indonesian Newspapers 2018"} | 2022-10-25T12:47:43+00:00 | [] | [
"id"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Indonesian #license-cc-by-4.0 #region-us
|
# Dataset of Indonesian Online Newspaper
This is a copy of dataset created by Feryandi Nurdiantoro (<URL The original dataset in json format is stored uncompressed in Google Drive in more than 500K files, one file per article. Unfortunately, due to its size, it is impossible to download the whole dataset as one big compressed file (it takes forever to compress it online). Therefore I provide here a copy and its cleaned version as compressed files.
The dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with few exceptions dated earlier). The size of uncompressed 500K json files (URL) is around 2.2GB, and the cleaned uncompressed in a big text file (URL) is about 1GB. The original source in Google Drive contains also a dataset in html format which include raw data (pictures, css, javascript, ...) from the online news website. I don't copy it here since it is about 60GB and mostly we only need the text content for NLP research.
Following is the compressed files:
* URL: the compressed original 500K json files.
* URL: a dump of all json files in one big cleaned text file which is normally the only one needed for language model training.
The license has been copied from the source:
## License
Proyek ini dilisensikan dibawah lisensi Creative Commons Attribution-ShareAlike 4.0 International License\*. Kumpulan data yang dibagikan bertujuan untuk ilmu pengetahuan, pembelajaran, dan penelitian Bahasa Indonesia (komputasi maupun lingusitik), dan hanya dapat digunakan untuk hal tersebut. Kepemilikan data untuk setiap artikel dimiliki oleh media yang bersangkutan dimana data tersebut diambil; dan pemilik repository ini tidak melakukan klaim kepemilikan atas konten tersebut. Jika Anda mendapati bahwa data ini telah melanggar suatu hak cipta; mohon kontak pengelola repository ini.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer.
| [
"# Dataset of Indonesian Online Newspaper\n\nThis is a copy of dataset created by Feryandi Nurdiantoro (<URL The original dataset in json format is stored uncompressed in Google Drive in more than 500K files, one file per article. Unfortunately, due to its size, it is impossible to download the whole dataset as one big compressed file (it takes forever to compress it online). Therefore I provide here a copy and its cleaned version as compressed files.\n\nThe dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with few exceptions dated earlier). The size of uncompressed 500K json files (URL) is around 2.2GB, and the cleaned uncompressed in a big text file (URL) is about 1GB. The original source in Google Drive contains also a dataset in html format which include raw data (pictures, css, javascript, ...) from the online news website. I don't copy it here since it is about 60GB and mostly we only need the text content for NLP research.\n\nFollowing is the compressed files:\n\n* URL: the compressed original 500K json files.\n* URL: a dump of all json files in one big cleaned text file which is normally the only one needed for language model training.\n\nThe license has been copied from the source:",
"## License\n\nProyek ini dilisensikan dibawah lisensi Creative Commons Attribution-ShareAlike 4.0 International License\\*. Kumpulan data yang dibagikan bertujuan untuk ilmu pengetahuan, pembelajaran, dan penelitian Bahasa Indonesia (komputasi maupun lingusitik), dan hanya dapat digunakan untuk hal tersebut. Kepemilikan data untuk setiap artikel dimiliki oleh media yang bersangkutan dimana data tersebut diambil; dan pemilik repository ini tidak melakukan klaim kepemilikan atas konten tersebut. Jika Anda mendapati bahwa data ini telah melanggar suatu hak cipta; mohon kontak pengelola repository ini.\n\nThis work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Indonesian #license-cc-by-4.0 #region-us \n",
"# Dataset of Indonesian Online Newspaper\n\nThis is a copy of dataset created by Feryandi Nurdiantoro (<URL The original dataset in json format is stored uncompressed in Google Drive in more than 500K files, one file per article. Unfortunately, due to its size, it is impossible to download the whole dataset as one big compressed file (it takes forever to compress it online). Therefore I provide here a copy and its cleaned version as compressed files.\n\nThe dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with few exceptions dated earlier). The size of uncompressed 500K json files (URL) is around 2.2GB, and the cleaned uncompressed in a big text file (URL) is about 1GB. The original source in Google Drive contains also a dataset in html format which include raw data (pictures, css, javascript, ...) from the online news website. I don't copy it here since it is about 60GB and mostly we only need the text content for NLP research.\n\nFollowing is the compressed files:\n\n* URL: the compressed original 500K json files.\n* URL: a dump of all json files in one big cleaned text file which is normally the only one needed for language model training.\n\nThe license has been copied from the source:",
"## License\n\nProyek ini dilisensikan dibawah lisensi Creative Commons Attribution-ShareAlike 4.0 International License\\*. Kumpulan data yang dibagikan bertujuan untuk ilmu pengetahuan, pembelajaran, dan penelitian Bahasa Indonesia (komputasi maupun lingusitik), dan hanya dapat digunakan untuk hal tersebut. Kepemilikan data untuk setiap artikel dimiliki oleh media yang bersangkutan dimana data tersebut diambil; dan pemilik repository ini tidak melakukan klaim kepemilikan atas konten tersebut. Jika Anda mendapati bahwa data ini telah melanggar suatu hak cipta; mohon kontak pengelola repository ini.\n\nThis work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer."
] |
30e6fbf9e2fd959a4620116a2868dc98b5db918d | astrophysics
astroparticle
simulation
timeseries
point-cloud
# Dataset Card for FACT Open Crab Sample
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://factdata.app.tu-dortmund.de/
- **Repository:** [Needs More Information]
- **Paper:** https://iopscience.iop.org/article/10.1088/1748-0221/8/06/P06008/pdf, https://iopscience.iop.org/article/10.1088/1748-0221/9/10/P10012/pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a mirror of the Open Crab Sample released by the FACT collaboration, containing simulations of astroparticle events as seen by the FACT telescope, generated with the CORSIKA simulation program, as well as a few nights of observations of the Crab Nebula over 2013 and 2014. The simulation data comes in two formats: the photon stream format, and a preprocessed version containing extracted features and point clouds cleaned with various levels of DBSCAN. The observations are all raw data, with no cleaning or extracted features.
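For readers unfamiliar with DBSCAN-based point-cloud cleaning, the following is a minimal, hypothetical sketch using scikit-learn. It is not the collaboration's actual preprocessing code; the input shape and the parameter values are placeholder assumptions:
```python
import numpy as np
from sklearn.cluster import DBSCAN

# hypothetical photon point cloud: one (x, y, t) row per detected photon
points = np.random.rand(1000, 3)

# eps and min_samples are placeholders, not the settings used for this dataset
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(points)

# DBSCAN marks noise with label -1; keep only photons assigned to a cluster
cleaned = points[labels != -1]
```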
### Supported Tasks and Leaderboards
- 'classification': Classification of simulated events as either hadron or gamma events.
- 'regression': Predicting the initial energy of the simulated events, or where in the night sky the original particle originated
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
The goal of the Open Crab Sample is to open up astroparticle data for exploring different ways of doing analysis.
### Source Data
#### Initial Data Collection and Normalization
The initial simulated data was generated by the CORSIKA simulation program. The observations were taken by the FACT telescope on La Palma between 2013 and 2014. The data is not normalized.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
The simulations were annotated from the ground truth in the simulation, while the observations have no ground truths.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | jacobbieker/open-crab-sample | [
"doi:10.57967/hf/1649",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-11T11:56:00+00:00 | [] | [] | TAGS
#doi-10.57967/hf/1649 #region-us
| astrophysics
astroparticle
simulation
timeseries
point-cloud
# Dataset Card for FACT Open Crab Sample
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository:
- Paper: URL URL
- Leaderboard:
- Point of Contact:
### Dataset Summary
This is a mirror of the Open Crab Sample released by the FACT collaboration, containing simulations of astroparticle events as seen by the FACT telescope from the CORSIKA simulation program, as well as a few nights of observations of the Crab Nebula over 2013 and 2014. The simulation data is in two formats, the photon stream format, as well as a preprocessed version containing extracted features, and cleaned point clouds, which were performed with various levels of DBSCAN. The observations are all the raw data, with no cleaning or extracted features.
### Supported Tasks and Leaderboards
- 'classification': Classification of simulated events as either hadron or gamma events.
- 'regression': Predicting the initial energy of the simulated events, or where in the night sky the original particle originated
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
The goal of the Open Crab Sample is to open up astroparticle data for exploring different ways of doing analysis.
### Source Data
#### Initial Data Collection and Normalization
The initial simulated data was generated by the CORSIKA simulation program. The observations were taken by the FACT telescope on La Palma between 2013 and 2014. The data is not normalized.
#### Who are the source language producers?
### Annotations
#### Annotation process
The simulations were annotated from the ground truth in the simulation, while the observations have no ground truths.
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for FACT Open Crab Sample",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL URL\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis is a mirror of the Open Crab Sample released by the FACT collaboration, containing simulations of astroparticle events as seen by the FACT telescope from the CORSIKA simulation program, as well as a few nights of observations of the Crab Nebula over 2013 and 2014. The simulation data is in two formats, the photon stream format, as well as a preprocessed version containing extracted features, and cleaned point clouds, which were performed with various levels of DBSCAN. The observations are all the raw data, with no cleaning or extracted features.",
"### Supported Tasks and Leaderboards\n\n- 'classification': Classification of simulated events as either hadron or gamma events.\n- 'regression': Predicting the energy of the initial energy of the simulated events, or where in the night sky the original particle originated",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nThe goal of the Open Crab Sample is to open up astroparticle data for exploring different ways of doing analysis.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe initial simulated data was generated by the CORSIKA simulation program. The observations were taken by the FACT telescope on La Palma between 2013 and 2014. The data is not normalized.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nThe simulations were annotated from the ground truth in the simulation, while the observations have no ground truths.",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#doi-10.57967/hf/1649 #region-us \n",
"# Dataset Card for FACT Open Crab Sample",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL URL\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis is a mirror of the Open Crab Sample released by the FACT collaboration, containing simulations of astroparticle events as seen by the FACT telescope from the CORSIKA simulation program, as well as a few nights of observations of the Crab Nebula over 2013 and 2014. The simulation data is in two formats, the photon stream format, as well as a preprocessed version containing extracted features, and cleaned point clouds, which were performed with various levels of DBSCAN. The observations are all the raw data, with no cleaning or extracted features.",
"### Supported Tasks and Leaderboards\n\n- 'classification': Classification of simulated events as either hadron or gamma events.\n- 'regression': Predicting the energy of the initial energy of the simulated events, or where in the night sky the original particle originated",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nThe goal of the Open Crab Sample is to open up astroparticle data for exploring different ways of doing analysis.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe initial simulated data was generated by the CORSIKA simulation program. The observations were taken by the FACT telescope on La Palma between 2013 and 2014. The data is not normalized.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nThe simulations were annotated from the ground truth in the simulation, while the observations have no ground truths.",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
9083269e47b7faeb22e61eed9f467d9077d72d5e | test | jamol1741/test_dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-10T10:12:46+00:00 | [] | [] | TAGS
#region-us
| test | [] | [
"TAGS\n#region-us \n"
] |
9a3686ebeddd8751304c63f0be2fa4d28b8b0854 | This is a translated version of SNLI in Dutch. The translation was performed using Google Translate. | jegormeister/dutch-snli | [
"language:nl",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["nl"]} | 2023-10-02T18:06:35+00:00 | [] | [
"nl"
] | TAGS
#language-Dutch #region-us
| This is a translated version of SNLI in Dutch. The translation was performed using Google Translate. | [] | [
"TAGS\n#language-Dutch #region-us \n"
] |
7019f71cc4cdfe11bf8f52f18375bc1b407313ca |
# Dataset Card for "LegalGLUE"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://git.rwth-aachen.de/johanna.frenz/legalglue
### Dataset Summary
The "Legal General Language Understanding Evaluation" (LegalGLUE) dataset was created as part of a bachelor thesis.
It consists of four already existing datasets covering three task types and a total of 23 different languages.
### Supported Tasks
<table>
<tr><td>Dataset</td><td>Source</td><td>Task Type</td><td>Languages</td></tr>
<tr><td>German_LER</td><td> <a href="https://arxiv.org/abs/2003.13016">Leitner et al.</a></td><td>Named Entity Recognition</td><td>German</td></tr>
<tr><td>LeNER_Br</td><td> <a href="https://github.com/peluz/lener-br"> de Araujo et al., 2018</a></td><td>Named Entity Recognition</td><td> Portuguese </td></tr>
<tr><td>SwissJudgmentPrediction</td><td> <a href="https://arxiv.org/abs/2110.00806">Niklaus et al.</a> </td><td>Binary Text Classification</td><td>German, French, Italian</td></tr>
<tr><td>MultiEURLEX</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. </a> </td><td>Multi-label Text Classification</td><td>23 languages (see below)</td></tr>
</table>
### Languages
See the [Data Splits](#data-splits) section for the list of languages in each dataset.
## Dataset Structure
### Data Instances
#### German_LER
German_LER example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'german_ler')
```
```json
{
'id': '66722',
'tokens':['4.', 'Die', 'Kostenentscheidung', 'für', 'das', 'gerichtliche', 'Antragsverfahren', 'beruht', 'auf', '§', '21', 'Abs.', '2', 'Satz', '1', 'i.', 'V.', 'm.', '§', '20', 'Abs.', '1', 'Satz', '1', 'WBO', '.'],
'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 38]
}
```
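The integer `ner_tags` can be mapped back to tag names via the dataset features, assuming the column is a `Sequence` of `ClassLabel` values (an assumption, since the feature schema is not spelled out here); a minimal sketch continuing from the snippet above:
```python
# minimal sketch: map integer tags back to their string names
label_names = dataset["train"].features["ner_tags"].feature.names

example = dataset["train"][0]
print([label_names[tag] for tag in example["ner_tags"]])
```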
#### LeNER-Br
LeNER-Br example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'lener_br')
```
```json
{
'id': '7826',
'tokens': ['Firmado', 'por', 'assinatura', 'digital', '(', 'MP', '2.200-2/2001', ')', 'JOSÉ', 'ROBERTO', 'FREIRE', 'PIMENTA', 'Ministro', 'Relator', 'fls', '.', 'PROCESSO', 'Nº', 'TST-RR-1603-79.2010.5.20.0001'],
'ner_tags': [0, 0, 0, 0, 0, 9, 10, 0, 3, 4, 4, 4, 0, 0, 0, 0, 11, 12, 12]}
```
#### SwissJudgmentPrediction
swissJudgmentPrediction_de example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'swissJudgmentPrediction_de')
```
```json
{
'id': 48755,
'year': 2014,
'text': "Sachverhalt: A. X._ fuhr am 25. Juli 2012 bei Mülligen mit seinem Personenwagen auf dem zweiten Überholstreifen der Autobahn A1 in Richtung Zürich. Gemäss Anklage schloss er auf einen Lieferwagen auf und schwenkte vom zweiten auf den ersten Überholstreifen aus. Danach fuhr er an zwei Fahrzeugen rechts vorbei und wechselte auf die zweite Überholspur zurück. B. Das Obergericht des Kantons Aargau erklärte X._ am 14. Januar 2014 zweitinstanzlich der groben Verletzung der Verkehrsregeln schuldig. Es bestrafte ihn mit einer bedingten Geldstrafe von 30 Tagessätzen zu Fr. 430.-- und einer Busse von Fr. 3'000.--. C. X._ führt Beschwerde in Strafsachen. Er beantragt, er sei von Schuld und Strafe freizusprechen. Eventualiter sei die Sache an die Vorinstanz zurückzuweisen. ",
'label': 0,
'language': 'de',
'region': 'Northwestern Switzerland',
'canton': 'ag',
'legal area': 'penal law'
}
```
#### MultiEURLEX
Monolingual example out of the MultiEURLEX-Dataset
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_de')
```
```json
{
'celex_id': '32002R0130',
'text': 'Verordnung (EG) Nr. 130/2002 der Kommission\nvom 24. Januar 2002\nbezüglich der im Rahmen der Auss...',
'labels': [3, 17, 5]}
```
Multilingual example out of the MultiEURLEX-Dataset
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_all_languages')
```
```json
{
'celex_id': '32002R0130',
'text': {
'bg': None,
'cs': None,
'da': 'Kommissionens ...',
'de': 'Verordnung ... ',
'el': '...',
'en': '...',
...
},
'labels': [3, 17, 5]
}
```
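In the multilingual configuration the `text` field is a dictionary keyed by language code, and languages without a translation are `None` (as in the `bg` and `cs` entries above). A minimal sketch continuing from the snippet above:
```python
sample = dataset["train"][0]

german_text = sample["text"]["de"]
if german_text is not None:
    print(german_text[:200])
```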
### Data Fields
#### German_LER
- `id`: id of the sample
- `tokens`: the tokens of the sample text
- `ner_tags`: the NER tags of each token
#### LeNER_Br
- `id`: id of the sample
- `tokens`: the tokens of the sample text
- `ner_tags`: the NER tags of each token
#### SwissJudgmentPrediction
- `id`: (**int**) ID of the document
- `year`: (**int**) the publication year
- `text`: (**str**) the facts of the case
- `label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval)
- `language`: (**str**) one of (de, fr, it)
- `region`: (**str**) the region of the lower court
- `canton`: (**str**) the canton of the lower court
- `legal area`: (**str**) the legal area of the case
#### MultiEURLEX
Monolingual use:
- `celex_id`: (**str**) Official Document ID of the document
- `text`: (**str**) An EU Law
- `labels`: (**List[int]**) List of relevant EUROVOC concepts (labels)
Multilingual use:
- `celex_id`: (**str**) Official Document ID of the document
- `text`: (dict[**str**]) A dictionary with the 23 languages as keys and the corresponding EU Law as values.
- `labels`: (**List[int]**) List of relevant EUROVOC concepts (labels)
The labels list consists by default of level 1 EUROVOC concepts. This can be changed by passing the label_level parameter when loading the dataset (available levels: level_1, level_2, level_3, all_levels).
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_de', label_level="level_3")
```
### Data Splits
<table>
<tr><th>Dataset</th><th> Language </th> <th> ISO code </th> <th> Number of Documents train/dev/test </th> </tr>
<tr><td>German-LER</td><td>German</td> <td><b>de</b></td> <td> 66723 / - / - </td> </tr>
<tr><td>LeNER-Br</td><td>Portuguese</td> <td><b>pt</b></td> <td> 7828 / 1177 / 1390 </td> </tr>
<tr><td rowspan="3">SwissJudgmentPrediction</td><td>German</td> <td><b>de</b></td> <td> 35458 / 4705 / 9725 </td> </tr>
<tr><td> French </td><td><b>fr</b></td><td> 21179 / 3095 / 6820 </td> </tr>
<tr><td> Italian </td><td><b>it</b></td><td> 3072 / 408 / 812 </td> </tr>
<tr><td rowspan="23">MultiEURLEX</td><td>English </td> <td><b>en</b></td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> German </td> <td> <b>de</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> French </td> <td> <b>fr</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Italian </td> <td> <b>it</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Spanish </td> <td> <b>es</b> </td> <td> 52,785 / 5,000 / 5,000 </td> </tr>
<tr><td> Polish </td> <td> <b>pl</b> </td> <td> 23,197 / 5,000 / 5,000 </td> </tr>
<tr><td> Romanian </td> <td> <b>ro</b> </td> <td> 15,921 / 5,000 / 5,000 </td> </tr>
<tr><td> Dutch </td> <td> <b>nl</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Greek </td> <td> <b>el</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Hungarian </td> <td> <b>hu</b> </td> <td> 22,664 / 5,000 / 5,000 </td> </tr>
<tr><td> Portuguese </td> <td> <b>pt</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Czech </td> <td> <b>cs</b> </td> <td> 23,187 / 5,000 / 5,000 </td> </tr>
<tr><td> Swedish </td> <td> <b>sv</b> </td> <td> 42,490 / 5,000 / 5,000 </td> </tr>
<tr><td> Bulgarian </td> <td> <b>bg</b> </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Danish </td> <td> <b>da</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Finnish </td> <td> <b>fi</b> </td> <td> 42,497 / 5,000 / 5,000 </td> </tr>
<tr><td> Slovak </td> <td> <b>sk</b> </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Lithuanian </td> <td> <b>lt</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Croatian </td> <td> <b>hr</b> </td> <td> 7,944 / 2,500 / 5,000 </td> </tr>
<tr><td> Slovene </td> <td> <b>sl</b> </td> <td> 23,184 / 5,000 / 5,000 </td> </tr>
<tr><td> Estonian </td> <td> <b>et</b> </td> <td> 23,126 / 5,000 / 5,000 </td> </tr>
<tr><td> Latvian </td> <td> <b>lv</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Maltese </td> <td> <b>mt</b> </td> <td> 17,521 / 5,000 / 5,000 </td> </tr>
</table>
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
| jfrenz/legalglue | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"multilinguality:multilingual",
"source_datasets:extended",
"language:en",
"language:da",
"language:de",
"language:nl",
"language:sv",
"language:bg",
"language:cs",
"language:hr",
"language:pl",
"language:sk",
"language:sl",
"language:es",
"language:fr",
"language:it",
"language:pt",
"language:ro",
"language:et",
"language:fi",
"language:hu",
"language:lt",
"language:lv",
"language:el",
"language:mt",
"german-ler",
"lener-br",
"arxiv:2003.13016",
"arxiv:2110.00806",
"arxiv:2109.00904",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en", "da", "de", "nl", "sv", "bg", "cs", "hr", "pl", "sk", "sl", "es", "fr", "it", "pt", "ro", "et", "fi", "hu", "lt", "lv", "el", "mt"], "multilinguality": ["multilingual"], "source_datasets": ["extended"], "task_categories": ["text-classification", "token-classification"], "task_ids": ["named-entity-recognition", "multi-label-classification", "topic-classification"], "pretty_name": "LegalGLUE", "tags": ["german-ler", "lener-br"]} | 2022-10-22T21:14:36+00:00 | [
"2003.13016",
"2110.00806",
"2109.00904"
] | [
"en",
"da",
"de",
"nl",
"sv",
"bg",
"cs",
"hr",
"pl",
"sk",
"sl",
"es",
"fr",
"it",
"pt",
"ro",
"et",
"fi",
"hu",
"lt",
"lv",
"el",
"mt"
] | TAGS
#task_categories-text-classification #task_categories-token-classification #task_ids-named-entity-recognition #task_ids-multi-label-classification #task_ids-topic-classification #multilinguality-multilingual #source_datasets-extended #language-English #language-Danish #language-German #language-Dutch #language-Swedish #language-Bulgarian #language-Czech #language-Croatian #language-Polish #language-Slovak #language-Slovenian #language-Spanish #language-French #language-Italian #language-Portuguese #language-Romanian #language-Estonian #language-Finnish #language-Hungarian #language-Lithuanian #language-Latvian #language-Modern Greek (1453-) #language-Maltese #german-ler #lener-br #arxiv-2003.13016 #arxiv-2110.00806 #arxiv-2109.00904 #region-us
| Dataset Card for "LegalGLUE"
============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Repository: URL
### Dataset Summary
The "Legal General Language Understanding Evaluation" (LegalGLUE) dataset was created as part of a bachelor thesis.
It consists of four already existing datasets covering three task types and a total of 23 different languages.
### Supported Tasks
### Languages
see Split section
Dataset Structure
-----------------
### Data Instances
#### German\_LER
German\_LER example
#### LeNER-Br
LeNER-Br example
#### SwissJudgmentPrediction
swissJudgmentPrediction\_de example
#### MultiEURLEX
Monolingual example out of the MultiEURLEX-Dataset
Multilingual example out of the MultiEURLEX-Dataset
### Data Fields
#### German\_LER
* 'id': id of the sample
* 'tokens': the tokens of the sample text
* 'ner\_tags': the NER tags of each token
#### LeNER\_Br
* 'id': id of the sample
* 'tokens': the tokens of the sample text
* 'ner\_tags': the NER tags of each token
#### SwissJudgmentPrediction
* 'id': (int) ID of the document
* 'year': (int) the publication year
* 'text': (str) the facts of the case
* 'label': (class label) the judgment outcome: 0 (dismissal) or 1 (approval)
* 'language': (str) one of (de, fr, it)
* 'region': (str) the region of the lower court
* 'canton': (str) the canton of the lower court
* 'legal area': (str) the legal area of the case
#### MultiEURLEX
Monolingual use:
* 'celex\_id': (str) Official Document ID of the document
* 'text': (str) An EU Law
* 'labels': (List[int]) List of relevant EUROVOC concepts (labels)
Multilingual use:
* 'celex\_id': (str) Official Document ID of the document
* 'text': (dict[str]) A dictionary with the 23 languages as keys and the corresponding EU Law as values.
* 'labels': (List[int]) List of relevant EUROVOC concepts (labels)
The labels list consists by default of level 1 EUROVOC concepts. This can be changed by adding the label\_level parameter when loading the dataset (available levels: level\_1, level\_2, level\_3, all\_levels).
### Data Splits
Dataset: German-LER, Language: German, ISO code: **de**, train/dev/test: 66723 / - / -
Dataset: LeNER-Br, Language: Portuguese, ISO code: **pt**, train/dev/test: 7828 / 1177 / 1390
Dataset: SwissJudgmentPrediction, Language: German, ISO code: **de**, train/dev/test: 35458 / 4705 / 9725
Dataset: SwissJudgmentPrediction, Language: French, ISO code: **fr**, train/dev/test: 21179 / 3095 / 6820
Dataset: SwissJudgmentPrediction, Language: Italian, ISO code: **it**, train/dev/test: 3072 / 408 / 812
Dataset: MultiEURLEX, Language: English, ISO code: **en**, train/dev/test: 55,000 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: German, ISO code: **de**, train/dev/test: 55,000 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: French, ISO code: **fr**, train/dev/test: 55,000 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Italian, ISO code: **it**, train/dev/test: 55,000 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Spanish, ISO code: **es**, train/dev/test: 52,785 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Polish, ISO code: **pl**, train/dev/test: 23,197 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Romanian, ISO code: **ro**, train/dev/test: 15,921 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Dutch, ISO code: **nl**, train/dev/test: 55,000 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Greek, ISO code: **el**, train/dev/test: 55,000 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Hungarian, ISO code: **hu**, train/dev/test: 22,664 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Portuguese, ISO code: **pt**, train/dev/test: 23,188 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Czech, ISO code: **cs**, train/dev/test: 23,187 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Swedish, ISO code: **sv**, train/dev/test: 42,490 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Bulgarian, ISO code: **bg**, train/dev/test: 15,986 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Danish, ISO code: **da**, train/dev/test: 55,000 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Finnish, ISO code: **fi**, train/dev/test: 42,497 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Slovak, ISO code: **sk**, train/dev/test: 15,986 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Lithuanian, ISO code: **lt**, train/dev/test: 23,188 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Croatian, ISO code: **hr**, train/dev/test: 7,944 / 2,500 / 5,000
Dataset: MultiEURLEX, Language: Slovene, ISO code: **sl**, train/dev/test: 23,184 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Estonian, ISO code: **et**, train/dev/test: 23,126 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Latvian, ISO code: **lv**, train/dev/test: 23,188 / 5,000 / 5,000
Dataset: MultiEURLEX, Language: Maltese, ISO code: **mt**, train/dev/test: 17,521 / 5,000 / 5,000
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nThe \"Legal General Language Understanding Evaluation\" (LegalGLUE) dataset was created as part of a bachelor thesis.\nIt consists of four already existing datasets covering three task types and a total of 23 different languages.",
"### Supported Tasks",
"### Languages\n\n\nsee Split section\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### German\\_LER\n\n\nGerman\\_LER example",
"#### LeNER-Br\n\n\nLeNER-Br example",
"#### SwissJudgmentPrediction\n\n\nswissJudgmentPrediction\\_de example",
"#### MultiEURLEX\n\n\nMonolingual example out of the MultiEURLEX-Dataset\n\n\nMultilingual example out of the MultiEURLEX-Dataset",
"### Data Fields",
"#### German\\_LER\n\n\n* 'id': id of the sample\n* 'tokens': the tokens of the sample text\n* 'ner\\_tags': the NER tags of each token",
"#### LeNER\\_Br\n\n\n* 'id': id of the sample\n* 'tokens': the tokens of the sample text\n* 'ner\\_tags': the NER tags of each token",
"#### SwissJudgmentPrediction\n\n\n* 'id': (int) ID of the document\n* 'year': (int) the publication year\n* 'text': (str) the facts of the case\n* 'label': (class label) the judgment outcome: 0 (dismissal) or 1 (approval)\n* 'language': (str) one of (de, fr, it)\n* 'region': (str) the region of the lower court\n* 'canton': (str) the canton of the lower court\n* 'legal area': (str) the legal area of the case",
"#### MultiEURLEX\n\n\nMonolingual use:\n\n\n* 'celex\\_id': (str) Official Document ID of the document\n* 'text': (str) An EU Law\n* 'labels': (List[int]) List of relevant EUROVOC concepts (labels)\n\n\nMultilingual use:\n\n\n* 'celex\\_id': (str) Official Document ID of the document\n* 'text': (dict[str]) A dictionary with the 23 languages as keys and the corresponding EU Law as values.\n* 'labels': (List[int]) List of relevant EUROVOC concepts (labels)\n\n\nThe labels lists consists per default of level 1 EUROVOC concepts. Can be changed by adding the label\\_level parameter when loading the dataset. (available levels: level\\_1, level\\_2, level\\_3, all\\_levels)",
"### Data Splits\n\n\nDataset: German-LER, Language : German, ISO code : **de** 66723 / - / - \nDataset: LeNER-Br, Language : Portuguese, ISO code : **pt** | 7828 / 1177 / 1390 \nDataset: French , Language : **fr**, ISO code : 21179 / 3095 / 6820 \nDataset: Italian , Language : **it**, ISO code : 3072 / 408 / 812 \nDataset: German , Language : **de** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: French , Language : **fr** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Italian , Language : **it** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Spanish , Language : **es** , ISO code : 52,785 / 5,000 / 5,000 \nDataset: Polish , Language : **pl** , ISO code : 23,197 / 5,000 / 5,000 \nDataset: Romanian , Language : **ro** , ISO code : 15,921 / 5,000 / 5,000 \nDataset: Dutch , Language : **nl** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Greek , Language : **el** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Hungarian , Language : **hu** , ISO code : 22,664 / 5,000 / 5,000 \nDataset: Portuguese , Language : **pt** , ISO code : 23,188 / 5,000 / 5,000 \nDataset: Czech , Language : **cs** , ISO code : 23,187 / 5,000 / 5,000 \nDataset: Swedish , Language : **sv** , ISO code : 42,490 / 5,000 / 5,000 \nDataset: Bulgarian , Language : **bg** , ISO code : 15,986 / 5,000 / 5,000 \nDataset: Danish , Language : **da** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Finnish , Language : **fi** , ISO code : 42,497 / 5,000 / 5,000 \nDataset: Slovak , Language : **sk** , ISO code : 15,986 / 5,000 / 5,000 \nDataset: Lithuanian , Language : **lt** , ISO code : 23,188 / 5,000 / 5,000 \nDataset: Croatian , Language : **hr** , ISO code : 7,944 / 2,500 / 5,000 \nDataset: Slovene , Language : **sl** , ISO code : 23,184 / 5,000 / 5,000 \nDataset: Estonian , Language : **et** , ISO code : 23,126 / 5,000 / 5,000 \nDataset: Latvian , Language : **lv** , ISO code : 23,188 / 5,000 / 5,000 \nDataset: Maltese , Language : **mt** , ISO code : 17,521 / 5,000 / 5,000 |\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#task_categories-text-classification #task_categories-token-classification #task_ids-named-entity-recognition #task_ids-multi-label-classification #task_ids-topic-classification #multilinguality-multilingual #source_datasets-extended #language-English #language-Danish #language-German #language-Dutch #language-Swedish #language-Bulgarian #language-Czech #language-Croatian #language-Polish #language-Slovak #language-Slovenian #language-Spanish #language-French #language-Italian #language-Portuguese #language-Romanian #language-Estonian #language-Finnish #language-Hungarian #language-Lithuanian #language-Latvian #language-Modern Greek (1453-) #language-Maltese #german-ler #lener-br #arxiv-2003.13016 #arxiv-2110.00806 #arxiv-2109.00904 #region-us \n",
"### Dataset Summary\n\n\nThe \"Legal General Language Understanding Evaluation\" (LegalGLUE) dataset was created as part of a bachelor thesis.\nIt consists of four already existing datasets covering three task types and a total of 23 different languages.",
"### Supported Tasks",
"### Languages\n\n\nsee Split section\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### German\\_LER\n\n\nGerman\\_LER example",
"#### LeNER-Br\n\n\nLeNER-Br example",
"#### SwissJudgmentPrediction\n\n\nswissJudgmentPrediction\\_de example",
"#### MultiEURLEX\n\n\nMonolingual example out of the MultiEURLEX-Dataset\n\n\nMultilingual example out of the MultiEURLEX-Dataset",
"### Data Fields",
"#### German\\_LER\n\n\n* 'id': id of the sample\n* 'tokens': the tokens of the sample text\n* 'ner\\_tags': the NER tags of each token",
"#### LeNER\\_Br\n\n\n* 'id': id of the sample\n* 'tokens': the tokens of the sample text\n* 'ner\\_tags': the NER tags of each token",
"#### SwissJudgmentPrediction\n\n\n* 'id': (int) ID of the document\n* 'year': (int) the publication year\n* 'text': (str) the facts of the case\n* 'label': (class label) the judgment outcome: 0 (dismissal) or 1 (approval)\n* 'language': (str) one of (de, fr, it)\n* 'region': (str) the region of the lower court\n* 'canton': (str) the canton of the lower court\n* 'legal area': (str) the legal area of the case",
"#### MultiEURLEX\n\n\nMonolingual use:\n\n\n* 'celex\\_id': (str) Official Document ID of the document\n* 'text': (str) An EU Law\n* 'labels': (List[int]) List of relevant EUROVOC concepts (labels)\n\n\nMultilingual use:\n\n\n* 'celex\\_id': (str) Official Document ID of the document\n* 'text': (dict[str]) A dictionary with the 23 languages as keys and the corresponding EU Law as values.\n* 'labels': (List[int]) List of relevant EUROVOC concepts (labels)\n\n\nThe labels lists consists per default of level 1 EUROVOC concepts. Can be changed by adding the label\\_level parameter when loading the dataset. (available levels: level\\_1, level\\_2, level\\_3, all\\_levels)",
"### Data Splits\n\n\nDataset: German-LER, Language : German, ISO code : **de** 66723 / - / - \nDataset: LeNER-Br, Language : Portuguese, ISO code : **pt** | 7828 / 1177 / 1390 \nDataset: French , Language : **fr**, ISO code : 21179 / 3095 / 6820 \nDataset: Italian , Language : **it**, ISO code : 3072 / 408 / 812 \nDataset: German , Language : **de** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: French , Language : **fr** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Italian , Language : **it** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Spanish , Language : **es** , ISO code : 52,785 / 5,000 / 5,000 \nDataset: Polish , Language : **pl** , ISO code : 23,197 / 5,000 / 5,000 \nDataset: Romanian , Language : **ro** , ISO code : 15,921 / 5,000 / 5,000 \nDataset: Dutch , Language : **nl** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Greek , Language : **el** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Hungarian , Language : **hu** , ISO code : 22,664 / 5,000 / 5,000 \nDataset: Portuguese , Language : **pt** , ISO code : 23,188 / 5,000 / 5,000 \nDataset: Czech , Language : **cs** , ISO code : 23,187 / 5,000 / 5,000 \nDataset: Swedish , Language : **sv** , ISO code : 42,490 / 5,000 / 5,000 \nDataset: Bulgarian , Language : **bg** , ISO code : 15,986 / 5,000 / 5,000 \nDataset: Danish , Language : **da** , ISO code : 55,000 / 5,000 / 5,000 \nDataset: Finnish , Language : **fi** , ISO code : 42,497 / 5,000 / 5,000 \nDataset: Slovak , Language : **sk** , ISO code : 15,986 / 5,000 / 5,000 \nDataset: Lithuanian , Language : **lt** , ISO code : 23,188 / 5,000 / 5,000 \nDataset: Croatian , Language : **hr** , ISO code : 7,944 / 2,500 / 5,000 \nDataset: Slovene , Language : **sl** , ISO code : 23,184 / 5,000 / 5,000 \nDataset: Estonian , Language : **et** , ISO code : 23,126 / 5,000 / 5,000 \nDataset: Latvian , Language : **lv** , ISO code : 23,188 / 5,000 / 5,000 \nDataset: Maltese , Language : **mt** , ISO code : 17,521 / 5,000 / 5,000 |\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
7cee4936fb208443b00afa753d01c57376496856 |
# SAE-door-abstracts
This dataset includes ~1,550 texts of abstracts of technical papers and journal articles from the SAE Mobilus database that cover the topics of automotive or aerospace doors, noise, acoustics, and vibrations. | jgammack/SAE-door-abstracts | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["sequence-modeling"], "task_ids": ["language-modeling"], "pretty_name": "SAE-door-abstracts", "language_bcp47": ["en-US"]} | 2022-10-22T07:23:24+00:00 | [] | [
"en"
] | TAGS
#task_ids-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #region-us
|
# SAE-door-abstracts
This dataset includes ~1,550 texts of abstracts of technical papers and journal articles from the SAE Mobilus database that cover the topics of automotive or aerospace doors, noise, acoustics, and vibrations. | [
"# SAE-door-abstracts\n\nThis dataset includes ~1,550 texts of abstracts of technical papers and journal articles from the SAE Mobilus database that cover the topics of automotive or aerospace doors, noise, acoustics, and vibrations."
] | [
"TAGS\n#task_ids-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #region-us \n",
"# SAE-door-abstracts\n\nThis dataset includes ~1,550 texts of abstracts of technical papers and journal articles from the SAE Mobilus database that cover the topics of automotive or aerospace doors, noise, acoustics, and vibrations."
] |
11e49b7ece33d62afd7f65bc05ce60ad37f9ba7b |
## How to use the data sets
This dataset contains 1.9M unique pairs of protein sequences and ligand SMILES with experimentally determined
binding affinities. It can be used for fine-tuning a language model.
The data comes from the following sources:
- BindingDB
- PDBbind-cn
- BioLIP
- BindingMOAD
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/binding_affinity",split='train[:90%]')
validation = load_dataset("jglaser/binding_affinity",split='train[90%:]')
```
Optionally, datasets with certain protein sequences removed are available.
These can be used to test the predictive power for specific proteins even when
these are not part of the training data.
- `train_no_kras` (no KRAS proteins)
**Loading the data manually**
The file `data/all.parquet` contains the preprocessed data. To extract it,
you need to download and install [git LFS support](https://git-lfs.github.com/).
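As a quick check (a sketch, not part of the original instructions), the extracted parquet file can be inspected directly, for example with pandas:
```
import pandas as pd

# Inspect the preprocessed data from the cloned repository;
# column names are printed rather than assumed.
df = pd.read_parquet('data/all.parquet')
print(df.shape)
print(df.columns.tolist())
```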
### Pre-process yourself
To manually perform the preprocessing, download the data sets from
1. BindingDB
In `bindingdb`, download the database as tab separated values
<https://bindingdb.org> > Download > BindingDB_All_2021m4.tsv.zip
and extract the zip archive into `bindingdb/data`
Run the steps in `bindingdb.ipynb`
2. PDBBind-cn
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then log in and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
Perform the steps in the notebook `pdbbind.ipynb`
3. BindingMOAD
Go to <https://bindingmoad.org> and download the files `every.csv`
(All of Binding MOAD, Binding Data) and the non-redundant biounits
(`nr_bind.zip`). Place and extract those files into `binding_moad`.
Run the script `moad.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 moad.py`).
Perform the steps in the notebook `moad.ipynb`
4. BioLIP
Download from <https://zhanglab.ccmb.med.umich.edu/BioLiP/> the files
- receptor1.tar.bz2 (Receptor1, Non-redundant set)
- ligand_2013-03-6.tar.bz2 (Ligands)
- BioLiP.tar.bz2 (Annotations)
and extract them in `biolip/data`.
The following steps are **optional**; they **do not** result in additional binding affinity data.
Download the script
- download_all_sets.pl
from the Weekly update subpage.
Update the 2013 database to its current state:
`perl download_all_sets.pl`
Run the script `biolip.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 biolip.py`).
Perform the steps in the notebook `biolip.ipynb`
5. Final concatenation and filtering
Run the steps in the notebook `combine_dbs.ipynb`
| jglaser/binding_affinity | [
"molecules",
"chemistry",
"SMILES",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"tags": ["molecules", "chemistry", "SMILES"]} | 2022-03-12T00:29:11+00:00 | [] | [] | TAGS
#molecules #chemistry #SMILES #region-us
|
## How to use the data sets
This dataset contains 1.9M unique pairs of protein sequences and ligand SMILES with experimentally determined
binding affinities. It can be used for fine-tuning a language model.
The data comes from the following sources:
- BindingDB
- PDBbind-cn
- BioLIP
- BindingMOAD
### Use the already preprocessed data
Load a test/train split using
Optionally, datasets with certain protein sequences removed are available.
These can be used to test the predictive power for specific proteins even when
these are not part of the training data.
- 'train_no_kras' (no KRAS proteins)
Loading the data manually
The file 'data/all.parquet' contains the preprocessed data. To extract it,
you need to download and install [git LFS support] URL
### Pre-process yourself
To manually perform the preprocessing, download the data sets from
1. BindingDB
In 'bindingdb', download the database as tab separated values
<URL> > Download > BindingDB_All_2021m4.URL
and extract the zip archive into 'bindingdb/data'
Run the steps in 'URL'
2. PDBBind-cn
Register for an account at <URL confirm the validation
email, then login and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in 'pdbbind/data'
Run the script 'URL' in a compute job on an MPI-enabled cluster
(e.g., 'mpirun -n 64 URL').
Perform the steps in the notebook 'URL'
3. BindingMOAD
Go to <URL> and download the files 'URL'
(All of Binding MOAD, Binding Data) and the non-redundant biounits
('nr_bind.zip'). Place and extract those files into 'binding_moad'.
Run the script 'URL' in a compute job on an MPI-enabled cluster
(e.g., 'mpirun -n 64 URL').
Perform the steps in the notebook 'URL'
4. BioLIP
Download from <URL the files
- URL.bz2 (Receptor1, Non-redundant set)
- ligand_2013-URL.bz2 (Ligands)
- URL.bz2 (Annotations)
and extract them in 'biolip/data'.
The following steps are optional, they do not result in additional binding affinity data.
Download the script
- download_all_sets.pl
from the Weekly update subpage.
Update the 2013 database to its current state
'perl download_all-URL'
Run the script 'URL' in a compute job on an MPI-enabled cluster
(e.g., 'mpirun -n 64 URL').
Perform the steps in the notebook 'URL'
5. Final concatenation and filtering
Run the steps in the notebook 'combine_dbs.ipynb'
| [
"## How to use the data sets\n\nThis dataset contains 1.9M unique pairs of protein sequences and ligand SMILES with experimentally determined\nbinding affinities. It can be used for fine-tuning a language model.\n\nThe data comes from the following sources:\n- BindingDB\n- PDBbind-cn\n- BioLIP\n- BindingMOAD",
"### Use the already preprocessed data\n\nLoad a test/train split using\n\n\n\nOptionally, datasets with certain protein sequences removed are available.\nThese can be used to test the predictive power for specific proteins even when\nthese are not part of the training data.\n\n- 'train_no_kras' (no KRAS proteins)\n\nLoading the data manually\n\nThe file 'data/all.parquet' contains the preprocessed data. To extract it,\nyou need download and install [git LFS support] URL",
"### Pre-process yourself\n\nTo manually perform the preprocessing, download the data sets from\n\n1. BindingDB\n\nIn 'bindingdb', download the database as tab separated values\n<URL> > Download > BindingDB_All_2021m4.URL\nand extract the zip archive into 'bindingdb/data'\n\nRun the steps in 'URL'\n\n2. PDBBind-cn\n\nRegister for an account at <URL confirm the validation\nemail, then login and download \n\n- the Index files (1)\n- the general protein-ligand complexes (2)\n- the refined protein-ligand complexes (3)\n\nExtract those files in 'pdbbind/data'\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL').\n\nPerform the steps in the notebook 'URL'\n\n3. BindingMOAD\n\nGo to <URL> and download the files 'URL'\n(All of Binding MOAD, Binding Data) and the non-redundant biounits\n('nr_bind.zip'). Place and extract those files into 'binding_moad'.\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL').\n\nPerform the steps in the notebook 'URL'\n\n4. BioLIP\n\nDownload from <URL the files\n- URL.bz2 (Receptor1, Non-redudant set)\n- ligand_2013-URL.bz2 (Ligands)\n- URL.bz2 (Annotations)\nand extract them in 'biolip/data'.\n\nThe following steps are optional, they do not result in additional binding affinity data.\n\nDownload the script\n- download_all_sets.pl\nfrom the Weekly update subpage.\n\nUpdate the 2013 database to its current state\n\n'perl download_all-URL'\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL').\n\nPerform the steps in the notebook 'URL'\n\n5. Final concatenation and filtering\n\nRun the steps in the notebook 'combine_dbs.ipynb'"
] | [
"TAGS\n#molecules #chemistry #SMILES #region-us \n",
"## How to use the data sets\n\nThis dataset contains 1.9M unique pairs of protein sequences and ligand SMILES with experimentally determined\nbinding affinities. It can be used for fine-tuning a language model.\n\nThe data comes from the following sources:\n- BindingDB\n- PDBbind-cn\n- BioLIP\n- BindingMOAD",
"### Use the already preprocessed data\n\nLoad a test/train split using\n\n\n\nOptionally, datasets with certain protein sequences removed are available.\nThese can be used to test the predictive power for specific proteins even when\nthese are not part of the training data.\n\n- 'train_no_kras' (no KRAS proteins)\n\nLoading the data manually\n\nThe file 'data/all.parquet' contains the preprocessed data. To extract it,\nyou need download and install [git LFS support] URL",
"### Pre-process yourself\n\nTo manually perform the preprocessing, download the data sets from\n\n1. BindingDB\n\nIn 'bindingdb', download the database as tab separated values\n<URL> > Download > BindingDB_All_2021m4.URL\nand extract the zip archive into 'bindingdb/data'\n\nRun the steps in 'URL'\n\n2. PDBBind-cn\n\nRegister for an account at <URL confirm the validation\nemail, then login and download \n\n- the Index files (1)\n- the general protein-ligand complexes (2)\n- the refined protein-ligand complexes (3)\n\nExtract those files in 'pdbbind/data'\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL').\n\nPerform the steps in the notebook 'URL'\n\n3. BindingMOAD\n\nGo to <URL> and download the files 'URL'\n(All of Binding MOAD, Binding Data) and the non-redundant biounits\n('nr_bind.zip'). Place and extract those files into 'binding_moad'.\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL').\n\nPerform the steps in the notebook 'URL'\n\n4. BioLIP\n\nDownload from <URL the files\n- URL.bz2 (Receptor1, Non-redudant set)\n- ligand_2013-URL.bz2 (Ligands)\n- URL.bz2 (Annotations)\nand extract them in 'biolip/data'.\n\nThe following steps are optional, they do not result in additional binding affinity data.\n\nDownload the script\n- download_all_sets.pl\nfrom the Weekly update subpage.\n\nUpdate the 2013 database to its current state\n\n'perl download_all-URL'\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL').\n\nPerform the steps in the notebook 'URL'\n\n5. Final concatenation and filtering\n\nRun the steps in the notebook 'combine_dbs.ipynb'"
] |
12ef6ff7249d499ae2255caa3d3d80a1cccb308d |
# Dataset Card for ClarinPL Sejm/Senat Speech Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CLARIN-PL mowa](https://mowa.clarin-pl.eu/)
- **Repository:** [Needs More Information]
- **Paper:** [System for Automatic Transcription of Sessions of the Polish Senate](https://acoustics.ippt.pan.pl/index.php/aa/article/view/327/pdf_32)
- **Leaderboard:** [Paperswithcode Leaderboard][Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A collection of 97 hours of parliamentary speeches published on the ClarinPL website.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Polish.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`.
An example from the dataset is:
```
{'file': '/root/.cache/huggingface/datasets/downloads/extracted/4143b1d75559b10028c1c7e8800c9ccc05934ca5a8ea15f8f9a92770576a1ee3/SejmSenat/audio/AdamAbramowicz-20130410/file000.wav',
'id': 'AdamAbramowicz-20130410-file000',
'speaker_id': 'AdamAbramowicz',
'text': 'panie marszałku wysoka izbo panie ministrze próbuje się przedstawiać polskę jako zieloną wyspę kraj który się szybko rozwija tymczasem rzeczywistość jest zupełnie inna a widać ją także dzisiaj przed polskim parlamentem próbuje się rząd próbuje zagonić polaków do pracy aż do śmierci przedłużać wiek emerytalny czyliczyli sytuacja gospodarcza polski w tym wypadku jest przedstawiana już zupełnie inaczej pakiet klimatyczny i protokół z kioto jak się zgadzają fachowcy od gospodarki jest szkodliwy dla krajów które są na dorobku a polska właśnie jest takim krajem'}
```
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- text: the transcription of the audio file.
- speaker_id: The ID of the speaker of the audio.
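A minimal loading sketch (the dataset id comes from this repository; the `train` split name follows the split table below):
```
from datasets import load_dataset

# Sketch: load the corpus and inspect a single example.
sejmsenat = load_dataset('jimregan/clarinpl_sejmsenat')
sample = sejmsenat['train'][0]
print(sample['speaker_id'], sample['file'])
print(sample['text'][:100])
```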
### Data Splits
| | Train | Test |
| ----- | ----- | ---- |
| dataset | 6622 | 130 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
### Contributions
[Needs More Information] | jimregan/clarinpl_sejmsenat | [
"task_categories:other",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:other",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language": ["pl"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["other", "automatic-speech-recognition"], "task_ids": []} | 2023-01-22T13:37:24+00:00 | [] | [
"pl"
] | TAGS
#task_categories-other #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Polish #license-other #region-us
| Dataset Card for ClarinPL Sejm/Senat Speech Corpus
==================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: CLARIN-PL mowa
* Repository:
* Paper: System for Automatic Transcription of Sessions of the Polish Senate
* Leaderboard: [Paperswithcode Leaderboard]
* Point of Contact:
### Dataset Summary
A collection of 97 hours of parliamentary speeches published on the ClarinPL website.
### Supported Tasks and Leaderboards
### Languages
The audio is in Polish.
Dataset Structure
-----------------
### Data Instances
A typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.
An example from the dataset is:
### Data Fields
* file: A path to the downloaded audio file in .wav format.
* text: the transcription of the audio file.
* speaker\_id: The ID of the speaker of the audio.
### Data Splits
Train: 6622, Test: 130
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nA collection of 97 hours of parliamentary speeches published on the ClarinPL website.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe audio is in Polish.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.\nAn example from the dataset is:",
"### Data Fields\n\n\n* file: A path to the downloaded audio file in .wav format.\n* text: the transcription of the audio file.\n* speaker\\_id: The ID of the speaker of the audio.",
"### Data Splits\n\n\nTrain: dataset, Test: 6622\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#task_categories-other #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Polish #license-other #region-us \n",
"### Dataset Summary\n\n\nA collection of 97 hours of parliamentary speeches published on the ClarinPL website.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe audio is in Polish.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.\nAn example from the dataset is:",
"### Data Fields\n\n\n* file: A path to the downloaded audio file in .wav format.\n* text: the transcription of the audio file.\n* speaker\\_id: The ID of the speaker of the audio.",
"### Data Splits\n\n\nTrain: dataset, Test: 6622\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
f306efab67c654660955f251fa7fa3f7d687cae1 |
# Dataset Card for ClarinPL Studio Speech Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CLARIN-PL mowa](https://mowa.clarin-pl.eu/)
- **Repository:** [Kaldi Baseline](https://github.com/danijel3/ClarinStudioKaldi)
- **Paper:** [Polish Read Speech Corpus for Speech Tools and Services](https://arxiv.org/abs/1706.00245)
- **Leaderboard:** [Paperswithcode Leaderboard][Needs More Information]
- **Point of Contact:** [Danijel Koržinek](https://github.com/danijel3/)
### Dataset Summary
The corpus consists of 317 speakers recorded in 554
sessions, where each session consists of 20 read sentences and 10 phonetically rich words. The size of
the audio portion of the corpus amounts to around 56 hours, with transcriptions containing 356674 words
from a vocabulary of size 46361.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Polish.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`.
An example from the dataset is:
```
{'file': '/root/.cache/huggingface/datasets/downloads/extracted/333ddc746f2df1e1d19b44986992d4cbe28710fde81d533a220e755ee6c5c519/audio/SES0001/rich001.wav',
'id': 'SES0001_rich001',
'speaker_id': 'SPK0001',
'text': 'drożdże dżip gwożdżenie ozimina wędzarz rdzeń wędzonka ingerować kładzenie jutrzenka'}
```
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- text: the transcription of the audio file.
- speaker_id: The ID of the speaker of the audio.
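As an illustrative sketch (the dataset id comes from this repository and the `train` split name from the table below), the `speaker_id` field can be used to group utterances by speaker:
```
from datasets import load_dataset

# Sketch: count distinct speakers in the train split via the speaker_id field.
studio = load_dataset('jimregan/clarinpl_studio')
speakers = set(studio['train']['speaker_id'])
print(len(speakers), 'distinct speakers in the train split')
```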
### Data Splits
| | Train | Test | Valid |
| ----- | ----- | ---- | ----- |
| dataset | 11222 | 1362 | 1229 |
## Dataset Creation
### Curation Rationale
The purpose of this segment of the project was to develop specific tools that would allow for automatic and semi-automatic processing of large quantities of acoustic speech data. Another purpose of the corpus was to serve as a reference for studies in phonetics and pronunciation.
### Source Data
#### Initial Data Collection and Normalization
The corpus was recorded in a studio environment using two microphones: a high-quality studio microphone and a typical consumer audio headset.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[CLARIN PUB+BY+INF+NORED](https://mowa.clarin-pl.eu/korpusy/LICENSE)
### Citation Information
```
@article{korvzinek2017polish,
title={Polish read speech corpus for speech tools and services},
author={Kor{\v{z}}inek, Danijel and Marasek, Krzysztof and Brocki, {\L}ukasz and Wo{\l}k, Krzysztof},
journal={arXiv preprint arXiv:1706.00245},
year={2017}
}
```
### Contributions
[Needs More Information]
| jimregan/clarinpl_studio | [
"task_categories:other",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:other",
"arxiv:1706.00245",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language": ["pl"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["other", "automatic-speech-recognition"], "task_ids": []} | 2023-01-21T12:27:08+00:00 | [
"1706.00245"
] | [
"pl"
] | TAGS
#task_categories-other #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-other #arxiv-1706.00245 #region-us
| Dataset Card for ClarinPL Studio Speech Corpus
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: CLARIN-PL mowa
* Repository: Kaldi Baseline
* Paper: Polish Read Speech Corpus for Speech Tools and Services
* Leaderboard: [Paperswithcode Leaderboard]
* Point of Contact: Danijel Koržinek
### Dataset Summary
The corpus consists of 317 speakers recorded in 554
sessions, where each session consists of 20 read sentences and 10 phonetically rich words. The size of
the audio portion of the corpus amounts to around 56 hours, with transcriptions containing 356674 words
from a vocabulary of size 46361.
### Supported Tasks and Leaderboards
### Languages
The audio is in Polish.
Dataset Structure
-----------------
### Data Instances
A typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.
An example from the dataset is:
### Data Fields
* file: A path to the downloaded audio file in .wav format.
* text: the transcription of the audio file.
* speaker\_id: The ID of the speaker of the audio.
### Data Splits

Train: 11222, Test: 1362, Valid: 1229
Dataset Creation
----------------
### Curation Rationale
The purpose of this segment of the project was to develop specific tools that would allow for automatic and semi-automatic processing of large quantities of acoustic speech data. Another purpose of the corpus was to serve as a reference for studies in phonetics and pronunciation.
### Source Data
#### Initial Data Collection and Normalization
The corpus was recorded in a studio environment using two microphones: a high-quality studio microphone and a typical consumer audio headset.
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
CLARIN PUB+BY+INF+NORED
### Contributions
| [
"### Dataset Summary\n\n\nThe corpus consists of 317 speakers recorded in 554\nsessions, where each session consists of 20 read sentences and 10 phonetically rich words. The size of\nthe audio portion of the corpus amounts to around 56 hours, with transcriptions containing 356674 words\nfrom a vocabulary of size 46361.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe audio is in Polish.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.\nAn example from the dataset is:",
"### Data Fields\n\n\n* file: A path to the downloaded audio file in .wav format.\n* text: the transcription of the audio file.\n* speaker\\_id: The ID of the speaker of the audio.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe purpose of this segment of the project was to develop specific tools that would allow for automatic and semi-automatic processing of large quantities of acoustic speech data. Another purpose of the corpus was to serve as a reference for studies in phonetics and pronunciation.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe corpus was recorded in a studio environment using two microphones: a high-quality studio microphone and a typical consumer audio headset.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCLARIN PUB+BY+INF+NORED",
"### Contributions"
] | [
"TAGS\n#task_categories-other #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-other #arxiv-1706.00245 #region-us \n",
"### Dataset Summary\n\n\nThe corpus consists of 317 speakers recorded in 554\nsessions, where each session consists of 20 read sentences and 10 phonetically rich words. The size of\nthe audio portion of the corpus amounts to around 56 hours, with transcriptions containing 356674 words\nfrom a vocabulary of size 46361.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe audio is in Polish.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.\nAn example from the dataset is:",
"### Data Fields\n\n\n* file: A path to the downloaded audio file in .wav format.\n* text: the transcription of the audio file.\n* speaker\\_id: The ID of the speaker of the audio.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe purpose of this segment of the project was to develop specific tools that would allow for automatic and semi-automatic processing of large quantities of acoustic speech data. Another purpose of the corpus was to serve as a reference for studies in phonetics and pronunciation.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe corpus was recorded in a studio environment using two microphones: a high-quality studio microphone and a typical consumer audio headset.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCLARIN PUB+BY+INF+NORED",
"### Contributions"
] |
2aa1a929e2f3ed32b7012eaa35f7e4cbc2d462a6 |
# Dataset Card for Augmented-GLUE-SST2
Automatically augmented data from the train split of the SST-2 dataset, generated using a conditional text generation approach.
The code used to generate this file will soon be available at https://github.com/IntelLabs/nlp-architect.
| jmamou/augmented-glue-sst2 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:unknown",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en-US"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "extended": ["original"]} | 2022-07-17T11:25:34+00:00 | [] | [
"en-US"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #license-unknown #region-us
|
# Dataset Card for Augmented-GLUE-SST2
Automatically augmented data from the train split of the SST-2 dataset, generated using a conditional text generation approach.
The code used to generate this file will soon be available at URL
| [
"# Dataset Card for Augmented-GLUE-SST2\n\nAutomatically augmented data from train split of SST-2 dataset using conditional text generation approach.\nCode used to generate this file will be soon available at URL"
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #license-unknown #region-us \n",
"# Dataset Card for Augmented-GLUE-SST2\n\nAutomatically augmented data from train split of SST-2 dataset using conditional text generation approach.\nCode used to generate this file will be soon available at URL"
] |
0db57b32c35d3fa23ca1a647a102d9722863fbe2 |
# Dataset Card for ICC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [Jón Friðrik Daðason](mailto:[email protected])
### Dataset Summary
The Icelandic Crawled Corpus (ICC) contains approximately 930M tokens which have been scraped from a selection of Icelandic websites, including news sites, government websites and forums. The scraped text is presented in its original form, unannotated, untokenized and without deduplication.
### Supported Tasks and Leaderboards
The ICC is primarily intended for use in training language models. It can be combined with other corpora, such as the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) and the Icelandic portion of the [mC4](https://huggingface.co/datasets/mc4) corpus.
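For example, a minimal streaming sketch of such a combination (the mC4 column names and the ICC `train` split name are assumptions, not stated on this card):
```python
from datasets import load_dataset, interleave_datasets

# Sketch: stream ICC together with the Icelandic portion of mC4 for LM pretraining.
icc = load_dataset("jonfd/ICC", split="train", streaming=True)
mc4_is = load_dataset("mc4", "is", split="train", streaming=True)

# Keep only the shared "text" column so both streams expose identical features.
icc = icc.remove_columns(["url"])
mc4_is = mc4_is.remove_columns(["timestamp", "url"])

mixed = interleave_datasets([icc, mc4_is])
for example in mixed.take(3):
    print(example["text"][:80])
```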
### Languages
This corpus contains text in Icelandic, scraped from a variety of online sources.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each scraped item consists of two fields:
* **url**: The source URL of the scraped text.
* **text**: The scraped text.
### Data Splits
N/A
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Although this corpus consists entirely of text collected from publicly available websites, it may contain some examples of personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This corpus was created by Jón Friðrik Daðason, during work done at the [Language and Voice Lab](https://lvl.ru.is/) at [Reykjavik University](https://www.ru.is/).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0
International License. Any text, HTML page links, information, metadata or
other materials in this work may be subject to separate terms and
conditions between you and the owners of such content.
If you are a copyright owner or an agent thereof and believe that any
content in this work infringes upon your copyrights, you may submit a
notification with the following information:
* Your full name and information reasonably sufficient to permit us to
contact you, such as mailing address, phone number and an email address.
* Identification of the copyrighted work you claim has been infringed.
* Identification of the material you claim is infringing and should be
removed, and information reasonably sufficient to permit us to locate
the material.
### Citation Information
N/A
### Contributions
Thanks to [@jonfd](https://github.com/jonfd) for adding this dataset.
| jonfd/ICC | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["is"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "ICC"} | 2022-10-22T14:15:16+00:00 | [] | [
"is"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-Icelandic #license-cc-by-4.0 #region-us
|
# Dataset Card for ICC
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Point of Contact: Jón Friðrik Daðason
### Dataset Summary
The Icelandic Crawled Corpus (ICC) contains approximately 930M tokens which have been scraped from a selection of Icelandic websites, including news sites, government websites and forums. The scraped text is presented in its original form, unannotated, untokenized and without deduplication.
### Supported Tasks and Leaderboards
The ICC is primarily intended for use in training language models. It can be combined with other corpora, such as the Icelandic Gigaword Corpus and the Icelandic portion of the mC4 corpus.
### Languages
This corpus contains text in Icelandic, scraped from a variety of online sources.
## Dataset Structure
### Data Instances
### Data Fields
Each scraped item consists of two fields:
* url: The source URL of the scraped text.
* text: The scraped text.
### Data Splits
N/A
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Although this corpus consists entirely of text collected from publicly available websites, it may contain some examples of personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
This corpus was created by Jón Friðrik Daðason, during work done at the Language and Voice Lab at Reykjavik University.
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0
International License. Any text, HTML page links, information, metadata or
other materials in this work may be subject to separate terms and
conditions between you and the owners of such content.
If you are a copyright owner or an agent thereof and believe that any
content in this work infringes upon your copyrights, you may submit a
notification with the following information:
* Your full name and information reasonably sufficient to permit us to
contact you, such as mailing address, phone number and an email address.
* Identification of the copyrighted work you claim has been infringed.
* Identification of the material you claim is infringing and should be
removed, and information reasonably sufficient to permit us to locate
the material.
N/A
### Contributions
Thanks to @jonfd for adding this dataset.
| [
"# Dataset Card for ICC",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Point of Contact: Jón Friðrik Daðason",
"### Dataset Summary\n\nThe Icelandic Crawled Corpus (ICC) contains approximately 930M tokens which have been scraped from a selection of Icelandic websites, including news sites, government websites and forums. The scraped text is presented in its original form, unannotated, untokenized and without deduplication.",
"### Supported Tasks and Leaderboards\n\nThe ICC is primarily intended for use in training language models. It can be combined with other corpora, such as the Icelandic Gigaword Corpus and the Icelandic portion of the mC4 corpus.",
"### Languages\n\nThis corpus contains text in Icelandic, scraped from a variety of online sources.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\nEach scraped item consists of two fields:\n* url: The source URL of the scraped text.\n* text: The scraped text.",
"### Data Splits\n\nN/A",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nAlthough this corpus consists entirely of text collected from publicly available websites, it may contain some examples of personal or sensitive information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThis corpus was created by Jón Friðrik Daðason, during work done at the Language and Voice Lab at Reykjavik University.\n\nThis project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.",
"### Licensing Information\n\n This work is licensed under a Creative Commons Attribution 4.0\n International License. Any text, HTML page links, information, metadata or\n other materials in this work may be subject to separate terms and\n conditions between you and the owners of such content.\n\n If you are a copyright owner or an agent thereof and believe that any\n content in this work infringes upon your copyrights, you may submit a\n notification with the following information:\n * Your full name and information reasonably sufficient to permit us to\n contact you, such as mailing address, phone number and an email address.\n * Identification of the copyrighted work you claim has been infringed.\n * Identification of the material you claim is infringing and should be\n removed, and information reasonably sufficient to permit us to locate\n the material.\n\n\n\nN/A",
"### Contributions\n\nThanks to @jonfd for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-Icelandic #license-cc-by-4.0 #region-us \n",
"# Dataset Card for ICC",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Point of Contact: Jón Friðrik Daðason",
"### Dataset Summary\n\nThe Icelandic Crawled Corpus (ICC) contains approximately 930M tokens which have been scraped from a selection of Icelandic websites, including news sites, government websites and forums. The scraped text is presented in its original form, unannotated, untokenized and without deduplication.",
"### Supported Tasks and Leaderboards\n\nThe ICC is primarily intended for use in training language models. It can be combined with other corpora, such as the Icelandic Gigaword Corpus and the Icelandic portion of the mC4 corpus.",
"### Languages\n\nThis corpus contains text in Icelandic, scraped from a variety of online sources.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\nEach scraped item consists of two fields:\n* url: The source URL of the scraped text.\n* text: The scraped text.",
"### Data Splits\n\nN/A",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nAlthough this corpus consists entirely of text collected from publicly available websites, it may contain some examples of personal or sensitive information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThis corpus was created by Jón Friðrik Daðason, during work done at the Language and Voice Lab at Reykjavik University.\n\nThis project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.",
"### Licensing Information\n\n This work is licensed under a Creative Commons Attribution 4.0\n International License. Any text, HTML page links, information, metadata or\n other materials in this work may be subject to separate terms and\n conditions between you and the owners of such content.\n\n If you are a copyright owner or an agent thereof and believe that any\n content in this work infringes upon your copyrights, you may submit a\n notification with the following information:\n * Your full name and information reasonably sufficient to permit us to\n contact you, such as mailing address, phone number and an email address.\n * Identification of the copyrighted work you claim has been infringed.\n * Identification of the material you claim is infringing and should be\n removed, and information reasonably sufficient to permit us to locate\n the material.\n\n\n\nN/A",
"### Contributions\n\nThanks to @jonfd for adding this dataset."
] |
503eee0894f308dbd1d74c1b4ecf4cfc99dd43f9 |
MultiDoGo dialog dataset:
- paper: https://aclanthology.org/D19-1460/
- git repo: https://github.com/awslabs/multi-domain-goal-oriented-dialogues-dataset
*Abstract*
The need for high-quality, large-scale, goal-oriented dialogue datasets continues to grow as virtual assistants become increasingly wide-spread. However, publicly available datasets useful for this area are limited either in their size, linguistic diversity, domain coverage, or annotation granularity. In this paper, we present strategies toward curating and annotating large scale goal oriented dialogue data. We introduce the MultiDoGO dataset to overcome these limitations. With a total of over 81K dialogues harvested across six domains, MultiDoGO is over 8 times the size of MultiWOZ, the other largest comparable dialogue dataset currently available to the public. Over 54K of these harvested conversations are annotated for intent classes and slot labels. We adopt a Wizard-of-Oz approach wherein a crowd-sourced worker (the “customer”) is paired with a trained annotator (the “agent”). The data curation process was controlled via biases to ensure a diversity in dialogue flows following variable dialogue policies. We provide distinct class label tags for agents vs. customer utterances, along with applicable slot labels. We also compare and contrast our strategies on annotation granularity, i.e. turn vs. sentence level. Furthermore, we compare and contrast annotations curated by leveraging professional annotators vs the crowd. We believe our strategies for eliciting and annotating such a dialogue dataset scales across modalities and domains and potentially languages in the future. To demonstrate the efficacy of our devised strategies we establish neural baselines for classification on the agent and customer utterances as well as slot labeling for each domain.
## Licensing information
Community Data License Agreement – Permissive, Version 1.0. | jpcorb20/multidogo | [
"task_categories:text-classification",
"task_categories:other",
"task_ids:intent-classification",
"task_ids:dialogue-modeling",
"task_ids:slot-filling",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10k<n<100k",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10k<n<100k"], "source_datasets": ["original"], "task_categories": ["text-classification", "sequence-modeling", "structure-prediction", "other"], "task_ids": ["intent-classification", "dialogue-modeling", "slot-filling", "named-entity-recognition", "other-other-my-task-description"], "pretty_name": "multidogo"} | 2022-10-20T17:33:00+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-other #task_ids-intent-classification #task_ids-dialogue-modeling #task_ids-slot-filling #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10k<n<100k #source_datasets-original #language-English #license-other #region-us
|
MultiDoGo dialog dataset:
- paper: URL
- git repo: URL
*Abstract*
The need for high-quality, large-scale, goal-oriented dialogue datasets continues to grow as virtual assistants become increasingly wide-spread. However, publicly available datasets useful for this area are limited either in their size, linguistic diversity, domain coverage, or annotation granularity. In this paper, we present strategies toward curating and annotating large scale goal oriented dialogue data. We introduce the MultiDoGO dataset to overcome these limitations. With a total of over 81K dialogues harvested across six domains, MultiDoGO is over 8 times the size of MultiWOZ, the other largest comparable dialogue dataset currently available to the public. Over 54K of these harvested conversations are annotated for intent classes and slot labels. We adopt a Wizard-of-Oz approach wherein a crowd-sourced worker (the “customer”) is paired with a trained annotator (the “agent”). The data curation process was controlled via biases to ensure a diversity in dialogue flows following variable dialogue policies. We provide distinct class label tags for agents vs. customer utterances, along with applicable slot labels. We also compare and contrast our strategies on annotation granularity, i.e. turn vs. sentence level. Furthermore, we compare and contrast annotations curated by leveraging professional annotators vs the crowd. We believe our strategies for eliciting and annotating such a dialogue dataset scales across modalities and domains and potentially languages in the future. To demonstrate the efficacy of our devised strategies we establish neural baselines for classification on the agent and customer utterances as well as slot labeling for each domain.
## Licensing information
Community Data License Agreement – Permissive, Version 1.0. | [
"## Licensing information\n\nCommunity Data License Agreement – Permissive, Version 1.0."
] | [
"TAGS\n#task_categories-text-classification #task_categories-other #task_ids-intent-classification #task_ids-dialogue-modeling #task_ids-slot-filling #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10k<n<100k #source_datasets-original #language-English #license-other #region-us \n",
"## Licensing information\n\nCommunity Data License Agreement – Permissive, Version 1.0."
] |
3883ffebf0733836fbf325f0b5b90648c06a3099 | # The "Crime Facts" of "Offenses of Fraudulence" in Judicial Yuan Verdicts Dataset
This dataset is based on the judgments of "Offenses of Fraudulence" cases published by the Judicial Yuan. The data range is from January 1, 2011, to December 31, 2021. 74,823 pieces of original data (judgments and rulings) were collected, and we only took the contents of the "criminal facts" field of each judgment. The dataset is divided into three parts: the training set has 59,858 verdicts, accounting for about 80% of the original data, and the remaining 20% is split evenly between the validation set (7,482 verdicts) and the test set (7,483 verdicts). The "criminal facts" text has already been word-segmented (Chinese word segmentation); if segmentation is not needed, please merge the words back yourself.
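As a rough illustration of the merging note above, the sketch below loads the training split and joins a segmented "criminal facts" string back together. The column name `fact` and the assumption that segmented words are space-separated are both guesses, since the card does not specify them:

```python
from datasets import load_dataset

# The card's config lists train/test/validate splits backed by CSV files.
ds = load_dataset("jslin09/Fraud_Case_Verdicts", split="train")

example = ds[0]
segmented = example["fact"]           # hypothetical column name
merged = segmented.replace(" ", "")   # undo the assumed space-based word segmentation
print(merged[:100])
```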
# 司法院「詐欺罪」判決書「犯罪事實」資料集
本資料集是以司法院公開之「詐欺」案件判決書做成之資料集。資料集之資料範圍從100年1月1日至110年12月31日,所蒐集到的原始資料共有 74823 篇(判決以及裁定),我們只取判決書的「犯罪事實」欄位內容,並把這原始的資料分成三份,用於訓練的資料集有59858篇,約佔原始資料的80%,剩下的20%,則是各分配10%給驗證集(7482篇),10%給測試集(7483篇)。「犯罪事實」已經經過斷詞,如果不需要斷詞,請自行合併。 | jslin09/Fraud_Case_Verdicts | [
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:zh",
"license:apache-2.0",
"legal",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["zh"], "license": "apache-2.0", "size_categories": ["100M<n<1B"], "task_categories": ["text-generation"], "tags": ["legal"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.csv"}, {"split": "test", "path": "test.csv"}, {"split": "validate", "path": "validate.csv"}]}]} | 2024-01-17T09:55:37+00:00 | [] | [
"zh"
] | TAGS
#task_categories-text-generation #size_categories-100M<n<1B #language-Chinese #license-apache-2.0 #legal #region-us
| # The "Crime Facts" of "Offenses of Fraudulence" in Judicial Yuan Verdicts Dataset
This dataset is based on the judgments of "Offenses of Fraudulence" cases published by the Judicial Yuan. The data range is from January 1, 2011, to December 31, 2021. 74,823 pieces of original data (judgments and rulings) were collected, and we only took the contents of the "criminal facts" field of each judgment. The dataset is divided into three parts: the training set has 59,858 verdicts, accounting for about 80% of the original data, and the remaining 20% is split evenly between the validation set (7,482 verdicts) and the test set (7,483 verdicts). The "criminal facts" text has already been word-segmented (Chinese word segmentation); if segmentation is not needed, please merge the words back yourself.
# 司法院「詐欺罪」判決書「犯罪事實」資料集
本資料集是以司法院公開之「詐欺」案件判決書做成之資料集。資料集之資料範圍從100年1月1日至110年12月31日,所蒐集到的原始資料共有 74823 篇(判決以及裁定),我們只取判決書的「犯罪事實」欄位內容,並把這原始的資料分成三份,用於訓練的資料集有59858篇,約佔原始資料的80%,剩下的20%,則是各分配10%給驗證集(7482篇),10%給測試集(7483篇)。「犯罪事實」已經經過斷詞,如果不需要斷詞,請自行合併。 | [
"# The \"Crime Facts\" of \"Offenses of Fraudulence\" in Judicial Yuan Verdicts Dataset\n\nThis data set is based on the judgments of \"Offenses of Fraudulence\" cases published by the Judicial Yuan. The data range of the dataset is from January 1, 2011, to December 31, 2021. 74,823 pieces of original data (judgments and rulings) were collected. We only took the contents of the \"criminal facts\" field of the judgment. This dataset is divided into three parts. The training dataset has 59,858 verdicts, accounting for about 80% of the original data. The remaining 20% is allocated 10% to the verification (7,482 verdicts) and 10% to the test (7,483 verdicts). \"Criminal facts\" have been Chinese word segmented. If word segmentation is not needed, please merge it yourself.",
"# 司法院「詐欺罪」判決書「犯罪事實」資料集\n\n本資料集是以司法院公開之「詐欺」案件判決書做成之資料集。資料集之資料範圍從100年1月1日至110年12月31日,所蒐集到的原始資料共有 74823 篇(判決以及裁定),我們只取判決書的「犯罪事實」欄位內容,並把這原始的資料分成三份,用於訓練的資料集有59858篇,約佔原始資料的80%,剩下的20%,則是各分配10%給驗證集(7482篇),10%給測試集(7483篇)。「犯罪事實」已經經過斷詞,如果不需要斷詞,請自行合併。"
] | [
"TAGS\n#task_categories-text-generation #size_categories-100M<n<1B #language-Chinese #license-apache-2.0 #legal #region-us \n",
"# The \"Crime Facts\" of \"Offenses of Fraudulence\" in Judicial Yuan Verdicts Dataset\n\nThis data set is based on the judgments of \"Offenses of Fraudulence\" cases published by the Judicial Yuan. The data range of the dataset is from January 1, 2011, to December 31, 2021. 74,823 pieces of original data (judgments and rulings) were collected. We only took the contents of the \"criminal facts\" field of the judgment. This dataset is divided into three parts. The training dataset has 59,858 verdicts, accounting for about 80% of the original data. The remaining 20% is allocated 10% to the verification (7,482 verdicts) and 10% to the test (7,483 verdicts). \"Criminal facts\" have been Chinese word segmented. If word segmentation is not needed, please merge it yourself.",
"# 司法院「詐欺罪」判決書「犯罪事實」資料集\n\n本資料集是以司法院公開之「詐欺」案件判決書做成之資料集。資料集之資料範圍從100年1月1日至110年12月31日,所蒐集到的原始資料共有 74823 篇(判決以及裁定),我們只取判決書的「犯罪事實」欄位內容,並把這原始的資料分成三份,用於訓練的資料集有59858篇,約佔原始資料的80%,剩下的20%,則是各分配10%給驗證集(7482篇),10%給測試集(7483篇)。「犯罪事實」已經經過斷詞,如果不需要斷詞,請自行合併。"
] |
b56484636d458e72c094ef81c6e85b3a695ee7e4 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
## Dataset Description
This is a translated version of the original CoNLL-2003 dataset (translated from English to Slovak via Google Translate). Annotation was done mostly automatically with word-matching scripts. Records where some tags could not be matched were annotated manually (about 10%). Unlike the original CoNLL-2003 dataset, this one contains only NER tags.
- **Point of Contact:** [@ju-bezdek](https://github.com/ju-bezdek)
### Supported Tasks and Leaderboards
NER
labels:
- 0: O
- 1: B-PER
- 2: I-PER
- 3: B-ORG
- 4: I-ORG
- 5: B-LOC
- 6: I-LOC
- 7: B-MISC
- 8: I-MISC
### Languages
sk
## Dataset Structure
### Data Splits
train, test, val
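Given these splits and the label set above, a minimal loading sketch follows; the `tokens`/`ner_tags` column names mirror the original CoNLL-2003 card and are assumptions here:

```python
from datasets import load_dataset

ds = load_dataset("ju-bezdek/conll2003-SK-NER")

# Integer tags map to the BIO labels listed under "Supported Tasks" above.
label_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
               "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

example = ds["train"][0]
tokens, tags = example["tokens"], example["ner_tags"]  # assumed column names
print(list(zip(tokens, [label_names[t] for t in tags])))
```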
## Dataset Creation
### Source Data
https://huggingface.co/datasets/conll2003
### Annotations
#### Annotation process
- Machine Translation
- Machine pairing tags with reverse translation, and hardcoded rules (including phrase regex matching etc.)
- Manual annotation of records that couldn't be automatically matched
| ju-bezdek/conll2003-SK-NER | [
"task_categories:other",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
"language:sk",
"license:unknown",
"structure-prediction",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["found"], "language": ["sk"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|conll2003"], "task_categories": ["other"], "task_ids": ["named-entity-recognition", "part-of-speech"], "pretty_name": "conll-2003-sk-ner", "tags": ["structure-prediction"]} | 2023-03-21T08:13:05+00:00 | [] | [
"sk"
] | TAGS
#task_categories-other #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|conll2003 #language-Slovak #license-unknown #structure-prediction #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- Table of Contents
- Dataset Description
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Splits
- Dataset Creation
- Source Data
- Annotations
- Annotation process
## Dataset Description
This is a translated version of the original CoNLL-2003 dataset (translated from English to Slovak via Google Translate). Annotation was done mostly automatically with word-matching scripts. Records where some tags could not be matched were annotated manually (about 10%). Unlike the original CoNLL-2003 dataset, this one contains only NER tags.
- Point of Contact: @ju-bezdek
### Supported Tasks and Leaderboards
NER
labels:
- 0: O
- 1: B-PER
- 2: I-PER
- 3: B-ORG
- 4: I-ORG
- 5: B-LOC
- 6: I-LOC
- 7: B-MISC
- 8: I-MISC
### Languages
sk
## Dataset Structure
### Data Splits
train, test, val
## Dataset Creation
### Source Data
URL
### Annotations
#### Annotation process
- Machine Translation
- Machine pairing tags with reverse translation, and hardcoded rules (including phrase regex matching etc.)
- Manual annotation of records that couldn't be automatically matched
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)\n - Table of Contents\n - Dataset Description\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Splits\n - Dataset Creation\n - Source Data\n - Annotations\n - Annotation process",
"## Dataset Description\nThis is translated version of the original CONLL2003 dataset (translated from English to Slovak via Google translate) Annotation was done mostly automatically with word matching scripts. Records where some tags were not matched, were annotated manually (10%) Unlike the original Conll2003 dataset, this one contains only NER tags\n\n- Point of Contact: @ju-bezdek",
"### Supported Tasks and Leaderboards\n\nNER\n\nlabels:\n\n- 0: O\n- 1: B-PER\n- 2: I-PER\n- 3: B-ORG\n- 4: I-ORG\n- 5: B-LOC\n- 6: I-LOC\n- 7: B-MISC\n- 8: I-MISC",
"### Languages\n\nsk",
"## Dataset Structure",
"### Data Splits\n\ntrain, test, val",
"## Dataset Creation",
"### Source Data\nURL",
"### Annotations",
"#### Annotation process\n\n- Machine Translation\n- Machine pairing tags with reverse translation, and hardcoded rules (including phrase regex matching etc.)\n- Manual annotation of records that couldn't be automatically matched"
] | [
"TAGS\n#task_categories-other #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|conll2003 #language-Slovak #license-unknown #structure-prediction #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)\n - Table of Contents\n - Dataset Description\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Splits\n - Dataset Creation\n - Source Data\n - Annotations\n - Annotation process",
"## Dataset Description\nThis is translated version of the original CONLL2003 dataset (translated from English to Slovak via Google translate) Annotation was done mostly automatically with word matching scripts. Records where some tags were not matched, were annotated manually (10%) Unlike the original Conll2003 dataset, this one contains only NER tags\n\n- Point of Contact: @ju-bezdek",
"### Supported Tasks and Leaderboards\n\nNER\n\nlabels:\n\n- 0: O\n- 1: B-PER\n- 2: I-PER\n- 3: B-ORG\n- 4: I-ORG\n- 5: B-LOC\n- 6: I-LOC\n- 7: B-MISC\n- 8: I-MISC",
"### Languages\n\nsk",
"## Dataset Structure",
"### Data Splits\n\ntrain, test, val",
"## Dataset Creation",
"### Source Data\nURL",
"### Annotations",
"#### Annotation process\n\n- Machine Translation\n- Machine pairing tags with reverse translation, and hardcoded rules (including phrase regex matching etc.)\n- Manual annotation of records that couldn't be automatically matched"
] |
968eb67fdb0314e80ae9222cd2f60077db7dd4f5 |
## ReactionGIF
> From https://github.com/bshmueli/ReactionGIF

___
## Excerpt from original repo readme
ReactionGIF is a unique, first-of-its-kind dataset of 30K sarcastic tweets and their GIF reactions.
To find out more about ReactionGIF,
check out our ACL 2021 paper:
* Shmueli, Ray and Ku, [Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter](https://arxiv.org/abs/2105.09967)
## Citation
If you use our dataset, kindly cite the paper using the following BibTex entry:
```bibtex
@misc{shmueli2021happy,
title={Happy Dance, Slow Clap: Using Reaction {GIFs} to Predict Induced Affect on {Twitter}},
author={Boaz Shmueli and Soumya Ray and Lun-Wei Ku},
year={2021},
eprint={2105.09967},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| julien-c/reactiongif | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2105.09967",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "paperswithcode_id": "reactiongif"} | 2022-09-20T11:10:26+00:00 | [
"2105.09967"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #arxiv-2105.09967 #region-us
|
## ReactionGIF
> From URL
!gif
___
## Excerpt from original repo readme
ReactionGIF is a unique, first-of-its-kind dataset of 30K sarcastic tweets and their GIF reactions.
To find out more about ReactionGIF,
check out our ACL 2021 paper:
* Shmueli, Ray and Ku, Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter
If you use our dataset, kindly cite the paper using the following BibTex entry:
| [
"## ReactionGIF\n\n> From URL\n\n!gif\n\n\n___",
"## Excerpt from original repo readme\n\nReactionGIF is a unique, first-of-its-kind dataset of 30K sarcastic tweets and their GIF reactions. \n\nTo find out more about ReactionGIF, \ncheck out our ACL 2021 paper:\n\n* Shmueli, Ray and Ku, Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter\n\n\nIf you use our dataset, kindly cite the paper using the following BibTex entry:"
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #arxiv-2105.09967 #region-us \n",
"## ReactionGIF\n\n> From URL\n\n!gif\n\n\n___",
"## Excerpt from original repo readme\n\nReactionGIF is a unique, first-of-its-kind dataset of 30K sarcastic tweets and their GIF reactions. \n\nTo find out more about ReactionGIF, \ncheck out our ACL 2021 paper:\n\n* Shmueli, Ray and Ku, Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter\n\n\nIf you use our dataset, kindly cite the paper using the following BibTex entry:"
] |
194343254d70c104a7a923e971c57954316b138e | # AutoNLP Dataset for project: song-lyrics-demo
## Table of content
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project song-lyrics-demo.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 2,
"text": "[Intro: Method Man w/ sample] + (Sunny valentine). We got butter (8X). (The gun'll go the gun'll go.... The gun'll go...). [Raekwon]. Aiyo one thing for sure keep you of all. Keep a nice crib fly away keep to the point. Keep niggaz outta ya face who snakes. Keep bitches in they place keep the mac in a special place. Keep moving for papes keep cool keep doing what you doing. Keep it fly keep me in the crates. Cuz I will erase shit on the real note you'se a waste. It's right here for you I will lace you. Rip you and brace you put a nice W up on ya face. Word to mother you could get chased. It's nothing to taste blood on a thug if he gotta go. All I know is we be giving grace. This is a place from where we make tapes. We make 'em everywhere still in all we be making base. Y'all be making paste these little niggaz they be making shapes. Our shit is art yours is traced. [Chorus: Sunny Valentine]. This is the way that we rolling in the streets. You know when we roll we be packing that heat. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go.... [Method Man]. This is Poverty Island man these animals don't run. Slums where the ambulance don't come. Who got the best base? Fiends waiting to smoke some. Approach something ask him where he getting that coke from. My dudes hug blocks like samurai shogun. Cuz no V and no ones equalling no fun. Who want a treat they know huh? Body to go numb. My woman need funds plus her hair and her toes done. It is what it is though you fuck with the kid flow. That make it hard to get dough the harder to get gold. Harder the piff blow harder when it snow. The pinky and the wrist glow this here what we live for. Get gwop then get low but first thought. We gotta get the work off the gift and the curse boss. Yeah see I'm the shit yo the dirt in the fit no. Hustling from the get-go the motto is get more. [Chorus]. [Masta Killa]. We was quiet flashy brothers strapped all along. With the dirty .38 long twelve hour shift gate. Took case state to state you think he won't hold his weight?. Put ya money on the plate and watch it get scrapped. We get ape up in that club off that juice and Henn. And it's a no win situation fucking with them. You mean like Ewing at the front at the rim finger roll a Dutch. Million dollar stages touched techs gauges bust. Trust no one the lone shogun rugged Timb boot stomper. Damaging lyrical mass destruction launcher. Nothing can calm the quakeage when I break kid. Peace to my brothers up north doing state bids. [Chorus]. [Chorus 2: Sunny Valentine]. Whoa... this is the way we be rolling in the club. You know when we roll we be packing .32 snubs. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. [Outro: sample to fade]. We got butter..."
},
{
"target": 4,
"text": "[Sean Paul:]. Aye. It's Sean Paul 'long side. The mandem called Jay Sean. Fi di gal dem. Tellin' 'em again what we tell 'em. [Jay Sean:]. Pass me a drink to the left yeah. Said her name was Delilah. And I'm like \"you should come my way\". I already surrender. Damn girl that body's fire. You gon' remember my name. (She should give it up definite). You need it. I need it. We can jump in the deep end. I wanna get lost in your love. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go. In the morning we gon' do it again wake up. I'mma do it like we just broke up and made up. Get up on top of me and work up a sweat work up a sweat. See we can do it any type of way that you want. I'm thinking maybe you're the right kind of wrong. I'm saying baby you won't ever forget my love. You need it. I need it. We can jump in the deep end. I wanna get lost in your love. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go. [Sean Paul:]. Girl mi wan' figure hundred hundred and fifty. Love how you move you know that I'm with it. Perfect size I know that you fit it. Just let me hit it you know mi not quit it. Pon di Dl like Cassie and Diddy. Mi na wound a mi watch we like Sin City. Full time mi run da ting mi tall legend. If you don't come gimme dat would I be offended my girl. Come here down wan' see something me want in life and then waste time. A you a mi pree every day baby full time when ya de pon on mi mind. So mi wine if you give it to me baby girl so we can play. Stick to the ting now I am your king my girl this is what we say. [Jay Sean:]. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=6, names=['Dance', 'Heavy Metal', 'Hip Hop', 'Indie', 'Pop', 'Rock'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
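A short usage sketch for mapping the integer `target` back to its genre name (the repository may require authentication, since AutoNLP project data is often private):

```python
from datasets import load_dataset

ds = load_dataset("juliensimon/autonlp-data-song-lyrics-demo")

# Class names in the order declared by the ClassLabel feature above.
genres = ["Dance", "Heavy Metal", "Hip Hop", "Indie", "Pop", "Rock"]

example = ds["train"][0]
print(genres[example["target"]], "->", example["text"][:80])
```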
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 48493 |
| valid | 5389 |
| juliensimon/autonlp-data-song-lyrics-demo | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-25T08:50:45+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #region-us
| AutoNLP Dataset for project: song-lyrics-demo
=============================================
Table of content
----------------
* Dataset Description
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
Dataset Description
-------------------
This dataset has been automatically processed by AutoNLP for project song-lyrics-demo.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
4b0770d80c127db8eb5f8b80784978324c91217f | # AutoNLP Dataset for project: song-lyrics
## Table of content
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project song-lyrics.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 2,
"text": "[Intro: Method Man w/ sample] + (Sunny valentine). We got butter (8X). (The gun'll go the gun'll go.... The gun'll go...). [Raekwon]. Aiyo one thing for sure keep you of all. Keep a nice crib fly away keep to the point. Keep niggaz outta ya face who snakes. Keep bitches in they place keep the mac in a special place. Keep moving for papes keep cool keep doing what you doing. Keep it fly keep me in the crates. Cuz I will erase shit on the real note you'se a waste. It's right here for you I will lace you. Rip you and brace you put a nice W up on ya face. Word to mother you could get chased. It's nothing to taste blood on a thug if he gotta go. All I know is we be giving grace. This is a place from where we make tapes. We make 'em everywhere still in all we be making base. Y'all be making paste these little niggaz they be making shapes. Our shit is art yours is traced. [Chorus: Sunny Valentine]. This is the way that we rolling in the streets. You know when we roll we be packing that heat. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go.... [Method Man]. This is Poverty Island man these animals don't run. Slums where the ambulance don't come. Who got the best base? Fiends waiting to smoke some. Approach something ask him where he getting that coke from. My dudes hug blocks like samurai shogun. Cuz no V and no ones equalling no fun. Who want a treat they know huh? Body to go numb. My woman need funds plus her hair and her toes done. It is what it is though you fuck with the kid flow. That make it hard to get dough the harder to get gold. Harder the piff blow harder when it snow. The pinky and the wrist glow this here what we live for. Get gwop then get low but first thought. We gotta get the work off the gift and the curse boss. Yeah see I'm the shit yo the dirt in the fit no. Hustling from the get-go the motto is get more. [Chorus]. [Masta Killa]. We was quiet flashy brothers strapped all along. With the dirty .38 long twelve hour shift gate. Took case state to state you think he won't hold his weight?. Put ya money on the plate and watch it get scrapped. We get ape up in that club off that juice and Henn. And it's a no win situation fucking with them. You mean like Ewing at the front at the rim finger roll a Dutch. Million dollar stages touched techs gauges bust. Trust no one the lone shogun rugged Timb boot stomper. Damaging lyrical mass destruction launcher. Nothing can calm the quakeage when I break kid. Peace to my brothers up north doing state bids. [Chorus]. [Chorus 2: Sunny Valentine]. Whoa... this is the way we be rolling in the club. You know when we roll we be packing .32 snubs. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. [Outro: sample to fade]. We got butter..."
},
{
"target": 4,
"text": "[Sean Paul:]. Aye. It's Sean Paul 'long side. The mandem called Jay Sean. Fi di gal dem. Tellin' 'em again what we tell 'em. [Jay Sean:]. Pass me a drink to the left yeah. Said her name was Delilah. And I'm like \"you should come my way\". I already surrender. Damn girl that body's fire. You gon' remember my name. (She should give it up definite). You need it. I need it. We can jump in the deep end. I wanna get lost in your love. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go. In the morning we gon' do it again wake up. I'mma do it like we just broke up and made up. Get up on top of me and work up a sweat work up a sweat. See we can do it any type of way that you want. I'm thinking maybe you're the right kind of wrong. I'm saying baby you won't ever forget my love. You need it. I need it. We can jump in the deep end. I wanna get lost in your love. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go. [Sean Paul:]. Girl mi wan' figure hundred hundred and fifty. Love how you move you know that I'm with it. Perfect size I know that you fit it. Just let me hit it you know mi not quit it. Pon di Dl like Cassie and Diddy. Mi na wound a mi watch we like Sin City. Full time mi run da ting mi tall legend. If you don't come gimme dat would I be offended my girl. Come here down wan' see something me want in life and then waste time. A you a mi pree every day baby full time when ya de pon on mi mind. So mi wine if you give it to me baby girl so we can play. Stick to the ting now I am your king my girl this is what we say. [Jay Sean:]. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=6, names=['Dance', 'Heavy Metal', 'Hip Hop', 'Indie', 'Pop', 'Rock'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
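The same fields apply to this project; a small sketch for checking the genre distribution over the training split (again assuming the repository is accessible to you):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("juliensimon/autonlp-data-song-lyrics", split="train")

# Count examples per genre using the ClassLabel names declared above.
genres = ["Dance", "Heavy Metal", "Hip Hop", "Indie", "Pop", "Rock"]
counts = Counter(genres[t] for t in ds["target"])
print(counts.most_common())
```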
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 48493 |
| valid | 5389 |
| juliensimon/autonlp-data-song-lyrics | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-25T08:50:51+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #region-us
| AutoNLP Dataset for project: song-lyrics
========================================
Table of content
----------------
* Dataset Description
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
Dataset Description
-------------------
This dataset has been automatically processed by AutoNLP for project song-lyrics.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
d7c03d1f921ac85c3731e8ab256889c495bf36aa | # FewGLUE_32dev
This repository contains the FewGLUE_32dev dataset, an extension of [FewGLUE](https://github.com/timoschick/fewglue), which enables NLU few-shot learning tasks to be benchmarked under a new 32-sample-dev setting. It has been shown in [previous work](https://arxiv.org/abs/2012.15723) that using larger development sets confers a significant advantage beyond the few-shot setting. FewGLUE_32dev is built by adding few-shot dev sets with 32 examples randomly selected from the original/unused SuperGLUE training sets.
### Data Format
The data files follow the exact same format as [SuperGLUE task files](https://super.gluebenchmark.com/tasks).
### Structure
For each SuperGLUE task `T`, the directory `FewGLUE_32dev/T` contains the 32-sample-dev file (`dev32.jsonl`), which consists of 32 examples for few-shot validation.
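A minimal reading sketch for one task's 32-example dev file; the task name `BoolQ` is only an illustrative SuperGLUE task, so adjust the path to the task you need:

```python
import json
from pathlib import Path

# Layout per the card: FewGLUE_32dev/<task>/dev32.jsonl
dev32_path = Path("FewGLUE_32dev/BoolQ/dev32.jsonl")

with dev32_path.open(encoding="utf-8") as f:
    dev32 = [json.loads(line) for line in f]

print(len(dev32))  # expected: 32 examples
```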
| juny116/few_glue | [
"arxiv:2012.15723",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-13T04:37:37+00:00 | [
"2012.15723"
] | [] | TAGS
#arxiv-2012.15723 #region-us
| # FewGLUE_32dev
This repository contains the FewGLUE_32dev dataset, an extension of FewGLUE, which enables NLU few-shot learning tasks to be benchmarked under a new 32-sample-dev setting. It has been shown in previous work that using larger development sets confers a significant advantage beyond the few-shot setting. FewGLUE_32dev is built by adding few-shot dev sets with 32 examples randomly selected from the original/unused SuperGLUE training sets.
### Data Format
The data files follow the exact same format as SuperGLUE task files.
### Structure
For each SuperGLUE task 'T', the directory 'FewGLUE_32dev/T' contains the 32-sample-dev file ('URL'), which consists of 32 examples for few-shot validation.
| [
"# FewGLUE_32dev\n\nThis repository contains the FewGLUE_32dev dataset, an extension of the FewGLUE, which enables NLU few-shot learning tasks to be benchmarked under a new 32-sample-dev setting. It has been proved in previous work that using larger development sets confer a significant advantage beyond few-shot. FewGLUE_32dev is built by adding additional few-shot dev sets with 32 examples randomly selected from the original/unused SuperGLUE training sets.",
"### Data Format\n\nThe data files follow the exact same format as SuperGLUE task files.",
"### Structure\n\nFor each SuperGLUE task 'T', the directory 'FewGLUE_32dev/T' contains the 32-sample-dev file ('URL'), which consists of 32 examples for few-shot validation."
] | [
"TAGS\n#arxiv-2012.15723 #region-us \n",
"# FewGLUE_32dev\n\nThis repository contains the FewGLUE_32dev dataset, an extension of the FewGLUE, which enables NLU few-shot learning tasks to be benchmarked under a new 32-sample-dev setting. It has been proved in previous work that using larger development sets confer a significant advantage beyond few-shot. FewGLUE_32dev is built by adding additional few-shot dev sets with 32 examples randomly selected from the original/unused SuperGLUE training sets.",
"### Data Format\n\nThe data files follow the exact same format as SuperGLUE task files.",
"### Structure\n\nFor each SuperGLUE task 'T', the directory 'FewGLUE_32dev/T' contains the 32-sample-dev file ('URL'), which consists of 32 examples for few-shot validation."
] |
9a2b5f9fe33bf2ef9cc1b19cdb532574299d6d71 | This dataset was gathered from the [Google Fact Checker API](https://toolbox.google.com/factcheck/explorer) using an automatic web scraper. 10,000 facts were pulled, but for the sake of simplicity only the ones whose ratings were the single words "false" or "true" were kept, which filtered the data down to ~3,000 fact checks, with about 90% of the facts being false.
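A sketch of the rating filter described above; the file name and the `rating` column are hypothetical, as the actual scraper code is not part of this card:

```python
import pandas as pd

# Hypothetical export of the ~10,000 scraped fact checks.
df = pd.read_csv("google_factcheck_claims.csv")

# Keep only claims whose textual rating is exactly "true" or "false".
mask = df["rating"].str.strip().str.lower().isin(["true", "false"])
filtered = df[mask]
print(len(filtered), "fact checks kept")
```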
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: polifact-covid-fact-checker
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- question-answering
task_ids:
- fact-checking
- multi-label-classification
- sentiment-classification
- closed-domain-qa
- extractive-qa | justinqbui/covid_fact_checked_google_api | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-13T00:51:50+00:00 | [] | [] | TAGS
#region-us
| This dataset was gathered from the Google Fact Checker API using an automatic web scraper. 10,000 facts were pulled, but for the sake of simplicity only the ones whose ratings were the single words "false" or "true" were kept, which filtered the data down to ~3,000 fact checks, with about 90% of the facts being false.
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: polifact-covid-fact-checker
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- question-answering
task_ids:
- fact-checking
- multi-label-classification
- sentiment-classification
- closed-domain-qa
- extractive-qa | [] | [
"TAGS\n#region-us \n"
] |
3b6b4bf045e9f17a84c6e8df92cb9d290d36e500 | This dataset was gathered by an automated web scraper that scraped the [PolitiFact COVID-19 fact checker](https://www.politifact.com/coronavirus/). The dataset contains three columns: the text, the rating given by PolitiFact (half-true, full-flop, pants-fire, barely-true, true, mostly-true, and false), and the adjusted rating.
The adjusted rating was created by mapping the raw rating given by PolitiFact as follows:
```
true -> true
mostly-true -> true
half-true -> misleading
barely-true -> misleading
false -> false
pants-fire -> false
full-flop -> false
```
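The same mapping expressed as a small Python helper (a direct transcription of the table above; the function name is illustrative):

```python
# Raw PolitiFact rating -> adjusted rating, exactly as listed above.
ADJUSTED_RATING = {
    "true": "true",
    "mostly-true": "true",
    "half-true": "misleading",
    "barely-true": "misleading",
    "false": "false",
    "pants-fire": "false",
    "full-flop": "false",
}

def adjust_rating(raw: str) -> str:
    return ADJUSTED_RATING[raw.strip().lower()]
```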
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: polifact-covid-fact-checker
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- question-answering
task_ids:
- fact-checking
- multi-label-classification
- sentiment-classification
- closed-domain-qa
- extractive-qa | justinqbui/covid_fact_checked_polifact | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-13T00:33:36+00:00 | [] | [] | TAGS
#region-us
| This dataset was gathered by an automated web scraper that scraped the PolitiFact COVID-19 fact checker. The dataset contains three columns: the text, the rating given by PolitiFact (half-true, full-flop, pants-fire, barely-true, true, mostly-true, and false), and the adjusted rating.
The adjusted rating was created by mapping the raw rating given by PolitiFact.
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: polifact-covid-fact-checker
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- question-answering
task_ids:
- fact-checking
- multi-label-classification
- sentiment-classification
- closed-domain-qa
- extractive-qa | [] | [
"TAGS\n#region-us \n"
] |
755454d31bf8cdd1dc7e52e7c63d37d3a33f2069 | Just for test. The copy of the dataset https://www.kaggle.com/dataclusterlabs/domestic-house-windows-dataset | k0t1k/test | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-19T16:45:26+00:00 | [] | [] | TAGS
#region-us
| Just for test. The copy of the dataset URL | [] | [
"TAGS\n#region-us \n"
] |
4f51527df44a7f7f915bee494f1129915118d0e1 | # CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
CORD dataset is cloned from [clovaai](https://github.com/clovaai/cord) GitHub repo
- Box coordinates are normalized against image width/height
- Labels with very few occurrences are replaced with O:
```
replacing_labels = ['menu.etc', 'menu.itemsubtotal',
'menu.sub_etc', 'menu.sub_unitprice',
'menu.vatyn', 'void_menu.nm',
'void_menu.price', 'sub_total.othersvc_price']
```
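As an illustration only, the two preprocessing steps above might look roughly like this (the field names and the 0–1000 coordinate scale are assumptions, not the repository's exact code):

```python
# Rare labels that are mapped to the outside tag O.
replacing_labels = ['menu.etc', 'menu.itemsubtotal',
                    'menu.sub_etc', 'menu.sub_unitprice',
                    'menu.vatyn', 'void_menu.nm',
                    'void_menu.price', 'sub_total.othersvc_price']

def normalize_box(box, width, height):
    """Scale absolute pixel coordinates by the image width/height
    (here into a 0-1000 range, a common convention for layout models)."""
    x0, y0, x1, y1 = box
    return [int(1000 * x0 / width), int(1000 * y0 / height),
            int(1000 * x1 / width), int(1000 * y1 / height)]

def clean_label(label):
    """Replace labels with very few occurrences by O."""
    return "O" if label in replacing_labels else label
```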
Check for more info [Sparrow](https://github.com/katanaml/sparrow)
## Citation
### CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
```
@article{park2019cord,
  title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
  author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk},
  booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
  year={2019}
}
```
### Post-OCR parsing: building simple and robust parser via BIO tagging
```
@article{hwang2019post,
  title={Post-OCR parsing: building simple and robust parser via BIO tagging},
  author={Hwang, Wonseok and Kim, Seonghyeon and Yim, Jinyeong and Seo, Minjoon and Park, Seunghyun and Park, Sungrae and Lee, Junyeop and Lee, Bado and Lee, Hwalsuk},
  booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
  year={2019}
}
``` | katanaml/cord | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-03-06T15:02:45+00:00 | [] | [] | TAGS
#region-us
| # CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
CORD dataset is cloned from clovaai GitHub repo
- Box coordinates are normalized against image width/height
- Labels with very few occurrences are replaced with O:
Check for more info Sparrow
### CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
### Post-OCR parsing: building simple and robust parser via BIO tagging
| [
"# CORD: A Consolidated Receipt Dataset for Post-OCR Parsing\n\nCORD dataset is cloned from clovaai GitHub repo\n\n- Box coordinates are normalized against image width/height\n- Labels with very few occurrences are replaced with O:\n\n\n\nCheck for more info Sparrow",
"### CORD: A Consolidated Receipt Dataset for Post-OCR Parsing",
"### Post-OCR parsing: building simple and robust parser via BIO tagging"
] | [
"TAGS\n#region-us \n",
"# CORD: A Consolidated Receipt Dataset for Post-OCR Parsing\n\nCORD dataset is cloned from clovaai GitHub repo\n\n- Box coordinates are normalized against image width/height\n- Labels with very few occurrences are replaced with O:\n\n\n\nCheck for more info Sparrow",
"### CORD: A Consolidated Receipt Dataset for Post-OCR Parsing",
"### Post-OCR parsing: building simple and robust parser via BIO tagging"
] |
314b0e9b26b114e9731e645439c75a5e93ca21f6 | https://www.geogebra.org/m/cwcveget
https://www.geogebra.org/m/b8dzxk6z
https://www.geogebra.org/m/nqanttum
https://www.geogebra.org/m/pd3g8a4u
https://www.geogebra.org/m/jw8324jz
https://www.geogebra.org/m/wjbpvz5q
https://www.geogebra.org/m/qm3g3ma6
https://www.geogebra.org/m/sdajgph8
https://www.geogebra.org/m/e3ghhcbf
https://www.geogebra.org/m/msne4bfm
https://www.geogebra.org/m/nmcv2te5
https://www.geogebra.org/m/hguqx6cn
https://www.geogebra.org/m/jnyvpgqu
https://www.geogebra.org/m/syctd97g
https://www.geogebra.org/m/nq9erdby
https://www.geogebra.org/m/au4har8c | katoensp/VR-OP | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-03-30T14:54:47+00:00 | [] | [] | TAGS
#region-us
| URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL | [] | [
"TAGS\n#region-us \n"
] |
65abe73d128fe38c1da174718ecef300f8e204c0 | A cleaned version of the MC4 dataset for Sinhala; the config is a direct adaptation of the original MC4 processing script. | keshan/clean-si-mc4 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-14T09:14:11+00:00 | [] | [] | TAGS
#region-us
| A cleaned version of MC4 dataset for Sinhala, config is a direct adaptation of MC4 original processing script. | [] | [
"TAGS\n#region-us \n"
] |
d8458d504dd9f497ef5a009976c253c97e6270a0 | This data set contains multi-speaker high quality transcribed audio data for Sinhalese. The data set consists of wave files and the transcriptions of the audio files.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in Sri Lanka.
See [LICENCE.txt](https://www.openslr.org/resources/30/LICENSE.txt) file for license information.
If you use this data in publications, please cite it as follows:
```
@inproceedings{Sodimana2018,
author={Keshan Sodimana and Pasindu {De Silva} and Supheakmungkol Sarin and Oddur Kjartansson and Martin Jansche and Knot Pipatsrisawat and Linne Ha},
title={{A Step-by-Step Process for Building TTS Voices Using Open Source Data and Frameworks for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese}},
year=2018,
booktitle={Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages},
pages={66--70},
doi={10.21437/SLTU.2018-14},
url={http://dx.doi.org/10.21437/SLTU.2018-14}
}
``` | keshan/multispeaker-tts-sinhala | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-10-04T14:39:30+00:00 | [] | [] | TAGS
#region-us
| This data set contains multi-speaker high quality transcribed audio data for Sinhalese. The data set consists of wave files and the transcriptions of the audio files.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in Sri Lanka.
See URL file for license information.
If you use this data in publications, please cite it as follows:
| [] | [
"TAGS\n#region-us \n"
] |
a35806ef4a6f4f79a13fc09b82e81a346ff8272f | https://github.com/google-research-datasets/wit
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset.
WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages.
```
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
``` | keshan/wit-dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-07T17:15:42+00:00 | [] | [] | TAGS
#region-us
| URL
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset.
WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages.
| [] | [
"TAGS\n#region-us \n"
] |
d5dd7720cc49cc604a9817302c7175f627406537 | # Models Trained On ManyTypes4TypeScript
- [**CodeBERT**](https://huggingface.co/kevinjesse/codebert-MT4TS)
- [**GraphCodeBERT**](https://huggingface.co/kevinjesse/graphcodebert-MT4TS)
- [**CodeBERTa**](https://huggingface.co/kevinjesse/codeberta-MT4TS)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Dataset:** [https://doi.org/10.5281/zenodo.6387001](https://doi.org/10.5281/zenodo.6387001)
- **PapersWithCode:** [https://paperswithcode.com/sota/type-prediction-on-manytypes4typescript](https://paperswithcode.com/sota/type-prediction-on-manytypes4typescript)
### Dataset Summary
ManyTypes4TypeScript type inference dataset, available at the DOI link below. [](https://doi.org/10.5281/zenodo.6387001)
Given a line of source code, the task is to identify the types that correspond to the code tokens. We treat this as a tagging task, similar to NER and POS tagging, where the model must predict a structural property of the code, i.e., types. This is a classification task where the labels are the most frequently occurring types in the training dataset. The size of the type vocabulary can be changed with the scripts found on GitHub.
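A minimal sketch of inspecting this tagging format, assuming the dataset loads directly with the `datasets` library under this repository id:

```python
from datasets import load_dataset

ds = load_dataset("kevinjesse/ManyTypes4TypeScript", split="validation")
example = ds[0]

# Print only the tokens that carry a type annotation (non-null labels).
for token, label in zip(example["tokens"], example["labels"]):
    if label is not None:
        print(f"{token:20s} -> {label}")
```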
### Supported Tasks and Leaderboards
- `multi-class-classification`: The dataset can be used to train a model for predicting types across a sequence.
### Languages
- TypeScript
## Dataset Structure
### Data Instances
An example of 'validation' looks as follows.
```
{
"tokens": ["import", "{", "Component", ",", "ChangeDetectorRef", "}", "from", "'@angular/core'", ";", "import", "{", "Router", "}", "from", "'@angular/router'", ";", "import", "{", "MenuController", "}", "from", "'@ionic/angular'", ";", "import", "{", "Storage", "}", "from", "'@ionic/storage'", ";", "import", "Swiper", "from", "'swiper'", ";", "@", "Component", "(", "{", "selector", ":", "'page-tutorial'", ",", "templateUrl", ":", "'tutorial.html'", ",", "styleUrls", ":", "[", "'./tutorial.scss'", "]", ",", "}", ")", "export", "class", "TutorialPage", "{", "showSkip", "=", "true", ";", "private", "slides", ":", "Swiper", ";", "constructor", "(", "public", "menu", ",", "public", "router", ",", "public", "storage", ",", "private", "cd", ")", "{", "}", "startApp", "(", ")", "{", "this", ".", "router", ".", "navigateByUrl", "(", "'/app/tabs/schedule'", ",", "{", "replaceUrl", ":", "true", "}", ")", ".", "then", "(", "(", ")", "=>", "this", ".", "storage", ".", "set", "(", "'ion_did_tutorial'", ",", "true", ")", ")", ";", "}", "setSwiperInstance", "(", "swiper", ")", "{", "this", ".", "slides", "=", "swiper", ";", "}", "onSlideChangeStart", "(", ")", "{", "this", ".", "showSkip", "=", "!", "this", ".", "slides", ".", "isEnd", ";", "this", ".", "cd", ".", "detectChanges", "(", ")", ";", "}", "ionViewWillEnter", "(", ")", "{", "this", ".", "storage", ".", "get", "(", "'ion_did_tutorial'", ")", ".", "then", "(", "res", "=>", "{", "if", "(", "res", "===", "true", ")", "{", "this", ".", "router", ".", "navigateByUrl", "(", "'/app/tabs/schedule'", ",", "{", "replaceUrl", ":", "true", "}", ")", ";", "}", "}", ")", ";", "this", ".", "menu", ".", "enable", "(", "false", ")", ";", "}", "ionViewDidLeave", "(", ")", "{", "this", ".", "menu", ".", "enable", "(", "true", ")", ";", "}", "}"],
"labels": [null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "MenuController", null, null, "Router", null, null, "Storage", null, null, "ChangeDetectorRef", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "Swiper", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null],
"url": "https://github.com/ionic-team/ionic-conference-app",
"path": "ionic-conference-app/src/app/pages/tutorial/tutorial.ts",
"commit_hash": "34d97d29369377a2f0173a2958de1ee0dadb8a6e",
"file": "tutorial.ts"}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
| field name  | type        | description                                |
|-------------|-------------|--------------------------------------------|
|tokens |list[string] | Sequence of tokens (word tokenization) |
|labels |list[string] | A list of corresponding types |
|url |string | Repository URL |
|path |string | Original file path that contains this code |
|commit_hash |string | Commit identifier in the original project |
|file |string | File name |
### Data Splits
| name | train |validation| test |
|---------:|---------:|---------:|--------:|
|projects | 75.00% | 12.5% | 12.5% |
|files | 90.53% | 4.43% | 5.04% |
|sequences | 91.95% | 3.71% | 4.34% |
|types | 95.33% | 2.21% | 2.46% |
## Types by the Numbers
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
Human annotated types in optionally typed languages and the compiler inferred annotations.
#### Annotation process
#### Who are the annotators?
Developers and TypeScript Compiler.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/kevinjesse
### Licensing Information
Creative Commons 4.0 (CC) license
### Citation Information
```
``` | kevinjesse/ManyTypes4TypeScript | [
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:code",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found", "machine-generated"], "language_creators": ["found"], "language": ["code"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": ["type-inference"], "pretty_name": "ManyTypes4TypeScript", "language_details": "TypeScript"} | 2022-10-22T07:35:33+00:00 | [] | [
"code"
] | TAGS
#annotations_creators-found #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-code #license-cc-by-4.0 #region-us
| Models Trained On ManyTypes4TypeScript
======================================
* [CodeBERT](URL
* [GraphCodeBERT](URL
* [CodeBERTa](URL
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Dataset: URL
* PapersWithCode: URL
### Dataset Summary
ManyTypes4TypeScript type inference dataset, available at the DOI link below. 
field name.: labels, type: list[string], description: A list of corresponding types
field name.: url, type: string, description: Repository URL
field name.: path, type: string, description: Original file path that contains this code
field name.: commit\_hash, type: string, description: Commit identifier in the original project
field name.: file, type: string, description: File name
### Data Splits
## Types by the Numbers
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
Human annotated types in optionally typed languages and the compiler inferred annotations.
#### Annotation process
#### Who are the annotators?
Developers and TypeScript Compiler.
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
URL
### Licensing Information
Creative Commons 4.0 (CC) license
| [
"### Dataset Summary\n\n\nManyTypes4TypeScript type inference dataset, available at the DOI link below. \nfield name.: labels, type: list[string], description: A list of corresponding types\nfield name.: url, type: string, description: Repository URL\nfield name.: path, type: string, description: Original file path that contains this code\nfield name.: commit\\_hash, type: string, description: Commit identifier in the original project\nfield name.: file, type: string, description: File name",
"### Data Splits",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations\n\n\nHuman annotated types in optionally typed languages and the compiler inferred annotations.",
"#### Annotation process",
"#### Who are the annotators?\n\n\nDevelopers and TypeScript Compiler.",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nURL",
"### Licensing Information\n\n\nCreative Commons 4.0 (CC) license"
] | [
"TAGS\n#annotations_creators-found #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-code #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nManyTypes4TypeScript type inference dataset, available at the DOI link below. \nfield name.: labels, type: list[string], description: A list of corresponding types\nfield name.: url, type: string, description: Repository URL\nfield name.: path, type: string, description: Original file path that contains this code\nfield name.: commit\\_hash, type: string, description: Commit identifier in the original project\nfield name.: file, type: string, description: File name",
"### Data Splits",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations\n\n\nHuman annotated types in optionally typed languages and the compiler inferred annotations.",
"#### Annotation process",
"#### Who are the annotators?\n\n\nDevelopers and TypeScript Compiler.",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nURL",
"### Licensing Information\n\n\nCreative Commons 4.0 (CC) license"
] |
30b869bd3b4e62823247bdda5b1d17b9aa0b47fc | For identifying personifications | kevinlu1248/personificationgen | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-02T04:03:44+00:00 | [] | [] | TAGS
#region-us
| For identifying personifications | [] | [
"TAGS\n#region-us \n"
] |
b96cea6dd58782036d00b1b878f60470347a77f2 | # AI Stage MRC task
## Version Info
### v4.1.1
- Dataset with punctuation added to the v3.2.3 data (train_dataset_aug), for both train and validation
- Available in train_aug_punctuation
- Fixes the error in v4.1.0
### v4.1.0
- Dataset with punctuation added to the v3.2.2 data (train_dataset_aug), for both train and validation
- Available in train_data_aug
- Contains incorrectly labeled answers
### v4.0.1
- Dataset with punctuation added, for both train and validation
- answers type is correct
### v4.0.0
- Dataset with punctuation added, train only
- answers type error
### v3.2.3
- Fixed the incorrect [ANSWER] positions in `v3.2.2`
### v3.2.2
- Removed the special token ([TITLE]) from `v3.2.1`
### v3.2.1
- Added the special token ([ANSWER]) to `v3.2.0`
### v3.2.0
- Added special tokens ([TITLE], #) to `v1.3.1`
### v3.1.0
- `v3.0.0` with entity words found by an NER model appended after the question
### v3.0.0
- `v1.0.0` with Taewook's answer and sentence-split tokens added
### v2.1.1
- `v2.1.0` concatenated with the augmentation data from `v3.2.3`
- Use train and validation in the bt_context_extractive_final folder
### v2.1.0
- Pororo context augmentation for extractive models
- Only examples whose answer appears exactly once in the context were augmented; answer positions have been adjusted
- Added train and validation to the context_bt_for_extracive folder
### v2.0.1
- `v2.0.0` with examples removed whose answer was corrupted inside the context
### v2.0.0
- Dataset with Chaeeun's context back-translation added
### v1.6.4
- Reorganized the `train_dataset_curri` folder from `v1.6.3`
    - `train_level_1` & `train_level_2` -> `train_level_1`
    - `train_level_3` -> `train_level_2`
    - `train_total`, which merges all `train_level_#` sets
### v1.6.3
- Added the `train_dataset_curri` folder to `v1.6.2`; samples were scored and organized into `level0` ~ `level3`
- Datasets used: `train`, `train_perm01`, `train_perm02`, `train_perm04`, `train_mask_2`, `train_hard_mask`, `pororo_aug_ver2_len_context_easy`, `pororo_aug_ver2_len_context_normal`, `pororo_aug_ver2_len_context_hard`
### v1.6.2
- Added the `train_mask_2` and `train_hard_mask` data to the `train_dataset` folder of `v1.6.1`
### v1.6.1
- In the `train_dataset` folder of `v1.6.0`:
    - Added curriculum-learning datasets based on the context length of `train_pororo_aug_ver2`
        - easy : `len < 673`
        - normal : `673 <= len < 935`
        - hard : `935 <= len`
    - Reflects the updated dataset from `v1.4.1`
### v1.6.0
- Added sentence-permutation data with permutation ratios 0.1, 0.2, and 0.4 to the `train_dataset` folder of `v1.3.2`
### v1.5.0
- `v1.4.1` with masking datasets for confusing words and date information added
### v1.4.4
- `v1.4.1` with augmented data (including train/valid pororo ver1) added as concat and shuffled concat
### v1.4.3
- `v1.4.1` with augmented data (excluding train/valid pororo ver1) added as concat and shuffled concat
### v1.4.2
- `v1.4.1` with entity words found by an NER model appended after the question
### v1.4.1
- Added pororo aug ver2: following the question types shared by Daewoong, the original 7 question templates were expanded to 45 and pororo augmentation was applied
### v1.3.2
- Applied the preprocessing that was missing for 'train_dataset_aeda' in 'v1.3.1'
### v1.3.1
- Added the `train_dataset_aug` folder to `v.1.3.0` (concatenation of particle removal from questions, Back Translation, AEDA, and pororo aug ver1)
### v1.3.0
- `v1.2.0` with 50,531 augmented examples built from `wiki_documents.json` using pororo aug
### v1.2.0
- `v1.1.0` with particle removal from questions, Back Translation, and AEDA augmentation added (not applied to pororo aug)
### v1.1.0
- `v1.0.0` with entity words found by an NER model appended after the question
### v1.0.0
- `v0.1.1` with preprocessing applied to the contexts
### v0.2.2
- Fixed question and answer errors in the `train` and `validation` splits of `train_dataset`
### v0.2.1
- Added the same summaries to `train_pororo_aug` and `validation_pororo_aug`
- Fixed an error found in `context_bullet` (sentences unrelated to the `context` were being generated)
### v0.2.0
- Dataset with Daewoong's pororo context summaries added
### v0.1.1
- Dataset with Yeongjae's pororo augmentation added
- Fixed question and answer errors in the `train` and `validation` splits of `train_dataset`
### v0.1.0
- Dataset with Yeongjae's pororo augmentation added
### v0.0.0
- The base dataset provided by the competition
## LICENSE
- CC-BY-2.0
- All copyrights belong to AI Stage!
- https://stages.ai/
| kiyoung2/aistage-mrc | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-04T06:32:08+00:00 | [] | [] | TAGS
#region-us
| # AI Stage MRC task
## Version Info
### v4.1.1
- v3.2.3데이터 (train_dataset_aug)에 punctuation추가한 데이터셋, both train and validation
- train_aug_punctuation에 있음
- v4.1.0의 오류 해결
### v4.1.0
- v3.2.2데이터(train_dataset_aug)에 punctuation추가한 데이터셋, both train and validation
- train_data_aug에 있음
- answers 잘못 labeling된 데이터
### v4.0.1
- punctuation추가한 데이터셋, both train and validation
- answers type 정상
### v4.0.0
- punctuation추가한 데이터셋, only train
- answers type 오류
### v3.2.3
- 'v3.2.2'에서 잘못된 [ANSWER] 위치 수정
### v3.2.2
- 'v3.2.1'에서 special token([TITLE]) 제거
### v3.2.1
- 'v3.2.0'에서 special token([ANSWER]) 추가
### v3.2.0
- 'v1.3.1'에서 special tokens([TITLE], #) 추가
### v3.1.0
- 'v3.0.0'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가
### v3.0.0
- 'v1.0.0'에서 태욱님 answer, sentence split 토큰 추가
### v2.1.1
- 'v2.1.0'에서 'v3.2.3'의 augmentation 데이터와 concat
- bt_context_extractive_final폴더에 train, validation사용
### v2.1.0
- extractive모델을 위한 pororo context augmentation
- context내에 answer가 유일한 데이터만 증강, answer위치 조정 완료
- context_bt_for_extracive폴더에 train, validation 추가
### v2.0.1
- 'v2.0.0'에서 context내 answer가 손상된 데이터 제거
### v2.0.0
- 채은님 context backtranslation 추가한 데이터셋
### v1.6.4
- 'v1.6.3'에서 'train_dataset_curri' 폴더 내 구성 변경
- 'train_level_1' & 'train_level_2'-> 'train_level_1'
- 'train_level_3' -> 'train_level_2'
- 'train_level_#'을 모두 합친 'train_total'
### v1.6.3
- 'v1.6.2'에서 'train_dataset_curri' 폴더 추가, 샘플별 스코어링하여 'level0' ~ 'level3'으로 구성
- 사용 데이터셋 : 'train', 'train_perm01', 'train_perm02', 'train_perm04', 'train_mask_2', 'train_hard_mask', 'pororo_aug_ver2_len_context_easy', 'pororo_aug_ver2_len_context_normal', 'pororo_aug_ver2_len_context_hard'
### v1.6.2
- 'v1.6.1'에서 'train_dataset' 폴더에 'train_mask_2', 'train_hard_mask' 데이터 추가
### v1.6.1
- 'v1.6.0'에서 'train_dataset' 폴더에
- 'train_pororo_aug_ver2'의 context length 기준으로 curriculum-learning 데이터셋 추가
- easy : 'len < 673'
- normal : '673 <= len < 935'
- hard : '935 <= len'
- 'v1.4.1'의 업데이트 데이터셋 반영
### v1.6.0
- 'v1.3.2'에서 'train_dataset' 폴더에 permutation ratio 0.1, 0.2, 0.4의 sentence permutation 데이터 추가
### v1.5.0
- 'v1.4.1'에서 헷갈리는 단어, 날짜 정보 Masking datsets 추가
### v1.4.4
- 'v1.4.1'에서 증강 데이터(train, valid pororo ver1 포함) concat, shuffled concat 추가
### v1.4.3
- 'v1.4.1'에서 증강 데이터(train, valid pororo ver1 제외) concat, shuffled concat 추가
### v1.4.2
- 'v1.4.1'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가
### v1.4.1
- 대웅님께서 공유해주신 질문유형을 반영하여 기존의 질문을 7개에서 45개로 늘려 pororo aug 적용하여 pororo aug ver2 추가
### v1.3.2
- 'v1.3.1'에 'train_dataset_aeda'에 preprocessing 이 누락 되어 처리
### v1.3.1
- 'v.1.3.0'에 'train_dataset_aug' 폴더 추가(question에 대한 조사 제거, Back Translation, AEDA, pororo aug ver1을 concatenate함)
### v1.3.0
- 'v1.2.0'에서 'wiki_documents.json'을 pororo aug를 활용해 50,531건의 증강 데이터 추가
### v1.2.0
- 'v1.1.0'에서 question에 대한 조사 제거, Back Translation, AEDA Augmentation 추가(pororo aug엔 적용하지 않음)
### v1.1.0
- 'v1.0.0'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가
### v1.0.0
- 'v0.1.1'에서 context에 전처리
### v0.2.2
- 'train_dataset'의 'train', 'validation' 셋에서 문제 및 정답오류 수정
### v0.2.1
- 'train_pororo_aug', 'validation_pororo_aug'에도 동일한 summary 추가
- 'context_bullet'에서 발견된 오류 수정('context'와 관련 없는 문장이 생성되는 오류)
### v0.2.0
- 대웅님 pororo context summary 추가한 데이터셋
### v0.1.1
- 영재님 pororo augmenation 추가한 데이터셋
- 'train_dataset'의 'train', 'validation' 셋에서 문제 및 정답오류 수정
### v0.1.0
- 영재님 pororo augmenation 추가한 데이터셋
### v0.0.0
- 대회에서 제공해주신 기본 데이터셋
## LICENSE
- CC-BY-2.0
- 모든 저작권은 AI Stage에게 있습니다!
- URL
| [
"# AI Stage MRC task",
"## Version Info",
"### v4.1.1\n- v3.2.3데이터 (train_dataset_aug)에 punctuation추가한 데이터셋, both train and validation\n- train_aug_punctuation에 있음\n- v4.1.0의 오류 해결",
"### v4.1.0\n- v3.2.2데이터(train_dataset_aug)에 punctuation추가한 데이터셋, both train and validation\n- train_data_aug에 있음\n- answers 잘못 labeling된 데이터",
"### v4.0.1\n- punctuation추가한 데이터셋, both train and validation\n- answers type 정상",
"### v4.0.0\n- punctuation추가한 데이터셋, only train \n- answers type 오류",
"### v3.2.3\n- 'v3.2.2'에서 잘못된 [ANSWER] 위치 수정",
"### v3.2.2\n- 'v3.2.1'에서 special token([TITLE]) 제거",
"### v3.2.1\n- 'v3.2.0'에서 special token([ANSWER]) 추가",
"### v3.2.0\n- 'v1.3.1'에서 special tokens([TITLE], #) 추가",
"### v3.1.0\n- 'v3.0.0'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가",
"### v3.0.0\n- 'v1.0.0'에서 태욱님 answer, sentence split 토큰 추가",
"### v2.1.1\n- 'v2.1.0'에서 'v3.2.3'의 augmentation 데이터와 concat\n- bt_context_extractive_final폴더에 train, validation사용",
"### v2.1.0\n- extractive모델을 위한 pororo context augmentation\n- context내에 answer가 유일한 데이터만 증강, answer위치 조정 완료\n- context_bt_for_extracive폴더에 train, validation 추가",
"### v2.0.1\n- 'v2.0.0'에서 context내 answer가 손상된 데이터 제거",
"### v2.0.0\n- 채은님 context backtranslation 추가한 데이터셋",
"### v1.6.4\n- 'v1.6.3'에서 'train_dataset_curri' 폴더 내 구성 변경\n - 'train_level_1' & 'train_level_2'-> 'train_level_1'\n - 'train_level_3' -> 'train_level_2'\n - 'train_level_#'을 모두 합친 'train_total'",
"### v1.6.3\n- 'v1.6.2'에서 'train_dataset_curri' 폴더 추가, 샘플별 스코어링하여 'level0' ~ 'level3'으로 구성\n- 사용 데이터셋 : 'train', 'train_perm01', 'train_perm02', 'train_perm04', 'train_mask_2', 'train_hard_mask', 'pororo_aug_ver2_len_context_easy', 'pororo_aug_ver2_len_context_normal', 'pororo_aug_ver2_len_context_hard'",
"### v1.6.2\n- 'v1.6.1'에서 'train_dataset' 폴더에 'train_mask_2', 'train_hard_mask' 데이터 추가",
"### v1.6.1\n- 'v1.6.0'에서 'train_dataset' 폴더에\n - 'train_pororo_aug_ver2'의 context length 기준으로 curriculum-learning 데이터셋 추가\n - easy : 'len < 673'\n - normal : '673 <= len < 935'\n - hard : '935 <= len'\n - 'v1.4.1'의 업데이트 데이터셋 반영",
"### v1.6.0\n- 'v1.3.2'에서 'train_dataset' 폴더에 permutation ratio 0.1, 0.2, 0.4의 sentence permutation 데이터 추가",
"### v1.5.0\n- 'v1.4.1'에서 헷갈리는 단어, 날짜 정보 Masking datsets 추가",
"### v1.4.4\n- 'v1.4.1'에서 증강 데이터(train, valid pororo ver1 포함) concat, shuffled concat 추가",
"### v1.4.3\n- 'v1.4.1'에서 증강 데이터(train, valid pororo ver1 제외) concat, shuffled concat 추가",
"### v1.4.2\n- 'v1.4.1'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가",
"### v1.4.1\n- 대웅님께서 공유해주신 질문유형을 반영하여 기존의 질문을 7개에서 45개로 늘려 pororo aug 적용하여 pororo aug ver2 추가",
"### v1.3.2\n- 'v1.3.1'에 'train_dataset_aeda'에 preprocessing 이 누락 되어 처리",
"### v1.3.1\n- 'v.1.3.0'에 'train_dataset_aug' 폴더 추가(question에 대한 조사 제거, Back Translation, AEDA, pororo aug ver1을 concatenate함)",
"### v1.3.0\n- 'v1.2.0'에서 'wiki_documents.json'을 pororo aug를 활용해 50,531건의 증강 데이터 추가",
"### v1.2.0\n- 'v1.1.0'에서 question에 대한 조사 제거, Back Translation, AEDA Augmentation 추가(pororo aug엔 적용하지 않음)",
"### v1.1.0\n- 'v1.0.0'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가",
"### v1.0.0\n- 'v0.1.1'에서 context에 전처리",
"### v0.2.2\n- 'train_dataset'의 'train', 'validation' 셋에서 문제 및 정답오류 수정",
"### v0.2.1\n- 'train_pororo_aug', 'validation_pororo_aug'에도 동일한 summary 추가\n- 'context_bullet'에서 발견된 오류 수정('context'와 관련 없는 문장이 생성되는 오류)",
"### v0.2.0\n- 대웅님 pororo context summary 추가한 데이터셋",
"### v0.1.1\n- 영재님 pororo augmenation 추가한 데이터셋\n- 'train_dataset'의 'train', 'validation' 셋에서 문제 및 정답오류 수정",
"### v0.1.0\n- 영재님 pororo augmenation 추가한 데이터셋",
"### v0.0.0\n- 대회에서 제공해주신 기본 데이터셋",
"## LICENSE\n- CC-BY-2.0\n- 모든 저작권은 AI Stage에게 있습니다!\n- URL"
] | [
"TAGS\n#region-us \n",
"# AI Stage MRC task",
"## Version Info",
"### v4.1.1\n- v3.2.3데이터 (train_dataset_aug)에 punctuation추가한 데이터셋, both train and validation\n- train_aug_punctuation에 있음\n- v4.1.0의 오류 해결",
"### v4.1.0\n- v3.2.2데이터(train_dataset_aug)에 punctuation추가한 데이터셋, both train and validation\n- train_data_aug에 있음\n- answers 잘못 labeling된 데이터",
"### v4.0.1\n- punctuation추가한 데이터셋, both train and validation\n- answers type 정상",
"### v4.0.0\n- punctuation추가한 데이터셋, only train \n- answers type 오류",
"### v3.2.3\n- 'v3.2.2'에서 잘못된 [ANSWER] 위치 수정",
"### v3.2.2\n- 'v3.2.1'에서 special token([TITLE]) 제거",
"### v3.2.1\n- 'v3.2.0'에서 special token([ANSWER]) 추가",
"### v3.2.0\n- 'v1.3.1'에서 special tokens([TITLE], #) 추가",
"### v3.1.0\n- 'v3.0.0'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가",
"### v3.0.0\n- 'v1.0.0'에서 태욱님 answer, sentence split 토큰 추가",
"### v2.1.1\n- 'v2.1.0'에서 'v3.2.3'의 augmentation 데이터와 concat\n- bt_context_extractive_final폴더에 train, validation사용",
"### v2.1.0\n- extractive모델을 위한 pororo context augmentation\n- context내에 answer가 유일한 데이터만 증강, answer위치 조정 완료\n- context_bt_for_extracive폴더에 train, validation 추가",
"### v2.0.1\n- 'v2.0.0'에서 context내 answer가 손상된 데이터 제거",
"### v2.0.0\n- 채은님 context backtranslation 추가한 데이터셋",
"### v1.6.4\n- 'v1.6.3'에서 'train_dataset_curri' 폴더 내 구성 변경\n - 'train_level_1' & 'train_level_2'-> 'train_level_1'\n - 'train_level_3' -> 'train_level_2'\n - 'train_level_#'을 모두 합친 'train_total'",
"### v1.6.3\n- 'v1.6.2'에서 'train_dataset_curri' 폴더 추가, 샘플별 스코어링하여 'level0' ~ 'level3'으로 구성\n- 사용 데이터셋 : 'train', 'train_perm01', 'train_perm02', 'train_perm04', 'train_mask_2', 'train_hard_mask', 'pororo_aug_ver2_len_context_easy', 'pororo_aug_ver2_len_context_normal', 'pororo_aug_ver2_len_context_hard'",
"### v1.6.2\n- 'v1.6.1'에서 'train_dataset' 폴더에 'train_mask_2', 'train_hard_mask' 데이터 추가",
"### v1.6.1\n- 'v1.6.0'에서 'train_dataset' 폴더에\n - 'train_pororo_aug_ver2'의 context length 기준으로 curriculum-learning 데이터셋 추가\n - easy : 'len < 673'\n - normal : '673 <= len < 935'\n - hard : '935 <= len'\n - 'v1.4.1'의 업데이트 데이터셋 반영",
"### v1.6.0\n- 'v1.3.2'에서 'train_dataset' 폴더에 permutation ratio 0.1, 0.2, 0.4의 sentence permutation 데이터 추가",
"### v1.5.0\n- 'v1.4.1'에서 헷갈리는 단어, 날짜 정보 Masking datsets 추가",
"### v1.4.4\n- 'v1.4.1'에서 증강 데이터(train, valid pororo ver1 포함) concat, shuffled concat 추가",
"### v1.4.3\n- 'v1.4.1'에서 증강 데이터(train, valid pororo ver1 제외) concat, shuffled concat 추가",
"### v1.4.2\n- 'v1.4.1'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가",
"### v1.4.1\n- 대웅님께서 공유해주신 질문유형을 반영하여 기존의 질문을 7개에서 45개로 늘려 pororo aug 적용하여 pororo aug ver2 추가",
"### v1.3.2\n- 'v1.3.1'에 'train_dataset_aeda'에 preprocessing 이 누락 되어 처리",
"### v1.3.1\n- 'v.1.3.0'에 'train_dataset_aug' 폴더 추가(question에 대한 조사 제거, Back Translation, AEDA, pororo aug ver1을 concatenate함)",
"### v1.3.0\n- 'v1.2.0'에서 'wiki_documents.json'을 pororo aug를 활용해 50,531건의 증강 데이터 추가",
"### v1.2.0\n- 'v1.1.0'에서 question에 대한 조사 제거, Back Translation, AEDA Augmentation 추가(pororo aug엔 적용하지 않음)",
"### v1.1.0\n- 'v1.0.0'에서 Question 뒤에 NER 모델로 찾은 Entity 단어 추가",
"### v1.0.0\n- 'v0.1.1'에서 context에 전처리",
"### v0.2.2\n- 'train_dataset'의 'train', 'validation' 셋에서 문제 및 정답오류 수정",
"### v0.2.1\n- 'train_pororo_aug', 'validation_pororo_aug'에도 동일한 summary 추가\n- 'context_bullet'에서 발견된 오류 수정('context'와 관련 없는 문장이 생성되는 오류)",
"### v0.2.0\n- 대웅님 pororo context summary 추가한 데이터셋",
"### v0.1.1\n- 영재님 pororo augmenation 추가한 데이터셋\n- 'train_dataset'의 'train', 'validation' 셋에서 문제 및 정답오류 수정",
"### v0.1.0\n- 영재님 pororo augmenation 추가한 데이터셋",
"### v0.0.0\n- 대회에서 제공해주신 기본 데이터셋",
"## LICENSE\n- CC-BY-2.0\n- 모든 저작권은 AI Stage에게 있습니다!\n- URL"
] |
c62321036e5647db5767ecaff139912b554dc938 |
# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization
Authors: [Wojciech Kryściński](https://twitter.com/iam_wkr), [Nazneen Rajani](https://twitter.com/nazneenrajani), [Divyansh Agarwal](https://twitter.com/jigsaw2212), [Caiming Xiong](https://twitter.com/caimingxiong), [Dragomir Radev](http://www.cs.yale.edu/homes/radev/)
## Introduction
The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases.
While relevant, such datasets will offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.
Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.
The domain and structure of our dataset pose a unique set of challenges for summarization systems, including processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.
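As a rough sketch (not part of the original release), the chapter-level data can be inspected with the `datasets` library, assuming the `chapter` and `summary_text` columns used in this repository's evaluation config:

```python
from datasets import load_dataset

booksum = load_dataset("kmfoda/booksum", split="test")
sample = booksum[0]

print(sample["chapter"][:500])       # long-form source document
print("---")
print(sample["summary_text"][:500])  # human-written reference summary
```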
## Links
- [paper](https://arxiv.org/abs/2105.08209) by SalesForce Research
- [GitHub repo](https://github.com/salesforce/booksum)
<p align="center"><img src="misc/book_sumv4.png"></p>
## Table of Contents
1. [Citation](#citation)
2. [Legal Note](#legal-note)
3. [License](#license)
## Citation
```
@article{kryscinski2021booksum,
title={BookSum: A Collection of Datasets for Long-form Narrative Summarization},
author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev},
year={2021},
eprint={2105.08209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Legal Note
By downloading or using the resources, including any code or scripts, shared in this code
repository, you hereby agree to the following terms, and your use of the resources is conditioned
on and subject to these terms.
1. You may only use the scripts shared in this code repository for research purposes. You
may not use or allow others to use the scripts for any other purposes and other uses are
expressly prohibited.
2. You will comply with all terms and conditions, and are responsible for obtaining all
rights, related to the services you access and the data you collect.
3. We do not make any representations or warranties whatsoever regarding the sources from
which data is collected. Furthermore, we are not liable for any damage, loss or expense of
any kind arising from or relating to your use of the resources shared in this code
repository or the data collected, regardless of whether such liability is based in tort,
contract or otherwise.
## License
The code is released under the **BSD-3 License** (see `LICENSE.txt` for details). | kmfoda/booksum | [
"license:bsd-3-clause",
"arxiv:2105.08209",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": ["bsd-3-clause"], "train-eval-index": [{"config": "kmfoda--booksum", "task": "summarization", "task_id": "summarization", "splits": {"eval_split": "test"}, "col_mapping": {"chapter": "text", "summary_text": "target"}}]} | 2022-11-30T12:03:43+00:00 | [
"2105.08209"
] | [] | TAGS
#license-bsd-3-clause #arxiv-2105.08209 #region-us
|
# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization
Authors: Wojciech Kryściński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, Dragomir Radev
## Introduction
The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases.
While relevant, such datasets will offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.
Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.
The domain and structure of our dataset pose a unique set of challenges for summarization systems, including processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.
## Links
- paper by SalesForce Research
- GitHub repo
<p align="center"><img src="misc/book_sumv4.png"></p>
## Table of Contents
1. Citation
2. Legal Note
3. License
## Legal Note
By downloading or using the resources, including any code or scripts, shared in this code
repository, you hereby agree to the following terms, and your use of the resources is conditioned
on and subject to these terms.
1. You may only use the scripts shared in this code repository for research purposes. You
may not use or allow others to use the scripts for any other purposes and other uses are
expressly prohibited.
2. You will comply with all terms and conditions, and are responsible for obtaining all
rights, related to the services you access and the data you collect.
3. We do not make any representations or warranties whatsoever regarding the sources from
which data is collected. Furthermore, we are not liable for any damage, loss or expense of
any kind arising from or relating to your use of the resources shared in this code
repository or the data collected, regardless of whether such liability is based in tort,
contract or otherwise.
## License
The code is released under the BSD-3 License (see 'URL' for details). | [
"# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization\nAuthors: Wojciech Kryściński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, Dragomir Radev",
"## Introduction\nThe majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases. \nWhile relevant, such datasets will offer limited challenges for future generations of text summarization systems.\nWe address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.\nOur dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.\nThe domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.\nTo facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.",
"## Links\n\n- paper by SalesForce Research\n- GitHub repo\n\n<p align=\"center\"><img src=\"misc/book_sumv4.png\"></p>",
"## Table of Contents\n\n1. Citation\n2. Legal Note\n3. License",
"## Legal Note\nBy downloading or using the resources, including any code or scripts, shared in this code\nrepository, you hereby agree to the following terms, and your use of the resources is conditioned\non and subject to these terms.\n1. You may only use the scripts shared in this code repository for research purposes. You\nmay not use or allow others to use the scripts for any other purposes and other uses are\nexpressly prohibited.\n2. You will comply with all terms and conditions, and are responsible for obtaining all\nrights, related to the services you access and the data you collect.\n3. We do not make any representations or warranties whatsoever regarding the sources from\nwhich data is collected. Furthermore, we are not liable for any damage, loss or expense of\nany kind arising from or relating to your use of the resources shared in this code\nrepository or the data collected, regardless of whether such liability is based in tort,\ncontract or otherwise.",
"## License\nThe code is released under the BSD-3 License (see 'URL' for details)."
] | [
"TAGS\n#license-bsd-3-clause #arxiv-2105.08209 #region-us \n",
"# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization\nAuthors: Wojciech Kryściński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, Dragomir Radev",
"## Introduction\nThe majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases. \nWhile relevant, such datasets will offer limited challenges for future generations of text summarization systems.\nWe address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.\nOur dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.\nThe domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.\nTo facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.",
"## Links\n\n- paper by SalesForce Research\n- GitHub repo\n\n<p align=\"center\"><img src=\"misc/book_sumv4.png\"></p>",
"## Table of Contents\n\n1. Citation\n2. Legal Note\n3. License",
"## Legal Note\nBy downloading or using the resources, including any code or scripts, shared in this code\nrepository, you hereby agree to the following terms, and your use of the resources is conditioned\non and subject to these terms.\n1. You may only use the scripts shared in this code repository for research purposes. You\nmay not use or allow others to use the scripts for any other purposes and other uses are\nexpressly prohibited.\n2. You will comply with all terms and conditions, and are responsible for obtaining all\nrights, related to the services you access and the data you collect.\n3. We do not make any representations or warranties whatsoever regarding the sources from\nwhich data is collected. Furthermore, we are not liable for any damage, loss or expense of\nany kind arising from or relating to your use of the resources shared in this code\nrepository or the data collected, regardless of whether such liability is based in tort,\ncontract or otherwise.",
"## License\nThe code is released under the BSD-3 License (see 'URL' for details)."
] |
fb7b55ab3e4cfaab691a7f33316421799e1cc2ef | Wikigold with IOB tags
| knilakshan20/wikigold | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-09T18:45:00+00:00 | [] | [] | TAGS
#region-us
| Wikigold with IOB tags
| [] | [
"TAGS\n#region-us \n"
] |
90cc464bf21bd49eba977b2b7a56590038c1c19a | # Beethoven Sonatas Dataset
Beethoven is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.
The dataset was originally introduced in the SampleRNN paper by Mehri et al. (2017) and download details from the original paper can be found at https://github.com/soroushmehr/sampleRNN_ICLR2017/tree/master/datasets/music. Here, we provide a more convenient download of a processed version of the dataset in order to standardize future use.
We include two versions of the dataset:
- `beethoven.zip` is a zip file containing 4328 8-second audio clips sampled at 16kHz. These were generated by first joining all the piano sonatas, and then splitting the track into 8-second chunks. This data can also be used with the https://github.com/HazyResearch/state-spaces repository to reproduce SaShiMi results, and was the dataset used in the paper.
- `beethoven_raw.zip` contains the raw audio tracks, sampled at 16kHz.
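A rough sketch of how the 8-second clips above could be reproduced from the raw audio (the file names and the use of librosa/soundfile are assumptions, not the original preprocessing code):

```python
import librosa
import soundfile as sf

sr = 16000
chunk_len = 8 * sr  # 8-second chunks at 16 kHz

# Hypothetical single file containing the joined piano sonatas.
audio, _ = librosa.load("beethoven_joined.wav", sr=sr, mono=True)

# Cut into non-overlapping 8-second clips: 0.wav, 1.wav, ...
for i in range(len(audio) // chunk_len):
    chunk = audio[i * chunk_len:(i + 1) * chunk_len]
    sf.write(f"{i}.wav", chunk, sr)
```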
We recommend (and follow) the following train-validation-test split for the audio files in `beethoven.zip` (we attempted to recreate the splits from the SampleRNN work as closely as possible):
- `0.wav` to `3807.wav` for training
- `3808.wav` to `4067.wav` for validation
- `4068.wav` to `4327.wav` for testing
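A small sketch of materializing this split from the file indices (assuming the clips from `beethoven.zip` sit in the working directory):

```python
# Recommended split by clip index.
splits = {
    "train": range(0, 3808),
    "validation": range(3808, 4068),
    "test": range(4068, 4328),
}
files = {name: [f"{i}.wav" for i in idxs] for name, idxs in splits.items()}
print({name: len(f) for name, f in files.items()})  # 3808 / 260 / 260 clips
```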
You can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:
```
@article{goel2022sashimi,
title={It's Raw! Audio Generation with State-Space Models},
author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
journal={arXiv preprint arXiv:2202.09729},
year={2022}
}
@inproceedings{mehri2017samplernn,
title={SampleRNN: An Unconditional End-to-End Neural Audio Generation Model},
author={Mehri, Soroush and Kumar, Kundan and Gulrajani, Ishaan and Kumar, Rithesh and Jain, Shubham and Sotelo, Jose and Courville, Aaron and Bengio, Yoshua},
booktitle={International Conference on Learning Representations},
year={2017}
}
``` | krandiash/beethoven | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-22T03:25:50+00:00 | [] | [] | TAGS
#region-us
| # Beethoven Sonatas Dataset
Beethoven is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.
The dataset was originally introduced in the SampleRNN paper by Mehri et al. (2017) and download details from the original paper can be found at URL Here, we provide a more convenient download of a processed version of the dataset in order to standardize future use.
We include two versions of the dataset:
- 'URL' is a zip file containing 4328 8-second audio clips sampled at 16kHz. These were generated by first joining all the piano sonatas, and then splitting the track into 8-second chunks. This data can also be used with the URL repository to reproduce SaShiMi results, and was the dataset used in the paper.
- 'beethoven_raw.zip' contains the raw audio tracks, sampled at 16kHz.
We recommend (and follow) the following train-validation-test split for the audio files in 'URL' (we attempted to recreate the splits from the SampleRNN work as closely as possible):
- '0.wav' to 'URL' for training
- 'URL' to 'URL' for validation
- 'URL' to 'URL' for testing
You can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:
| [
"# Beethoven Sonatas Dataset\n\nBeethoven is a raw audio waveform dataset used in the paper \"It's Raw! Audio Generation with State-Space Models\". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.\n\nThe dataset was originally introduced in the SampleRNN paper by Mehri et al. (2017) and download details from the original paper can be found at URL Here, we provide a more convenient download of a processed version of the dataset in order to standardize future use.\n\n\nWe include two versions of the dataset:\n- 'URL' is a zip file containing 4328 8-second audio clips sampled at 16kHz. These were generated by first joining all the piano sonatas, and then splitting the track into 8-second chunks. This data can also be used with the URL repository to reproduce SaShiMi results, and was the dataset used in the paper.\n- 'beethoven_raw.zip' contains the raw audio tracks, sampled at 16kHz.\n\nWe recommend (and follow) the following train-validation-test split for the audio files in 'URL' (we attempted to recreate the splits from the SampleRNN work as closely as possible):\n\n- '0.wav' to 'URL' for training\n- 'URL' to 'URL' for validation\n- 'URL' to 'URL' for testing\n\nYou can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:"
] | [
"TAGS\n#region-us \n",
"# Beethoven Sonatas Dataset\n\nBeethoven is a raw audio waveform dataset used in the paper \"It's Raw! Audio Generation with State-Space Models\". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.\n\nThe dataset was originally introduced in the SampleRNN paper by Mehri et al. (2017) and download details from the original paper can be found at URL Here, we provide a more convenient download of a processed version of the dataset in order to standardize future use.\n\n\nWe include two versions of the dataset:\n- 'URL' is a zip file containing 4328 8-second audio clips sampled at 16kHz. These were generated by first joining all the piano sonatas, and then splitting the track into 8-second chunks. This data can also be used with the URL repository to reproduce SaShiMi results, and was the dataset used in the paper.\n- 'beethoven_raw.zip' contains the raw audio tracks, sampled at 16kHz.\n\nWe recommend (and follow) the following train-validation-test split for the audio files in 'URL' (we attempted to recreate the splits from the SampleRNN work as closely as possible):\n\n- '0.wav' to 'URL' for training\n- 'URL' to 'URL' for validation\n- 'URL' to 'URL' for testing\n\nYou can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:"
] |
fe62f33d2af5db6f01e504ec1f360da7df9692e8 | # SC09 Dataset
SC09 is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It was previously used as a challenging problem for unconditional audio generation by Donahue et al. (2019), and was originally introduced as a dataset for keyword spotting by Warden (2018). The SC09 dataset consists of 1s clips of utterances of the digits zero through nine across a variety of speakers, with diverse accents and noise conditions.
We include an `sc09.zip` file that contains:
- folders `zero` through `nine`, each containing audio files sampled at 16kHz corresponding to utterances for the digit
- `validation_list.txt` containing the list of validation utterances
- `testing_list.txt` containing the list of testing utterances
- the original `LICENSE` file
We split the data into train-val-test for training SaShiMi models and baselines by following the splits provided in `validation_list.txt` and `testing_list.txt`.
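A small sketch of partitioning the extracted files with those lists (the paths are assumptions based on the zip layout described above):

```python
from pathlib import Path

root = Path("sc09")
val = set((root / "validation_list.txt").read_text().split())
test = set((root / "testing_list.txt").read_text().split())

splits = {"train": [], "validation": [], "test": []}
for wav in root.glob("*/*.wav"):
    rel = f"{wav.parent.name}/{wav.name}"  # e.g. "seven/abc123_nohash_0.wav"
    if rel in val:
        splits["validation"].append(wav)
    elif rel in test:
        splits["test"].append(wav)
    else:
        splits["train"].append(wav)

print({k: len(v) for k, v in splits.items()})
```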
We also include a `sc09_quantized.zip` file, which contains examples that were used in our MTurk study (details of which can be found in the SaShiMi paper). In particular, we take 50 random examples from each digit class and run each through a round of mu-law quantization followed by dequantization. This mimics the quantization noise that is experienced by samples generated by autoregressive models that are trained with mu-law quantization.
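A rough numpy sketch of that mu-law quantize/dequantize round trip (an illustration, not the exact script used to build `sc09_quantized.zip`):

```python
import numpy as np

def mu_law_round_trip(x, mu=255):
    """x is a float waveform in [-1, 1]; returns its 8-bit mu-law reconstruction."""
    # Encode: companding followed by quantization into mu + 1 bins.
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    bins = np.clip(((y + 1) / 2 * mu + 0.5).astype(np.int32), 0, mu)
    # Decode: dequantize and expand back to the waveform domain.
    y_hat = 2 * (bins.astype(np.float32) / mu) - 1
    return np.sign(y_hat) * ((1 + mu) ** np.abs(y_hat) - 1) / mu
```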
You can use the following BibTeX entries to appropriately cite prior work related to this dataset if you decide to use this in your research:
```
@article{goel2022sashimi,
title={It's Raw! Audio Generation with State-Space Models},
author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
journal={arXiv preprint arXiv:2202.09729},
year={2022}
}
@inproceedings{donahue2019adversarial,
title={Adversarial Audio Synthesis},
author={Donahue, Chris and McAuley, Julian and Puckette, Miller},
booktitle={International Conference on Learning Representations},
year={2019}
}
@article{Warden2018SpeechCA,
title={Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition},
author={Pete Warden},
journal={ArXiv},
year={2018},
volume={abs/1804.03209}
}
``` | krandiash/sc09 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-22T03:26:11+00:00 | [] | [] | TAGS
#region-us
| # SC09 Dataset
SC09 is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It was previously used as a challenging problem for unconditional audio generation by Donahue et al. (2019), and was originally introduced as a dataset for keyword spotting by Warden (2018). The SC09 dataset consists of 1s clips of utterances of the digits zero through nine across a variety of speakers, with diverse accents and noise conditions.
We include an 'URL' file that contains:
- folders 'zero' through 'nine', each containing audio files sampled at 16kHz corresponding to utterances for the digit
- 'validation_list.txt' containing the list of validation utterances
- 'testing_list.txt' containing the list of testing utterances
- the original 'LICENSE' file
We split the data into train-val-test for training SaShiMi models and baselines by following the splits provided in 'validation_list.txt' and 'testing_list.txt'.
We also include a 'sc09_quantized.zip' file, which contains examples that were used in our MTurk study (details of which can be found in the SaShiMi paper). In particular, we take 50 random examples from each digit class and run each through a round of mu-law quantization followed by dequantization. This mimics the quantization noise that is experienced by samples generated by autoregressive models that are trained with mu-law quantization.
You can use the following BibTeX entries to appropriately cite prior work related to this dataset if you decide to use this in your research:
| [
"# SC09 Dataset\n\nSC09 is a raw audio waveform dataset used in the paper \"It's Raw! Audio Generation with State-Space Models\". It was previously used as a challenging problem for unconditional audio generation by Donahue et al. (2019), and was originally introduced as a dataset for keyword spotting by Warden (2018). The SC09 dataset consists of 1s clips of utterances of the digits zero through nine across a variety of speakers, with diverse accents and noise conditions.\n\nWe include an 'URL' file that contains:\n- folders 'zero' through 'nine', each containing audio files sampled at 16kHz corresponding to utterances for the digit\n- 'validation_list.txt' containing the list of validation utterances\n- 'testing_list.txt' containing the list of testing utterances\n- the original 'LICENSE' file\n\nWe split the data into train-val-test for training SaShiMi models and baselines by following the splits provided in 'validation_list.txt' and 'testing_list.txt'.\n\nWe also include a 'sc09_quantized.zip' file, which contains examples that were used in our MTurk study (details of which can be found in the SaShiMi paper). In particular, we take 50 random examples from each digit class and run each through a round of mu-law quantization followed by dequantization. This mimics the quantization noise that is experienced by samples generated by autoregressive models that are trained with mu-law quantization.\n\nYou can use the following BibTeX entries to appropriately cite prior work related to this dataset if you decide to use this in your research:"
] | [
"TAGS\n#region-us \n",
"# SC09 Dataset\n\nSC09 is a raw audio waveform dataset used in the paper \"It's Raw! Audio Generation with State-Space Models\". It was previously used as a challenging problem for unconditional audio generation by Donahue et al. (2019), and was originally introduced as a dataset for keyword spotting by Warden (2018). The SC09 dataset consists of 1s clips of utterances of the digits zero through nine across a variety of speakers, with diverse accents and noise conditions.\n\nWe include an 'URL' file that contains:\n- folders 'zero' through 'nine', each containing audio files sampled at 16kHz corresponding to utterances for the digit\n- 'validation_list.txt' containing the list of validation utterances\n- 'testing_list.txt' containing the list of testing utterances\n- the original 'LICENSE' file\n\nWe split the data into train-val-test for training SaShiMi models and baselines by following the splits provided in 'validation_list.txt' and 'testing_list.txt'.\n\nWe also include a 'sc09_quantized.zip' file, which contains examples that were used in our MTurk study (details of which can be found in the SaShiMi paper). In particular, we take 50 random examples from each digit class and run each through a round of mu-law quantization followed by dequantization. This mimics the quantization noise that is experienced by samples generated by autoregressive models that are trained with mu-law quantization.\n\nYou can use the following BibTeX entries to appropriately cite prior work related to this dataset if you decide to use this in your research:"
] |
f1f42e8f692f0b5352c07efb93091d6a2453e2b0 | # YouTubeMix Dataset
YouTubeMix is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.
The dataset uses the audio track from https://www.youtube.com/watch?v=EhO_MrRfftU,
and was originally used in the SampleRNN GitHub repository from the Deep Sound Project (https://github.com/deepsound-project/samplernn-pytorch).
_Please note that download and use of this data should be for academic and research purposes only, in order to constitute fair use under US copyright law. We take no responsibility for any copyright infringements that take place by users who download and use this data._
We include two versions of the dataset:
- `youtubemix.zip` is a zip file containing 241 1-minute audio clips (re)sampled at 16kHz. These were generated by splitting the original audio track. This is provided for use with the https://github.com/HazyResearch/state-spaces repository to reproduce SaShiMi results, and was the dataset used in the paper.
- `raw.wav` is the raw audio track from the YouTube video, sampled at 44.1kHz.
We recommend (and follow) the following train-validation-test split for the audio files in `youtubemix.zip` (a small helper for building these file lists is sketched after the list):
- `out000.wav` to `out211.wav` for training
- `out212.wav` to `out225.wav` for validation
- `out226.wav` to `out240.wav` for testing
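Because the clips follow a simple numbered naming scheme, the split can be written down directly in code. The snippet below is a minimal sketch and assumes only that the unzipped files keep their original `outNNN.wav` names:

```python
# Build the recommended train/validation/test file lists for youtubemix.zip
# (assumes the unzipped files keep their original outNNN.wav names).
split_files = {
    "train": [f"out{i:03d}.wav" for i in range(0, 212)],         # out000.wav .. out211.wav
    "validation": [f"out{i:03d}.wav" for i in range(212, 226)],  # out212.wav .. out225.wav
    "test": [f"out{i:03d}.wav" for i in range(226, 241)],        # out226.wav .. out240.wav
}
```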
You can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:
```
@article{goel2022sashimi,
title={It's Raw! Audio Generation with State-Space Models},
author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
journal={arXiv preprint arXiv:2202.09729},
year={2022}
}
@misc{deepsound,
author = {DeepSound},
title = {SampleRNN},
year = {2017},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/deepsound-project/samplernn-pytorch}},
}
``` | krandiash/youtubemix | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-22T03:26:01+00:00 | [] | [] | TAGS
#region-us
| # YouTubeMix Dataset
YouTubeMix is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.
The dataset uses the audio track from URL
and was originally used in the SampleRNN GitHub repository from the Deep Sound Project (URL
_Please note that download and use of this data should be for academic and research purposes only, in order to constitute fair use under US copyright law. We take no responsibility for any copyright infringements that take place by users who download and use this data._
We include two versions of the dataset:
- 'URL' is a zip file containing 241 1-minute audio clips (re)sampled at 16kHz. These were generated by splitting the original audio track. This is provided for use with the URL repository to reproduce SaShiMi results, and was the dataset used in the paper.
- 'URL' is the raw audio track from the YouTube video, sampled at 44.1kHz.
We recommend (and follow) the following train-validation-test split for the audio files in 'URL':
- 'URL' to 'URL' for training
- 'URL' to 'URL' for validation
- 'URL' to 'URL' for testing
You can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:
| [
"# YouTubeMix Dataset\n\nYouTubeMix is a raw audio waveform dataset used in the paper \"It's Raw! Audio Generation with State-Space Models\". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.\n\nThe dataset uses the audio track from URL\nand was originally used in the SampleRNN GitHub repository from the Deep Sound Project (URL\n\n_Please note that download and use of this data should be for academic and research purposes only, in order to constitute fair use under US copyright law. We take no responsibility for any copyright infringements that take place by users who download and use this data._\n\n\n\nWe include two versions of the dataset:\n- 'URL' is a zip file containing 241 1-minute audio clips (re)sampled at 16kHz. These were generated by splitting the original audio track. This is provided for use with the URL repository to reproduce SaShiMi results, and was the dataset used in the paper.\n- 'URL' is the raw audio track from the YouTube video, sampled at 44.1kHz.\n\nWe recommend (and follow) the following train-validation-test split for the audio files in 'URL':\n- 'URL' to 'URL' for training\n- 'URL' to 'URL' for validation\n- 'URL' to 'URL' for testing\n\nYou can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:"
] | [
"TAGS\n#region-us \n",
"# YouTubeMix Dataset\n\nYouTubeMix is a raw audio waveform dataset used in the paper \"It's Raw! Audio Generation with State-Space Models\". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.\n\nThe dataset uses the audio track from URL\nand was originally used in the SampleRNN GitHub repository from the Deep Sound Project (URL\n\n_Please note that download and use of this data should be for academic and research purposes only, in order to constitute fair use under US copyright law. We take no responsibility for any copyright infringements that take place by users who download and use this data._\n\n\n\nWe include two versions of the dataset:\n- 'URL' is a zip file containing 241 1-minute audio clips (re)sampled at 16kHz. These were generated by splitting the original audio track. This is provided for use with the URL repository to reproduce SaShiMi results, and was the dataset used in the paper.\n- 'URL' is the raw audio track from the YouTube video, sampled at 44.1kHz.\n\nWe recommend (and follow) the following train-validation-test split for the audio files in 'URL':\n- 'URL' to 'URL' for training\n- 'URL' to 'URL' for validation\n- 'URL' to 'URL' for testing\n\nYou can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:"
] |
c03311f799a8599b310cf2a5f43ee8a1f86cfd1f |
# Dataset Card for kudo-research/mustc-en-es-text-only
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ict.fbk.eu/must-c-release-v1-2/](https://ict.fbk.eu/must-c-release-v1-2/)
- **Repository:** n/a
- **Paper:** [MuST-C: A multilingual corpus for end-to-end speech translation](https://www.sciencedirect.com/science/article/abs/pii/S0885230820300887)
- **Leaderboard:** n/a
- **Point of Contact:** Roldano Cattoni <[email protected]>; Marco Turchi <[email protected]>
### Dataset Summary
This dataset is a selection of text only (English-Spanish) from the MuST-C corpus.
MuST-C is a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 14 languages (Arabic, Chinese, Czech, Dutch, French, German, Italian, Persian, Portuguese, Romanian, Russian, Spanish, Turkish and Vietnamese).
For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations.
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for machine-translation.
[More Information Needed]
### Languages
- en-US
- es-ES
## Dataset Structure
### Data Instances
Dataset example:
```
{
"translation": {
"en": "I'll tell you one quick story to illustrate what that's been like for me.",
"es": "Les diré una rápida historia para ilustrar lo que ha sido para mí."
}
}
```
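A short loading sketch with the Hugging Face `datasets` library; the repository id matches this card, while the exact split names exposed by the loader are an assumption:

```python
# Hedged example: load the corpus and read one English-Spanish pair
# (split names are assumed to be train/validation/test).
from datasets import load_dataset

dataset = load_dataset("kudo-research/mustc-en-es-text-only")
pair = dataset["train"][0]["translation"]
print(pair["en"])  # English source sentence
print(pair["es"])  # Spanish translation
```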
### Data Fields
The fields are:
- `translation`: an object containing two items, constructed as key-value pairs:
- language code (key)
- text (value)
### Data Splits
More Information Needed...
| | Train | Valid | Test |
|-------------------------|---------|-------|------|
| Input Sentences | 265,625 | 1316 | 2502 |
| Average Sentence Length | n/a | n/a | n/a |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
TED Talks
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
FBK - Fondazione Bruno Kessler, Trento, Italy
- Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, Marco Turchi
### Licensing Information
- TED talks are copyrighted by TED Conference LLC and licensed under a
Creative Commons Attribution-NonCommercial-NoDerivs 4.0
(cfr. https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)
- the MuST-C corpus is released under the same Creative Commons
Attribution-NonCommercial-NoDerivs 4.0 License.
### Citation Information
Bibtex reference:
```
@article{CATTONI2021101155,
title = {MuST-C: A multilingual corpus for end-to-end speech translation},
journal = {Computer Speech & Language},
volume = {66},
pages = {101155},
year = {2021},
issn = {0885-2308},
doi = {https://doi.org/10.1016/j.csl.2020.101155},
url = {https://www.sciencedirect.com/science/article/pii/S0885230820300887},
author = {Roldano Cattoni and Mattia Antonino {Di Gangi} and Luisa Bentivogli and Matteo Negri and Marco Turchi},
keywords = {Spoken language translation, Multilingual corpus},
abstract = {End-to-end spoken language translation (SLT) has recently gained popularity thanks to the advancement of sequence to sequence learning in its two parent tasks: automatic speech recognition (ASR) and machine translation (MT). However, research in the field has to confront with the scarcity of publicly available corpora to train data-hungry neural networks. Indeed, while traditional cascade solutions can build on sizable ASR and MT training data for a variety of languages, the available SLT corpora suitable for end-to-end training are few, typically small and of limited language coverage. We contribute to fill this gap by presenting MuST-C, a large and freely available Multilingual Speech Translation Corpus built from English TED Talks. Its unique features include: i) language coverage and diversity (from English into 14 languages from different families), ii) size (at least 237 hours of transcribed recordings per language, 430 on average), iii) variety of topics and speakers, and iv) data quality. Besides describing the corpus creation methodology and discussing the outcomes of empirical and manual quality evaluations, we present baseline results computed with strong systems on each language direction covered by MuST-C.}
}```
[DOI available here](https://doi.org/10.1016/j.csl.2020.101155)
### Contributions
Thanks to [@dblandan](https://github.com/dblandan) for adding this dataset.
| kudo-research/mustc-en-es-text-only | [
"annotations_creators:other",
"language_creators:other",
"multilinguality:translation",
"size_categories:unknown",
"language:en",
"language:es",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en", "es"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["conditional-text-generation"], "task_ids": ["machine-translation"], "pretty_name": "must-c_en-es_text-only", "language_bcp47": ["en-US", "es-ES"]} | 2022-10-22T07:40:43+00:00 | [] | [
"en",
"es"
] | TAGS
#annotations_creators-other #language_creators-other #multilinguality-translation #size_categories-unknown #language-English #language-Spanish #license-cc-by-nc-nd-4.0 #region-us
| Dataset Card for kudo-research/mustc-en-es-text-only
====================================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
- Initial Data Collection and Normalization
- Who are the source language producers?
+ Annotations
- Annotation process
- Who are the annotators?
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: n/a
* Paper: MuST-C: A multilingual corpus for end-to-end speech translation
* Leaderboard: n/a
* Point of Contact: Roldano Cattoni [cattoni@URL](mailto:cattoni@URL); Marco Turchi [turchi@URL](mailto:turchi@URL)
### Dataset Summary
This dataset is a selection of text only (English-Spanish) from the MuST-C corpus.
MuST-C is a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 14 languages (Arabic, Chinese, Czech, Dutch, French, German, Italian, Persian, Portuguese, Romanian, Russian, Spanish, Turkish and Vietnamese).
For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations.
### Supported Tasks and Leaderboards
* 'machine-translation': The dataset can be used to train a model for machine-translation.
### Languages
* en-US
* es-ES
Dataset Structure
-----------------
### Data Instances
Dataset example:
### Data Fields
The fields are:
* 'translation': an object containing two items, constructed as key-value pairs:
+ language code (key)
+ text (value)
### Data Splits
...
Dataset Creation
----------------
### Curation Rationale
### Source Data
TED Talks
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
FBK - Fondazione Bruno Kessler, Trento, Italy
* Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, Marco Turchi
### Licensing Information
* TED talks are copyrighted by TED Conference LLC and licensed under a
Creative Commons Attribution-NonCommercial-NoDerivs 4.0
(cfr. URL
* the MuST-C corpus is released under the same Creative Commons
Attribution-NonCommercial-NoDerivs 4.0 License.
Bibtex reference:
DOI available here
### Contributions
Thanks to @dblandan for adding this dataset.
| [
"### Dataset Summary\n\n\nThis dataset is a selection of text only (English-Spanish) from the MuST-C corpus.\n\n\nMuST-C is a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 14 languages (Arabic, Chinese, Czech, Dutch, French, German, Italian, Persian, Portuguese, Romanian, Russian, Spanish, Turkish and Vietnamese).\nFor each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations.",
"### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for machine-translation.",
"### Languages\n\n\n* en-US\n* es-ES\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nDataset example:",
"### Data Fields\n\n\nThe fields are:\n\n\n* 'translation': an object containing two items, constructed as key-value pairs:\n\t+ language code (key)\n\t+ text (value)",
"### Data Splits\n\n\n...\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data\n\n\nTED Talks",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nFBK - Fondazione Bruno Kessler, Trento, Italy\n\n\n* Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, Marco Turchi",
"### Licensing Information\n\n\n* TED talks are copyrighted by TED Conference LLC and licensed under a\nCreative Commons Attribution-NonCommercial-NoDerivs 4.0\n(cfr. URL\n* the MuST-C corpus is released under the same Creative Commons\nAttribution-NonCommercial-NoDerivs 4.0 License.\n\n\nBibtex reference:\n\n\nDOI available here",
"### Contributions\n\n\nThanks to @dblandan for adding this dataset."
] | [
"TAGS\n#annotations_creators-other #language_creators-other #multilinguality-translation #size_categories-unknown #language-English #language-Spanish #license-cc-by-nc-nd-4.0 #region-us \n",
"### Dataset Summary\n\n\nThis dataset is a selection of text only (English-Spanish) from the MuST-C corpus.\n\n\nMuST-C is a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 14 languages (Arabic, Chinese, Czech, Dutch, French, German, Italian, Persian, Portuguese, Romanian, Russian, Spanish, Turkish and Vietnamese).\nFor each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations.",
"### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for machine-translation.",
"### Languages\n\n\n* en-US\n* es-ES\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nDataset example:",
"### Data Fields\n\n\nThe fields are:\n\n\n* 'translation': an object containing two items, constructed as key-value pairs:\n\t+ language code (key)\n\t+ text (value)",
"### Data Splits\n\n\n...\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data\n\n\nTED Talks",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nFBK - Fondazione Bruno Kessler, Trento, Italy\n\n\n* Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, Marco Turchi",
"### Licensing Information\n\n\n* TED talks are copyrighted by TED Conference LLC and licensed under a\nCreative Commons Attribution-NonCommercial-NoDerivs 4.0\n(cfr. URL\n* the MuST-C corpus is released under the same Creative Commons\nAttribution-NonCommercial-NoDerivs 4.0 License.\n\n\nBibtex reference:\n\n\nDOI available here",
"### Contributions\n\n\nThanks to @dblandan for adding this dataset."
] |
c38ca7464e9934d9a49f88b3f60f5ad63b245465 | # Filtered WIT, an Image-Text Dataset.
A reliable dataset for running image-text models.
You can find WIT, Wikipedia Image Text Dataset, [here](https://github.com/google-research-datasets/wit)
Data was taken from [dalle-mini/wit](https://huggingface.co/datasets/dalle-mini/wit)
## Author
- [Aarush Katta](https://github.com/ARKseal)
## Data Structure
The data is stored as tars, containing 10,000 samples per tar.
The parquets contain the metadata of each tar, which was created using [this script](https://huggingface.co/datasets/laion/filtered-wit/blob/main/wit_create_meta.py)
Each tar contains a `.jpg`, `.txt`, and `.json`.
The image is stored in `.jpg`, the caption in `.txt`, and the metadata in `.json`.
The preferred method to read the data is [WebDataset](https://github.com/webdataset/webdataset)
Here's an example:
```python
import webdataset as wds

# Stream samples from one shard; each sample yields (caption, image, metadata)
dataset = wds.WebDataset('data/00000.tar').to_tuple('txt', 'jpg', 'json')

for text, image, meta in dataset:
    # Entries arrive as raw bytes; decode them (or add wds decoders) before further use
    print(
        text[:50],
        image[:50],
        meta[:50]
    )
```
## Filtering
Each sample has 8 candidate captions, which were compared to the image using [CLIP ViT-B32](https://arxiv.org/abs/2103.00020).
The captions were encoded with the [multilingual CLIP text encoder](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1).
Each candidate caption was compared to the encoded image using cosine similarity
and kept if the similarity was greater than `0.26`.
The new caption is the concatenation of the kept captions; samples with no surviving caption were dropped.
The script used is [filter_wit.py](https://huggingface.co/datasets/laion/filtered-wit/blob/main/filter_wit.py)
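For intuition, here is a rough sketch of that scoring rule. It is not the official `filter_wit.py`; the model identifiers, the use of `sentence-transformers`, and joining kept captions with spaces are assumptions.

```python
# Rough sketch of the caption filter described above (not the official script):
# keep captions whose CLIP cosine similarity with the image exceeds 0.26.
import torch
from PIL import Image
from sentence_transformers import SentenceTransformer

image_encoder = SentenceTransformer("clip-ViT-B-32")
text_encoder = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

def filter_captions(image_path, candidate_captions, threshold=0.26):
    img_emb = torch.tensor(image_encoder.encode([Image.open(image_path)]))
    txt_emb = torch.tensor(text_encoder.encode(candidate_captions))
    sims = torch.nn.functional.cosine_similarity(txt_emb, img_emb, dim=-1)
    kept = [c for c, s in zip(candidate_captions, sims.tolist()) if s > threshold]
    return " ".join(kept) if kept else None  # None -> drop the sample
```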
| laion/filtered-wit | [
"arxiv:2103.00020",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-29T22:12:01+00:00 | [
"2103.00020"
] | [] | TAGS
#arxiv-2103.00020 #region-us
| # Filtered WIT, an Image-Text Dataset.
A reliable Dataset to run Image-Text models.
You can find WIT, Wikipedia Image Text Dataset, here
Data was taken from dalle-mini/wit
## Author
- Aarush Katta
## Data Structure
The data is stored as tars, containing 10,000 samples per tar.
The parquets contain the metadata of each tar, which was crated using this script
Each tar contains a '.jpg', '.txt', and '.json'.
The image is stored in '.jpg', the caption in '.txt.' and the metadata in '.json'
The preferred method to read the data is WebDataset
Here's an example:
## Filteration
Each sample has 8 possible captions which were compared to the image using CLIP ViT-B32
The text was encoded using multilingual CLIP text encoder
Each possible caption was compared to the encoded image using Cosine Similarity
and kept if the sim was greater than '0.26'
Then the new caption was the filtered captions concatenated, and samples with no filtered caption were dropped.
The script used is filter_wit.py
| [
"# Filtered WIT, an Image-Text Dataset.\nA reliable Dataset to run Image-Text models.\n\nYou can find WIT, Wikipedia Image Text Dataset, here\nData was taken from dalle-mini/wit",
"## Author\n - Aarush Katta",
"## Data Structure\nThe data is stored as tars, containing 10,000 samples per tar.\nThe parquets contain the metadata of each tar, which was crated using this script\nEach tar contains a '.jpg', '.txt', and '.json'.\nThe image is stored in '.jpg', the caption in '.txt.' and the metadata in '.json'\nThe preferred method to read the data is WebDataset\nHere's an example:",
"## Filteration\nEach sample has 8 possible captions which were compared to the image using CLIP ViT-B32\nThe text was encoded using multilingual CLIP text encoder\nEach possible caption was compared to the encoded image using Cosine Similarity\nand kept if the sim was greater than '0.26'\nThen the new caption was the filtered captions concatenated, and samples with no filtered caption were dropped.\nThe script used is filter_wit.py"
] | [
"TAGS\n#arxiv-2103.00020 #region-us \n",
"# Filtered WIT, an Image-Text Dataset.\nA reliable Dataset to run Image-Text models.\n\nYou can find WIT, Wikipedia Image Text Dataset, here\nData was taken from dalle-mini/wit",
"## Author\n - Aarush Katta",
"## Data Structure\nThe data is stored as tars, containing 10,000 samples per tar.\nThe parquets contain the metadata of each tar, which was crated using this script\nEach tar contains a '.jpg', '.txt', and '.json'.\nThe image is stored in '.jpg', the caption in '.txt.' and the metadata in '.json'\nThe preferred method to read the data is WebDataset\nHere's an example:",
"## Filteration\nEach sample has 8 possible captions which were compared to the image using CLIP ViT-B32\nThe text was encoded using multilingual CLIP text encoder\nEach possible caption was compared to the encoded image using Cosine Similarity\nand kept if the sim was greater than '0.26'\nThen the new caption was the filtered captions concatenated, and samples with no filtered caption were dropped.\nThe script used is filter_wit.py"
] |
025f445e318a00406362710c57217bbef69aec6f |
# Dataset Card for Science Fiction TV Show Plots Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Format](#format)
- [Using the Dataset with Hugging Face](#call-scifi)
- [Original Dataset Structure](#dataset-structure)
- [Files in _OriginalStoriesSeparated_ Directory](#original-stories)
- [Additional Information](#additional-information)
- [Citation](#citation)
- [Licensing](#licensing)
## Dataset Description
A collection of long-running (80+ episodes) science fiction TV show plot synopses, scraped from Fandom.com wikis. Collected Nov 2017. Each episode is considered a "story".
Contains plot summaries from:
- Babylon 5 (https://babylon5.fandom.com/wiki/Main_Page) - 84 stories
- Doctor Who (https://tardis.fandom.com/wiki/Doctor_Who_Wiki) - 311 stories
- Doctor Who spin-offs - 95 stories
- Farscape (https://farscape.fandom.com/wiki/Farscape_Encyclopedia_Project:Main_Page) - 90 stories
- Fringe (https://fringe.fandom.com/wiki/FringeWiki) - 87 stories
- Futurama (https://futurama.fandom.com/wiki/Futurama_Wiki) - 87 stories
- Stargate (https://stargate.fandom.com/wiki/Stargate_Wiki) - 351 stories
- Star Trek (https://memory-alpha.fandom.com/wiki/Star_Trek) - 701 stories
- Star Wars books (https://starwars.fandom.com/wiki/Main_Page) - 205 stories, each book is a story
- Star Wars Rebels (https://starwarsrebels.fandom.com/wiki/Main_page) - 65 stories
- X-Files (https://x-files.fandom.com/wiki/Main_Page) - 200 stories
Total: 2276 stories
The dataset is "eventified" and generalized (see LJ Martin, P Ammanabrolu, X Wang, W Hancock, S Singh, B Harrison, and MO Riedl. Event Representations for Automated Story Generation with Deep Neural Nets, Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018, for details on these processes) and split into train-test-validation sets—separated by story so that full stories stay together—for converting events into full sentences.
---
### Format
| Dataset Split | Number of Stories in Split | Number of Sentences in Split |
| ------------- |--------------------------- |----------------------------- |
| Train | 1737 | 257,108 |
| Validation | 194 | 32,855 |
| Test | 450 | 30,938 |
#### Using the Dataset with Hugging Face
```
from datasets import load_dataset
#download and load the data
dataset = load_dataset('lara-martin/Scifi_TV_Shows')
#you can then get the individual splits
train = dataset['train']
test = dataset['test']
validation = dataset['validation']
```
Each split has 7 attributes (explained in more detail in the next section):
```
>>> print(train)
Dataset({
features: ['story_num', 'story_line', 'event', 'gen_event', 'sent', 'gen_sent', 'entities'],
num_rows: 257108
})
```
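Building on the snippet above, a small sketch of reading one example; the field names are taken from the feature list printed above:

```
# Inspect one training example and its 7 attributes
example = train[0]
print(example["story_num"], example["story_line"])  # story id and line number within the story
print(example["event"])      # 5-tuple event(s): subject, verb, direct object, modifier noun, preposition
print(example["gen_event"])  # generalized events (WordNet synsets / VerbNet classes)
print(example["sent"])       # original (split) sentence
print(example["gen_sent"])   # generalized sentence
print(example["entities"])   # numbered entities by tag for the whole story
```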
---
## Original Dataset Structure
* File names: scifi-val.txt, scifi-test.txt, & scifi-train.txt
* Each sentence of the stories is split into smaller sentences and the events are extracted.
* Each line of the file contains information about a single sentence, delimited by "|||". Each line contains, in order:
* The story number
* The line number (within the story)
* 5-tuple events in a list (subject, verb, direct object, modifier noun, preposition); e.g.,
``
[[u'Voyager', u'run', 'EmptyParameter', u'deuterium', u'out'], [u'Voyager', u'force', u'go', 'EmptyParameter', 'EmptyParameter'], [u'Voyager', u'go', 'EmptyParameter', u'mode', u'into']]
``
* generalized 5-tuple events in a list; events are generalized using WordNet and VerbNet; e.g.,
``
[['<VESSEL>0', 'function-105.2.1', 'EmptyParameter', "Synset('atom.n.01')", u'out'], ['<VESSEL>0', 'urge-58.1-1', u'escape-51.1-1', 'EmptyParameter', 'EmptyParameter'], ['<VESSEL>0', u'escape-51.1-1', 'EmptyParameter', "Synset('statistic.n.01')", u'into']]
``
* original sentence (These sentences are split to contain fewer events per sentence. For the full original sentence, see the OriginalStoriesSeparated directory.); e.g.,
``
The USS Voyager is running out of deuterium as a fuel and is forced to go into Gray mode.
``
* generalized sentence; only nouns are generalized (using WordNet); e.g.,
``
the <VESSEL>0 is running out of Synset('atom.n.01') as a Synset('matter.n.03') and is forced to go into Synset('horse.n.01') Synset('statistic.n.01').
``
* a dictionary of numbered entities by tag within the _entire story_ (e.g. the second entity in the "<ORGANIZATION>" list in the dictionary would be <ORGANIZATION>1 in the story above—index starts at 0); e.g.,
``
{'<ORGANIZATION>': ['seven of nine', 'silver blood'], '<LOCATION>': ['sickbay', 'astrometrics', 'paris', 'cavern', 'vorik', 'caves'], '<DATE>': ['an hour ago', 'now'], '<MISC>': ['selected works', 'demon class', 'electromagnetic', 'parises', 'mimetic'], '<DURATION>': ['less than a week', 'the past four years', 'thirty seconds', 'an hour', 'two hours'], '<NUMBER>': ['two', 'dozen', '14', '15'], '<ORDINAL>': ['first'], '<PERSON>': ['tom paris', 'harry kim', 'captain kathryn janeway', 'tuvok', 'chakotay', 'jirex', 'neelix', 'the doctor', 'seven', 'ensign kashimuro nozawa', 'green', 'lt jg elanna torres', 'ensign vorik'], '<VESSEL>': ['uss voyager', 'starfleet']}
``
### Files in _OriginalStoriesSeparated_ Directory
* Contains unedited, unparsed original stories scraped from the respective Fandom wikis.
* Each line is a story with sentences space-separated. After each story, there is a <EOS> tag on a new line.
* There is one file for each of the 11 domains listed above.
* These are currently not set up to be called through the Hugging Face API and must be extracted from the zip directly.
---
## Additional Information
### Citation
```
@inproceedings{Ammanabrolu2020AAAI,
title={Story Realization: Expanding Plot Events into Sentences},
author={Prithviraj Ammanabrolu and Ethan Tien and Wesley Cheung and Zhaochen Luo and William Ma and Lara J. Martin and Mark O. Riedl},
journal={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
year={2020},
volume={34},
number={05},
url={https://ojs.aaai.org//index.php/AAAI/article/view/6232}
}
```
---
### Licensing
The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/ | lara-martin/Scifi_TV_Shows | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"story",
"storytelling",
"creative",
"summaries",
"TV",
"scifi",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "text2text-generation"], "pretty_name": "Scifi TV Shows", "tags": ["story", "storytelling", "creative", "summaries", "TV", "scifi"]} | 2024-02-08T20:57:46+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #story #storytelling #creative #summaries #TV #scifi #region-us
| Dataset Card for Science Fiction TV Show Plots Corpus
=====================================================
Table of Contents
-----------------
* Dataset Description
+ Format
- Using the Dataset with Hugging Face
* Original Dataset Structure
+ Files in *OriginalStoriesSeparated* Directory
* Additional Information
+ Citation
+ Licensing
Dataset Description
-------------------
A collection of long-running (80+ episodes) science fiction TV show plot synopses, scraped from URL wikis. Collected Nov 2017. Each episode is considered a "story".
Contains plot summaries from:
* Babylon 5 (URL - 84 stories
* Doctor Who (URL - 311 stories
* Doctor Who spin-offs - 95 stories
* Farscape (URL - 90 stories
* Fringe (URL - 87 stories
* Futurama (URL - 87 stories
* Stargate (URL - 351 stories
* Star Trek (URL - 701 stories
* Star Wars books (URL - 205 stories, each book is a story
* Star Wars Rebels (URL - 65 stories
* X-Files (URL - 200 stories
Total: 2276 stories
Dataset is "eventified" and generalized (see LJ Martin, P Ammanabrolu, X Wang, W Hancock, S Singh, B Harrison, and MO Riedl. Event Representations for Automated Story Generation with Deep Neural Nets, Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018. for details on these processes.) and split into train-test-validation sets—separated by story so that full stories will stay together—for converting events into full sentences.
---
### Format
Dataset Split: Train, Number of Stories in Split: 1737, Number of Sentences in Split: 257,108
Dataset Split: Validation, Number of Stories in Split: 194, Number of Sentences in Split: 32,855
Dataset Split: Test, Number of Stories in Split: 450, Number of Sentences in Split: 30,938
#### Using the Dataset with Hugging Face
Each split has 7 attributes (explained in more detail in the next section):
---
Original Dataset Structure
--------------------------
* File names: URL, URL, & URL
* Each sentence of the stories are split into smaller sentences and the events are extracted.
* Each line of the file contains information about a single sentence, delimited by "|||". Each line contains, in order:
+ The story number
+ The line number (within the story)
+ 5-tuple events in a list (subject, verb, direct object, modifier noun, preposition); e.g.,
''
[[u'Voyager', u'run', 'EmptyParameter', u'deuterium', u'out'], [u'Voyager', u'force', u'go', 'EmptyParameter', 'EmptyParameter'], [u'Voyager', u'go', 'EmptyParameter', u'mode', u'into']]
''
+ generalized 5-tuple events in a list; events are generalized using WordNet and VerbNet; e.g.,
''
[['0', 'function-105.2.1', 'EmptyParameter', "Synset('atom.n.01')", u'out'], ['0', 'urge-58.1-1', u'escape-51.1-1', 'EmptyParameter', 'EmptyParameter'], ['0', u'escape-51.1-1', 'EmptyParameter', "Synset('statistic.n.01')", u'into']]
''
+ original sentence (These sentences are split to contain fewer events per sentence. For the full original sentence, see the OriginalStoriesSeparated directory.); e.g.,
''
The USS Voyager is running out of deuterium as a fuel and is forced to go into Gray mode.
''
+ generalized sentence; only nouns are generalized (using WordNet); e.g.,
''
the 0 is running out of Synset('atom.n.01') as a Synset('matter.n.03') and is forced to go into Synset('horse.n.01') Synset('statistic.n.01').
''
+ a dictionary of numbered entities by tag within the *entire story* (e.g. the second entity in the "<ORGANIZATION>" list in the dictionary would be <ORGANIZATION>1 in the story above—index starts at 0); e.g.,
''
{'': ['seven of nine', 'silver blood'], '': ['sickbay', 'astrometrics', 'paris', 'cavern', 'vorik', 'caves'], '': ['an hour ago', 'now'], '': ['selected works', 'demon class', 'electromagnetic', 'parises', 'mimetic'], '': ['less than a week', 'the past four years', 'thirty seconds', 'an hour', 'two hours'], '': ['two', 'dozen', '14', '15'], '': ['first'], '': ['tom paris', 'harry kim', 'captain kathryn janeway', 'tuvok', 'chakotay', 'jirex', 'neelix', 'the doctor', 'seven', 'ensign kashimuro nozawa', 'green', 'lt jg elanna torres', 'ensign vorik'], '': ['uss voyager', 'starfleet']}
''
### Files in *OriginalStoriesSeparated* Directory
* Contains unedited, unparsed original stories scraped from the respective Fandom wikis.
* Each line is a story with sentences space-separated. After each story, there is a <EOS> tag on a new line.
* There is one file for each of the 11 domains listed above.
* These are currently not set up to be called through the Hugging Face API and must be extracted from the zip directly.
---
Additional Information
----------------------
---
### Licensing
The Creative Commons Attribution 4.0 International License. URL
| [
"### Format\n\n\nDataset Split: Train, Number of Stories in Split: 1737, Number of Sentences in Split: 257,108\nDataset Split: Validation, Number of Stories in Split: 194, Number of Sentences in Split: 32,855\nDataset Split: Test, Number of Stories in Split: 450, Number of Sentences in Split: 30,938",
"#### Using the Dataset with Hugging Face\n\n\nEach split has 7 attributes (explained in more detail in the next section):\n\n\n\n\n---\n\n\nOriginal Dataset Structure\n--------------------------\n\n\n* File names: URL, URL, & URL\n* Each sentence of the stories are split into smaller sentences and the events are extracted.\n* Each line of the file contains information about a single sentence, delimited by \"|||\". Each line contains, in order:\n\t+ The story number\n\t+ The line number (within the story)\n\t+ 5-tuple events in a list (subject, verb, direct object, modifier noun, preposition); e.g.,\n\t''\n\t[[u'Voyager', u'run', 'EmptyParameter', u'deuterium', u'out'], [u'Voyager', u'force', u'go', 'EmptyParameter', 'EmptyParameter'], [u'Voyager', u'go', 'EmptyParameter', u'mode', u'into']]\n\t''\n\t+ generalized 5-tuple events in a list; events are generalized using WordNet and VerbNet; e.g.,\n\t''\n\t[['0', 'function-105.2.1', 'EmptyParameter', \"Synset('atom.n.01')\", u'out'], ['0', 'urge-58.1-1', u'escape-51.1-1', 'EmptyParameter', 'EmptyParameter'], ['0', u'escape-51.1-1', 'EmptyParameter', \"Synset('statistic.n.01')\", u'into']]\n\t''\n\t+ original sentence (These sentences are split to contain fewer events per sentence. For the full original sentence, see the OriginalStoriesSeparated directory.); e.g.,\n\t''\n\tThe USS Voyager is running out of deuterium as a fuel and is forced to go into Gray mode.\n\t''\n\t+ generalized sentence; only nouns are generalized (using WordNet); e.g.,\n\t''\n\tthe 0 is running out of Synset('atom.n.01') as a Synset('matter.n.03') and is forced to go into Synset('horse.n.01') Synset('statistic.n.01').\n\t''\n\t+ a dictionary of numbered entities by tag within the *entire story* (e.g. the second entity in the \"<ORGANIZATION>\" list in the dictionary would be <ORGANIZATION>1 in the story above—index starts at 0); e.g.,\n\t''\n\t{'': ['seven of nine', 'silver blood'], '': ['sickbay', 'astrometrics', 'paris', 'cavern', 'vorik', 'caves'], '': ['an hour ago', 'now'], '': ['selected works', 'demon class', 'electromagnetic', 'parises', 'mimetic'], '': ['less than a week', 'the past four years', 'thirty seconds', 'an hour', 'two hours'], '': ['two', 'dozen', '14', '15'], '': ['first'], '': ['tom paris', 'harry kim', 'captain kathryn janeway', 'tuvok', 'chakotay', 'jirex', 'neelix', 'the doctor', 'seven', 'ensign kashimuro nozawa', 'green', 'lt jg elanna torres', 'ensign vorik'], '': ['uss voyager', 'starfleet']}\n\t''",
"### Files in *OriginalStoriesSeparated* Directory\n\n\n* Contains unedited, unparsed original stories scraped from the respective Fandom wikis.\n* Each line is a story with sentences space-separated. After each story, there is a <EOS> tag on a new line.\n* There is one file for each of the 11 domains listed above.\n* These are currently not set up to be called through the Hugging Face API and must be extracted from the zip directly.\n\n\n\n\n---\n\n\nAdditional Information\n----------------------\n\n\n\n\n---",
"### Licensing\n\n\nThe Creative Commons Attribution 4.0 International License. URL"
] | [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #story #storytelling #creative #summaries #TV #scifi #region-us \n",
"### Format\n\n\nDataset Split: Train, Number of Stories in Split: 1737, Number of Sentences in Split: 257,108\nDataset Split: Validation, Number of Stories in Split: 194, Number of Sentences in Split: 32,855\nDataset Split: Test, Number of Stories in Split: 450, Number of Sentences in Split: 30,938",
"#### Using the Dataset with Hugging Face\n\n\nEach split has 7 attributes (explained in more detail in the next section):\n\n\n\n\n---\n\n\nOriginal Dataset Structure\n--------------------------\n\n\n* File names: URL, URL, & URL\n* Each sentence of the stories are split into smaller sentences and the events are extracted.\n* Each line of the file contains information about a single sentence, delimited by \"|||\". Each line contains, in order:\n\t+ The story number\n\t+ The line number (within the story)\n\t+ 5-tuple events in a list (subject, verb, direct object, modifier noun, preposition); e.g.,\n\t''\n\t[[u'Voyager', u'run', 'EmptyParameter', u'deuterium', u'out'], [u'Voyager', u'force', u'go', 'EmptyParameter', 'EmptyParameter'], [u'Voyager', u'go', 'EmptyParameter', u'mode', u'into']]\n\t''\n\t+ generalized 5-tuple events in a list; events are generalized using WordNet and VerbNet; e.g.,\n\t''\n\t[['0', 'function-105.2.1', 'EmptyParameter', \"Synset('atom.n.01')\", u'out'], ['0', 'urge-58.1-1', u'escape-51.1-1', 'EmptyParameter', 'EmptyParameter'], ['0', u'escape-51.1-1', 'EmptyParameter', \"Synset('statistic.n.01')\", u'into']]\n\t''\n\t+ original sentence (These sentences are split to contain fewer events per sentence. For the full original sentence, see the OriginalStoriesSeparated directory.); e.g.,\n\t''\n\tThe USS Voyager is running out of deuterium as a fuel and is forced to go into Gray mode.\n\t''\n\t+ generalized sentence; only nouns are generalized (using WordNet); e.g.,\n\t''\n\tthe 0 is running out of Synset('atom.n.01') as a Synset('matter.n.03') and is forced to go into Synset('horse.n.01') Synset('statistic.n.01').\n\t''\n\t+ a dictionary of numbered entities by tag within the *entire story* (e.g. the second entity in the \"<ORGANIZATION>\" list in the dictionary would be <ORGANIZATION>1 in the story above—index starts at 0); e.g.,\n\t''\n\t{'': ['seven of nine', 'silver blood'], '': ['sickbay', 'astrometrics', 'paris', 'cavern', 'vorik', 'caves'], '': ['an hour ago', 'now'], '': ['selected works', 'demon class', 'electromagnetic', 'parises', 'mimetic'], '': ['less than a week', 'the past four years', 'thirty seconds', 'an hour', 'two hours'], '': ['two', 'dozen', '14', '15'], '': ['first'], '': ['tom paris', 'harry kim', 'captain kathryn janeway', 'tuvok', 'chakotay', 'jirex', 'neelix', 'the doctor', 'seven', 'ensign kashimuro nozawa', 'green', 'lt jg elanna torres', 'ensign vorik'], '': ['uss voyager', 'starfleet']}\n\t''",
"### Files in *OriginalStoriesSeparated* Directory\n\n\n* Contains unedited, unparsed original stories scraped from the respective Fandom wikis.\n* Each line is a story with sentences space-separated. After each story, there is a <EOS> tag on a new line.\n* There is one file for each of the 11 domains listed above.\n* These are currently not set up to be called through the Hugging Face API and must be extracted from the zip directly.\n\n\n\n\n---\n\n\nAdditional Information\n----------------------\n\n\n\n\n---",
"### Licensing\n\n\nThe Creative Commons Attribution 4.0 International License. URL"
] |
fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 |
# PAC - Polish Abusive Clauses Dataset
''I have read and agree to the terms and conditions'' is one of the biggest lies on the Internet. Consumers rarely read the contracts they are required to accept. We conclude agreements over the Internet daily. But do we know the content of these agreements? Do we check for potentially unfair statements? On the Internet, we probably skip most of the Terms and Conditions. However, we must remember that we have concluded many more contracts. Imagine that we want to buy a house, a car, send our kids to the nursery, open a bank account, and much more. In all these situations, you will need to conclude the contract, but there is a high probability that you will not read the entire agreement with proper understanding. European consumer law aims to prevent businesses from using so-called ''unfair contractual terms'' in the unilaterally drafted contracts that consumers are required to accept.
Our dataset treats ''unfair contractual term'' as the equivalent of an abusive clause. It could be defined as a clause that is unilaterally imposed by one of the contract's parties, unequally affecting the other, or creating a situation of imbalance between the duties and rights of the parties.
At the EU level and at national levels such as Poland's, agencies cannot check all possible agreements by hand. Hence, we took the first step toward accelerating this process: we created a dataset and machine learning models to partially automate the detection of potentially abusive clauses. Consumer protection organizations and agencies can use these resources to make their work more effective and efficient. Moreover, consumers can automatically analyze contracts and understand what they agree upon.
## Tasks (input, output and metrics)
Abusive Clauses Detection
**Input** (*text* column): text of the agreement
**Output** (*label* column): binary label (`BEZPIECZNE_POSTANOWIENIE_UMOWNE`: correct agreement statement, `KLAUZULA_ABUZYWNA`: abusive clause)
**Domain**: legal agreement
**Measurements**: Accuracy, F1 Macro
**Example**:
Input: *`Wszelka korespondencja wysyłana przez Pożyczkodawcę na adres zamieszkania podany w umowie oraz na e-mail zostaje uznana za skutecznie doręczoną. Zmiana adresu e-mail oraz adresu zamieszkania musi być dostarczona do Pożyczkodawcy osobiście`*
Input (translated by DeepL): *`All correspondence sent by the Lender to the residential address provided in the agreement and to the e-mail address shall be deemed effectively delivered. Change of e-mail address and residential address must be delivered to the Lender in person`*
Output: `KLAUZULA_ABUZYWNA` (abusive clause)
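A short, hedged loading sketch with the Hugging Face `datasets` library; the repository id is this dataset's id, and whether `label` comes back as a string or an integer class index depends on the feature schema:

```python
# Load the corpus and inspect one clause with its label
from datasets import load_dataset

pac = load_dataset("laugustyniak/abusive-clauses-pl")
sample = pac["train"][0]
print(sample["text"])   # agreement clause (Polish)
print(sample["label"])  # BEZPIECZNE_POSTANOWIENIE_UMOWNE vs. KLAUZULA_ABUZYWNA (possibly as a class index)
```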
## Data splits
| Subset | Cardinality (sentences) |
| ----------- | ----------------------: |
| train | 4284 |
| dev | 1519 |
| test | 3453 |
## Class distribution
`BEZPIECZNE_POSTANOWIENIE_UMOWNE` denotes a correct (non-abusive) agreement statement.
`KLAUZULA_ABUZYWNA` denotes an abusive clause.
| Class | train | dev | test |
|:--------------------------------|--------:|-------------:|-------:|
| BEZPIECZNE_POSTANOWIENIE_UMOWNE | 0.5458 | 0.3002 | 0.6756 |
| KLAUZULA_ABUZYWNA | 0.4542 | 0.6998 | 0.3244 |
## License
[Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Citation
```bibtex
@inproceedings{NEURIPS2022_890b206e,
author = {Augustyniak, Lukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and Szymczak, Adrian and Janz, Arkadiusz and Szyma\'{n}ski, Piotr and W\k{a}troba, Marcin and Morzy, Miko\l aj and Kajdanowicz, Tomasz and Piasecki, Maciej},
booktitle = {Advances in Neural Information Processing Systems},
editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
pages = {21805--21818},
publisher = {Curran Associates, Inc.},
title = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/890b206ebb79e550f3988cb8db936f42-Paper-Datasets_and_Benchmarks.pdf},
volume = {35},
year = {2022}
}
``` | laugustyniak/abusive-clauses-pl | [
"task_categories:text-classification",
"annotations_creators:hired_annotators",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10<n<10K",
"language:pl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["hired_annotators"], "language_creators": ["found"], "language": ["pl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10<n<10K"], "task_categories": ["text-classification"], "task_ids": ["text-classification"], "pretty_name": "Polish-Abusive-Clauses"} | 2023-03-29T09:46:49+00:00 | [] | [
"pl"
] | TAGS
#task_categories-text-classification #annotations_creators-hired_annotators #language_creators-found #multilinguality-monolingual #size_categories-10<n<10K #language-Polish #license-cc-by-nc-sa-4.0 #region-us
| PAC - Polish Abusive Clauses Dataset
====================================
''I have read and agree to the terms and conditions'' is one of the biggest lies on the Internet. Consumers rarely read the contracts they are required to accept. We conclude agreements over the Internet daily. But do we know the content of these agreements? Do we check potential unfair statements? On the Internet, we probably skip most of the Terms and Conditions. However, we must remember that we have concluded many more contracts. Imagine that we want to buy a house, a car, send our kids to the nursery, open a bank account, or many more. In all these situations, you will need to conclude the contract, but there is a high probability that you will not read the entire agreement with proper understanding. European consumer law aims to prevent businesses from using so-called ''unfair contractual terms'' in their unilaterally drafted contracts, requiring consumers to accept.
Our dataset treats ''unfair contractual term'' as the equivalent of an abusive clause. It could be defined as a clause that is unilaterally imposed by one of the contract's parties, unequally affecting the other, or creating a situation of imbalance between the duties and rights of the parties.
On the EU and at the national such as the Polish levels, agencies cannot check possible agreements by hand. Hence, we took the first step to evaluate the possibility of accelerating this process. We created a dataset and machine learning models to automate potentially abusive clauses detection partially. Consumer protection organizations and agencies can use these resources to make their work more effective and efficient. Moreover, consumers can automatically analyze contracts and understand what they agree upon.
Tasks (input, output and metrics)
---------------------------------
Abusive Clauses Detection
Input ('*text'* column): text of agreement
Output ('*label'* column): binary label ('BEZPIECZNE\_POSTANOWIENIE\_UMOWNE': correct agreement statement, 'KLAUZULA\_ABUZYWNA': abusive clause)
Domain: legal agreement
Measurements: Accuracy, F1 Macro
Example\*:\*
Input: *'Wszelka korespondencja wysyłana przez Pożyczkodawcę na adres zamieszkania podany w umowie oraz na e-mail zostaje uznana za skutecznie doręczoną. Zmiana adresu e-mail oraz adresu zamieszkania musi być dostarczona do Pożyczkodawcy osobiście'*
Input (translated by DeepL): *'All correspondence sent by the Lender to the residential address provided in the agreement and to the e-mail address shall be deemed effectively delivered. Change of e-mail address and residential address must be delivered to the Lender in person'*
Output: 'KLAUZULA\_ABUZYWNA' (abusive clause)
Data splits
-----------
Class distribution
------------------
'BEZPIECZNE\_POSTANOWIENIE\_UMOWNE' - means correct agreement statement.
'KLAUZULA\_ABUZYWNA' informs us about abusive clause.
License
-------
Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
| [] | [
"TAGS\n#task_categories-text-classification #annotations_creators-hired_annotators #language_creators-found #multilinguality-monolingual #size_categories-10<n<10K #language-Polish #license-cc-by-nc-sa-4.0 #region-us \n"
] |
fbf9bb8761bafeb5d7e158901446da58f6a71d9c |
# Dataset Card for German Legal Sentences
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lavis-nlp.github.io/german_legal_sentences/
- **Repository:** https://github.com/lavis-nlp/german_legal_sentences
- **Paper:** coming soon
- **Leaderboard:**
- **Point of Contact:** [Marco Wrzalik](mailto:[email protected])
### Dataset Summary
German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain of German legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342).
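The sketch below only illustrates the weak-supervision idea; the `rank_bm25` package, the shared-citation criterion as written, and all parameter choices are assumptions rather than the project's actual pipeline.

```python
# Conceptual sketch: pair sentences that share a legal citation and attach a BM25 score
from rank_bm25 import BM25Okapi

def weak_pairs(sentences, refs_per_sentence, min_shared_refs=1):
    tokenized = [s.split() for s in sentences]
    bm25 = BM25Okapi(tokenized)
    pairs = []
    for i, refs in enumerate(refs_per_sentence):
        scores = bm25.get_scores(tokenized[i])
        for j, other_refs in enumerate(refs_per_sentence):
            if i != j and len(set(refs) & set(other_refs)) >= min_shared_refs:
                pairs.append((i, j, float(scores[j])))  # (query idx, related idx, BM25 score)
    return pairs
```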
### Supported Tasks and Leaderboards
The main associated task is *Semantic Similarity Ranking*. We propose to use the *Mean Reciprocal Rank* (MRR) cut at the tenth position as well as MAP and Recall on rankings of size 200. As baselines, we provide the following:
| Method | MRR@10 | MAP@200 | Recall@200 |
|-----------------------------------|---------:|-----------:|------------:|
| BM25 - default `(k1=1.2; b=0.75)` | 25.7 | 17.6 | 42.9 |
| BM25 - tuned `(k1=0.47; b=0.97)` | 26.2 | 18.1 | 43.3 |
| [CoRT](https://arxiv.org/abs/2010.10252) | 31.2 | 21.4 | 56.2 |
| [CoRT + BM25](https://arxiv.org/abs/2010.10252) | 32.1 | 22.1 | 67.1 |
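The BM25 baselines above can be approximated with an off-the-shelf implementation such as `rank_bm25`. The sketch below is illustrative only: the corpus is a toy list, whitespace splitting stands in for proper tokenization, and this is not the setup used to produce the reported numbers.

```python
from rank_bm25 import BM25Okapi

# Toy candidate sentences; the real setup ranks the preprocessed corpus sentences.
corpus = [
    "Ferner ist zu berücksichtigen , dass der Zinssatz des [REF] gleichermaßen wirkt .",
    "Der Anspruch ergibt sich aus [REF] i. V. m. [REF] .",
]
tokenized_corpus = [doc.split() for doc in corpus]

# k1 and b taken from the "BM25 - tuned" row above.
bm25 = BM25Okapi(tokenized_corpus, k1=0.47, b=0.97)

query = "Zudem ist zu berücksichtigen , dass die Vollverzinsung nach [REF] wirkt ."
scores = bm25.get_scores(query.split())
ranking = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)
print(ranking)
```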
In addition, we want to support a *Citation Recommendation* task in the future.
If you wish to contribute evaluation measures or give any suggestion or critique, please write an [e-mail](mailto:[email protected]).
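A minimal sketch of the proposed measures (reciprocal rank cut at 10, average precision and recall on rankings of size 200) could look as follows. These are assumed, standard definitions, not the authors' evaluation code; per-query values would be averaged over all queries to obtain MRR@10, MAP@200 and Recall@200.

```python
def mrr_at_k(ranked_ids, relevant_ids, k=10):
    """Reciprocal rank of the first relevant hit within the top k, else 0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

def average_precision_at_k(ranked_ids, relevant_ids, k=200):
    hits, ap = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            hits += 1
            ap += hits / rank
    return ap / max(1, len(relevant_ids))

def recall_at_k(ranked_ids, relevant_ids, k=200):
    hits = sum(1 for doc_id in ranked_ids[:k] if doc_id in relevant_ids)
    return hits / max(1, len(relevant_ids))
```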
### Languages
This dataset contains texts from the specific domain of German court decisions.
## Dataset Structure
### Data Instances
```
{'query.doc_id': 28860,
'query.ref_ids': [6215, 248, 248],
'query.sent_id': 304863,
'query.text': 'Zudem ist zu berücksichtigen , dass die Vollverzinsung nach '
'[REF] i. V. m. [REF] gleichermaßen zugunsten wie zulasten des '
'Steuerpflichtigen wirkt , sodass bei einer Überzahlung durch '
'den Steuerpflichtigen der Staat dem Steuerpflichtigen neben '
'der Erstattung ebenfalls den entstandenen potentiellen Zins- '
'und Liquiditätsnachteil in der pauschalierten Höhe des [REF] '
'zu ersetzen hat , unabhängig davon , in welcher Höhe dem '
'Berechtigten tatsächlich Zinsen entgangen sind .',
'related.doc_id': 56348,
'related.ref_ids': [248, 6215, 62375],
'related.sent_id': 558646,
'related.text': 'Ferner ist zu berücksichtigen , dass der Zinssatz des [REF] '
'im Rahmen des [REF] sowohl für Steuernachforderung wie auch '
'für Steuererstattungen und damit gleichermaßen zugunsten wie '
'zulasten des Steuerpflichtigen wirkt , Vgl. BVerfG , '
'Nichtannahmebeschluss vom [DATE] [REF] , juris , mit der '
'Folge , dass auch Erstattungsansprüche unabhängig davon , ob '
'und in welcher Höhe dem Berechtigten tatsächlich Zinsen '
'entgangen sind , mit monatlich 0,0 % verzinst werden .'}
```
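Such instances can be loaded with the `datasets` library. Note that the configuration and split names below are assumptions and may need to be adapted to what the repository actually provides.

```python
from datasets import load_dataset

# "pairs" and "train" are assumed names; check the repository for the
# available configurations and splits.
gls = load_dataset("lavis-nlp/german_legal_sentences", "pairs", split="train")

example = gls[0]
print(example["query.text"])
print(example["related.text"])
```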
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The documents we take from [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342) are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into its components and normalized, so that different variants of the same citation are matched together. For instance, "§211 Absatz 1 des Strafgesetzbuches" is normalized to "§ 211 Abs. 1 StGB". Every time we discover an unknown citation, we assign a unique id to it. We use these ids to replace parsed citations in the document text with a simple reference tag containing this id (e.g. `[REF321]`). At the same time we parse dates and replace them with the date tag `[DATE]`. Both steps remove dots that may be confused with the end of a sentence, which makes the next stage easier.
We use [SoMaJo](https://github.com/tsproisl/SoMaJo) to perform sentence tokenizing on the pre-processed documents. Each sentence that does not contain at least one legal citation is discarded. For the rest we assign sentence ids, remove all reference ids from them as well as any contents in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the corresponding document from which a sentence originates and which references occur in it.
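The masking step might look roughly like the sketch below. The patterns are deliberately simplified stand-ins for the hand-crafted regular expressions described above and will not cover the full range of German citation and date formats.

```python
import re

# Simplified stand-in patterns (the real preprocessing uses a much larger
# set of hand-crafted expressions plus a normalization step).
CITATION = re.compile(r"§\s*\d+(?:\s*Abs\.\s*\d+)?\s*(?:StGB|BGB|AO|ZPO)")
DATE = re.compile(r"\b\d{1,2}\.\s?\d{1,2}\.\s?\d{4}\b")

ref_ids = {}  # normalized citation -> running id

def mask(text: str) -> str:
    def replace(match: re.Match) -> str:
        normalized = re.sub(r"\s+", " ", match.group(0))
        ref_id = ref_ids.setdefault(normalized, len(ref_ids))
        return f"[REF{ref_id}]"
    text = CITATION.sub(replace, text)
    return DATE.sub("[DATE]", text)

print(mask("Verurteilung nach § 211 Abs. 1 StGB am 12.03.2019 ."))
# -> "Verurteilung nach [REF0] am [DATE] ."
```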
#### Who are the source language producers?
The source language originates in the context of German court proceedings.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotations are machine-generated.
### Personal and Sensitive Information
The source documents are already public and anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
With this dataset, we strive towards better accessibility of court decisions to the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable the layperson to find relevant information without knowing the specific terms used by lawyers.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Coming soon!
### Contributions
Thanks to [@mwrzalik](https://github.com/mwrzalik) for adding this dataset. | lavis-nlp/german_legal_sentences | [
"task_categories:text-retrieval",
"task_ids:semantic-similarity-scoring",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n>1M",
"source_datasets:original",
"language:de",
"license:unknown",
"arxiv:2005.13342",
"arxiv:2010.10252",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["de"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n>1M"], "source_datasets": ["original"], "task_categories": ["text-retrieval", "text-scoring"], "task_ids": ["semantic-similarity-scoring", "text-retrieval-other-example-based-retrieval"]} | 2022-10-20T17:34:19+00:00 | [
"2005.13342",
"2010.10252"
] | [
"de"
] | TAGS
#task_categories-text-retrieval #task_ids-semantic-similarity-scoring #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-n>1M #source_datasets-original #language-German #license-unknown #arxiv-2005.13342 #arxiv-2010.10252 #region-us
| Dataset Card for German Legal Sentences
=======================================
Table of Contents
-----------------
* [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: coming soon
* Leaderboard:
* Point of Contact: Marco Wrzalik
### Dataset Summary
German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain of German legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by Open Legal Data (URL
### Supported Tasks and Leaderboards
The main associated task is *Semantic Similarity Ranking*. We propose to use the *Mean Reciprocal Rank* (MRR) cut at the tenth position as well as MAP and Recall on rankings of size 200. As baselines, we provide the following:
In addition, we want to support a *Citation Recommendation* task in the future.
If you wish to contribute evaluation measures or give any suggestion or critique, please write an e-mail.
### Languages
This dataset contains texts from the specific domain of German court decisions.
Dataset Structure
-----------------
### Data Instances
### Data Fields
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The documents we take from Open Legal Data (URL are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into its components and normalized, so that different variants of the same citation are matched together. For instance, "§211 Absatz 1 des Strafgesetzbuches" is normalized to "§ 211 Abs. 1 StGB". Every time we discover an unknown citation, we assign a unique id to it. We use these ids to replace parsed citations in the document text with a simple reference tag containing this id (e.g. '[REF321]'). At the same time we parse dates and replace them with the date tag '[DATE]'. Both steps remove dots that may be confused with the end of a sentence, which makes the next stage easier.
We use SoMaJo to perform sentence tokenizing on the pre-processed documents. Each sentence that does not contain at least one legal citation is discarded. For the rest we assign sentence ids, remove all reference ids from them as well as any contents in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the corresponding document from which a sentence originates and which references occur in it.
#### Who are the source language producers?
The source language originates in the context of German court proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
The annotations are machine-generated.
### Personal and Sensitive Information
The source documents are already public and anonymized.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
With this dataset, we strive towards better accessibility of court decisions to the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable the layperson to find relevant information without knowing the specific terms used by lawyers.
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
Coming soon!
### Contributions
Thanks to @mwrzalik for adding this dataset.
| [
"### Dataset Summary\n\n\nGerman Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain in german legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by Open Legal Data (URL",
"### Supported Tasks and Leaderboards\n\n\nThe main associated task is *Semantic Similarity Ranking*. We propose to use the *Mean Reciprocal Rank* (MRR) cut at the tenth position as well as MAP and Recall on Rankings of size 200. As baselines we provide the follows:\n\n\n\nIn addition, we want to support a *Citation Recommendation* task in the future.\n\n\nIf you wish to contribute evaluation measures or give any suggestion or critique, please write an e-mail.",
"### Languages\n\n\nThis dataset contains texts from the specific domain of German court decisions.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe documents we take from Open Legal Data (URL are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into it components and normalized, thus different variants of the same citation are matched together. For instance, \"§211 Absatz 1 des Strafgesetzbuches\" is normalized to \"§ 211 Abs. 1 StGB\". Every time we discover an unknown citation, we assign an unique id to it. We use these ids to replace parsed citations in the document text with a simple reference tag containing this id (e.g '[REF321]'). At the same time we parse dates and replace them with the date tag '[DATE]'. Both remove dots which can may be confused with the end of a sentence, which makes the next stage easier.\n\n\nWe use SoMaJo to perform sentence tokenizing on the pre-processed documents. Each sentence that does not contain at least one legal citation is discarded. For the rest we assign sentence ids, remove all reference ids from them as well as any contents in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the corresponding document from which a sentence originates and which references occur in it.",
"#### Who are the source language producers?\n\n\nThe source language originates in the context of German court proceedings.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?\n\n\nThe annotations are machine-generated.",
"### Personal and Sensitive Information\n\n\nThe source documents are already public and anonymized.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nWith this dataset, we strive towards better accessibility of court decisions to the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable the layperson to find relevant information without knowing the specific terms used by lawyers.",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nComing soon!",
"### Contributions\n\n\nThanks to @mwrzalik for adding this dataset."
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-semantic-similarity-scoring #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-n>1M #source_datasets-original #language-German #license-unknown #arxiv-2005.13342 #arxiv-2010.10252 #region-us \n",
"### Dataset Summary\n\n\nGerman Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain in german legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by Open Legal Data (URL",
"### Supported Tasks and Leaderboards\n\n\nThe main associated task is *Semantic Similarity Ranking*. We propose to use the *Mean Reciprocal Rank* (MRR) cut at the tenth position as well as MAP and Recall on Rankings of size 200. As baselines we provide the follows:\n\n\n\nIn addition, we want to support a *Citation Recommendation* task in the future.\n\n\nIf you wish to contribute evaluation measures or give any suggestion or critique, please write an e-mail.",
"### Languages\n\n\nThis dataset contains texts from the specific domain of German court decisions.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe documents we take from Open Legal Data (URL are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into it components and normalized, thus different variants of the same citation are matched together. For instance, \"§211 Absatz 1 des Strafgesetzbuches\" is normalized to \"§ 211 Abs. 1 StGB\". Every time we discover an unknown citation, we assign an unique id to it. We use these ids to replace parsed citations in the document text with a simple reference tag containing this id (e.g '[REF321]'). At the same time we parse dates and replace them with the date tag '[DATE]'. Both remove dots which can may be confused with the end of a sentence, which makes the next stage easier.\n\n\nWe use SoMaJo to perform sentence tokenizing on the pre-processed documents. Each sentence that does not contain at least one legal citation is discarded. For the rest we assign sentence ids, remove all reference ids from them as well as any contents in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the corresponding document from which a sentence originates and which references occur in it.",
"#### Who are the source language producers?\n\n\nThe source language originates in the context of German court proceedings.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?\n\n\nThe annotations are machine-generated.",
"### Personal and Sensitive Information\n\n\nThe source documents are already public and anonymized.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nWith this dataset, we strive towards better accessibility of court decisions to the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable the layperson to find relevant information without knowing the specific terms used by lawyers.",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nComing soon!",
"### Contributions\n\n\nThanks to @mwrzalik for adding this dataset."
] |