---
language:
- en
tags:
- huggingartists
- lyrics
---
Dataset Card for "huggingartists/big-russian-boss"
Table of Contents
- Dataset Description
- How to use
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
- About
Dataset Description
- Homepage: https://github.com/AlekseyKorshuk/huggingartists
- Repository: https://github.com/AlekseyKorshuk/huggingartists
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of the generated dataset: 0.52183 MB
Dataset Summary
The lyrics dataset parsed from Genius. This dataset is designed for generating lyrics with HuggingArtists. The corresponding HuggingArtists model is available on the Hugging Face Hub.
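As a minimal sketch (assuming the companion model is published under the same huggingartists/big-russian-boss name, which is the usual HuggingArtists convention), lyrics can be generated with transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed model id: HuggingArtists models are normally published under
# the same name as their dataset.
model_id = "huggingartists/big-russian-boss"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short lyrics continuation from a prompt.
inputs = tokenizer("I was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))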
Supported Tasks and Leaderboards
Languages
en
How to use
How to load this dataset directly with the datasets library:
from datasets import load_dataset
dataset = load_dataset("huggingartists/big-russian-boss")
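Once loaded, the object is a regular DatasetDict; for a quick sanity check:

# Inspect the splits and look at the first lyric.
print(dataset)
print(len(dataset["train"]))              # number of songs in the train split
print(dataset["train"][0]["text"][:200])  # first 200 characters of the first lyric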
Dataset Structure
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
Data Fields
The data fields are the same among all splits.
- text: a string feature.
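The schema can be confirmed from the dataset loaded above, for example:

print(dataset["train"].features)  # {'text': Value(dtype='string', ...)}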
Data Splits
| train | validation | test |
|-------|------------|------|
| 151   | -          | -    |
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/big-russian-boss")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
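An alternative sketch using the library's built-in train_test_split (note that it shuffles by default, so the resulting splits differ from the ordered NumPy split above):

from datasets import load_dataset, DatasetDict

datasets = load_dataset("huggingartists/big-russian-boss")

# Carve out 10% for evaluation, then split that holdout 70/30 into
# validation and test, giving roughly 90% / 7% / 3% overall.
split = datasets["train"].train_test_split(test_size=0.1, seed=42)
holdout = split["test"].train_test_split(test_size=0.3, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)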
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2021
}
About
Built by Aleksey Korshuk
For more details, visit the project repository.