---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
pretty_name: 'JamALT: A Readability-Aware Lyrics Transcription Benchmark'
paperswithcode_id: jam-alt
dataset_info:
- config_name: all
  features:
  - name: name
    dtype: string
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: license_type
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: test
    num_bytes: 409411912.0
    num_examples: 79
  download_size: 409150043
  dataset_size: 409411912.0
- config_name: de
  features:
  - name: name
    dtype: string
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: license_type
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: test
    num_bytes: 107962802.0
    num_examples: 20
  download_size: 107942102
  dataset_size: 107962802.0
- config_name: en
  features:
  - name: name
    dtype: string
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: license_type
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: test
    num_bytes: 105135091.0
    num_examples: 20
  download_size: 105041371
  dataset_size: 105135091.0
- config_name: es
  features:
  - name: name
    dtype: string
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: license_type
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: test
    num_bytes: 105024257.0
    num_examples: 20
  download_size: 104979012
  dataset_size: 105024257.0
- config_name: fr
  features:
  - name: name
    dtype: string
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: license_type
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: test
    num_bytes: 91289764.0
    num_examples: 19
  download_size: 91218543
  dataset_size: 91289764.0
configs:
- config_name: all
  data_files:
  - split: test
    path: parquet/all/test-*
  default: true
- config_name: de
  data_files:
  - split: test
    path: parquet/de/test-*
- config_name: en
  data_files:
  - split: test
    path: parquet/en/test-*
- config_name: es
  data_files:
  - split: test
    path: parquet/es/test-*
- config_name: fr
  data_files:
  - split: test
    path: parquet/fr/test-*
---
# JamALT: A Readability-Aware Lyrics Transcription Benchmark
## Dataset description
* **Project page:** https://audioshake.github.io/jam-alt/
* **Source code:** https://github.com/audioshake/alt-eval
* **Paper (ISMIR 2024):** https://www.arxiv.org/abs/2408.06370
* **Extended abstract (ISMIR 2023 LBD):** https://arxiv.org/abs/2311.13987
JamALT is a revision of the [JamendoLyrics](https://github.com/f90/jamendolyrics) dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.
The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling, punctuation, and formatting.
The audio is identical to the JamendoLyrics dataset.
However, only 79 songs are included, as one of the 20 French songs (`La_Fin_des_Temps_-_BuzzBonBon`) has been removed due to concerns about potentially harmful content.
**Note:** The dataset is not time-aligned, since the revised lyrics no longer map cleanly to the original JamendoLyrics timestamps. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly.
See the [project website](https://audioshake.github.io/jam-alt/) for details.
## Loading the data
```python
from datasets import load_dataset
dataset = load_dataset("audioshake/jam-alt")["test"]
```
A subset is defined for each language (`en`, `fr`, `de`, `es`);
for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs.
By default, the dataset comes with audio. To skip loading the audio, use `with_audio=False`.
To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`.
Useful arguments to `datasets.Audio()` are:
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode=False` to skip decoding the audio and just get the MP3 file paths.
## Running the benchmark
The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval):
```python
from datasets import load_dataset
from alt_eval import compute_metrics
dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]
# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
For example, the following code can be used to evaluate Whisper:
```python
import datasets
import whisper  # openai-whisper

from alt_eval import compute_metrics
from datasets import load_dataset

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # Get the raw audio file, let Whisper decode it

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
Alternatively, if you already have transcriptions, you might prefer to skip loading the audio:
```python
dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"]
```
## Citation
When using the benchmark, please cite [our paper](https://www.arxiv.org/abs/2408.06370) as well as the original [JamendoLyrics paper](https://arxiv.org/abs/2306.07744):
```bibtex
@inproceedings{cifka-2024-jam-alt,
  author    = {Ond\v{r}ej C\'ifka and
               Hendrik Schreiber and
               Luke Miner and
               Fabian-Robert St\"oter},
  title     = {Lyrics Transcription for Humans: A Readability-Aware Benchmark},
  booktitle = {Proceedings of the 25th International Society for
               Music Information Retrieval Conference},
  year      = 2024,
  publisher = {ISMIR},
  note      = {preprint arXiv:2408.06370}
}
@inproceedings{durand-2023-contrastive,
  author    = {Durand, Simon and Stoller, Daniel and Ewert, Sebastian},
  title     = {Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year      = {2023},
  pages     = {1-5},
  address   = {Rhodes Island, Greece},
  doi       = {10.1109/ICASSP49357.2023.10096725}
}
``` |