|
---
language:
- en
license: cc-by-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: start_time
    dtype: int32
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 13432031
    num_examples: 5802
  download_size: 3860760
  dataset_size: 13432031
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
# Maestro ABC Notation 25s Dataset |
|
|
|
## Dataset Summary |
|
|
|
This dataset is based on v3.0.0 of the MAESTRO dataset.
|
|
|
The **Maestro ABC Notation 25s Dataset** is a curated collection of question-and-answer pairs derived from short audio clips within the [MAESTRO dataset](https://magenta.tensorflow.org/datasets/maestro). Each entry in the dataset includes the following fields (a short loading example follows the list):
|
|
|
- An `id` corresponding to the original audio file. |
|
- A `start_time` marking where the 25-second audio clip begins within the full track. |
|
- A `question` designed to prompt music transcription in ABC notation. |
|
- An `answer` that provides the transcription in ABC notation format. |
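
A minimal sketch of loading the dataset and inspecting one record with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the single "train" split of this dataset from the Hugging Face Hub.
ds = load_dataset("jonflynn/maestro_abc_notation_25s", split="train")

# Each record has the four fields described above.
example = ds[0]
print(example["id"], example["start_time"])
print(example["question"])
print(example["answer"][:200])  # ABC transcriptions can be long; preview the first 200 characters
```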
|
|
|
This dataset is crafted for training multi-modal audio-language models (such as [Spotify Llark](https://research.atspotify.com/2023/10/llark-a-multimodal-foundation-model-for-music/) and [Qwen2-Audio](https://github.com/QwenLM/Qwen2-Audio)) with a focus on music transcription tasks. The MIDI-to-ABC conversion is achieved with a modified script based on [this code](https://github.com/jwdj/EasyABC/blob/master/midi2abc.py). |
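
For training, each question-and-answer pair has to be matched with its 25-second audio clip from the original MAESTRO release. The sketch below shows one way this could be done with `librosa`; it assumes that `id` resolves to the corresponding MAESTRO audio file and that `start_time` is expressed in seconds, both of which should be verified against your local copy of MAESTRO.

```python
import librosa

MAESTRO_ROOT = "path/to/maestro-v3.0.0"  # hypothetical path to a local MAESTRO v3.0.0 audio copy
CLIP_SECONDS = 25.0

def load_clip(example, sample_rate=16000):
    """Load the 25-second audio window referenced by one dataset record.

    Assumes `example["id"]` maps to a MAESTRO audio filename and that
    `start_time` is given in seconds; adjust if your copy differs.
    """
    audio_path = f"{MAESTRO_ROOT}/{example['id']}.wav"  # hypothetical id-to-path mapping
    waveform, sr = librosa.load(
        audio_path,
        sr=sample_rate,
        offset=float(example["start_time"]),
        duration=CLIP_SECONDS,
    )
    return waveform, sr
```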
|
|
|
### Why ABC? |
|
|
|
The reasons for choosing this notation are as follows (an illustrative snippet appears after the list):
|
|
|
- It's a minimalist, compact format for writing music.

- It's widely used, so language models already have a good working knowledge of ABC notation.

- It's flexible and can easily be extended to include tempo changes, time signature changes, additional playing styles, and so on.
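
For readers unfamiliar with the format, here is a short, hand-written ABC fragment. It is illustrative only; it is not taken from the dataset and does not reflect the modified conventions described in the next section.

```abc
X:1
T:Illustrative example (not from the dataset)
M:4/4
L:1/8
Q:1/4=120
K:C
CDEF GABc | c2 B2 A2 G2 | F2 E2 D2 C2 |]
```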
|
|
|
### Dataset Modifications to ABC Format |
|
|
|
- Default octaves have been assigned to each instrument based on its most commonly played range, which reduces redundant octave markings.

- For consistency, I excluded pieces that contain time signature changes or significant tempo variations (greater than 10 BPM); the sketch after this list illustrates these criteria.

- All samples in this dataset contain active musical parts; sections of complete silence have been removed.
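
The dataset is distributed already filtered, so there is nothing to reproduce here; the sketch below merely illustrates the exclusion criteria (any time signature change, or a tempo spread above 10 BPM) using `pretty_midi`. It is not the preprocessing code used to build the dataset.

```python
import pretty_midi

TEMPO_TOLERANCE_BPM = 10.0  # matches the "greater than 10 BPM" threshold described above

def passes_filters(midi_path: str) -> bool:
    """Illustrative check mirroring the exclusion criteria listed above."""
    midi = pretty_midi.PrettyMIDI(midi_path)

    # Reject pieces with a time signature change (more than one time signature event).
    if len(midi.time_signature_changes) > 1:
        return False

    # Reject pieces whose tempo varies by more than 10 BPM across the track.
    _, tempi = midi.get_tempo_changes()
    if len(tempi) > 0 and (max(tempi) - min(tempi)) > TEMPO_TOLERANCE_BPM:
        return False

    return True
```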
|
|
|
## Licensing Information |
|
|
|
- **MAESTRO Dataset**: The audio files are sourced from the MAESTRO dataset, licensed under the Creative Commons Attribution Non-Commercial Share-Alike 4.0 license. Please refer to the [MAESTRO dataset page](https://magenta.tensorflow.org/datasets/maestro) for full licensing details. |
|
|
|
## Citation Information |
|
|
|
If you utilize this dataset, please cite it as follows: |
|
|
|
```bibtex |
|
@dataset{maestro_abc_notation_25s_2024, |
|
title={MAESTRO ABC Notation Dataset}, |
|
author={Jon Flynn}, |
|
year={2024}, |
|
howpublished={\url{https://huggingface.co/datasets/jonflynn/maestro_abc_notation_25s}}, |
|
note={ABC notation for the MAESTRO dataset split into 25-second segments}, |
|
} |
|
``` |
|
|
|
For the original MAESTRO dataset, please cite the following: |
|
|
|
```bibtex |
|
@inproceedings{hawthorne2018enabling, |
|
title={Enabling Factorized Piano Music Modeling and Generation with the {MAESTRO} Dataset}, |
|
author={Curtis Hawthorne and Andriy Stasyuk and Adam Roberts and Ian Simon and Cheng-Zhi Anna Huang and Sander Dieleman and Erich Elsen and Jesse Engel and Douglas Eck}, |
|
booktitle={International Conference on Learning Representations}, |
|
year={2019}, |
|
url={https://openreview.net/forum?id=r1lYRjC9F7}, |
|
} |
|
``` |