---
language:
- en
license: cc-by-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: start_time
    dtype: int32
  - name: question
    dtype: string
  - name: question_type
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 1510405
    num_examples: 1779
  download_size: 114117
  dataset_size: 1510405
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# URMP ABC Notation 25s Dataset

## Dataset Summary

The **URMP ABC Notation 25s Dataset** is a collection of question-and-answer pairs based on short audio clips from the [University of Rochester Multi-Modal Music Performance (URMP) dataset](https://labsites.rochester.edu/air/projects/URMP.html). Each entry in the dataset provides:

- An `id` identifying the original audio file.
- A `start_time` indicating where the audio clip begins within the full audio file.
- A `question` generated to prompt music transcription via ABC notation.
- A `question_type` labelling the kind of question asked.
- An `answer` containing the ABC notation transcription.

This dataset is intended for training multi-modal audio-language models (such as [Spotify Llark](https://research.atspotify.com/2023/10/llark-a-multimodal-foundation-model-for-music/) and [Qwen2-Audio](https://github.com/QwenLM/Qwen2-Audio)) on the task of music transcription. The code I used to convert MIDI to ABC notation is based on [this script](https://github.com/jwdj/EasyABC/blob/master/midi2abc.py). A short loading example is provided at the end of this card.

**Why ABC?** The reasons for choosing this notation are:

- It's a minimalist format for writing music.
- It's widely used and popular, so language models already have a good understanding of ABC notation.
- It's flexible and can easily be extended to include tempo changes, time signature changes, additional playing styles, and so on.

**Dataset Modifications to ABC Format**

- Default octaves have been assigned to each instrument, based on its most commonly played range. This reduces redundant octave notation.
- For consistency, I excluded pieces that contain time signature changes or significant tempo variations (greater than 10 BPM).
- All samples in this dataset contain active musical parts; sections with complete silence have been removed.

## Licensing Information

- **URMP Dataset:** The original audio files are part of the URMP dataset. Refer to the [URMP dataset license](https://labsites.rochester.edu/air/projects/URMP.html) for terms of use.

## Citation Information

If you use this dataset, please cite it as follows:

```bibtex
@dataset{urmp_abc_notation_25s_2024,
  title={URMP ABC Notation Dataset},
  author={Jon Flynn},
  year={2024},
  howpublished={\url{https://huggingface.co/datasets/jonflynn/urmp_abc_notation_25s}},
  note={ABC notation for the URMP dataset split into 25 second chunks},
}
```

Additionally, cite the original URMP dataset:

```bibtex
@article{li2019creating,
  title={Creating a Multitrack Classical Music Performance Dataset for Multimodal Music Analysis: Challenges, Insights, and Applications},
  author={Li, Bochen and Liu, Xinzhao and Dinesh, Karthik and Duan, Zhiyao and Sharma, Gaurav},
  journal={IEEE Transactions on Multimedia},
  volume={21},
  number={2},
  year={2019}
}
```

---

**Additional Resources:**

- **URMP Dataset Website:** [URMP Dataset](https://labsites.rochester.edu/air/projects/URMP.html)
- **ABC Notation:** [ABC Notation Official Website](http://abcnotation.com/)
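
## Example Usage

A minimal sketch of loading the dataset with the Hugging Face `datasets` library. The repository id and the `train` split are taken from this card; the commented-out audio-pairing step at the end is an assumption-laden illustration (it assumes `start_time` is given in seconds, that each clip is 25 seconds long, and that `id` maps to a local URMP audio file name), so adapt it to your own copy of URMP.

```python
from datasets import load_dataset

# Load the default config; the only split is "train" (1,779 examples).
ds = load_dataset("jonflynn/urmp_abc_notation_25s", split="train")

example = ds[0]
print(example["id"])             # identifier of the source URMP audio file
print(example["start_time"])     # where the 25 s clip begins in that file
print(example["question"])       # transcription prompt
print(example["question_type"])  # category of the prompt
print(example["answer"])         # target ABC notation

# Optional: pair the text with audio from a local copy of URMP.
# Assumptions (not stated on this card): `start_time` is in seconds,
# clips are 25 s long, and `id` matches the local file name.
# import librosa
# audio, sr = librosa.load(f"urmp/{example['id']}.wav", sr=None,
#                          offset=float(example["start_time"]), duration=25.0)
```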