---
language:
- en
license: cc-by-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: start_time
    dtype: int32
  - name: question
    dtype: string
  - name: question_type
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 1510405
    num_examples: 1779
  download_size: 114117
  dataset_size: 1510405
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# URMP ABC Notation 25s Dataset

## Dataset Summary

The **URMP ABC Notation 25s Dataset** is a collection of question-and-answer pairs based on short audio clips from the [University of Rochester Multi-Modal Music Performance (URMP) dataset](https://labsites.rochester.edu/air/projects/URMP.html). Each entry in the dataset provides:

- An `id` identifying the original URMP audio file.
- A `start_time` indicating where the 25-second clip begins within the full audio file.
- A `question` generated to prompt music transcription via ABC notation.
- A `question_type` labelling the kind of question asked.
- An `answer` containing the ABC notation transcription of the clip.

This dataset is intended for training multi-modal audio-language models (such as [Spotify Llark](https://research.atspotify.com/2023/10/llark-a-multimodal-foundation-model-for-music/) and [Qwen2-Audio](https://github.com/QwenLM/Qwen2-Audio)) on the task of music transcription. The code I used to convert the MIDI files to ABC notation is based on [this script](https://github.com/jwdj/EasyABC/blob/master/midi2abc.py).
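
You can load the question-and-answer pairs with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the repository id and field names come from this card, everything else is purely illustrative.

```python
# Minimal sketch: load the train split and inspect one row.
from datasets import load_dataset

ds = load_dataset("jonflynn/urmp_abc_notation_25s", split="train")

row = ds[0]
print(row["id"])             # original URMP audio file the clip comes from
print(row["start_time"])     # where the 25-second clip starts in that file
print(row["question"])       # transcription prompt
print(row["question_type"])  # category of the generated question
print(row["answer"])         # target ABC notation
```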

**Why ABC?**

The reasons for choosing this notation are:

- It's a minimalist format for writing music.
- It's widely used and popular, so language models already have a good working knowledge of ABC notation.
- It's flexible and can easily be extended to include tempo changes, time signature changes, additional playing styles, and so on.

**Dataset Modifications to ABC Format**
- Default octaves have been assigned to each instrument based on its most commonly played range, which reduces redundant octave markings.
- For consistency, I excluded pieces that contain time signature changes or significant tempo variations (greater than 10 BPM).
- All samples in this dataset contain active musical parts; sections of complete silence have been removed.
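
Since only the text fields are released here, training an audio-language model means pairing each row back to its audio in URMP. The sketch below shows one way that could look, assuming you have a local copy of URMP, that `start_time` is measured in seconds, and that `id` can be resolved to a .wav path; the `load_clip` helper and its path convention are hypothetical, so adapt them to your own layout.

```python
# Hedged sketch: cut a 25-second clip out of a local URMP recording.
# The id-to-path mapping and the seconds-based start_time are assumptions.
import os

import torchaudio

CLIP_SECONDS = 25  # this dataset is split into 25-second chunks


def load_clip(urmp_root: str, row: dict):
    # Hypothetical mapping from the row's `id` to a .wav file on disk;
    # URMP ships both mixed and per-instrument stems, so pick the one you need.
    path = os.path.join(urmp_root, f"{row['id']}.wav")
    waveform, sample_rate = torchaudio.load(path)
    start = int(row["start_time"]) * sample_rate
    return waveform[:, start : start + CLIP_SECONDS * sample_rate], sample_rate
```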

## Licensing Information

- **This dataset:** Released under the CC BY 4.0 license (as declared in the metadata above).
- **URMP Dataset:** The original audio files are part of the URMP dataset. Refer to the [URMP dataset license](https://labsites.rochester.edu/air/projects/URMP.html) for terms of use.

## Citation Information

If you use this dataset, please cite it as follows:

```bibtex
@dataset{urmp_abc_notation_25s_2024,
  title={URMP ABC Notation Dataset},
  author={Jon Flynn},
  year={2024},
  howpublished={\url{https://huggingface.co/datasets/jonflynn/urmp_abc_notation_25s}},
  note={ABC notation for the URMP dataset, split into 25-second chunks},
}
```

Additionally, cite the original URMP dataset:

```bibtex
@article{li2018creating,
  title={Creating a Multitrack Classical Music Performance Dataset for Multimodal Music Analysis: Challenges, Insights, and Applications},
  author={Li, Bochen and Liu, Xinzhao and Dinesh, Karthik and Duan, Zhiyao and Sharma, Gaurav},
  journal={IEEE Transactions on Multimedia},
  volume={21},
  number={2},
  pages={522--535},
  year={2019},
  publisher={IEEE}
}
```

---

**Additional Resources:**

- **URMP Dataset Website:** [URMP Dataset](https://labsites.rochester.edu/air/projects/URMP.html)
- **ABC Notation:** [ABC Notation Official Website](http://abcnotation.com/)