Update README.md
README.md
@@ -236,4 +236,192 @@ tags:
- music information retrieval
size_categories:
- 100K<n<1M
---
# Dataset Card for SynTheory

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
  - [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Do Music Generation Models Encode Music Theory?](https://brown-palm.github.io/music-theory/)
- **Repository:** [SynTheory](https://github.com/brown-palm/syntheory)

### Dataset Summary

SynTheory is a synthetic dataset of music theory concepts, specifically rhythmic (tempos and time signatures) and tonal (notes, intervals, scales, chords, and chord progressions).

Each of these 7 concepts has its own config.

`tempos` consists of 161 integer tempos (`bpm`) ranging from 50 BPM to 210 BPM (inclusive), 5 percussive instrument types (`click_config_name`), and 5 random start time offsets (`offset_time`).

`time_signatures` consists of 8 time signatures (`time_signature`), 5 percussive instrument types (`click_config_name`), 10 random start time offsets (`offset_time`), and 3 reverb levels (`reverb_level`). The 8 time signatures are 2/2, 2/4, 3/4, 3/8, 4/4, 6/8, 9/8, and 12/8.

`notes` consists of 12 pitch classes (`root_note_name`), 9 octaves (`octave`), and 92 instrument types (`midi_program_name`). The 12 pitch classes are C, C#, D, D#, E, F, F#, G, G#, A, A#, and B.

`intervals` consists of 12 interval sizes (`interval`), 12 root notes (`root_note_name`), 92 instrument types (`midi_program_name`), and 3 play styles (`play_style_name`). The 12 intervals are minor 2nd, Major 2nd, minor 3rd, Major 3rd, Perfect 4th, Tritone, Perfect 5th, minor 6th, Major 6th, minor 7th, Major 7th, and Perfect octave.

`scales` consists of 7 modes (`mode`), 12 root notes (`root_note_name`), 92 instrument types (`midi_program_name`), and 2 play styles (`play_style_name`). The 7 modes are Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, and Locrian.

`chords` consists of 4 chord qualities (`chord_type`), 3 inversions (`inversion`), 12 root notes (`root_note_name`), and 92 instrument types (`midi_program_name`). The 4 chord qualities are major, minor, augmented, and diminished. The 3 inversions are root position, first inversion, and second inversion.

`simple_progressions` consists of 19 chord progressions (`chord_progression`), 12 root notes (`key_note_name`), and 92 instrument types (`midi_program_name`). The 19 chord progressions comprise 10 in major mode and 9 in natural minor mode. The major mode chord progressions are (I–IV–V–I), (I–IV–vi–V), (I–V–vi–IV), (I–vi–IV–V), (ii–V–I–vi), (IV–I–V–vi), (IV–V–iii–vi), (V–IV–I–V), (V–vi–IV–I), and (vi–IV–I–V). The natural minor mode chord progressions are (i–ii◦–v–i), (i–III–iv–i), (i–iv–v–i), (i–VI–III–VII), (i–VI–VII–i), (i–VI–VII–III), (i–VII–VI–IV), (iv–VII–i–i), and (VII–vi–VII–i).
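
A quick way to see what a config contains is to load it and inspect one example. The sketch below uses the `tempos` config; the split name and the presence of an `audio` column are assumptions, so adjust to whatever `load_dataset` actually returns.

```python
from datasets import load_dataset

# Load the tempos config; each example should carry the label fields listed above
# (bpm, click_config_name, offset_time) alongside the rendered audio.
tempos = load_dataset("meganwei/syntheory", "tempos", split="train")  # split name assumed

example = tempos[0]
print(sorted(example.keys()))  # e.g. ['audio', 'bpm', 'click_config_name', 'offset_time', ...]
print(example["bpm"], example["click_config_name"], example["offset_time"])

# The distinct BPM values should span 50-210 inclusive (161 values).
print(len(set(tempos["bpm"])))
```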

### Supported Tasks and Leaderboards

- `audio-classification`: The dataset can be used for music theory concept classification tasks (e.g., predicting a clip's tempo, time signature, chord quality, or mode).
- `feature-extraction`: The audio samples can be fed into pretrained audio codecs to extract representations, which can then be used for downstream MIR tasks (see the sketch below).
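
Below is a minimal sketch of the feature-extraction route. It assumes the audio column is named `audio` and that a `train` split exists, and it uses EnCodec purely as one example of a pretrained audio codec; substitute whichever model you want to probe.

```python
import torch
from datasets import load_dataset, Audio
from transformers import AutoProcessor, EncodecModel

# Example codec; any pretrained audio model with an encoder works similarly.
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
model = EncodecModel.from_pretrained("facebook/encodec_24khz")

notes = load_dataset("meganwei/syntheory", "notes", split="train")  # split name assumed
notes = notes.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))  # column name assumed

waveform = notes[0]["audio"]["array"]
inputs = processor(raw_audio=waveform, sampling_rate=processor.sampling_rate, return_tensors="pt")

with torch.no_grad():
    encoded = model.encode(inputs["input_values"], inputs["padding_mask"])

# encoded.audio_codes holds discrete codec tokens that can serve as features
# for downstream probes (e.g., pitch-class or chord-quality classification).
print(encoded.audio_codes.shape)
```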

### How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

For example, to download the notes config, simply specify the corresponding config name (i.e., "notes"):
```python
from datasets import load_dataset

notes = load_dataset("meganwei/syntheory", "notes")
```

Using the `datasets` library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset

# Pass a split (assumed to be "train" here) so that iterating yields samples
# rather than split names.
notes = load_dataset("meganwei/syntheory", "notes", split="train", streaming=True)

print(next(iter(notes)))
```

*Bonus*: you can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly from the loaded dataset (local or streamed).

Local:

```python
from datasets import load_dataset
from torch.utils.data.sampler import BatchSampler, RandomSampler
from torch.utils.data import DataLoader

# Select a split (assumed "train") so the DataLoader receives a Dataset, not a DatasetDict.
notes = load_dataset("meganwei/syntheory", "notes", split="train")
batch_sampler = BatchSampler(RandomSampler(notes), batch_size=32, drop_last=False)
dataloader = DataLoader(notes, batch_sampler=batch_sampler)
```

Streaming:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# Select a split (assumed "train") so the DataLoader receives an IterableDataset.
notes = load_dataset("meganwei/syntheory", "notes", split="train", streaming=True)
dataloader = DataLoader(notes, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

[More Information Needed]

## Dataset Structure

### Data Fields

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

For the notes concept, there are 9,936 distinct note configurations, but only 9,900 of them yield non-silent audio: the remaining 36 configurations, at extreme registers, cannot be voiced by our soundfont. With a more complete soundfont, all 9,936 configurations could be rendered to audio.
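
If you want to work only with voiced clips, one option is to drop any silent examples by checking the decoded waveform, as in the sketch below (the `audio` column name and `train` split are assumptions):

```python
import numpy as np
from datasets import load_dataset

notes = load_dataset("meganwei/syntheory", "notes", split="train")  # split name assumed

# Keep only examples whose waveform is not all zeros, i.e., notes the soundfont actually voices.
voiced = notes.filter(lambda ex: np.abs(ex["audio"]["array"]).max() > 0)
print(len(voiced))
```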

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information
```bibtex
@inproceedings{Wei2024-music,
  title={Do Music Generation Models Encode Music Theory?},
  author={Wei, Megan and Freeman, Michael and Donahue, Chris and Sun, Chen},
  booktitle={International Society for Music Information Retrieval},
  year={2024}
}
```

### Data Statistics

| Concept             | Number of Samples |
|---------------------|-------------------|
| Tempo               | 4,025             |
| Time Signatures     | 1,200             |
| Notes               | 9,936             |
| Intervals           | 39,744            |
| Scales              | 15,456            |
| Chords              | 13,248            |
| Chord Progressions  | 20,976            |
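
These per-config counts can be checked programmatically by iterating over the configs (a sketch; it assumes each config exposes a single `train` split, and it downloads each config's audio):

```python
from datasets import get_dataset_config_names, load_dataset

for config in get_dataset_config_names("meganwei/syntheory"):
    ds = load_dataset("meganwei/syntheory", config, split="train")  # split name assumed
    print(f"{config}: {ds.num_rows} samples")
```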