---
pretty_name: meter2800
language:
  - en
tags:
  - audio
  - music-classification
  - meter-classification
  - multi-class-classification
  - multi-label-classification
license: mit
task_categories:
  - audio-classification
dataset_info:
  size_categories:
    - 1K<n<10K
  source_datasets:
    - gtzan
    - mag
    - own
    - fma
configs:
  - config_name: 2_classes
    default: true
    data_files:
      - split: train
        path: data_train_2_classes.csv
      - split: validation
        path: data_val_2_classes.csv
      - split: test
        path: data_test_2_classes.csv
  - config_name: 4_classes
    data_files:
      - split: train
        path: data_train_4_classes.csv
      - split: validation
        path: data_val_4_classes.csv
      - split: test
        path: data_test_4_classes.csv
---

# Meter2800

A dataset for music time signature / meter (rhythm) classification, combining tracks from GTZAN, MAG, OWN, and FMA.

## Dataset Description

Meter2800 is a curated collection of 2,800 `.wav` music audio samples, each annotated with a `meter` (and optionally an `alt_meter`). It supports both:

- 4-class meter classification,
- 2-class (binary) meter classification.

The data is split into train/validation/test sets, with metadata provided in CSV files.

It is intended for music information retrieval tasks such as rhythmic / structural analysis and meter prediction.

## Supported Tasks and Usage

Load the dataset via the `datasets` library with automatic audio decoding:

```python
from datasets import load_dataset, Audio

meter2800 = load_dataset("pianistprogrammer/meter2800", name="4_classes")
```

The output should look like this:

```python
DatasetDict({
    train: Dataset({
        features: ['filename', 'audio', 'label', 'meter', 'alt_meter'],
        num_rows: 1680
    })
    validation: Dataset({
        features: ['filename', 'audio', 'label', 'meter', 'alt_meter'],
        num_rows: 420
    })
    test: Dataset({
        features: ['filename', 'audio', 'label', 'meter', 'alt_meter'],
        num_rows: 700
    })
})
```

```python
meter2800["train"][0]
```

A sample from the training set:

```python
{'filename': 'MAG/00553.wav',
 'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/73a5809e655e59c99bd79d00033b98b254ca3689f2b9e2c2eba55fe3894b7622/MAG/00553.wav',
  'array': array([ 2.87892180e-06, -1.07296364e-05, -3.22661945e-05, ...,
         -2.06501483e-13, -5.44009282e-15,  1.38777878e-14]),
  'sampling_rate': 16000},
 'label': 'three',
 'meter': '3',
 'alt_meter': '6'}
```

Each entry in the dataset contains:

- `filename`: relative path to the audio file.
- `label`: meter class label (multi-class or binary, depending on the configuration), e.g. `three`.
- `meter`: primary meter annotation, given as beats per bar (e.g., `3`, `4`).
- `alt_meter`: optional alternative meter annotation (e.g., `6`).
- `audio`: the decoded audio as a NumPy array together with its sampling rate.
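
For model training you will typically want a fixed sampling rate and integer labels. The snippet below is a minimal sketch: the 22,050 Hz target rate and the alphabetical label-to-index mapping are arbitrary choices made here for illustration, not something defined by the dataset.

```python
from datasets import load_dataset, Audio

ds = load_dataset("pianistprogrammer/meter2800", name="4_classes")

# Decode/resample every clip on the fly to 22,050 Hz (arbitrary target rate).
ds = ds.cast_column("audio", Audio(sampling_rate=22_050))

# Map the string meter labels to integer ids (illustrative mapping).
label2id = {name: i for i, name in enumerate(sorted(set(ds["train"]["label"])))}

def encode(example):
    example["label_id"] = label2id[example["label"]]
    return example

ds = ds.map(encode)
print(ds["train"][0]["label_id"], ds["train"][0]["audio"]["sampling_rate"])
```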

The dataset provides two configurations, each with train/validation/test splits:

- `4_classes`: 4-class meter classification (`data_train_4_classes.csv`, `data_val_4_classes.csv`, `data_test_4_classes.csv`).
- `2_classes`: 2-class (binary) meter classification (`data_train_2_classes.csv`, `data_val_2_classes.csv`, `data_test_2_classes.csv`).

All splits are provided as CSV files referencing the audio files in the corresponding folders (GTZAN/, MAG/, OWN/, FMA/).

Example row in a CSV file:


| filename                | label   | meter | alt_meter | 
|-------------------------|---------|-------|-----------|
| GTZAN/blues.00000.wav   | three   |   3   |    6      |
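
If you prefer to work with the CSV files directly (for example after extracting `data.tar.gz`), they can be read with standard tools. A minimal sketch using pandas and soundfile, assuming the audio folders sit next to the CSVs so the `filename` paths resolve:

```python
import pandas as pd
import soundfile as sf

# Read one of the split CSVs (columns as shown in the table above).
df = pd.read_csv("data_train_4_classes.csv")

row = df.iloc[0]
audio, sr = sf.read(row["filename"])  # e.g. "GTZAN/blues.00000.wav"
print(row["label"], row["meter"], row["alt_meter"], sr, audio.shape)
```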


The repository is organized as follows:

```
Meter2800/
├── data.tar.gz               // contains the audio data
├── data_train_4_classes.csv
├── data_val_4_classes.csv
├── data_test_4_classes.csv
├── data_train_2_classes.csv
├── data_val_2_classes.csv
├── data_test_2_classes.csv
└── README.md
```
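
To mirror this layout locally, one option is `huggingface_hub.snapshot_download` followed by extracting the audio archive. This is a minimal sketch; extracting straight into the download directory is an illustrative choice:

```python
import tarfile
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the dataset repository (the CSV files plus data.tar.gz).
local_dir = Path(snapshot_download("pianistprogrammer/meter2800", repo_type="dataset"))

# Unpack the audio so GTZAN/, MAG/, OWN/ and FMA/ sit next to the CSVs.
with tarfile.open(local_dir / "data.tar.gz") as tar:
    tar.extractall(local_dir)

print(sorted(p.name for p in local_dir.iterdir()))
```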


## Citation

```bibtex
@misc{meter2800_dataset,
  author    = {PianistProgrammer},
  title     = {{Meter2800}: A Dataset for Music Time Signature Detection / Meter Classification},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/pianistprogrammer/meter2800}
}
```

## License

CC0 1.0 Public Domain