
Dataset Card for the AMI dataset for speaker diarization

The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers.

Note: This dataset has been preprocessed with the diarizers library, which makes it directly usable with diarizers for fine-tuning pyannote segmentation models.

Example Usage

from datasets import load_dataset

# "ihm" selects the individual headset microphone (close-talking) recordings
ds = load_dataset("diarizers-community/ami", "ihm")

print(ds)

gives:

DatasetDict({
    train: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 136
    })
    validation: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 18
    })
    test: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 16
    })
})
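Each row pairs one meeting recording with its diarization annotations: timestamps_start, timestamps_end, and speakers are parallel lists, where the i-th entries together describe a single speech segment. Below is a minimal sketch of reading these fields; it relies only on the standard datasets and pyannote.core APIs, and all variable names are illustrative rather than part of the dataset.

from datasets import load_dataset
from pyannote.core import Annotation, Segment

ds = load_dataset("diarizers-community/ami", "ihm")

# Take one meeting from the training split.
row = ds["train"][0]

# The audio column decodes to the raw waveform and its sampling rate.
waveform = row["audio"]["array"]
sampling_rate = row["audio"]["sampling_rate"]
print(f"{len(waveform) / sampling_rate:.1f} s of audio at {sampling_rate} Hz")

# The annotation columns are parallel lists: entry i describes one segment.
segments = list(zip(row["timestamps_start"], row["timestamps_end"], row["speakers"]))
for start, end, speaker in segments[:5]:
    print(f"{speaker}: {start:.2f}s -> {end:.2f}s")

# Optionally wrap the segments in a pyannote.core Annotation (assumes
# pyannote.core is installed), e.g. for visualisation or evaluation.
reference = Annotation()
for start, end, speaker in segments:
    reference[Segment(start, end)] = speaker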

Dataset source

Citation

@article{mccowan2005ami,
  author  = {Mccowan, Iain and Carletta, J and Kraaij, Wessel and Ashby, Simone and Bourban, S and Flynn, M and Guillemot, M and Hain, Thomas and Kadlec, J and Karaiskos, V and Kronenthal, M and Lathoud, Guillaume and Lincoln, Mike and Lisowska Masson, Agnes and Post, Wilfried and Reidsma, Dennis and Wellner, P},
  title   = {The AMI meeting corpus},
  journal = {Int'l. Conf. on Methods and Techniques in Behavioral Research},
  year    = {2005},
  month   = {01}
}

Contribution

Thanks to @kamilakesbi and @sanchit-gandhi for adding this dataset.
