---
dataset_info:
  features:
  - name: keyword
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: translation
    dtype: string
  splits:
  - name: train
    num_bytes: 350177561.125
    num_examples: 10925
  download_size: 133270458
  dataset_size: 350177561.125
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for SpokenWords-GA-EN-MTed
This is the Irish portion of the Spoken Words dataset (available at MLCommons/ml_spoken_words), with the original "train", "validation", and "test" splits merged into a single "train" split and augmented with machine translation: each Irish keyword is automatically translated into English using the Google Translation API.
## Dataset Structure

```
Dataset({
    features: ['keyword', 'audio', 'translation'],
    num_rows: 10925
})
```
## How to load the dataset

```python
from datasets import load_dataset

dataset = load_dataset(
    "SpokenWords-GA-EN-MTed",
    split="train",
    trust_remote_code=True,
)
```
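
Once loaded, each row exposes the three features listed above; the `audio` column is decoded by the `datasets` library's `Audio` feature into a dict with `array`, `path`, and `sampling_rate` keys. Below is a minimal sketch of inspecting a single example (field names are taken from the metadata above; the exact values printed depend on the row):

```python
# Inspect one example from the merged "train" split.
example = dataset[0]

print(example["keyword"])      # the spoken Irish keyword
print(example["translation"])  # its English machine translation

audio = example["audio"]       # decoded audio: dict with array / path / sampling_rate
print(audio["sampling_rate"])  # 16000, per the dataset metadata
print(audio["array"].shape)    # 1-D NumPy array of audio samples
```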
## Citation

```bibtex
@inproceedings{mazumder2021multilingual,
  title     = {Multilingual Spoken Words Corpus},
  author    = {Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others},
  booktitle = {Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
  year      = {2021}
}
```