---
license: apache-2.0
language:
- en
- zh
- ja
tags:
- audio
---
# ShiftySpeech: A Large-Scale Synthetic Speech Dataset with Distribution Shifts

This repository introduces **ShiftySpeech**, a large-scale synthetic speech dataset with controlled distribution shifts.
## Key Features
- **3000+ hours** of synthetic speech
- **Diverse distribution shifts:** the dataset spans 7 key distribution shifts, including:
  - Reading Style
  - Podcast
  - YouTube
  - Languages (three different languages)
  - Demographics (including variations in age, accent, and gender)
- **Multiple speech generation systems:** includes data synthesized from various TTS models and vocoders.
## Why We Built This Dataset
Driven by advances in self-supervised learning for speech, state-of-the-art synthetic speech detectors achieve low error rates on popular benchmarks such as ASVspoof. However, these benchmarks do not capture the wide range of real-world variability in speech, so it is unclear whether the reported error rates hold under realistic conditions. To assess detector failure modes and robustness under controlled distribution shifts, we introduce ShiftySpeech, a benchmark with more than 3000 hours of synthetic speech spanning 7 domains, 6 TTS systems, 12 vocoders, and 3 languages.
## Usage
Ensure that you have `soundfile` or `librosa` installed for proper audio decoding (the loading example below also requires the `datasets` library):

```bash
pip install soundfile librosa
```
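Before downloading, it can help to see which archives the repository provides. The sketch below is an illustration rather than part of the official instructions: it uses `huggingface_hub.list_repo_files` to enumerate the packed `.tar.gz` archives, and the `Vocoders/<vocoder>/...` layout is only inferred from the APNet2/AISHELL example further down.

```python
from huggingface_hub import list_repo_files

# Enumerate files in the dataset repository and keep only the packed archives.
# The folder layout (e.g., "Vocoders/<vocoder>/...") is an assumption based on
# the example path shown in this card, not a guaranteed structure.
files = list_repo_files("ash56/ShiftySpeech", repo_type="dataset")
archives = [f for f in files if f.endswith(".tar.gz")]
for path in archives:
    print(path)
```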
### Example: Loading the AISHELL Dataset Vocoded with APNet2
```python
from datasets import load_dataset

# Load only the APNet2-vocoded AISHELL archive rather than the full repository.
dataset = load_dataset(
    "ash56/ShiftySpeech",
    data_files={"data": "Vocoders/apnet2/apnet2_aishell_flac.tar.gz"},
)["data"]
```
**Note:** It is recommended to load data from a specific folder to avoid unnecessary memory usage.
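Once loaded, individual examples can be inspected directly. The following is a minimal sketch that assumes the split exposes an `audio` column with decoded waveforms (the usual layout produced by the `datasets` audio loader); check `dataset.column_names` to confirm on your setup.

```python
# Minimal sketch: inspect the first decoded example.
# Assumes an "audio" column holding {"array", "sampling_rate", "path"}.
sample = dataset[0]
audio = sample["audio"]
print("sampling rate:", audio["sampling_rate"])
print("num samples:", len(audio["array"]))
```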
## More Information
For detailed information on dataset sources and analysis, see our paper: *Less is More for Synthetic Speech Detection in the Wild*. The full implementation is also available on GitHub.
## Citation
If you find this dataset useful, please cite our work:
```bibtex
@misc{garg2025syntheticspeechdetectionwild,
  title={Less is More for Synthetic Speech Detection in the Wild},
  author={Ashi Garg and Zexin Cai and Henry Li Xinyuan and Leibny Paola García-Perera and Kevin Duh and Sanjeev Khudanpur and Matthew Wiesner and Nicholas Andrews},
  year={2025},
  eprint={2502.05674},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2502.05674},
}
```
## Contact
If you have any questions or comments about the resource, please feel free to reach out to us at: [email protected] or [email protected]