---
license: apache-2.0
language:
  - en
  - zh
  - ja
tags:
  - audio
  - synthetic-speech-detection
---

# 🌀 ShiftySpeech: A Large-Scale Synthetic Speech Dataset with Distribution Shifts

## 🔥 Key Features

- **3000+ hours** of synthetic speech
- **Diverse distribution shifts**: the dataset spans 7 key distribution shifts, including:
  - 📖 Reading style
  - 🎙️ Podcast
  - 🎥 YouTube
  - 🗣️ Languages (English, Chinese, and Japanese)
  - 🌎 Demographics (variations in age, accent, and gender)
- **Multiple speech generation systems**: includes data synthesized by various TTS models and vocoders.

## 💡 Why We Built This Dataset

Driven by advances in self-supervised learning for speech, state-of-the-art synthetic speech detectors achieve low error rates on popular benchmarks such as ASVspoof. However, prior benchmarks do not capture the wide range of variability found in real-world speech, so it is unclear whether the reported error rates hold up in practice. To assess detector failure modes and robustness under controlled distribution shifts, we introduce ShiftySpeech, a benchmark with more than 3000 hours of synthetic speech spanning 7 domains, 6 TTS systems, 12 vocoders, and 3 languages.

βš™οΈ Usage

Ensure that you have `soundfile` or `librosa` installed so that the audio files can be decoded:

```bash
pip install soundfile librosa
```
### 📌 Example: Loading the AISHELL Dataset Vocoded with APNet2

```python
from datasets import load_dataset

# Load a single vocoded archive (here: AISHELL re-synthesized with APNet2).
dataset = load_dataset(
    "ash56/ShiftySpeech",
    data_files={"data": "Vocoders/apnet2/apnet2_aishell_flac.tar.gz"},
)["data"]
```
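Once loaded, you can inspect individual examples. The exact column names depend on how each archive is packaged, so the snippet below is only a sketch that assumes the split exposes an `audio` column (the usual layout for Hugging Face audio datasets):

```python
# Minimal sketch: inspect the first example. The `audio` column name is an
# assumption based on the typical Hugging Face audio dataset layout.
sample = dataset[0]
print(sample.keys())

if "audio" in sample:
    audio = sample["audio"]
    print("sampling rate:", audio["sampling_rate"])
    print("num samples:", len(audio["array"]))
```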

⚠️ **Note:** It is recommended to load data from a specific folder rather than the whole repository to avoid unnecessary download time and memory usage; streaming (see the sketch below) can reduce this further.
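If you only need to iterate over examples, the standard `streaming=True` option of `load_dataset` avoids materializing the whole archive on disk or in memory. A minimal sketch, reusing the same archive path as above:

```python
from datasets import load_dataset

# Stream the archive lazily instead of downloading and decoding it all at once.
streamed = load_dataset(
    "ash56/ShiftySpeech",
    data_files={"data": "Vocoders/apnet2/apnet2_aishell_flac.tar.gz"},
    split="data",
    streaming=True,
)

# Iterate over the first few examples without loading the full dataset.
for i, example in enumerate(streamed):
    print(example.keys())
    if i == 2:
        break
```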

The source datasets covered by each TTS system and vocoder are listed in `tts.yaml` and `vocoders.yaml` (see the sketch below for one way to inspect them).
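One way to inspect these mappings programmatically is to download the two YAML files with `huggingface_hub` and parse them with PyYAML. This is only a sketch; it assumes the files live at the repository root and makes no assumption about their internal structure:

```python
import yaml  # pip install pyyaml
from huggingface_hub import hf_hub_download

# Fetch the mapping files from the dataset repository and print their contents.
for fname in ("tts.yaml", "vocoders.yaml"):
    path = hf_hub_download(
        repo_id="ash56/ShiftySpeech",
        filename=fname,
        repo_type="dataset",
    )
    with open(path, "r", encoding="utf-8") as f:
        print(f"--- {fname} ---")
        print(yaml.safe_load(f))
```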

## 📄 More Information

For detailed information on dataset sources and analysis, see our paper: [Less is More for Synthetic Speech Detection in the Wild](https://arxiv.org/abs/2502.05674).

You can also find the full implementation on GitHub.

## Citation

If you find this dataset useful, please cite our work:

```bibtex
@misc{garg2025syntheticspeechdetectionwild,
      title={Less is More for Synthetic Speech Detection in the Wild}, 
      author={Ashi Garg and Zexin Cai and Henry Li Xinyuan and Leibny Paola GarcΓ­a-Perera and Kevin Duh and Sanjeev Khudanpur and Matthew Wiesner and Nicholas Andrews},
      year={2025},
      eprint={2502.05674},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2502.05674}, 
}
```

βœ‰οΈ Contact

If you have any questions or comments about this resource, please feel free to reach out to us at [email protected] or [email protected].