---
license: mit
size_categories:
- 100K<n<1M
---
# Dataset Card for SpMis: Synthetic Spoken Misinformation Dataset
The SpMis Dataset is designed to facilitate research on detecting synthetic spoken misinformation. It includes 360,611 audio samples synthesized from over 1,000 speakers across five major topics: Politics, Medicine, Education, Laws, and Finance, with 8,681 samples labeled as misinformation.
## Dataset Details

### Dataset Description
This dataset contains synthetic spoken audio clips generated using state-of-the-art TTS models such as Amphion and OpenVoice v2, labeled to indicate whether the speech is genuine or misinformation. The dataset is designed to assist in the development of models capable of detecting both synthetic speech and misinformation.
### Dataset Sources

- Repository: [Hugging Face Repo Link]
- Paper: https://arxiv.org/abs/2409.11308
## Uses

### Direct Use

The dataset is intended for training and evaluating models on the following tasks:

- Synthetic speech detection: identifying whether a speech sample was machine-generated.
- Misinformation detection: identifying whether the spoken content is intended to mislead.
## Dataset Structure

The dataset includes:

- Audio files: 360,611 TTS-generated speech samples.
- Labels: misinformation, ordinary speech, or synthesized celebrity speech.
- Metadata: speaker identity, topic, duration, and language.
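A metadata row with the fields listed above can be sketched as a small record type. This is a minimal illustration only; the field names (`audio_path`, `speaker_id`, `duration_sec`, etc.) and label strings are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass

# Hypothetical label set mirroring the three classes described above.
LABELS = {"misinformation", "ordinary", "celebrity"}


@dataclass
class SpMisRecord:
    """One metadata row; field names are illustrative, not the official schema."""
    audio_path: str    # path to the TTS-generated audio clip
    speaker_id: str    # one of the 1,000+ synthetic speaker identities
    topic: str         # e.g. "Politics", "Medicine", "Education", "Laws", "Finance"
    duration_sec: float
    language: str
    label: str         # one of LABELS

    def __post_init__(self):
        if self.label not in LABELS:
            raise ValueError(f"unknown label: {self.label!r}")


# Example record for a misinformation-labeled Politics sample.
rec = SpMisRecord("clips/0001.wav", "spk_0042", "Politics", 5.3, "en", "misinformation")
```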
The dataset is divided into the following topics:
- Politics: 76,542 samples, 1,740 labeled as misinformation.
- Medicine: 21,836 samples, 740 labeled as misinformation.
- Education: 177,392 samples, 2,970 labeled as misinformation.
- Laws: 11,422 samples, 862 labeled as misinformation.
- Finance: 53,011 samples, 2,369 labeled as misinformation.
- Other: 20,408 samples with no misinformation labels.
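The per-topic counts above can be checked against the headline figures (360,611 total samples, 8,681 labeled as misinformation) with a few lines of Python:

```python
# Per-topic (total samples, misinformation samples), as reported in this card.
topics = {
    "Politics":  (76_542, 1_740),
    "Medicine":  (21_836,   740),
    "Education": (177_392, 2_970),
    "Laws":      (11_422,   862),
    "Finance":   (53_011, 2_369),
    "Other":     (20_408,     0),
}

total = sum(n for n, _ in topics.values())    # 360,611 samples
misinfo = sum(m for _, m in topics.values())  # 8,681 misinformation samples
rate = misinfo / total                        # roughly 2.4% of the corpus
```

The low misinformation rate means the labels are heavily imbalanced, which is worth accounting for (e.g. via class weighting or resampling) when training detectors on this data.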
## Dataset Creation

### Curation Rationale

The dataset was created to provide a resource for training models capable of detecting synthetic spoken misinformation, a growing threat in the era of deepfake technology.
### Source Data

#### Data Collection and Processing
The audio was generated using the Amphion and OpenVoice v2 TTS models, utilizing large-scale public corpora from various sources. The data was curated and processed to ensure a balance between topics and labeled misinformation.
#### Who are the source data producers?
The data was generated using synthetic voices, and no real-world speakers are associated with the content. All voices were created through TTS systems, using speaker embeddings derived from publicly available corpora.
## Bias, Risks, and Limitations
The dataset may not fully represent all types of misinformation, and models trained on it may be biased toward detecting synthetic voices generated by the specific TTS systems used here.
### Recommendations
We recommend using this dataset as part of a larger framework for misinformation detection. It should be combined with real-world data to improve generalization.
## Citation

**BibTeX:**
```bibtex
@inproceedings{liu2024spmis,
  title={SpMis: An Investigation of Synthetic Spoken Misinformation Detection},
  author={Liu, Peizhuo and Wang, Li and He, Renqiang and He, Haorui and Wang, Lei and Zheng, Huadi and Shi, Jie and Xiao, Tong and Wu, Zhizheng},
  booktitle={Proceedings of the IEEE Spoken Language Technology Workshop (SLT)},
  year={2024},
}
```