---
license: mit
size_categories:
- 100K<n<1M
---
# Dataset Card for SpMis: Synthetic Spoken Misinformation Dataset

<!-- Provide a quick summary of the dataset. -->
The **SpMis Dataset** is designed to facilitate research on detecting synthetic spoken misinformation. It includes 360,611 audio samples synthesized from over 1,000 speakers across five major topics: Politics, Medicine, Education, Laws, and Finance, with 8,681 samples labeled as misinformation.

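If the dataset is published through the Hugging Face Hub, it can typically be loaded with the `datasets` library as sketched below. The repository id, split name, column names, and sampling rate are placeholders/assumptions, not confirmed by this card.

```python
from datasets import Audio, load_dataset

# The repository id, split name, column names, and sampling rate below are
# assumptions for illustration; check the actual repository for the real schema.
ds = load_dataset("<spmis-repo-id>", split="train")

# Decode audio on access at a hypothetical 16 kHz sampling rate.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

example = ds[0]
print(example.keys())                   # e.g. audio, speaker, topic, label
print(example["audio"]["array"].shape)  # raw waveform as a NumPy array
```
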
## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
This dataset contains synthetic spoken audio clips generated using state-of-the-art TTS models such as Amphion and OpenVoice v2, labeled to indicate whether the speech is genuine or misinformation. The dataset is designed to assist in the development of models capable of detecting both synthetic speech and misinformation.

- **Curated by:** Researchers from [Institution Names]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [Institution Names or Individuals]
- **Language(s) (NLP):** English
- **License:** MIT License

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->
- **Repository:** [Hugging Face Repo Link]
- **Paper [optional]:** [Paper Link, SLT 2024 Paper]

## Uses

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->
The dataset is intended to be used for training and evaluating models in tasks such as:
- **Synthetic speech detection**: Distinguishing human speech from TTS-generated speech.
- **Misinformation detection**: Identifying whether the spoken content is intended to deceive or mislead (see the sketch after this list).
- **Speaker verification**: Verifying whether the speaker is correctly matched to the generated voice.

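As a rough sketch of how the misinformation-detection task above could be framed, assuming a hypothetical repository id and a `label` column with a `"misinformation"` value (the card does not specify the exact schema):

```python
from datasets import load_dataset

# The repository id and the "label" column/value names are assumptions for
# illustration; consult the actual repository for the real schema.
ds = load_dataset("<spmis-repo-id>", split="train")

# Frame misinformation detection as binary classification over the label field.
ds = ds.map(lambda ex: {"is_misinfo": int(ex["label"] == "misinformation")})

# Hold out 10% of the samples for evaluation.
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```
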
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset should not be used for malicious purposes, such as creating or spreading synthetic misinformation. It also may not work well for tasks unrelated to speech or misinformation detection.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset includes:
- **Audio files**: 360,611 TTS-generated speech samples.
- **Labels**: Misinformation, ordinary speech, or synthesized celebrity speech.
- **Metadata**: Speaker identity, topic, duration, and language.

The dataset is divided into the following topics (the sketch below the table shows how these counts could be recomputed from the per-sample metadata):

| Topic     | Samples | Labeled as misinformation |
|-----------|--------:|--------------------------:|
| Politics  |  76,542 |  1,740 |
| Medicine  |  21,836 |    740 |
| Education | 177,392 |  2,970 |
| Laws      |  11,422 |    862 |
| Finance   |  53,011 |  2,369 |
| Other     |  20,408 |      0 |
| **Total** | **360,611** | **8,681** |

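A minimal sketch for recomputing the per-topic counts, assuming hypothetical `topic` and `label` column names and a placeholder repository id:

```python
from collections import Counter

from datasets import load_dataset

# Repository id and the "topic"/"label" column names are assumptions for illustration.
ds = load_dataset("<spmis-repo-id>", split="train")

totals = Counter(ds["topic"])
misinfo = Counter(t for t, lab in zip(ds["topic"], ds["label"]) if lab == "misinformation")

for topic in sorted(totals):
    print(f"{topic:>10}: {totals[topic]:>7} samples, {misinfo[topic]:>5} misinformation")
```
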
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->
The dataset was created to provide a resource for training models capable of detecting synthetic spoken misinformation, which is becoming an increasing threat in the era of deepfake technologies.

### Source Data

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The audio was generated with the Amphion and OpenVoice v2 TTS models, drawing on large-scale public corpora from various sources. The data was curated and processed to balance coverage across topics and the proportion of misinformation labels.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
The data was generated using synthetic voices, and no real-world speakers are associated with the content. All voices were created through TTS systems, using speaker embeddings derived from publicly available corpora.

### Annotations [optional]

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
Annotations were applied manually to label whether a given audio clip contains misinformation. The annotation process followed strict guidelines, and multiple annotators verified the accuracy of the labels.

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->
The annotators were experts in misinformation detection, working under supervision to ensure the integrity of the labels.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private. -->
This dataset does not contain any personal or sensitive information. All audio samples are synthetic and not derived from real individuals.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The dataset may not fully represent all types of misinformation, and models trained on this dataset may be biased towards detecting synthetic voices generated by specific TTS systems.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
We recommend using this dataset as part of a larger framework for misinformation detection. It should be combined with real-world data to improve generalization.

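One purely illustrative way to mix SpMis with a real-speech corpus via the `datasets` library, assuming placeholder repository ids and that the two datasets can be reduced to a shared schema:

```python
from datasets import concatenate_datasets, load_dataset

# Both repository ids are placeholders; the feature schemas must match
# (or be aligned, e.g. by casting) before concatenation.
spmis = load_dataset("<spmis-repo-id>", split="train")
real_speech = load_dataset("<real-speech-corpus-id>", split="train")

# Keep only the columns the two datasets have in common.
common = set(spmis.column_names) & set(real_speech.column_names)
spmis = spmis.remove_columns([c for c in spmis.column_names if c not in common])
real_speech = real_speech.remove_columns(
    [c for c in real_speech.column_names if c not in common]
)

mixed = concatenate_datasets([spmis, real_speech]).shuffle(seed=42)
```
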
## Citation [optional]

**BibTeX:**
```bibtex
@inproceedings{liu2024spmis,
  title={SpMis: An Investigation of Synthetic Spoken Misinformation Detection},
  author={Peizhuo Liu and Li Wang and Renqiang He and Haorui He and Lei Wang and Huadi Zheng and Jie Shi and Tong Xiao and Zhizheng Wu},
  booktitle={Proceedings of SLT 2024},
  year={2024}
}
```