LanceaKing committed on
Commit b2d92b1
0 Parent(s):

initial commit

.gitattributes ADDED
@@ -0,0 +1,41 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ annotations_creators:
+ - other
+ language:
+ - en
+ language_creators:
+ - other
+ license:
+ - odc-by
+ multilinguality:
+ - monolingual
+ pretty_name: asvspoof2019
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - extended|vctk
+ tags: []
+ task_categories:
+ - audio-classification
+ task_ids:
+ - voice-anti-spoofing
+ ---
+
+ # Dataset Card for asvspoof2019
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://datashare.ed.ac.uk/handle/10283/3336
+ - **Repository:** [Needs More Information]
+ - **Paper:** https://arxiv.org/abs/1911.01601
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ This is a database used for the Third Automatic Speaker Verification Spoofing
+ and Countermeasures Challenge, ASVspoof 2019 for short (http://www.asvspoof.org),
+ organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor
+ Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,
+ and Andreas Nautsch in 2019.
+
+ ### Supported Tasks and Leaderboards
+
+ [Needs More Information]
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {'speaker_id': 'LA_0091',
+ 'audio_file_name': 'LA_T_8529430',
+ 'audio': {'path': 'D:/Users/80304531/.cache/huggingface/datasets/downloads/extracted/8cabb6d5c283b0ed94b2219a8d459fea8e972ce098ef14d8e5a97b181f850502/LA/ASVspoof2019_LA_train/flac/LA_T_8529430.flac',
+ 'array': array([-0.00201416, -0.00234985, -0.0022583 , ..., 0.01309204,
+ 0.01339722, 0.01461792], dtype=float32),
+ 'sampling_rate': 16000},
+ 'system_id': 'A01',
+ 'key': 1}
+ ```
+
+ ### Data Fields
+
+ Logical access (LA):
+ - `speaker_id`: `LA_****`, a 4-digit speaker ID
+ - `audio_file_name`: name of the audio file
+ - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the loading sketch after this list).
+ - `system_id`: ID of the speech spoofing system (A01 - A19); for bonafide speech, `system_id` is left blank ('-')
+ - `key`: 'bonafide' for genuine speech, 'spoof' for spoofed speech
+
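A minimal loading sketch for the row-first access pattern described in the `audio` field above. The Hub repository id `LanceaKing/asvspoof2019` is an assumption here (the card does not state it); substitute the id or local path you actually use:

```python
from datasets import load_dataset

# Assumed repo id; replace with the actual Hub id or a local path to asvspoof2019.py.
asvspoof = load_dataset("LanceaKing/asvspoof2019", "LA")

sample = asvspoof["train"][0]              # row first: only this example's FLAC is decoded
print(sample["speaker_id"])                # e.g. 'LA_0091'
print(sample["audio"]["sampling_rate"])    # 16000
print(sample["audio"]["array"].shape)
print(sample["key"])                       # ClassLabel index: 0 = 'bonafide', 1 = 'spoof'

# Avoid asvspoof["train"]["audio"][0]: it decodes the entire audio column first.
```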
+ Physical access (PA):
+ - `speaker_id`: `PA_****`, a 4-digit speaker ID
+
+ - `audio_file_name`: name of the audio file
+
+ - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
+
+ - `environment_id`: a triplet (S, R, D_s), where each element takes one letter in the set {a, b, c} as a categorical value, defined as follows (see the decoding sketch after this list):
+
+   |                                  | a      | b       | c        |
+   | -------------------------------- | ------ | ------- | -------- |
+   | S: Room size (square meters)     | 2-5    | 5-10    | 10-20    |
+   | R: T60 (ms)                      | 50-200 | 200-600 | 600-1000 |
+   | D_s: Talker-to-ASV distance (cm) | 10-50  | 50-100  | 100-150  |
+
+ - `attack_id`: a duple (D_a, Q), where each element takes one letter in the set {A, B, C} as a categorical value, defined as
+
+   |                                       | A       | B      | C     |
+   | ------------------------------------- | ------- | ------ | ----- |
+   | D_a: Attacker-to-talker distance (cm) | 10-50   | 50-100 | > 100 |
+   | Q: Replay device quality              | perfect | high   | low   |
+
+   For bonafide speech, `attack_id` is left blank ('-').
+
+ - `key`: 'bonafide' for genuine speech, 'spoof' for spoofed speech
+
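The `environment_id` and `attack_id` values are simply concatenations of the category letters defined in the two tables above. A small illustrative decoder (the dictionaries restate those tables; the function names and example codes are made up for this sketch):

```python
# The mappings restate the PA tables above; key names are chosen for this sketch only.
ENVIRONMENT = {
    "S: room size (m^2)":               {"a": "2-5",    "b": "5-10",    "c": "10-20"},
    "R: T60 (ms)":                      {"a": "50-200", "b": "200-600", "c": "600-1000"},
    "D_s: talker-to-ASV distance (cm)": {"a": "10-50",  "b": "50-100",  "c": "100-150"},
}
ATTACK = {
    "D_a: attacker-to-talker distance (cm)": {"A": "10-50",   "B": "50-100", "C": "> 100"},
    "Q: replay device quality":              {"A": "perfect", "B": "high",   "C": "low"},
}


def decode_environment(environment_id: str) -> dict:
    """Map a triplet such as 'aab' onto the (S, R, D_s) categories."""
    return {
        name: categories[letter]
        for (name, categories), letter in zip(ENVIRONMENT.items(), environment_id)
    }


def decode_attack(attack_id: str) -> dict:
    """Map a duple such as 'BC' onto the (D_a, Q) categories; '-' marks bonafide speech."""
    if attack_id == "-":
        return {}
    return {
        name: categories[letter]
        for (name, categories), letter in zip(ATTACK.items(), attack_id)
    }


print(decode_environment("aab"))   # {'S: ...': '2-5', 'R: ...': '50-200', 'D_s: ...': '50-100'}
print(decode_attack("BC"))         # {'D_a: ...': '50-100', 'Q: ...': 'low'}
```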
+ ### Data Splits
+
+ |          | Training set | Development set | Evaluation set |
+ | -------- | ------------ | --------------- | -------------- |
+ | Bonafide | 2580         | 2548            | 7355           |
+ | Spoof    | 22800        | 22296           | 63882          |
+ | Total    | 25380        | 24844           | 71237          |
+
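As a quick sanity check, the per-split totals and the bonafide/spoof breakdown in the table can be recomputed from the loaded LA config; a hedged sketch (the Hub id is again an assumption, and only the label column is read, so no audio is decoded):

```python
from collections import Counter

from datasets import load_dataset

asvspoof = load_dataset("LanceaKing/asvspoof2019", "LA")   # assumed repo id
label_names = asvspoof["train"].features["key"].names      # ['bonafide', 'spoof']

for split_name, split in asvspoof.items():
    counts = Counter(split["key"])                         # label column only
    readable = {label_names[idx]: n for idx, n in counts.items()}
    print(split_name, split.num_rows, readable)
# Expected to match the table: train 25380, validation 24844, test 71237.
```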
131
+ ## Dataset Creation
132
+
133
+ ### Curation Rationale
134
+
135
+ [Needs More Information]
136
+
137
+ ### Source Data
138
+
139
+ #### Initial Data Collection and Normalization
140
+
141
+ [Needs More Information]
142
+
143
+ #### Who are the source language producers?
144
+
145
+ [Needs More Information]
146
+
147
+ ### Annotations
148
+
149
+ #### Annotation process
150
+
151
+ [Needs More Information]
152
+
153
+ #### Who are the annotators?
154
+
155
+ [Needs More Information]
156
+
157
+ ### Personal and Sensitive Information
158
+
159
+ [Needs More Information]
160
+
161
+ ## Considerations for Using the Data
162
+
163
+ ### Social Impact of Dataset
164
+
165
+ [Needs More Information]
166
+
167
+ ### Discussion of Biases
168
+
169
+ [Needs More Information]
170
+
171
+ ### Other Known Limitations
172
+
173
+ [Needs More Information]
174
+
175
+ ## Additional Information
176
+
177
+ ### Dataset Curators
178
+
179
+ [Needs More Information]
180
+
181
+ ### Licensing Information
182
+
183
+ This ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: http://opendatacommons.org/licenses/by/1.0/
184
+
185
+ ### Citation Information
186
+
187
+ ```
188
+ @InProceedings{Todisco2019,
189
+ Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection},
190
+ Author = {Todisco, Massimiliano and
191
+ Wang, Xin and
192
+ Sahidullah, Md and
193
+ Delgado, H ́ector and
194
+ Nautsch, Andreas and
195
+ Yamagishi, Junichi and
196
+ Evans, Nicholas and
197
+ Kinnunen, Tomi and
198
+ Lee, Kong Aik},
199
+ booktitle = {Proc. of Interspeech 2019},
200
+ Year = {2019}
201
+ }
202
+ ```
asvspoof2019.py ADDED
@@ -0,0 +1,151 @@
+ import os
+
+ import datasets
+
+ _CITATION = """\
+ @InProceedings{Todisco2019,
+   Title     = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection},
+   Author    = {Todisco, Massimiliano and
+                Wang, Xin and
+                Sahidullah, Md and
+                Delgado, Héctor and
+                Nautsch, Andreas and
+                Yamagishi, Junichi and
+                Evans, Nicholas and
+                Kinnunen, Tomi and
+                Lee, Kong Aik},
+   booktitle = {Proc. of Interspeech 2019},
+   Year      = {2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This is a database used for the Third Automatic Speaker Verification Spoofing
+ and Countermeasures Challenge, ASVspoof 2019 for short (http://www.asvspoof.org),
+ organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor
+ Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,
+ and Andreas Nautsch in 2019.
+ """
+
+ _HOMEPAGE = "https://datashare.ed.ac.uk/handle/10283/3336"
+
+ _LICENSE = "http://opendatacommons.org/licenses/by/1.0/"
+
+ _URLS = {
+     "LA": "https://datashare.ed.ac.uk/bitstream/handle/10283/3336/LA.zip",
+     "PA": "https://datashare.ed.ac.uk/bitstream/handle/10283/3336/PA.zip",
+ }
+
+
+ class ASVSpoof2019(datasets.GeneratorBasedBuilder):
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="LA", version=VERSION, description="Logical access (LA)"),
+         datasets.BuilderConfig(name="PA", version=VERSION, description="Physical access (PA)"),
+     ]
+
+     DEFAULT_CONFIG_NAME = "LA"
+
+     def _info(self):
+         if self.config.name == "LA":
+             features = datasets.Features(
+                 {
+                     "speaker_id": datasets.Value("string"),
+                     "audio_file_name": datasets.Value("string"),
+                     "audio": datasets.Audio(sampling_rate=16_000),
+                     "system_id": datasets.Value("string"),
+                     "key": datasets.ClassLabel(names=["bonafide", "spoof"]),
+                 }
+             )
+         else:
+             features = datasets.Features(
+                 {
+                     "speaker_id": datasets.Value("string"),
+                     "audio_file_name": datasets.Value("string"),
+                     "audio": datasets.Audio(sampling_rate=16_000),
+                     "environment_id": datasets.Value("string"),
+                     "attack_id": datasets.Value("string"),
+                     "key": datasets.ClassLabel(names=["bonafide", "spoof"]),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=("audio", "key"),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         urls = _URLS[self.config.name]
+         data_dir = dl_manager.download_and_extract(urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "metadata_filepath": os.path.join(
+                         data_dir,
+                         self.config.name,
+                         f"ASVspoof2019_{self.config.name}_cm_protocols",
+                         f"ASVspoof2019.{self.config.name}.cm.train.trn.txt",
+                     ),
+                     "audios_dir": os.path.join(
+                         data_dir, self.config.name, f"ASVspoof2019_{self.config.name}_train", "flac"
+                     ),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "metadata_filepath": os.path.join(
+                         data_dir,
+                         self.config.name,
+                         f"ASVspoof2019_{self.config.name}_cm_protocols",
+                         f"ASVspoof2019.{self.config.name}.cm.dev.trl.txt",
+                     ),
+                     "audios_dir": os.path.join(
+                         data_dir, self.config.name, f"ASVspoof2019_{self.config.name}_dev", "flac"
+                     ),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "metadata_filepath": os.path.join(
+                         data_dir,
+                         self.config.name,
+                         f"ASVspoof2019_{self.config.name}_cm_protocols",
+                         f"ASVspoof2019.{self.config.name}.cm.eval.trl.txt",
+                     ),
+                     "audios_dir": os.path.join(
+                         data_dir, self.config.name, f"ASVspoof2019_{self.config.name}_eval", "flac"
+                     ),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, metadata_filepath, audios_dir):
+         with open(metadata_filepath) as f:
+             for i, line in enumerate(f.readlines()):
+                 if self.config.name == "LA":
+                     speaker_id, audio_file_name, _, system_id, key = line.strip().split()
+                     result = {
+                         "speaker_id": speaker_id,
+                         "audio_file_name": audio_file_name,
+                         "system_id": system_id,
+                         "key": key,
+                     }
+                 elif self.config.name == "PA":
+                     speaker_id, audio_file_name, environment_id, attack_id, key = line.strip().split()
+                     result = {
+                         "speaker_id": speaker_id,
+                         "audio_file_name": audio_file_name,
+                         "environment_id": environment_id,
+                         "attack_id": attack_id,
+                         "key": key,
+                     }
+                 result["audio"] = os.path.join(audios_dir, audio_file_name + ".flac")
+                 yield i, result
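For reference, `_generate_examples` assumes one utterance per line in the CM protocol files, with five whitespace-separated fields. A minimal sketch of that parsing step on a made-up LA line (the values are illustrative; only the layout follows what the code above expects):

```python
# Illustrative LA protocol line: SPEAKER_ID  AUDIO_FILE_NAME  (unused)  SYSTEM_ID  KEY
line = "LA_0091 LA_T_8529430 - A01 spoof\n"

# Same unpacking as in _generate_examples for the "LA" config; the third field is ignored.
speaker_id, audio_file_name, _, system_id, key = line.strip().split()
print(speaker_id, audio_file_name, system_id, key)   # LA_0091 LA_T_8529430 A01 spoof
```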
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"LA": {"description": "This is a database used for the Third Automatic Speaker Verification Spoofing\nand Countermeasuers Challenge, for short, ASVspoof 2019 (http://www.asvspoof.org)\norganized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, H\u00e9ctor\nDelgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,\nand Andreas Nautsch in 2019.\n", "citation": "@InProceedings{Todisco2019,\n Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection},\n Author = {Todisco, Massimiliano and\n Wang, Xin and\n Sahidullah, Md and\n Delgado, H \u0301ector and\n Nautsch, Andreas and\n Yamagishi, Junichi and\n Evans, Nicholas and\n Kinnunen, Tomi and\n Lee, Kong Aik},\n booktitle = {Proc. of Interspeech 2019},\n Year = {2019}\n}\n", "homepage": "https://datashare.ed.ac.uk/handle/10283/3336", "license": "http://opendatacommons.org/licenses/by/1.0/", "features": {"speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "audio_file_name": {"dtype": "string", "id": null, "_type": "Value"}, "audio": {"sampling_rate": 16000, "mono": true, "decode": true, "id": null, "_type": "Audio"}, "system_id": {"dtype": "string", "id": null, "_type": "Value"}, "key": {"num_classes": 2, "names": ["bonafide", "spoof"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "audio", "output": "key"}, "task_templates": null, "builder_name": "asvspoof2019", "config_name": "LA", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5784653, "num_examples": 25380, "dataset_name": "asvspoof2019"}, "validation": {"name": "validation", "num_bytes": 5612754, "num_examples": 24844, "dataset_name": "asvspoof2019"}, "test": {"name": "test", "num_bytes": 16164994, "num_examples": 71237, "dataset_name": "asvspoof2019"}}, "download_checksums": {"https://datashare.ed.ac.uk/bitstream/handle/10283/3336/LA.zip": {"num_bytes": 7640952520, "checksum": "208a7e4e3913f8c75ae1afd19bf32a5b29ae68435e9e30e23e5e98b6a155e4ec"}}, "download_size": 7640952520, "post_processing_size": null, "dataset_size": 27562401, "size_in_bytes": 7668514921}, "PA": {"description": "This is a database used for the Third Automatic Speaker Verification Spoofing\nand Countermeasuers Challenge, for short, ASVspoof 2019 (http://www.asvspoof.org)\norganized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, H\u00e9ctor\nDelgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,\nand Andreas Nautsch in 2019.\n", "citation": "@InProceedings{Todisco2019,\n Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection},\n Author = {Todisco, Massimiliano and\n Wang, Xin and\n Sahidullah, Md and\n Delgado, H \u0301ector and\n Nautsch, Andreas and\n Yamagishi, Junichi and\n Evans, Nicholas and\n Kinnunen, Tomi and\n Lee, Kong Aik},\n booktitle = {Proc. 
of Interspeech 2019},\n Year = {2019}\n}\n", "homepage": "https://datashare.ed.ac.uk/handle/10283/3336", "license": "http://opendatacommons.org/licenses/by/1.0/", "features": {"speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "audio_file_name": {"dtype": "string", "id": null, "_type": "Value"}, "audio": {"sampling_rate": 16000, "mono": true, "decode": true, "id": null, "_type": "Audio"}, "environment_id": {"dtype": "string", "id": null, "_type": "Value"}, "attack_id": {"dtype": "string", "id": null, "_type": "Value"}, "key": {"num_classes": 2, "names": ["bonafide", "spoof"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "audio", "output": "key"}, "task_templates": null, "builder_name": "asvspoof2019", "config_name": "PA", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12637350, "num_examples": 54000, "dataset_name": "asvspoof2019"}, "validation": {"name": "validation", "num_bytes": 6888713, "num_examples": 29700, "dataset_name": "asvspoof2019"}, "test": {"name": "test", "num_bytes": 31390842, "num_examples": 134730, "dataset_name": "asvspoof2019"}}, "download_checksums": {"https://datashare.ed.ac.uk/bitstream/handle/10283/3336/PA.zip": {"num_bytes": 17662711934, "checksum": "cb2a2d1bd37527177be6f259339cb3a7558638474e150bf5b086c37d26daad2a"}}, "download_size": 17662711934, "post_processing_size": null, "dataset_size": 50916905, "size_in_bytes": 17713628839}}
dummy/LA/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cce0c6d845191b65c1b72ea207b32b5cb9216b1e05f8c173e9fe2ca684c5d9fc
+ size 3753314
dummy/PA/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:198b7ea13e2eb3d4926c0265a699880d9a7a09867c500619ccbc108f17ab8ee9
+ size 4003143