---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
  - config_name: en
    data_files: en.json
    default: true
  - config_name: ca
    data_files: ca.json
  - config_name: de
    data_files: de.json
  - config_name: es
    data_files: es.json
  - config_name: el
    data_files: el.json
  - config_name: fa
    data_files: fa.json
  - config_name: fi
    data_files: fi.json
  - config_name: fr
    data_files: fr.json
  - config_name: it
    data_files: it.json
  - config_name: pl
    data_files: pl.json
  - config_name: pt
    data_files: pt.json
  - config_name: ru
    data_files: ru.json
  - config_name: sv
    data_files: sv.json
  - config_name: ua
    data_files: ua.json
  - config_name: zh
    data_files: zh.json
---

# Multilingual Phonemes 10K Alpha

by @mrfakename

The dataset contains approximately 10,000 items from each language.
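Each language is exposed as a separate config (see the YAML header above), with `en` as the default. A minimal loading sketch with the `datasets` library; the repo id below is an assumption:

```python
from datasets import load_dataset

# Load the English config; pass another language code ("de", "fr", "zh", ...)
# to get a different language. The repo id is an assumption -- substitute the
# actual dataset path on the Hub.
ds = load_dataset("styletts2-community/multilingual-phonemes-10k-alpha", "en")
print(ds)
```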

It is intended only for training StyleTTS 2-related **open source** models.

Processed using: https://huggingface.co/styletts2-community/data-preprocessing-scripts (StyleTTS 2 Community members only)

## License + Credits

The source data comes from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and is licensed under CC-BY-SA 3.0, so this dataset is likewise licensed under CC-BY-SA 3.0.

## Processing

We used the following process to preprocess the dataset (an illustrative sketch follows the list):

1. Download data from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) by language, selecting only the first Parquet file and naming it with the language code
2. Process it using the [data preprocessing scripts (StyleTTS 2 Community members only)](https://huggingface.co/styletts2-community/data-preprocessing-scripts), modifying the code as needed for each language
3. Script: Clean the text
4. Script: Remove ultra-short phrases
5. Script: Phonemize
6. Script: Save JSON
7. Upload the dataset
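The actual scripts are members-only, so the sketch below is only a rough reconstruction of steps 1–6. The cleaning rule, the `MIN_CHARS` cutoff, and the `text`/`phonemes` field names are all assumptions, not the scripts' real behavior:

```python
import json
import re

import pandas as pd
from phonemizer import phonemize  # eSpeak-backed phonemizer (assumed backend)

LANG = "en"
MIN_CHARS = 20  # "ultra-short" cutoff is a guess

# 1. First Parquet shard of the language's Wikipedia dump, named by language code.
df = pd.read_parquet(f"{LANG}.parquet")

records = []
for text in df["text"]:
    # 2. Clean: collapse whitespace (illustrative; real cleaning is more involved).
    cleaned = re.sub(r"\s+", " ", text).strip()
    # 3. Drop ultra-short phrases.
    if len(cleaned) < MIN_CHARS:
        continue
    # 4. Phonemize with the per-language eSpeak voice.
    phonemes = phonemize(cleaned, language=LANG, backend="espeak")
    records.append({"text": cleaned, "phonemes": phonemes})

# 5. Save JSON, one file per language (en.json, de.json, ...).
with open(f"{LANG}.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False)
```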

## Note

East Asian languages are experimental and in beta. We do not distinguish between Traditional and Simplified Chinese; the dataset consists mainly of Simplified Chinese. We recommend converting characters to Simplified Chinese during inference using a library such as `hanziconv` or `chinese-converter`, as shown below.
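For example, with `hanziconv`:

```python
from hanziconv import HanziConv

# Convert Traditional Chinese input to Simplified before inference.
text = "漢字轉換測試"
print(HanziConv.toSimplified(text))  # -> 汉字转换测试
```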