---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
- config_name: en
  data_files: en.json
  default: true
- config_name: en-xl
  data_files: en-xl.json
- config_name: fa
  data_files: fa.json
language:
- en
- fa
tags:
- synthetic
---

# Multilingual Phonemes 10K Alpha

This dataset contains approximately 10,000 text/phoneme pairs for each supported language. With 15 supported languages, that comes to roughly 150K pairs in total. This figure does not include the English-XL config, which adds another 100K unique rows.

## Languages

Each language below is exposed as its own config (a loading example follows the list). English-XL contains 100K additional phonemized pairs that do not appear in any other split.

* English (en)
* English-XL (en-xl): ~100K phonemized pairs, English-only
* Persian (fa): Requested by [@Respair](https://huggingface.co/Respair)
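
Each config can be loaded by name with the `datasets` library. A minimal sketch follows; the repository id below is an assumption, so substitute this dataset's actual path on the Hub.

```python
from datasets import load_dataset

# Hypothetical repo id -- replace with this dataset's actual Hub path.
ds = load_dataset("styletts2-community/multilingual-phonemes-10k-alpha", "en", split="train")
print(ds[0])  # one text/phoneme pair
```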

## License + Credits

The source data comes from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and is licensed under CC-BY-SA 3.0; accordingly, this dataset is also licensed under CC-BY-SA 3.0.

## Processing

The dataset was preprocessed with the following pipeline (a rough Python sketch follows the list):

1. Download data from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) by language, selecting only the first Parquet file and naming it with the language code
2. Process it using the [Data Preprocessing Scripts (StyleTTS 2 Community members only)](https://huggingface.co/styletts2-community/data-preprocessing-scripts), modifying the code to work with the target language
3. Script: Clean the text
4. Script: Remove ultra-short phrases
5. Script: Phonemize
6. Script: Save JSON
7. Upload dataset
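
As a rough illustration of steps 1 and 3-6, the sketch below uses the public `datasets` and `phonemizer` libraries. The Wikipedia snapshot name, the cleaning rules, the word-count cutoff, and the output field names are all assumptions, since the actual community preprocessing scripts are private.

```python
import json
import re

from datasets import load_dataset
from phonemizer import phonemize  # requires espeak-ng installed on the system

LANG = "en"        # language code used to name the output file
MIN_WORDS = 4      # hypothetical cutoff for "ultra-short" phrases
N_ROWS = 10_000    # roughly one language's worth of pairs

# 1. Stream the Wikipedia dump for this language and take the first N rows
#    (the snapshot/config name is an assumption; adjust to the dump you use).
wiki = load_dataset("wikimedia/wikipedia", f"20231101.{LANG}", split="train", streaming=True)
articles = [row["text"] for _, row in zip(range(N_ROWS), wiki)]

# 3. Clean the text: drop bracketed reference markers and collapse whitespace.
def clean(text: str) -> str:
    text = re.sub(r"\[\d+\]", "", text)
    return re.sub(r"\s+", " ", text).strip()

texts = [clean(t) for t in articles]

# 4. Remove ultra-short phrases.
texts = [t for t in texts if len(t.split()) >= MIN_WORDS]

# 5. Phonemize (IPA) with the espeak backend; the espeak language tag
#    differs per language (e.g. "en-us" for English).
phonemes = phonemize(texts, language="en-us", backend="espeak", strip=True)

# 6. Save text/phoneme pairs as JSON, one file per language code.
pairs = [{"text": t, "phonemes": p} for t, p in zip(texts, phonemes)]
with open(f"{LANG}.json", "w", encoding="utf-8") as f:
    json.dump(pairs, f, ensure_ascii=False)
```

Step 7 is a standard dataset upload, for example with `huggingface_hub` or git.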

## Note

East Asian languages are experimental. We do not distinguish between Traditional and Simplified Chinese; the `zh` split consists mainly of Simplified Chinese. We recommend converting characters to Simplified Chinese during inference, using a library such as `hanziconv` or `chinese-converter`.
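
For example, a minimal sketch with `hanziconv` (the sample string is arbitrary):

```python
from hanziconv import HanziConv

text = "漢語的繁體字"  # arbitrary Traditional Chinese input
print(HanziConv.toSimplified(text))  # convert to Simplified before inference
```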