---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
  - config_name: en
    data_files: en.json
    default: true
  - config_name: en-xl
    data_files: en-xl.json
  - config_name: fa
    data_files: fa.json
language:
  - en
  - fa
tags:
  - synthetic
---

# Multilingual Phonemes 10K Alpha

This dataset contains approximately 10,000 text-phoneme pairs per supported language. With 15 supported languages, that comes to roughly 150K pairs in total, not counting the English-XL dataset, which adds another 100K unique rows.

## Languages

We support 15 languages, for a total of around 150,000 text-phoneme pairs. This excludes the English-XL dataset, which contributes 100K additional phonemized pairs that appear in no other split.

* English (en)
* English-XL (en-xl): ~100K phonemized pairs, English-only
* Persian (fa): requested by [@Respair](https://huggingface.co/Respair)

## License + Credits

Source data comes from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and is licensed under CC-BY-SA 3.0. This dataset is likewise licensed under CC-BY-SA 3.0.

## Processing

We used the following process to prepare the dataset:

1. Download data from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) by language, selecting only the first Parquet file and naming it with the language code.
2. Process it with the [Data Preprocessing Scripts (StyleTTS 2 Community members only)](https://huggingface.co/styletts2-community/data-preprocessing-scripts), modifying the code to work with the given language.
3. Script: clean the text.
4. Script: remove ultra-short phrases.
5. Script: phonemize.
6. Script: save JSON.
7. Upload the dataset.

## Note

East Asian languages are experimental. We do not distinguish between Traditional and Simplified Chinese; the `zh` split consists mainly of Simplified Chinese.
We recommend converting characters to Simplified Chinese during inference, using a library such as `hanziconv` or `chinese-converter`.
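As a minimal sketch of that normalization step, the snippet below converts Traditional characters to Simplified ones before inference. The character mapping here is a tiny illustrative subset, not a complete table; in practice you would rely on the full conversion data shipped with a library such as `hanziconv` (e.g. its `HanziConv.toSimplified` helper) or `chinese-converter`:

```python
# Illustrative Traditional -> Simplified mapping; real libraries such as
# hanziconv ship complete conversion tables covering thousands of characters.
TRAD_TO_SIMP = {
    "體": "体",
    "語": "语",
    "學": "学",
}

def to_simplified(text: str) -> str:
    """Replace any Traditional characters found in the mapping; leave the rest unchanged."""
    return "".join(TRAD_TO_SIMP.get(ch, ch) for ch in text)

print(to_simplified("語言學"))  # -> 语言学
```

Applying this kind of normalization to inference-time input keeps it consistent with the predominantly Simplified Chinese text in the `zh` split.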