---
library_name: transformers
tags:
  - text-to-speech
  - annotation
license: apache-2.0
language:
  - kbd
pipeline_tag: text-to-speech
inference: false
datasets:
  - anzorq/kbd_speech
  - anzorq/kbd_speech_murat-tagged-for-parler-tts
---

# Parler-TTS Fine-tuned for Kabardian Language (Murat)

This model is a fine-tuned version of Parler-TTS mini v0.1, trained on a dataset of Kabardian speech from the speaker Murat Sokhov.

## Model Details

- **Model:** ParlerTTSForConditionalGeneration
- **Base Model:** Parler-TTS mini v0.1
- **Training Data:** Kabardian speech dataset from "Murat" (anzorq/kbd_speech_murat)
- **Training Configuration** (see the example command after this list):
  - `--train_dataset_name`: "anzorq/kbd_speech_murat"
  - `--train_metadata_dataset_name`: "anzorq/kbd_speech_murat-tagged-for-parler-tts"
  - `--num_train_epochs`: 4
  - `--gradient_accumulation_steps`: 18
  - `--gradient_checkpointing`: True
  - `--per_device_train_batch_size`: 2
  - `--learning_rate`: 0.00008
  - `--lr_scheduler_type`: "constant_with_warmup"
  - `--warmup_steps`: 50
  - `--logging_steps`: 2
  - `--freeze_text_encoder`: True
  - `--dtype`: "float16"
  - `--seed`: 456
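
For reference, a rough sketch of how these flags might map onto a launch of the Parler-TTS training script is shown below. The flag values are taken from the list above; the script path, `--model_name_or_path`, and `--output_dir` arguments are assumptions for illustration, not a record of the actual command used.

```bash
# Illustrative only: script path, --model_name_or_path and --output_dir are assumed.
accelerate launch ./training/run_parler_tts_training.py \
    --model_name_or_path "parler-tts/parler_tts_mini_v0.1" \
    --train_dataset_name "anzorq/kbd_speech_murat" \
    --train_metadata_dataset_name "anzorq/kbd_speech_murat-tagged-for-parler-tts" \
    --num_train_epochs 4 \
    --gradient_accumulation_steps 18 \
    --gradient_checkpointing true \
    --per_device_train_batch_size 2 \
    --learning_rate 0.00008 \
    --lr_scheduler_type "constant_with_warmup" \
    --warmup_steps 50 \
    --logging_steps 2 \
    --freeze_text_encoder true \
    --dtype "float16" \
    --seed 456 \
    --output_dir "./parler-tts-mini-kbd-Murat"
```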

## Usage

### Installation

```bash
pip install git+https://github.com/huggingface/parler-tts.git
```
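
The inference example below also uses soundfile to write the generated waveform to disk; if it is not already present in your environment (an assumption about your setup), install it as well:

```bash
pip install soundfile
```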

### Inference

```python
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import torch
import soundfile as sf

# Use a GPU with float16 if available, otherwise fall back to float32 on CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if device != "cpu" else torch.float32

model = ParlerTTSForConditionalGeneration.from_pretrained("anzorq/parler-tts-mini-kbd-Murat", torch_dtype=torch_dtype).to(device)
tokenizer = AutoTokenizer.from_pretrained("anzorq/parler-tts-mini-kbd-Murat")

prompt = "Уэшх нэужьым къиуа псы утхъуар, къэгубжьа хуэдэ, къыпэщӏэхуэр ирихьэхыну хьэзыру йожэх"
description = "Murat's voice is very clear, but it is very confined in terms of pacing and delivery"

# Simple transliteration, since the original tokenizer used in Parler-TTS does not support Cyrillic characters
def transliterate(text):
    char_map = {
        'а': 'a', 'б': 'b', 'в': 'v', 'г': 'g', 'д': 'd', 'е': 'e', 'ж': 'zh', 'з': 'z', 'и': 'i', 'й': 'j',
        'к': 'k', 'л': 'l', 'м': 'm', 'н': 'n', 'о': 'o', 'п': 'p', 'р': 'r', 'с': 's', 'т': 't', 'у': 'u',
        'ф': 'f', 'х': 'x', 'ц': 'c', 'ч': 'ch', 'ш': 'sh', 'щ': 'sx', 'ъ': '2', 'ы': 'y', 'ь': '3', 'э': '4',
        'я': 'ya', 'ӏ': '1'
    }
    # Lowercase first so capitalized Cyrillic letters are also mapped
    text = text.lower()
    for cyrillic_char, latin_char in char_map.items():
        text = text.replace(cyrillic_char, latin_char)
    return text

transliterated_prompt = transliterate(prompt)

# Tokenize the voice description and the transliterated text prompt
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(transliterated_prompt, return_tensors="pt").input_ids.to(device)

# Generate audio and cast back to float32 before converting to NumPy
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids).to(torch.float32)
audio_arr = generation.cpu().numpy().squeeze()

# Save the audio to a file
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```