speecht5_ng-en1

This model is a fine-tuned version of microsoft/speecht5_tts on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5016

Model description

You can run inference with this model through the Transformers text-to-speech (TTS) pipeline in just a few lines of code:

# Install the NeMo text normalizer and fetch the speaker-embedding file
!pip install nemo_text_processing
!wget https://huggingface.co/toyrem/speecht5_ng-en1/resolve/main/naija-en_speaker_embeddings.npz

import numpy as np
import soundfile as sf
import torch
import IPython.display as ipd
from transformers import pipeline
from nemo_text_processing.text_normalization.normalize import Normalizer

# Text normalization spells out numbers, abbreviations, etc. before synthesis
normalizer = Normalizer(lang="en", input_case="cased")  # use "lower_cased" if your text is all lowercase

# Load the saved speaker embeddings (one .npz file holds all speakers, keyed by ID)
EMBEDDING_FILE = "/content/naija-en_speaker_embeddings.npz"  # Colab path; adjust if running elsewhere
data = np.load(EMBEDDING_FILE)
speaker_id = 0  # speaker 0 is a female voice
speaker_embedding = torch.tensor(data[str(speaker_id)], dtype=torch.float32).unsqueeze(0)
# print(f"Loaded speaker embedding shape: {speaker_embedding.shape}")

synthesiser = pipeline("text-to-speech", "toyrem/speecht5_ng-en1")

text = "Soldiers rescue 75 civilians from Sambisa forest"
norm_text = normalizer.normalize(text)

speech = synthesiser(norm_text, forward_params={"speaker_embeddings": speaker_embedding})
sf.write("speech.wav", speech["audio"], samplerate=speech["sampling_rate"])
ipd.Audio("speech.wav")  # play the result in a notebook
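
If you want more control than the pipeline offers, the checkpoint can also be driven through the lower-level SpeechT5 classes. The following is a minimal sketch, assuming the repository ships the processor files alongside the model weights; it reuses norm_text and speaker_embedding from the snippet above:

from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("toyrem/speecht5_ng-en1")
model = SpeechT5ForTextToSpeech.from_pretrained("toyrem/speecht5_ng-en1")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # standard SpeechT5 vocoder

inputs = processor(text=norm_text, return_tensors="pt")
with torch.no_grad():
    # Generate a mel spectrogram and convert it to a waveform with the vocoder
    waveform = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)

sf.write("speech_direct.wav", waveform.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio

Loading the HiFi-GAN vocoder separately mirrors what the pipeline does internally and lets you swap in a different vocoder if desired.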

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1000
  • mixed_precision_training: Native AMP
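
For orientation, here is a minimal sketch of how these settings map onto Transformers' Seq2SeqTrainingArguments. Only the listed hyperparameters come from this card; output_dir, the evaluation cadence, and logging settings are assumptions (the 100-step eval interval is inferred from the results table below):

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_ng-en1",    # assumption: any local path works
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective train batch size: 4 x 8 = 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
    fp16=True,                       # Native AMP mixed precision
    eval_strategy="steps",           # assumption: evaluate every 100 steps
    eval_steps=100,
    logging_steps=100,
)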

Training results

Training Loss | Epoch  | Step | Validation Loss
0.7371        | 0.4619 | 100  | 0.6491
0.6596        | 0.9238 | 200  | 0.5757
0.6109        | 1.3834 | 300  | 0.5442
0.5969        | 1.8453 | 400  | 0.5366
0.5778        | 2.3048 | 500  | 0.5223
0.5787        | 2.7667 | 600  | 0.5266
0.5681        | 3.2263 | 700  | 0.5153
0.5544        | 3.6882 | 800  | 0.5088
0.5434        | 4.1478 | 900  | 0.5015
0.5371        | 4.6097 | 1000 | 0.5016

Framework versions

  • Transformers 4.47.0
  • PyTorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0