Update README.md
README.md
This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain, using a [Transformer](https://arxiv.org/pdf/1809.08895.pdf) pretrained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

The pre-trained model takes text as input and produces a spectrogram as output. The final waveform can be obtained by applying a vocoder (e.g., HiFi-GAN) on top of the generated spectrogram.

### Perform Text-to-Speech (TTS)

```python
import torchaudio
from speechbrain.inference.vocoders import HIFIGAN

texts = ["This is an example for synthesis."]

# Initialize the TTS model.
# Note: TextToSpeech is the custom inference class provided with this
# repository; import or define it before running this snippet.
my_tts_model = TextToSpeech.from_hparams(source="/content/")

# Initialize the vocoder (HiFi-GAN) model
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder")

# Run the TTS (text -> mel spectrogram)
mel_output = my_tts_model.encode_text(texts)

# Run the vocoder (mel spectrogram -> waveform)
waveforms = hifi_gan.decode_batch(mel_output)

# Save the waveform
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050)
```
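
If you want to synthesize several sentences at once, the same pipeline can be batched. The sketch below is a minimal, illustrative continuation of the snippet above: it assumes that `encode_text` accepts a list of sentences (as in the example) and that `decode_batch` returns one waveform per entry; the example sentences and output file names are hypothetical.

```python
# Batched synthesis sketch: continues from the snippet above
# (my_tts_model, hifi_gan, and torchaudio are already available).
texts = [
    "Text to speech turns written words into audio.",
    "SpeechBrain makes it easy to experiment with this.",
]

mel_outputs = my_tts_model.encode_text(texts)   # text -> mel spectrograms
waveforms = hifi_gan.decode_batch(mel_outputs)  # mel spectrograms -> waveforms

# Each entry has shape [1, time], which matches the [channels, time]
# layout that torchaudio.save expects.
for i, wav in enumerate(waveforms):
    torchaudio.save(f'example_TTS_{i}.wav', wav.cpu(), 22050)
```

The 22050 Hz sampling rate matches LJSpeech, the dataset on which both the TTS model and the HiFi-GAN vocoder were trained.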