---
license: bsd-2-clause
tags:
  - tts
  - real-time
  - vocoder
library_name: transformers
---

# HelloSippyRT PostVocoder

## Introduction

The HelloSippyRT model adapts Microsoft's SpeechT5 Text-to-Speech (TTS) pipeline for real-time scenarios.

## Problem Statement

The original vocoder performs optimally only when it is given nearly the full Mel sequence produced from a single
text input at once. This is not ideal for real-time applications, where we aim to begin audio output as quickly as
possible. Feeding the vocoder smaller chunks instead produces "clicking" distortions at the boundaries between
adjacent audio frames. Attempts to fine-tune Microsoft's HiFiGAN vocoder to remove them were unsuccessful.
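
To make the failure mode concrete, here is a minimal sketch of chunked vocoding with the stock SpeechT5 vocoder
from the `transformers` library. The 8-frame chunk size matches the post-vocoder's core chunk described below, but
the `vocode_in_chunks` helper itself is purely illustrative:

```python
import torch
from transformers import SpeechT5HifiGan

# Microsoft's stock HiFi-GAN vocoder used with SpeechT5.
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

def vocode_in_chunks(mel: torch.Tensor, chunk: int = 8) -> torch.Tensor:
    """Run the vocoder over fixed-size Mel chunks independently.

    `mel` has shape (seq_len, 80). Each chunk is converted in isolation,
    so the vocoder has no context across chunk boundaries -- the resulting
    waveform segments do not line up, which is heard as "clicking".
    """
    pieces = []
    with torch.no_grad():
        for start in range(0, mel.shape[0], chunk):
            pieces.append(vocoder(mel[start:start + chunk]))
    return torch.cat(pieces)

# In contrast, vocoder(mel) over the whole sequence sounds clean.
```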

## Solution

Our approach adds a smaller model that takes a fixed chunk of 8 Mel frames plus two pre-frames and two post-frames
of context. These frames are processed together with the original vocoder's 12 corresponding audio frames of 256 bytes
each. The model applies convolutional input layers to both the audio and the Mel frames to produce hidden
representations, followed by a linear layer and a final convolution layer. The output is then multiplied with the
original 8 audio frames to produce corrected frames.

![HelloSippyRT Model Architecture](https://docs.google.com/drawings/d/e/2PACX-1vTiWxGbEB2MbvHpTJHS22abWNrSt2pHv6XijEDmnQFjAqBewMJyZBQ_5Y9k1P9INQPQmuq56MpLDzJt/pub?w=960&h=720)
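
The sketch below illustrates this layout in PyTorch. The hidden size, kernel widths, and exact layer stacking are
illustrative placeholders, and `PostVocoderSketch` is a hypothetical name; the authoritative definition lives in the
training repository linked at the bottom of this card:

```python
import torch
import torch.nn as nn

FRAME = 256   # samples per Mel frame at the vocoder's hop size
CORE = 8      # frames being corrected
CTX = 2       # context frames on each side

class PostVocoderSketch(nn.Module):
    def __init__(self, n_mels: int = 80, hidden: int = 128):
        super().__init__()
        # Convolutional input layers lift Mel frames and raw audio into
        # a shared hidden dimension (one hidden vector per frame).
        self.mel_in = nn.Conv1d(n_mels, hidden, kernel_size=3, padding=1)
        self.audio_in = nn.Conv1d(1, hidden, kernel_size=FRAME, stride=FRAME)
        self.mix = nn.Linear(2 * hidden, hidden)
        # Final (transposed) convolution maps back to per-sample gains.
        self.out = nn.ConvTranspose1d(hidden, 1, kernel_size=FRAME, stride=FRAME)

    def forward(self, mel: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # mel:   (batch, 12, n_mels) -- 8 core frames + 2 context on each side
        # audio: (batch, 12 * FRAME) -- raw vocoder output for those frames
        m = self.mel_in(mel.transpose(1, 2))                # (B, hidden, 12)
        a = self.audio_in(audio.unsqueeze(1))               # (B, hidden, 12)
        h = self.mix(torch.cat([m, a], 1).transpose(1, 2))  # (B, 12, hidden)
        gain = self.out(h.transpose(1, 2)).squeeze(1)       # (B, 12 * FRAME)
        corrected = audio * gain                            # multiplicative fix
        # Return only the 8 core frames; context frames are discarded.
        return corrected[:, CTX * FRAME:(CTX + CORE) * FRAME]
```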

## Training Details

We trained the model on a subset of 3,000 audio utterances from the `LJSpeech-1.1` dataset. SpeechT5's Speech-to-Speech
module was used to replace the voice in each utterance with the voices of speakers randomly selected from the
`Matthijs/cmu-arctic-xvectors` dataset. The resulting reference Mel spectrograms were fed to the vocoder and
post-vocoder in chunks, and the FFT of the reference waveform generated in "continuous" mode served as the basis for
the loss-function calculation.

During training, the original vocoder was frozen; only our model was trained, with the goal of mimicking the original
vocoder's continuous-mode output as closely as possible.
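
The following sketch shows the shape of one such training step. It assumes `post_voc` is a post-vocoder like the one
sketched above, `vocoder` is the frozen HiFiGAN, and `reference_core` is the continuous-mode reference slice for the
8 core frames; the optimizer and the FFT-magnitude L1 loss are illustrative stand-ins for the actual loss:

```python
import torch
import torch.nn.functional as F

def train_step(post_voc, vocoder, optimizer, mel_window, reference_core):
    """One illustrative training step.

    mel_window     -- (12, 80) Mel chunk: 8 core frames + 2 context each side.
    reference_core -- continuous-mode reference waveform for the 8 core
                      frames, i.e. 8 * 256 samples.
    The vocoder's parameters are assumed frozen beforehand, e.g. via
    `for p in vocoder.parameters(): p.requires_grad_(False)`.
    """
    with torch.no_grad():                  # original vocoder stays locked
        raw = vocoder(mel_window)          # chunked (distorted) audio
    fixed = post_voc(mel_window.unsqueeze(0), raw.unsqueeze(0)).squeeze(0)
    # Compare FFT magnitudes of the corrected chunk against the same
    # 8-frame slice of the continuous reference waveform.
    loss = F.l1_loss(torch.fft.rfft(fixed).abs(),
                     torch.fft.rfft(reference_core).abs())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```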

## Evaluation

The model has been evaluated by producing TTS output from pure text input, using quotes from "Futurama", "The Matrix",
and "2001: A Space Odyssey" retrieved from Wikiquote, with both purely random speaker vectors and vectors from the
`Matthijs/cmu-arctic-xvectors` dataset. The output quality has been found satisfactory for our particular use.
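
For reference, here is a minimal sketch of the text-to-speech path used for evaluation, following the standard
`transformers` SpeechT5 example. In the actual evaluation the single vocoder call is replaced by the chunked
vocoder/post-vocoder pair; the quote and the x-vector index here are arbitrary:

```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Pick one speaker x-vector from the dataset used in the evaluation.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Good news, everyone!", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
```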

## Source Code & Links

* [HelloSippyRT on GitHub](https://github.com/sippy/Infernos.git)
* [Training Code Repository](https://github.com/sobomax/hifi-gan-lsr-rt.git)

---

**License**: BSD-2-Clause  
**Library**: Transformers