Tortoise TTS AR model fine-tuned for German

Trained on 3 speakers: 2 LibriVox readers and Thorsten Mueller's dataset (https://github.com/thorstenMueller/Thorsten-Voice).

***THE NEWEST VERSIONS***: v# indicates the training session number, #e indicates the number of epochs.
The 9/5 training session has been uploaded.

Requires the tokenizer file to be placed in the tokenizers/ directory.

Voice latents are pre-computed in voices/ for some of the uploaded versions. Voice samples for recomputing the latents are also uploaded.
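
If you want to recompute the latents yourself outside the WebUI (which normally handles this for you), a minimal sketch using the stock tortoise-tts API is shown below. The speaker folder and WAV file names are placeholders, not the exact names used in this repository.

```python
# Minimal sketch, assuming the stock tortoise-tts API; the speaker folder and
# file names are placeholders -- point them at the uploaded voice samples.
import torch
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()

# Load a few conditioning clips at 22.05 kHz, the sample rate tortoise expects.
clips = [load_audio(f"voices/speaker1/{name}", 22050)
         for name in ("sample1.wav", "sample2.wav")]

# Compute the conditioning latents once and cache them so they do not have to be
# recomputed on every generation. A pre-computed .pth from voices/ can be loaded
# the same way with torch.load().
latents = tts.get_conditioning_latents(clips)
torch.save(latents, "voices/speaker1/cond_latents.pth")
```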

For use in the MRQ Voice Cloning WebUI:

Requires the tokenizer used in training, and code changes to disable the English text cleaners. At minimum, change english_cleaners to basic_cleaners.

Code changes:

modules\tortoise-tts\tortoise\utils\tokenizer.py
    Line 201: replace txt = english_cleaners(txt) with txt = basic_cleaners(txt)

modules\tortoise-tts\build\lib\tortoise\utils\tokenizer.py
    Line 201: replace txt = english_cleaners(txt) with txt = basic_cleaners(txt)

modules\dlas\dlas\data\audio\paired_voice_audio_dataset.py
    Line 133: replace return text_to_sequence(txt, ['english_cleaners']) with return text_to_sequence(txt, ['basic_cleaners'])

modules\dlas\dlas\data\audio\voice_tokenizer.py
    Line 14: replace from dlas.models.audio.tts.tacotron2.text.cleaners import english_cleaners with from dlas.models.audio.tts.tacotron2.text.cleaners import english_cleaners, basic_cleaners
    Line 85: replace txt = english_cleaners(txt) with txt = basic_cleaners(txt)
    Line 134: replace word = english_cleaners(word) with word = basic_cleaners(word)
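
To see why this swap matters, here is a simplified sketch of the two cleaner pipelines, paraphrased from the tacotron2 text cleaners that dlas and tortoise use (not copied from either repo): english_cleaners transliterates text to ASCII, destroying German umlauts and ß before they ever reach the tokenizer, while basic_cleaners only lowercases and collapses whitespace.

```python
# Simplified sketch of the two cleaner pipelines; the real english_cleaners also
# expands numbers and abbreviations, which is omitted here.
import re
from unidecode import unidecode

def basic_cleaners(text):
    # Lowercase and collapse whitespace only -- keeps ä, ö, ü and ß intact.
    return re.sub(r"\s+", " ", text.lower())

def english_cleaners(text):
    # Transliterates to ASCII first, mangling the German special characters.
    return re.sub(r"\s+", " ", unidecode(text).lower())

print(english_cleaners("Größere Übungen für müde Bären"))
# -> "grossere ubungen fur mude baren"   (tokenizer sees the wrong characters)
print(basic_cleaners("Größere Übungen für müde Bären"))
# -> "größere übungen für müde bären"    (what the German tokenizer was trained on)
```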

Copy and paste German text into the tokenizer tester on the utilities tab; you should see it tokenized with all of the special characters and no [UNK] tokens.
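
The same check can be scripted. A minimal sketch, assuming the tokenizer is a Hugging Face tokenizers JSON file; the file name below is a placeholder for whichever tokenizer you placed in tokenizers/.

```python
# Minimal sketch: load the German tokenizer and confirm that text containing
# umlauts and ß produces no [UNK] tokens. The file name is a placeholder.
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizers/german_tokenizer.json")
text = "grüße aus münchen, die straße ist ruhig"  # already lowercased, as the cleaners would do
encoding = tok.encode(text)
print(encoding.tokens)
assert "[UNK]" not in encoding.tokens, "tokenizer is missing German characters"
```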
---
license: other
language:
- de
---