erogol committed
Commit 6fa9181
1 Parent(s): 2795376

Create README.md

Files changed (1): README.md (+92 -0)
---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
library_name: coqui
pipeline_tag: text-to-speech
---

# ⓍTTS
ⓍTTS is a voice generation model that lets you clone voices into different languages using just a quick 6-second audio clip. Built on Tortoise,
ⓍTTS has important model changes that make cross-language voice cloning and multilingual speech generation straightforward.
There is no need for hours of training data.

This is the same model that powers [Coqui Studio](https://coqui.ai/) and the [Coqui API](https://docs.coqui.ai/docs); however, we apply
a few tricks to make it faster and to support streaming inference.

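Streaming inference means the audio arrives in chunks while synthesis is still running, instead of as one finished file. The sketch below only illustrates the consumer side of that pattern; the `fake_stream` generator is a stand-in for the model's chunked output, not part of the Coqui API:

```python
import io
import wave

def fake_stream(n_chunks=5, chunk_frames=4800):
    """Stand-in for a streaming TTS generator: yields raw 16-bit mono PCM chunks."""
    for _ in range(n_chunks):
        yield b"\x00\x00" * chunk_frames  # 0.2 s of silence at 24 kHz per chunk

def collect_stream(chunks, rate=24000):
    """Append chunks to a WAV buffer as they arrive."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(rate)
        for chunk in chunks:
            w.writeframes(chunk)  # in a real app, also push each chunk to playback
    return buf.getvalue()

wav_bytes = collect_stream(fake_stream())
```

In a real application you would start playback as soon as the first chunk arrives, which is what makes streaming feel faster than file-based synthesis.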
### Features
- Supports 16 languages.
- Voice cloning with just a 6-second audio clip.
- Emotion and style transfer by cloning.
- Cross-language voice cloning.
- Multi-lingual speech generation.
- 24 kHz sampling rate.

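Per the list above, cloning needs roughly a 6-second reference clip, and the model outputs audio at 24 kHz. A minimal standard-library sketch for inspecting a clip's duration, sample rate, and channel count before passing it as `speaker_wav` (the synthetic sine tone is just a stand-in for real recorded speech):

```python
import math
import struct
import wave

def write_test_clip(path, seconds=6.0, rate=24000, freq=220.0):
    """Write a mono 16-bit sine-wave WAV as a stand-in reference clip."""
    n = int(seconds * rate)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * freq * i / rate)))
            for i in range(n)
        )
        w.writeframes(frames)

def clip_info(path):
    """Return (duration_seconds, sample_rate, channels) for a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate(), w.getframerate(), w.getnchannels()

write_test_clip("speaker.wav")
duration, rate, channels = clip_info("speaker.wav")
```

A clean, single-speaker clip of about six seconds tends to give the best cloning results.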
### Updates over XTTS-v1
- 2 new languages: Hungarian and Korean.
- Architectural improvements for speaker conditioning.
- Enables the use of multiple speaker references and interpolation between speakers.
- Stability improvements.
- Better prosody and audio quality across the board.

### Languages
As of now, XTTS-v2 supports 16 languages: **English, Spanish, French, German, Italian, Portuguese,
Polish, Turkish, Russian, Dutch, Czech, Arabic, Chinese, Japanese, Hungarian, and Korean**.

Stay tuned as we continue to add support for more languages. If you have any language requests, feel free to reach out!

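For reference, the display names above map to the short codes that the `language` argument and the `--language_idx` flag expect. The exact codes below are taken from the Coqui TTS documentation; note that Chinese uses `zh-cn` rather than a two-letter code:

```python
# Mapping from language name to the code accepted by `language` / `--language_idx`.
XTTS_V2_LANGUAGES = {
    "English": "en", "Spanish": "es", "French": "fr", "German": "de",
    "Italian": "it", "Portuguese": "pt", "Polish": "pl", "Turkish": "tr",
    "Russian": "ru", "Dutch": "nl", "Czech": "cs", "Arabic": "ar",
    "Chinese": "zh-cn", "Japanese": "ja", "Hungarian": "hu", "Korean": "ko",
}

def language_code(name: str) -> str:
    """Look up the code to pass for a given display name."""
    return XTTS_V2_LANGUAGES[name]
```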
### Code
The [code-base](https://github.com/coqui-ai/TTS) supports inference and [fine-tuning](https://tts.readthedocs.io/en/latest/models/xtts.html#training).

### License
This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). There's a lot that goes into a license for generative models, and you can read more about [the origin story of the CPML here](https://coqui.ai/blog/tts/cpml).

### Contact
Come and join our 🐸Community. We're active on [Discord](https://discord.gg/fBC58unbKE) and [Twitter](https://twitter.com/coqui_ai).
You can also mail us at [email protected].

Using the 🐸TTS API:

```python
from TTS.api import TTS

# Load the multilingual XTTS-v2 model (set gpu=False to run on CPU)
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)

# Generate speech by cloning a voice from a short reference clip
tts.tts_to_file(
    text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
    file_path="output.wav",
    speaker_wav="/path/to/target/speaker.wav",
    language="en",
)
```

Using the 🐸TTS command line:

```console
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
    --text "Bugün okula gitmek istemiyorum." \
    --speaker_wav /path/to/target/speaker.wav \
    --language_idx tr \
    --use_cuda true
```

Using the model directly:

```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

# Load the model configuration and checkpoint
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()

# Synthesize speech conditioned on a reference clip
outputs = model.synthesize(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    config,
    speaker_wav="/data/TTS-public/_refclips/3.wav",
    gpt_cond_len=3,
    language="en",
)
```
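Assuming, per the Coqui code base, that `synthesize` returns a dictionary whose `"wav"` entry holds the waveform as floats in roughly [-1, 1], you would normally save it with `torchaudio` or `scipy`. As a dependency-free sketch of that last step (the sample values below are placeholders standing in for `outputs["wav"]`):

```python
import struct
import wave

def save_float_wav(samples, path, rate=24000):
    """Clamp float samples to [-1, 1] and write them as 16-bit mono PCM."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)  # XTTS outputs audio at 24 kHz
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples
        )
        w.writeframes(frames)

# Placeholder samples standing in for the model's output waveform
save_float_wav([0.0, 0.5, -0.5, 1.0, -1.0], "synthesized.wav")
```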