RedbeardNZ and RMSnow committed · verified
Commit 6a7ef95 · 0 parents

Duplicate from amphion/Vevo

Co-authored-by: Xueyao Zhang <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,195 @@
+ ---
+ license: cc-by-nc-4.0
+ tags:
+ - voice-conversion
+ - text-to-speech
+ - accent-conversion
+ - emotion-conversion
+ - style-transfer
+ datasets:
+ - amphion/Emilia-Dataset
+ ---
+
+ # Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement
+
+ [![arXiv](https://img.shields.io/badge/OpenReview-Paper-COLOR.svg)](https://openreview.net/pdf?id=anQDiQZhDP)
+ [![hf](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-model-yellow)](https://huggingface.co/amphion/Vevo)
+ [![WebPage](https://img.shields.io/badge/WebPage-Demo-red)](https://versavoice.github.io/) [![readme](https://img.shields.io/badge/README-GitHub-blue)](https://github.com/open-mmlab/Amphion/blob/main/models/vc/vevo/README.md)
+
+ We present our reproduction of [Vevo](https://openreview.net/pdf?id=anQDiQZhDP), a versatile zero-shot voice imitation framework with controllable timbre and style. We invite you to explore the [audio samples](https://versavoice.github.io/) to experience Vevo's capabilities firsthand.
+
+ We have included the following pre-trained Vevo models in Amphion:
+
+ - **Vevo-Timbre**: conducts *style-preserved* voice conversion.
+ - **Vevo-Style**: conducts style conversion, such as *accent conversion* and *emotion conversion*.
+ - **Vevo-Voice**: conducts *style-converted* voice conversion.
+ - **Vevo-TTS**: conducts *style- and timbre-controllable* TTS.
+
+ We also release the **content tokenizer** and **content-style tokenizer** proposed by Vevo. Notably, all of these pre-trained models are trained on [Emilia](https://huggingface.co/datasets/amphion/Emilia-Dataset), which contains 101k hours of speech data across six languages (English, Chinese, German, French, Japanese, and Korean).
+
+ ## Model Introduction
+
+ We provide the following pre-trained models:
+
+ | Model Name | Description |
+ |-------------------|-------------|
+ | [Content Tokenizer](https://huggingface.co/amphion/Vevo/tree/main/tokenizer/vq32) | Converts speech to content tokens. A single-codebook VQ-VAE with a vocabulary size of 32 and a frame rate of 50 Hz. |
+ | [Content-Style Tokenizer](https://huggingface.co/amphion/Vevo/tree/main/tokenizer/vq8192) | Converts speech to content-style tokens. A single-codebook VQ-VAE with a vocabulary size of 8192 and a frame rate of 50 Hz. |
+ | [Vq32ToVq8192](https://huggingface.co/amphion/Vevo/tree/main/contentstyle_modeling/Vq32ToVq8192) | Predicts content-style tokens from content tokens with an auto-regressive transformer (480M). |
+ | [PhoneToVq8192](https://huggingface.co/amphion/Vevo/tree/main/contentstyle_modeling/PhoneToVq8192) | Predicts content-style tokens from phone tokens with an auto-regressive transformer (740M). |
+ | [Vq8192ToMels](https://huggingface.co/amphion/Vevo/tree/main/acoustic_modeling/Vq8192ToMels) | Predicts mel-spectrograms from content-style tokens with a flow-matching transformer (330M). |
+ | [Vocoder](https://huggingface.co/amphion/Vevo/tree/main/acoustic_modeling/Vocoder) | Predicts audio from mel-spectrograms with a Vocos-based vocoder (250M). |
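+
+ To make the 50 Hz frame rate concrete: both tokenizers emit 50 tokens per second of input speech, so token-sequence length scales linearly with audio duration. A quick sanity check in plain Python (no Vevo code involved):
+
+ ```python
+ # Both tokenizers run at 50 Hz: token count = seconds * 50.
+ FRAME_RATE_HZ = 50
+
+ for seconds in (1.0, 10.0, 30.0):
+     print(f"{seconds:5.1f} s of speech -> {int(seconds * FRAME_RATE_HZ)} tokens")
+ # 1.0 s -> 50 tokens; 10.0 s -> 500 tokens; 30.0 s -> 1500 tokens
+ ```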
+
+ You can download all pretrained checkpoints from [HuggingFace](https://huggingface.co/amphion/Vevo/tree/main) or via the `huggingface_hub` API.
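+
+ For example, a minimal way to fetch the entire repository in one call with `huggingface_hub` (the Usage script below instead passes `allow_patterns` to download only the files each stage needs):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download every Vevo checkpoint (tokenizers, AR / flow-matching
+ # transformers, and vocoder) into a local cache directory.
+ local_dir = snapshot_download(
+     repo_id="amphion/Vevo",
+     repo_type="model",
+     cache_dir="./ckpts/Vevo",
+ )
+ print(local_dir)  # contains tokenizer/, contentstyle_modeling/, acoustic_modeling/
+ ```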
+
+ ## Usage
+
+ You can refer to our [recipe](https://github.com/open-mmlab/Amphion/blob/main/models/vc/vevo/README.md) on GitHub for more usage details. For example, to use Vevo-TTS after cloning the Amphion repository, you can run a script like this:
+
+ ```python
+ import os
+
+ import torch
+ from huggingface_hub import snapshot_download
+
+ from models.vc.vevo.vevo_utils import *
+
+
+ def vevo_tts(
+     src_text,
+     ref_wav_path,
+     timbre_ref_wav_path=None,
+     output_path=None,
+     ref_text=None,
+     src_language="en",
+     ref_language="en",
+ ):
+     # Fall back to the style reference for timbre when no separate clip is given.
+     if timbre_ref_wav_path is None:
+         timbre_ref_wav_path = ref_wav_path
+
+     gen_audio = inference_pipeline.inference_ar_and_fm(
+         src_wav_path=None,
+         src_text=src_text,
+         style_ref_wav_path=ref_wav_path,
+         timbre_ref_wav_path=timbre_ref_wav_path,
+         style_ref_wav_text=ref_text,
+         src_text_language=src_language,
+         style_ref_wav_text_language=ref_language,
+     )
+
+     assert output_path is not None
+     save_audio(gen_audio, output_path=output_path)
+
+
+ if __name__ == "__main__":
+     # ===== Device =====
+     device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
+
+     # ===== Content-Style Tokenizer =====
+     local_dir = snapshot_download(
+         repo_id="amphion/Vevo",
+         repo_type="model",
+         cache_dir="./ckpts/Vevo",
+         allow_patterns=["tokenizer/vq8192/*"],
+     )
+     content_style_tokenizer_ckpt_path = os.path.join(local_dir, "tokenizer/vq8192")
+
+     # ===== Autoregressive Transformer =====
+     local_dir = snapshot_download(
+         repo_id="amphion/Vevo",
+         repo_type="model",
+         cache_dir="./ckpts/Vevo",
+         allow_patterns=["contentstyle_modeling/PhoneToVq8192/*"],
+     )
+     ar_cfg_path = "./models/vc/vevo/config/PhoneToVq8192.json"
+     ar_ckpt_path = os.path.join(local_dir, "contentstyle_modeling/PhoneToVq8192")
+
+     # ===== Flow Matching Transformer =====
+     local_dir = snapshot_download(
+         repo_id="amphion/Vevo",
+         repo_type="model",
+         cache_dir="./ckpts/Vevo",
+         allow_patterns=["acoustic_modeling/Vq8192ToMels/*"],
+     )
+     fmt_cfg_path = "./models/vc/vevo/config/Vq8192ToMels.json"
+     fmt_ckpt_path = os.path.join(local_dir, "acoustic_modeling/Vq8192ToMels")
+
+     # ===== Vocoder =====
+     local_dir = snapshot_download(
+         repo_id="amphion/Vevo",
+         repo_type="model",
+         cache_dir="./ckpts/Vevo",
+         allow_patterns=["acoustic_modeling/Vocoder/*"],
+     )
+     vocoder_cfg_path = "./models/vc/vevo/config/Vocoder.json"
+     vocoder_ckpt_path = os.path.join(local_dir, "acoustic_modeling/Vocoder")
+
+     # ===== Inference =====
+     inference_pipeline = VevoInferencePipeline(
+         content_style_tokenizer_ckpt_path=content_style_tokenizer_ckpt_path,
+         ar_cfg_path=ar_cfg_path,
+         ar_ckpt_path=ar_ckpt_path,
+         fmt_cfg_path=fmt_cfg_path,
+         fmt_ckpt_path=fmt_ckpt_path,
+         vocoder_cfg_path=vocoder_cfg_path,
+         vocoder_ckpt_path=vocoder_ckpt_path,
+         device=device,
+     )
+
+     src_text = "I don't really care what you call me. I've been a silent spectator, watching species evolve, empires rise and fall. But always remember, I am mighty and enduring. Respect me and I'll nurture you; ignore me and you shall face the consequences."
+
+     ref_wav_path = "./models/vc/vevo/wav/arabic_male.wav"
+     ref_text = "Flip stood undecided, his ears strained to catch the slightest sound."
+
+     # 1. Zero-Shot TTS (the style reference and timbre reference are the same)
+     vevo_tts(
+         src_text,
+         ref_wav_path,
+         output_path="./models/vc/vevo/wav/output_vevotts1.wav",
+         ref_text=ref_text,
+         src_language="en",
+         ref_language="en",
+     )
+
+     # 2. Style and Timbre Controllable Zero-Shot TTS (the style reference and timbre reference are different)
+     vevo_tts(
+         src_text,
+         ref_wav_path,
+         timbre_ref_wav_path="./models/vc/vevo/wav/mandarin_female.wav",
+         output_path="./models/vc/vevo/wav/output_vevotts2.wav",
+         ref_text=ref_text,
+         src_language="en",
+         ref_language="en",
+     )
+ ```
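+
+ The same `inference_ar_and_fm` method also appears to accept speech input: the TTS calls above pass `src_wav_path=None`, which suggests a speech-to-speech mode when a source wav is supplied. The sketch below is an untested assumption rather than the documented recipe: the Vevo-Voice path relies on the vq32 content tokenizer and the Vq32ToVq8192 AR checkpoint (see the model table), so the pipeline would need to be constructed with those checkpoints instead of PhoneToVq8192, and the source-wav path here is illustrative, not a shipped file. Refer to the GitHub recipe for the exact arguments.
+
+ ```python
+ # Hypothetical Vevo-Voice sketch (assumes a pipeline built with the vq32
+ # content tokenizer and the Vq32ToVq8192 AR checkpoint; "source.wav" is
+ # a placeholder for your own recording).
+ gen_audio = inference_pipeline.inference_ar_and_fm(
+     src_wav_path="./models/vc/vevo/wav/source.wav",  # hypothetical source clip
+     src_text=None,  # no text: convert the source speech directly
+     style_ref_wav_path=ref_wav_path,
+     timbre_ref_wav_path="./models/vc/vevo/wav/mandarin_female.wav",
+ )
+ save_audio(gen_audio, output_path="./models/vc/vevo/wav/output_vevovoice.wav")
+ ```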
+
+ ## Citation
+
+ If you use Vevo in your research, please cite the following papers:
+
+ ```bibtex
+ @inproceedings{vevo,
+     author = {Xueyao Zhang and Xiaohui Zhang and Kainan Peng and Zhenyu Tang and Vimal Manohar and Yingru Liu and Jeff Hwang and Dangna Li and Yuhao Wang and Julian Chan and Yuan Huang and Zhizheng Wu and Mingbo Ma},
+     title = {Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement},
+     booktitle = {{ICLR}},
+     publisher = {OpenReview.net},
+     year = {2025}
+ }
+
+ @article{amphion_v0.2,
+     title = {Overview of the Amphion Toolkit (v0.2)},
+     author = {Jiaqi Li and Xueyao Zhang and Yuancheng Wang and Haorui He and Chaoren Wang and Li Wang and Huan Liao and Junyi Ao and Zeyu Xie and Yiqiao Huang and Junan Zhang and Zhizheng Wu},
+     year = {2025},
+     journal = {arXiv preprint arXiv:2501.15442},
+ }
+
+ @inproceedings{amphion,
+     author = {Xueyao Zhang and Liumeng Xue and Yicheng Gu and Yuancheng Wang and Jiaqi Li and Haorui He and Chaoren Wang and Ting Song and Xi Chen and Zihao Fang and Haopeng Chen and Junan Zhang and Tze Ying Tang and Lexiao Zou and Mingxuan Wang and Jun Han and Kai Chen and Haizhou Li and Zhizheng Wu},
+     title = {Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
+     booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2024},
+     year = {2024}
+ }
+ ```
acoustic_modeling/Vocoder/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7670d180569fdae986fbf94ede07d6fc4ce8bfcf406cd1aadbe33e08581b5f6a
+ size 1020206416
acoustic_modeling/Vocoder/model_1.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:164c3db2a9f41a994ac1c7a7a57aa15aadb97dd3f94a4af9bef95d2f7edfad34
+ size 69768280
acoustic_modeling/Vocoder/model_2.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03785c5a3ed7181b22fae11a266d1438cb59333726dfbe8429f00d5543dd8c95
+ size 180693296
acoustic_modeling/Vq8192ToMels/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:750f013ac1485855bfbe992ffec8ed5f625e6070b8bc52ce71a6f9ae0229c5c4
+ size 1350803704
config.json ADDED
@@ -0,0 +1,5 @@
+ {
+     "download_tracking": {
+         "query_files": ["config.json", "*.safetensors"]
+     }
+ }
contentstyle_modeling/PhoneToVq8192/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:033248ccc7879456474134b6c77ea0d82a0e78ba7a09ba1ae7e1d99943d4eeaf
+ size 2973225456
contentstyle_modeling/Vq32ToVq8192/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83e4984695487d6feeba9664c95375f9487e3326059b7239db1fe220e1d49b1d
+ size 1925991368
tokenizer/vq32/hubert_large_l18_c32.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d13c6f6e34a1ef43cd7d67cef9814df0ca99e9c5b4610c0a0115bb2b1d76045
+ size 235174528
tokenizer/vq32/hubert_large_l18_c32.yaml ADDED
@@ -0,0 +1,30 @@
+ bias: true
+ code_dim: 1024
+ codebook_num: 1
+ codebook_size: 32
+ dec_block_dilations:
+ - 1
+ - 1
+ dec_block_kernel_size: 3
+ dec_kernel_size: 3
+ dec_ratios:
+ - 1
+ - 1
+ dec_strides:
+ - 1
+ - 1
+ decode_channels: 1024
+ enc_block_dilations:
+ - 1
+ - 1
+ enc_block_kernel_size: 3
+ enc_kernel_size: 3
+ enc_ratios:
+ - 1
+ - 1
+ enc_strides:
+ - 1
+ - 1
+ encode_channels: 1024
+ input_channels: 1024
+ output_channels: 1024
tokenizer/vq32/hubert_large_l18_mean_std.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84115dd2ad3a44526b8e75e173956e41805ba540704acd735f302889a83067b4
+ size 8692
tokenizer/vq8192/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:660bd48b023e637a786a9c78f404cb979ef9a5d1c93ce24837e0bec942352c4d
+ size 177183712