---
license: cc-by-nc-4.0
language:
- de
base_model:
- HKUSTAudio/Llasa-1B
widget:
- src: examples/no_speaker_example.wav

---
<img src="https://huggingface.co/SebastianBodza/Kartoffel-1B-v0.3/resolve/main/cover.jpg" alt="Kartoffel German Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Kartoffel-1B-v0.3
<a target="_blank" href="https://huggingface.co/spaces/SebastianBodza/Kartoffel-1B-v0.1-llasa-1b-tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>

<Gallery />

> This model was trained on top of [HKUSTAudio/Llasa-1B](https://huggingface.co/HKUSTAudio/Llasa-1B).

## Model Overview

This text-to-speech (TTS) model was trained on a custom dataset of **7,000 hours** of high-quality audio. The audio consists of permissively licensed podcasts, lectures, and other OER data.

## Training Details

- **Base Model:** HKUSTAudio/Llasa-1B
- **Dataset:** A custom dataset comprising **7,000 hours** of audio.
- **Compute Resources:** Training was performed on **2x RTX 3090 GPUs**.
- **Raw Training Time:** Approximately **4 days and 13 hours**, not including data preprocessing with xcodec2.
- The hyperparameters were likely not fully optimal; training for multiple epochs could yield better results.

## 👨‍💻 Installation

First install the required pip packages (the usage examples below also import `transformers` and `soundfile`):

```bash
pip install xcodec2 torch torchaudio transformers soundfile
```
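
As an optional sanity check (a minimal sketch, not part of the original instructions), you can verify that the packages import and that a GPU is visible before loading the models:

```python
# Optional check: core packages import and CUDA is available.
import torch
import xcodec2  # import check only

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```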

## 🛠️ Usage

### 🎲 Random voice

A basic example using Hugging Face Transformers:

```python
import torch
import soundfile as sf
from transformers import AutoTokenizer, AutoModelForCausalLM

llasa_1b_german = 'SebastianBodza/Kartoffel-1B-v0.3'

# Load the model
tokenizer = AutoTokenizer.from_pretrained(llasa_1b_german)
model = AutoModelForCausalLM.from_pretrained(llasa_1b_german)
model.to('cuda')

# Load the XCodec2 model
from xcodec2.modeling_xcodec2 import XCodec2Model
model_path = "HKUST-Audio/xcodec2"
Codec_model = XCodec2Model.from_pretrained(model_path)
Codec_model.cuda()

input_text = "\"Weißt du was, Hoppi\", sagte der weise Uhu, \"manchmal ist es gar nicht so wichtig, das Ende des Regenbogens zu finden. Das Schönste ist doch, dass wir alle zusammen dieses Abenteuer erleben!\""


def extract_speech_ids(speech_tokens_str):
    # Convert tokens like <|s_1234|> back to their integer codec IDs.
    speech_ids = []
    for token_str in speech_tokens_str:
        if token_str.startswith('<|s_') and token_str.endswith('|>'):
            num_str = token_str[4:-2]
            num = int(num_str)
            speech_ids.append(num)
        else:
            print(f"Unexpected token: {token_str}")
    return speech_ids


with torch.no_grad():
    formatted_text = f"<|TEXT_UNDERSTANDING_START|>{input_text}<|TEXT_UNDERSTANDING_END|>"

    chat = [
        {"role": "user", "content": "Convert the text to speech:" + formatted_text},
        {"role": "assistant", "content": "<|SPEECH_GENERATION_START|>"}
    ]

    input_ids = tokenizer.apply_chat_template(
        chat,
        tokenize=True,
        return_tensors='pt',
        continue_final_message=True
    )
    input_ids = input_ids.to('cuda')
    speech_end_id = tokenizer.convert_tokens_to_ids('<|SPEECH_GENERATION_END|>')

    outputs = model.generate(
        input_ids,
        max_length=2048,
        eos_token_id=speech_end_id,
        do_sample=True,
        top_p=1,
        temperature=0.8,
    )

    # Drop the prompt and the trailing <|SPEECH_GENERATION_END|> token.
    generated_ids = outputs[0][input_ids.shape[1]:-1]
    speech_tokens = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    speech_tokens = extract_speech_ids(speech_tokens)
    speech_tokens = torch.tensor(speech_tokens).cuda().unsqueeze(0).unsqueeze(0)
    gen_wav = Codec_model.decode_code(speech_tokens)

sf.write("generation.wav", gen_wav[0, 0, :].cpu().numpy(), 16000)
```
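
XCodec2 decodes audio at 16 kHz. If your downstream tooling expects a different sample rate, the output can be resampled afterwards; a minimal sketch with `torchaudio` (the 44.1 kHz target is only an illustrative assumption):

```python
import torch
import torchaudio
import soundfile as sf

# Load the 16 kHz generation and resample to 44.1 kHz (illustrative target).
wav, sr = sf.read("generation.wav")
wav = torch.from_numpy(wav).float().unsqueeze(0)  # shape: (1, samples)
wav_44k = torchaudio.transforms.Resample(orig_freq=sr, new_freq=44100)(wav)
sf.write("generation_44k.wav", wav_44k[0].numpy(), 44100)
```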

### 🎯 Using a specific speaker

An example with a speaker reference:

```python
import torch
import torchaudio
import soundfile as sf
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Input your reference audio and, optionally, its transcript
sample_audio_path = "male.wav"
sample_audio_text = None  # Set to None to use Whisper for transcription
# Input the target text here
target_text = "Und apropos Spannungen und Unfälle, in Stuttgart gibt es auch einige Schlagzeilen. Die Polizei sucht Zeugen, nachdem in der Stadt mehrere Autoscheiben eingeschlagen wurden. Und gestern kam es im Stuttgarter Osten zu einer Verfolgungsjagd mit einer jungen BMW-Fahrerin, die vor einer Polizeistreife geflüchtet ist."
output_filename = "no_speaker_example.wav"


#### Do not edit below ####
llasa_model_name = "SebastianBodza/Kartoffel-1B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(llasa_model_name)
model = AutoModelForCausalLM.from_pretrained(llasa_model_name)
model.to("cuda")

from xcodec2.modeling_xcodec2 import XCodec2Model
codec_model_path = "HKUST-Audio/xcodec2"
Codec_model = XCodec2Model.from_pretrained(codec_model_path)
Codec_model.cuda()

whisper_turbo_pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
    torch_dtype=torch.float16,
    device="cuda",
)


def ids_to_speech_tokens(speech_ids):
    # Convert integer codec IDs to tokens like <|s_1234|>.
    speech_tokens_str = []
    for speech_id in speech_ids:
        speech_tokens_str.append(f"<|s_{speech_id}|>")
    return speech_tokens_str


def extract_speech_ids(speech_tokens_str):
    # Convert tokens like <|s_1234|> back to their integer codec IDs.
    speech_ids = []
    for token_str in speech_tokens_str:
        if token_str.startswith("<|s_") and token_str.endswith("|>"):
            speech_ids.append(int(token_str[4:-2]))
        else:
            print(f"Unexpected token: {token_str}")
    return speech_ids


waveform, sample_rate = torchaudio.load(sample_audio_path)

max_secs = 15
if len(waveform[0]) / sample_rate > max_secs:
    print("Warning: Trimming audio to first 15 secs.")
    waveform = waveform[:, : sample_rate * max_secs]
    waveform = torch.nn.functional.pad(waveform, (0, int(sample_rate * 0.5)), "constant", 0)

# Downmix to mono if the reference audio is stereo
if waveform.size(0) > 1:
    waveform = torch.mean(waveform, dim=0, keepdim=True)

prompt_wav = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)(waveform)

if sample_audio_text is None:
    print("Transcribing audio...")
    # Pass the sampling rate so the pipeline can resample if needed.
    transcription = whisper_turbo_pipe(
        {"raw": waveform[0].numpy(), "sampling_rate": sample_rate}
    )["text"].strip()
else:
    transcription = sample_audio_text

print("Transcription:", transcription)

if len(target_text) == 0:
    raise ValueError("Target text must be provided!")
elif len(target_text) > 500:
    print("Text is too long; trimming to first 500 characters.")
    target_text = target_text[:500]

input_text = transcription + " " + target_text

with torch.no_grad():
    # Encode the reference audio into speech tokens that prefix the generation.
    vq_code_prompt = Codec_model.encode_code(input_waveform=prompt_wav)
    vq_code_prompt = vq_code_prompt[0, 0, :]
    speech_ids_prefix = ids_to_speech_tokens(vq_code_prompt)

    formatted_text = f"<|TEXT_UNDERSTANDING_START|>{input_text}<|TEXT_UNDERSTANDING_END|>"

    chat = [
        {"role": "user", "content": "Convert the text to speech:" + formatted_text},
        {"role": "assistant", "content": "<|SPEECH_GENERATION_START|>" + "".join(speech_ids_prefix)}
    ]

    input_ids = tokenizer.apply_chat_template(chat, tokenize=True, return_tensors="pt", continue_final_message=True)
    input_ids = input_ids.to("cuda")
    speech_end_id = tokenizer.convert_tokens_to_ids("<|SPEECH_GENERATION_END|>")

    outputs = model.generate(
        input_ids,
        max_length=2048,
        eos_token_id=speech_end_id,
        do_sample=True,
        top_p=1,
        temperature=0.8,
        min_new_tokens=4,  # Fix so the model does not stop immediately
    )

    # Keep the prompt's speech tokens plus the new ones; drop the end token.
    generated_ids = outputs[0][input_ids.shape[1] - len(speech_ids_prefix) : -1]
    speech_tokens = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    speech_tokens = extract_speech_ids(speech_tokens)
    speech_tokens = torch.tensor(speech_tokens).cuda().unsqueeze(0).unsqueeze(0)

    gen_wav = Codec_model.decode_code(speech_tokens)
    # Cut off the reconstructed reference audio so only new speech remains.
    gen_wav = gen_wav[:, :, prompt_wav.shape[1]:]

sf.write(output_filename, gen_wav[0, 0, :].cpu().numpy(), 16000)
```
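
A note on the design: the reference audio is encoded with XCodec2 and its speech tokens are appended directly after `<|SPEECH_GENERATION_START|>`, so the model continues generating in the same voice. Because the generated token stream therefore begins with a reconstruction of the reference audio, the final step slices off the first `prompt_wav.shape[1]` samples so that only the newly generated speech is written to disk.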

## Tips

- When using a reference speaker, audio glitches can occur. Try increasing the temperature to get better results, as in the sketch below.

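A minimal sketch of that adjustment, reusing the `generate` call from the speaker example above (0.95 is just an illustrative value):

```python
# Higher temperature increases sampling variance, which can help avoid
# glitches with some reference speakers.
outputs = model.generate(
    input_ids,
    max_length=2048,
    eos_token_id=speech_end_id,
    do_sample=True,
    top_p=1,
    temperature=0.95,  # raised from the default 0.8 used above
    min_new_tokens=4,
)
```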
## License

This project is licensed under the [CC-BY-NC-4.0 license](https://creativecommons.org/licenses/by-nc/4.0/).

## Acknowledgments

- **Hugging Face:** Thanks to a GPU grant, I could also train with the same hyperparameters on top of the multilingual base model. Based on training and validation loss, the non-multilingual version produced better results.
- [**HKUSTAudio**](https://huggingface.co/HKUSTAudio/Llasa-1B): for open-sourcing the model along with great inference, training, and preprocessing (xcodec2) scripts!