liumaolin committed
Commit ffb2e23 · 1 Parent(s): 22452df

Using local model for FunASR.

Files changed (38):
  1. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/.mdl +0 -0
  2. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/.msc +0 -0
  3. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/.mv +1 -0
  4. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/README.md +274 -0
  5. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/config.yaml +46 -0
  6. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/configuration.json +15 -0
  7. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/example/punc_example.txt +3 -0
  8. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/fig/struct.png +3 -0
  9. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/jieba.c.dict +3 -0
  10. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/jieba_usr_dict +3 -0
  11. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/model.pt +3 -0
  12. moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/tokens.json +0 -0
  13. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/.mdl +0 -0
  14. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/.msc +0 -0
  15. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/.mv +1 -0
  16. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/README.md +296 -0
  17. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/am.mvn +8 -0
  18. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/config.yaml +56 -0
  19. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/configuration.json +13 -0
  20. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav +3 -0
  21. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/fig/struct.png +3 -0
  22. moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/model.pt +3 -0
  23. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/.mdl +0 -0
  24. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/.msc +0 -0
  25. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/.mv +1 -0
  26. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/README.md +357 -0
  27. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/am.mvn +8 -0
  28. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/asr_example_hotword.wav +3 -0
  29. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/config.yaml +160 -0
  30. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/configuration.json +14 -0
  31. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav +3 -0
  32. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/hotword.txt +1 -0
  33. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/fig/res.png +3 -0
  34. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/fig/seaco.png +3 -0
  35. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model.pt +3 -0
  36. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/seg_dict +0 -0
  37. moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/tokens.json +0 -0
  38. transcribe/helpers/funasr.py +9 -1
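The diff for transcribe/helpers/funasr.py is not shown below, so the exact wiring is unknown; what follows is a minimal sketch of the pattern the commit message describes — loading FunASR models from the local directories vendored above — assuming only that FunASR's `AutoModel` accepts local model directories in place of hub IDs:

```python
from funasr import AutoModel

# Paths mirror the directories added in this commit; how
# transcribe/helpers/funasr.py actually wires them is an assumption.
LOCAL_MODELS = "moyoyo_asr_models"

model = AutoModel(
    model=f"{LOCAL_MODELS}/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    vad_model=f"{LOCAL_MODELS}/speech_fsmn_vad_zh-cn-16k-common-pytorch",
    punc_model=f"{LOCAL_MODELS}/punc_ct-transformer_cn-en-common-vocab471067-large",
)
res = model.generate(input="audio.wav", batch_size_s=300)
print(res)
```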
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/.mdl ADDED
Binary file (77 Bytes).
 
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/.msc ADDED
Binary file (637 Bytes).
 
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/.mv ADDED
@@ -0,0 +1 @@
+ Revision:master,CreatedAt:1707184148
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/README.md ADDED
@@ -0,0 +1,274 @@
+ ---
+ tasks:
+ - punctuation
+ domain:
+ - audio
+ model-type:
+ - Classification
+ frameworks:
+ - pytorch
+ metrics:
+ - f1_score
+ license: Apache License 2.0
+ language:
+ - cn
+ tags:
+ - FunASR
+ - CT-Transformer
+ - Alibaba
+ - ICASSP 2020
+ datasets:
+   train:
+   - 100M-samples online data
+   test:
+   - wikipedia data test
+   - 10000 industrial Mandarin sentences test
+ widgets:
+ - model_revision: v2.0.4
+   task: punctuation
+   inputs:
+   - type: text
+     name: input
+     title: 文本
+   examples:
+   - name: 1
+     title: 示例1
+     inputs:
+     - name: input
+       data: 那今天的会就到这里吧 happy new year 明年见
+   inferencespec:
+     cpu: 1 # number of CPUs
+     memory: 4096
+ ---
+
+ # Controllable Time-delay Transformer
+
+ [//]: # (The Controllable Time-delay Transformer is an end-to-end punctuation classification model.)
+
+ [//]: # (A vanilla Transformer depends on information far in the future, so its output keeps changing for a long time. The Controllable Time-delay Transformer effectively bounds punctuation latency with no loss in accuracy.)
+
+ # Highlights
+ - General-purpose Chinese punctuation model: predicts punctuation for text produced by speech recognition models; supports mixed Chinese and English input.
+ - Works in the [Paraformer-large long-audio model](https://www.modelscope.cn/models/iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) scenario
+ - Built on the [FunASR framework](https://github.com/alibaba-damo-academy/FunASR); ASR, VAD, and punctuation can be freely combined
+ - Punctuation prediction for plain-text input
+
+
+ ## <strong>[About the FunASR open-source project](https://github.com/alibaba-damo-academy/FunASR)</strong>
+ <strong>[FunASR](https://github.com/alibaba-damo-academy/FunASR)</strong> aims to build a bridge between academic research on speech recognition and its industrial application. By releasing the training and fine-tuning of industrial-grade speech recognition models, it lets researchers and developers study and productionize speech recognition models more conveniently and promotes the growth of the speech recognition ecosystem. Make speech recognition fun!
+
+ [**GitHub repository**](https://github.com/alibaba-damo-academy/FunASR)
+ | [**What's new**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
+ | [**Installation**](https://github.com/alibaba-damo-academy/FunASR#installation)
+ | [**Service deployment**](https://www.funasr.com)
+ | [**Model zoo**](https://github.com/alibaba-damo-academy/FunASR/tree/main/model_zoo)
+ | [**Contact us**](https://github.com/alibaba-damo-academy/FunASR#contact)
+
+
+ ## Model description
+
+ The Controllable Time-delay Transformer is the punctuation module of the efficient post-processing framework proposed by the DAMO Academy speech team. This project is a general-purpose Chinese punctuation model: it can predict punctuation for plain-text input, and it can also post-process speech recognition output so that the ASR module produces readable text.
+
+ <p align="center">
+ <img src="fig/struct.png" alt="Controllable Time-delay Transformer architecture" width="500" />
+
+ As shown above, the Controllable Time-delay Transformer consists of three parts: Embedding, Encoder, and Predictor. The Embedding is word embeddings plus positional embeddings. The Encoder can use different network structures, such as self-attention, Conformer, or SAN-M. The Predictor predicts the punctuation type after each token.
+
+ We adopted the Transformer for its strong performance. While a Transformer delivers good accuracy, properties such as its serialized input introduce considerable system latency. A vanilla Transformer can attend to the entire future context, so a punctuation decision may depend on information far in the future; users then see punctuation that keeps being revised and does not settle for a long time. To address this, we proposed the Controllable Time-Delay Transformer (CT-Transformer), which effectively bounds punctuation latency with no loss in model accuracy.
+
+ For more details, see:
+ - Paper: [CONTROLLABLE TIME-DELAY TRANSFORMER FOR REAL-TIME PUNCTUATION PREDICTION AND DISFLUENCY DETECTION](https://arxiv.org/pdf/2003.01309.pdf)
+
+
+ #### Inference with ModelScope
+
+ The three supported input formats and the corresponding API calls are shown below:
+ - Path to a text .scp file, e.g. example/punc_example.txt, with the format key + "\t" + value:
+ ```sh
+ cat example/punc_example.txt
+ 1 跨境河流是养育沿岸人民的生命之源
+ 2 从存储上来说仅仅是全景图片它就会是图片的四倍的容量
+ 3 那今天的会就到这里吧happy new year明年见
+ ```
+ ```python
+ from modelscope.pipelines import pipeline
+ from modelscope.utils.constant import Tasks
+
+ inference_pipeline = pipeline(
+     task=Tasks.punctuation,
+     model='iic/punc_ct-transformer_cn-en-common-vocab471067-large',
+     model_revision="v2.0.4")
+
+ rec_result = inference_pipeline('example/punc_example.txt')
+ print(rec_result)
+ ```
+ - Text passed directly, e.g. a string (or bytes) the user has read from a file:
+ ```python
+ rec_result = inference_pipeline('我们都是木头人不会讲话不会动')
+ ```
+ - URL of a text file, e.g. https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/punc_example.txt:
+ ```python
+ rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/punc_example.txt')
+ ```
+
+
+ ## Inference with FunASR
+
+ A quick-start tutorial follows; test audio: ([Chinese](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_en.wav))
+
+ ### Command-line usage
+ Run in a terminal:
+
+ ```shell
+ funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=vad_example.wav
+ ```
+
+ Note: both single audio files and file lists are supported; a list is a Kaldi-style wav.scp: `wav_id wav_path`
+
+ ### Python examples
+ #### Speech recognition (non-streaming)
+ ```python
+ from funasr import AutoModel
+ # paraformer-zh is a multi-functional asr model
+ # use vad, punc, spk or not as you need
+ model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
+                   vad_model="fsmn-vad", vad_model_revision="v2.0.4",
+                   punc_model="ct-punc-c", punc_model_revision="v2.0.4",
+                   # spk_model="cam++", spk_model_revision="v2.0.2",
+                   )
+ res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
+                      batch_size_s=300,
+                      hotword='魔搭')
+ print(res)
+ ```
+ Note: `model_hub` selects the model repository: `ms` downloads from ModelScope, `hf` from Hugging Face.
+
+ #### Speech recognition (streaming)
+
+ ```python
+ import os
+
+ import soundfile
+ from funasr import AutoModel
+
+ chunk_size = [0, 10, 5]  # [0, 10, 5] = 600ms, [0, 8, 4] = 480ms
+ encoder_chunk_look_back = 4  # number of chunks to look back for encoder self-attention
+ decoder_chunk_look_back = 1  # number of encoder chunks to look back for decoder cross-attention
+
+ model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")
+
+ wav_file = os.path.join(model.model_path, "example/asr_example.wav")
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = chunk_size[1] * 960  # 600ms
+
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)  # number of chunks (ceiling division)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
+     print(res)
+ ```
+
+ Note: `chunk_size` configures the streaming latency. `[0,10,5]` means the display granularity of real-time output is `10*60=600ms` and the lookahead is `5*60=300ms`. Each inference call consumes `600ms` of input (`16000*0.6=9600` samples) and emits the corresponding text; for the last audio segment, set `is_final=True` to force the final words out.
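To make that arithmetic concrete, here is a short sketch; it assumes nothing beyond the 16 kHz sample rate and the 60 ms chunk unit stated in the note above:

```python
SAMPLE_RATE = 16000  # Hz; the streaming model expects 16 kHz input
FRAME_MS = 60        # one chunk_size unit corresponds to 60 ms

chunk_size = [0, 10, 5]
display_ms = chunk_size[1] * FRAME_MS    # 10 * 60 = 600 ms per inference call
lookahead_ms = chunk_size[2] * FRAME_MS  # 5 * 60 = 300 ms of future context

samples_per_chunk = SAMPLE_RATE * display_ms // 1000  # 16000 * 0.6 = 9600 samples
# consistent with chunk_stride = chunk_size[1] * 960 above (960 samples = 60 ms)
assert samples_per_chunk == chunk_size[1] * 960
```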
+
+ #### Voice activity detection (non-streaming)
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ res = model.generate(input=wav_file)
+ print(res)
+ ```
+
+ #### Voice activity detection (streaming)
+ ```python
+ import soundfile
+ from funasr import AutoModel
+
+ chunk_size = 200  # ms
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/vad_example.wav"
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = int(chunk_size * sample_rate / 1000)
+
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)  # number of chunks (ceiling division)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
+     if len(res[0]["value"]):
+         print(res)
+ ```
+
+ #### Punctuation restoration
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="ct-punc", model_revision="v2.0.4")
+
+ res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
+ print(res)
+ ```
+
+ #### Timestamp prediction
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="fa-zh", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ text_file = f"{model.model_path}/example/text.txt"
+ res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
+ print(res)
+ ```
+
+ More detailed usage ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
+
+
+ ## Fine-tuning
+
+ Detailed usage ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
+
+
+ ## Benchmark
+ The general-purpose Chinese punctuation model performs well on self-collected data from general-domain business scenarios. The training data contains roughly 100M samples; each sample may hold one or more sentences.
+
+ ### Self-collected data (20,000+ samples)
+
+ | precision | recall | f1_score |
+ |:---------:|:------:|:--------:|
+ |   56.0    |  62.5  |   58.8   |
+
+ ## Usage and scope
+
+ Runtime environment
+ - Currently runs only on Linux-x86_64; Mac and Windows are not supported.
+
+ How to use
+ - Direct inference: run on input text directly and output the punctuated target text.
+
+ Scope and target scenarios
+ - Suitable for punctuation prediction on text data; there is no limit on text length.
+
+ ## Related papers and citation
+
+ ```BibTeX
+ @inproceedings{chen2020controllable,
+   title={Controllable Time-Delay Transformer for Real-Time Punctuation Prediction and Disfluency Detection},
+   author={Chen, Qian and Chen, Mengzhe and Li, Bo and Wang, Wen},
+   booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
+   pages={8069--8073},
+   year={2020},
+   organization={IEEE}
+ }
+ ```
+
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/config.yaml ADDED
@@ -0,0 +1,46 @@
+ model: CTTransformer
+ model_conf:
+     ignore_id: 0
+     embed_unit: 516
+     att_unit: 516
+     dropout_rate: 0.1
+     punc_list:
+     - <unk>
+     - _
+     - ,
+     - 。
+     - ?
+     - 、
+     punc_weight:
+     - 1.0
+     - 1.0
+     - 1.0
+     - 1.0
+     - 1.0
+     - 1.0
+     sentence_end_id: 3
+
+ encoder: SANMEncoder
+ encoder_conf:
+     input_size: 516
+     output_size: 516
+     attention_heads: 12
+     linear_units: 2048
+     num_blocks: 12
+     dropout_rate: 0.1
+     positional_dropout_rate: 0.1
+     attention_dropout_rate: 0.0
+     input_layer: pe
+     pos_enc_class: SinusoidalPositionEncoder
+     normalize_before: true
+     kernel_size: 11
+     sanm_shfit: 0
+     selfattention_layer_type: sanm
+     padding_idx: 0
+
+ tokenizer: CharTokenizer
+ tokenizer_conf:
+     unk_symbol: <unk>
+
+
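The `punc_list` above defines the model's six output classes; class 1 (`_`) means "no punctuation after this token", and `sentence_end_id: 3` marks `。` as the sentence terminator. A small illustrative sketch of how per-token class predictions map back to punctuated text — the tokens and predicted class ids below are made up for illustration:

```python
# punc_list taken from config.yaml above; tokens/preds are hypothetical.
punc_list = ["<unk>", "_", ",", "。", "?", "、"]

tokens = ["你", "好", "吗"]  # hypothetical tokenized ASR output
preds = [1, 1, 4]            # hypothetical per-token punctuation classes

text = "".join(tok + (punc_list[p] if p > 1 else "")
               for tok, p in zip(tokens, preds))
print(text)  # -> 你好吗?
```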
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/configuration.json ADDED
@@ -0,0 +1,15 @@
+ {
+ "framework": "pytorch",
+ "task" : "punctuation",
+ "model": {"type" : "funasr"},
+ "pipeline": {"type":"funasr-pipeline"},
+ "model_name_in_hub": {
+     "ms":"iic/punc_ct-transformer_cn-en-common-vocab471067-large",
+     "hf":""},
+ "file_path_metas": {
+     "init_param":"model.pt",
+     "config":"config.yaml",
+     "tokenizer_conf": {"token_list": "tokens.json", "jieba_usr_dict": "jieba_usr_dict"},
+     "jieba_usr_dict": "jieba_usr_dict"
+     }
+ }
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/example/punc_example.txt ADDED
@@ -0,0 +1,3 @@
+ 1 跨境河流是养育沿岸人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切愿意进一步完善双方联合工作机制凡是中方能做的我们都会去做而且会做得更好我请印度朋友们放心中国在上游的任何开发利用都会经过科学规划和论证兼顾上下游的利益
+ 2 从存储上来说仅仅是全景图片它就会是图片的四倍的容量然后全景的视频会是普通视频八倍的这个存储的容要求而三d的模型会是图片的十倍这都对我们今天运行在的云计算的平台存储的平台提出了更高的要求
+ 3 那今天的会就到这里吧 happy new year 明年见
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/fig/struct.png ADDED

Git LFS Details

  • SHA256: 90e5bdca14b00d4bbcafcbb1e9e2ca0e3905afb2b3b3129e5fd85d49b059812c
  • Pointer size: 131 Bytes
  • Size of remote file: 155 kB
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/jieba.c.dict ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb0a68f8bfa65e6956d59e8f9652b49f5c91952273e24737bc9a8c5e23055221
+ size 41536866
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/jieba_usr_dict ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:059aa2af152734d5fd011a4d69cbab171f62347074db0db5157c17b5649010f5
+ size 11280857
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7176cae922a872e130e6b88aef9a1153581711baf79c9124c7c95be383cd6f81
+ size 1125507622
moyoyo_asr_models/punc_ct-transformer_cn-en-common-vocab471067-large/tokens.json ADDED
The diff for this file is too large to render.
 
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/.mdl ADDED
Binary file (67 Bytes).
 
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/.msc ADDED
Binary file (497 Bytes).
 
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/.mv ADDED
@@ -0,0 +1 @@
+ Revision:master,CreatedAt:1707184291
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/README.md ADDED
@@ -0,0 +1,296 @@
+ ---
+ tasks:
+ - voice-activity-detection
+ domain:
+ - audio
+ model-type:
+ - VAD model
+ frameworks:
+ - pytorch
+ backbone:
+ - fsmn
+ metrics:
+ - f1_score
+ license: Apache License 2.0
+ language:
+ - cn
+ tags:
+ - FunASR
+ - FSMN
+ - Alibaba
+ - Online
+ datasets:
+   train:
+   - 20,000 hour industrial Mandarin task
+   test:
+   - 20,000 hour industrial Mandarin task
+ widgets:
+ - task: voice-activity-detection
+   model_revision: v2.0.4
+   inputs:
+   - type: audio
+     name: input
+     title: 音频
+   examples:
+   - name: 1
+     title: 示例1
+     inputs:
+     - name: input
+       data: git://example/vad_example.wav
+   inferencespec:
+     cpu: 1 # number of CPUs
+     memory: 4096
+ ---
+
+ # FSMN-Monophone VAD
+
+ [//]: # (FSMN-Monophone VAD model)
+
+ ## Highlights
+ - 16 kHz general-purpose Chinese VAD model: detects the start and end times of valid speech within long audio.
+ - Works in the [Paraformer-large long-audio model](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) scenario
+ - Built on the [FunASR framework](https://github.com/alibaba-damo-academy/FunASR); ASR, VAD, and [Chinese punctuation](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/summary) can be freely combined
+ - Detects start/end points of valid speech segments in audio data
+
+ ## <strong>[About the FunASR open-source project](https://github.com/alibaba-damo-academy/FunASR)</strong>
+ <strong>[FunASR](https://github.com/alibaba-damo-academy/FunASR)</strong> aims to build a bridge between academic research on speech recognition and its industrial application. By releasing the training and fine-tuning of industrial-grade speech recognition models, it lets researchers and developers study and productionize speech recognition models more conveniently and promotes the growth of the speech recognition ecosystem. Make speech recognition fun!
+
+ [**GitHub repository**](https://github.com/alibaba-damo-academy/FunASR)
+ | [**What's new**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
+ | [**Installation**](https://github.com/alibaba-damo-academy/FunASR#installation)
+ | [**Service deployment**](https://www.funasr.com)
+ | [**Model zoo**](https://github.com/alibaba-damo-academy/FunASR/tree/main/model_zoo)
+ | [**Contact us**](https://github.com/alibaba-damo-academy/FunASR#contact)
+
+
+ ## Model description
+
+ FSMN-Monophone VAD is an efficient voice activity detection model proposed by the DAMO Academy speech team. It detects the start and end times of valid speech in the input audio and feeds the detected segments to the recognition engine, reducing recognition errors caused by non-speech audio.
+
+ <p align="center">
+ <img src="fig/struct.png" alt="VAD model architecture" width="500" />
+
+ The FSMN-Monophone VAD architecture is shown above. At the network level, the FSMN structure models context during training while keeping training and inference fast and latency controllable; the FSMN network structure and the number of right-context (lookahead) frames were adapted to the model-size and low-latency requirements. At the modeling-unit level, speech is acoustically rich, so representing it with a single class limits learning capacity; we therefore upgraded the single speech class to monophones. Finer-grained modeling units avoid parameter averaging, strengthen abstraction capacity, and improve discrimination.
+
+ ## Inference with ModelScope
+
+ - Supported audio input formats:
+   - Path to a wav file, e.g. data/test/audios/vad_example.wav
+   - URL of a wav file, e.g. https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav
+   - Raw wav data as bytes, e.g. bytes read from a file or recorded from a microphone.
+   - Already-decoded audio, e.g. audio, rate = soundfile.read("vad_example_zh.wav"), as numpy.ndarray or torch.Tensor.
+   - A wav.scp file in the following format:
+
+ ```sh
+ cat wav.scp
+ vad_example1 data/test/audios/vad_example1.wav
+ vad_example2 data/test/audios/vad_example2.wav
+ ...
+ ```
+
+ - For a wav file URL, the API is called as follows:
+
+ ```python
+ from modelscope.pipelines import pipeline
+ from modelscope.utils.constant import Tasks
+
+ inference_pipeline = pipeline(
+     task=Tasks.voice_activity_detection,
+     model='iic/speech_fsmn_vad_zh-cn-16k-common-pytorch',
+     model_revision="v2.0.4",
+ )
+
+ segments_result = inference_pipeline(input='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav')
+ print(segments_result)
+ ```
+
+ - For pcm input, pass the audio sampling rate via the fs parameter, e.g.:
+
+ ```python
+ segments_result = inference_pipeline(input='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.pcm', fs=16000)
+ ```
+
+ - For a wav.scp input (note: the file name must end with .scp), add the output_dir parameter to write the results to files:
+
+ ```python
+ inference_pipeline(input="wav.scp", output_dir='./output_dir')
+ ```
+ The output directory is structured as follows:
+
+ ```sh
+ tree output_dir/
+ output_dir/
+ └── 1best_recog
+     └── text
+
+ 1 directory, 1 file
+ ```
+ text: file with the detected speech start/end times (in ms)
+
+ - For already-decoded audio input, the API is called as follows:
+
+ ```python
+ import soundfile
+
+ waveform, sample_rate = soundfile.read("vad_example_zh.wav")
+ segments_result = inference_pipeline(input=waveform)
+ print(segments_result)
+ ```
+
+ - Commonly tuned VAD parameters (see the vad.yaml file):
+   - max_end_silence_time: how much trailing silence must accumulate before an endpoint is declared; range 500ms-6000ms, default 800ms (too low a value tends to truncate speech early).
+   - speech_noise_thres: a frame is judged as speech when the speech score minus the noise score exceeds this value; range (-1, 1)
+     - the closer to -1, the more likely noise is misclassified as speech (higher FA)
+     - the closer to +1, the more likely speech is misclassified as noise (higher Pmiss)
+     - in practice this value is balanced against the model's results on a long-audio test set (see the sketch just below)
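As a concrete illustration, both parameters appear under `model_conf` in this model's config.yaml (added later in this commit); a minimal sketch for adjusting them in a local copy before loading the model, assuming PyYAML is installed:

```python
import yaml

# Edit a *local copy* of the model's config.yaml; both keys exist under
# model_conf in the config.yaml shipped with this commit.
cfg_path = "moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/config.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["model_conf"]["max_end_silence_time"] = 500  # ms; lower -> faster endpointing, higher truncation risk
cfg["model_conf"]["speech_noise_thres"] = 0.8    # in (-1, 1); higher -> fewer false alarms, more misses

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, allow_unicode=True, sort_keys=False)
```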
+
+
+
+
+ ## Inference with FunASR
+
+ A quick-start tutorial follows; test audio: ([Chinese](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_en.wav))
+
+ ### Command-line usage
+ Run in a terminal:
+
+ ```shell
+ funasr ++model=paraformer-zh ++vad_model="fsmn-vad" ++punc_model="ct-punc" ++input=vad_example.wav
+ ```
+
+ Note: both single audio files and file lists are supported; a list is a Kaldi-style wav.scp: `wav_id wav_path`
+
+ ### Python examples
+ #### Speech recognition (non-streaming)
+ ```python
+ from funasr import AutoModel
+ # paraformer-zh is a multi-functional asr model
+ # use vad, punc, spk or not as you need
+ model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
+                   vad_model="fsmn-vad", vad_model_revision="v2.0.4",
+                   punc_model="ct-punc-c", punc_model_revision="v2.0.4",
+                   # spk_model="cam++", spk_model_revision="v2.0.2",
+                   )
+ res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
+                      batch_size_s=300,
+                      hotword='魔搭')
+ print(res)
+ ```
+ Note: `model_hub` selects the model repository: `ms` downloads from ModelScope, `hf` from Hugging Face.
+
+ #### Speech recognition (streaming)
+
+ ```python
+ import os
+
+ import soundfile
+ from funasr import AutoModel
+
+ chunk_size = [0, 10, 5]  # [0, 10, 5] = 600ms, [0, 8, 4] = 480ms
+ encoder_chunk_look_back = 4  # number of chunks to look back for encoder self-attention
+ decoder_chunk_look_back = 1  # number of encoder chunks to look back for decoder cross-attention
+
+ model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")
+
+ wav_file = os.path.join(model.model_path, "example/asr_example.wav")
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = chunk_size[1] * 960  # 600ms
+
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)  # number of chunks (ceiling division)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
+     print(res)
+ ```
+
+ Note: `chunk_size` configures the streaming latency. `[0,10,5]` means the display granularity of real-time output is `10*60=600ms` and the lookahead is `5*60=300ms`. Each inference call consumes `600ms` of input (`16000*0.6=9600` samples) and emits the corresponding text; for the last audio segment, set `is_final=True` to force the final words out.
+
+ #### Voice activity detection (non-streaming)
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ res = model.generate(input=wav_file)
+ print(res)
+ ```
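The streaming example below reads segments out of `res[0]["value"]`; assuming the offline result uses the same `[start_ms, end_ms]` layout, a short sketch for cutting the detected segments out of the waveform:

```python
import soundfile

# Assumes res[0]["value"] holds [start_ms, end_ms] pairs, mirroring the
# "value" field consumed in the streaming example below.
waveform, sr = soundfile.read(wav_file)
for start_ms, end_ms in res[0]["value"]:
    seg = waveform[int(start_ms * sr / 1000):int(end_ms * sr / 1000)]
    soundfile.write(f"segment_{start_ms}_{end_ms}.wav", seg, sr)
```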
+
+ #### Voice activity detection (streaming)
+ ```python
+ import soundfile
+ from funasr import AutoModel
+
+ chunk_size = 200  # ms
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/vad_example.wav"
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = int(chunk_size * sample_rate / 1000)
+
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)  # number of chunks (ceiling division)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
+     if len(res[0]["value"]):
+         print(res)
+ ```
+
+ #### Punctuation restoration
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="ct-punc", model_revision="v2.0.4")
+
+ res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
+ print(res)
+ ```
+
+ #### Timestamp prediction
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="fa-zh", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ text_file = f"{model.model_path}/example/text.txt"
+ res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
+ print(res)
+ ```
+
+ More detailed usage ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
+
+
+ ## Fine-tuning
+
+ Detailed usage ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
+
+
+ ## Usage and scope
+
+ Runtime environment
+ - Runs on Linux-x86_64, Mac, and Windows.
+
+ How to use
+ - Direct inference: run on long audio directly and output the start/end times of the valid speech segments (in ms).
+
+ ## Related papers and citation
+
+ ```BibTeX
+ @inproceedings{zhang2018deep,
+   title={Deep-FSMN for large vocabulary continuous speech recognition},
+   author={Zhang, Shiliang and Lei, Ming and Yan, Zhijie and Dai, Lirong},
+   booktitle={2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
+   pages={5869--5873},
+   year={2018},
+   organization={IEEE}
+ }
+ ```
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/am.mvn ADDED
@@ -0,0 +1,8 @@
+ <Nnet>
+ <Splice> 400 400
+ [ 0 ]
+ <AddShift> 400 400
+ <LearnRateCoef> 0 [ -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 
-13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 ]
+ <Rescale> 400 400
+ <LearnRateCoef> 0 [ 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 
0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 ]
+ </Nnet>
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/config.yaml ADDED
@@ -0,0 +1,56 @@
+ frontend: WavFrontendOnline
+ frontend_conf:
+     fs: 16000
+     window: hamming
+     n_mels: 80
+     frame_length: 25
+     frame_shift: 10
+     dither: 0.0
+     lfr_m: 5
+     lfr_n: 1
+
+ model: FsmnVADStreaming
+ model_conf:
+     sample_rate: 16000
+     detect_mode: 1
+     snr_mode: 0
+     max_end_silence_time: 800
+     max_start_silence_time: 3000
+     do_start_point_detection: True
+     do_end_point_detection: True
+     window_size_ms: 200
+     sil_to_speech_time_thres: 150
+     speech_to_sil_time_thres: 150
+     speech_2_noise_ratio: 1.0
+     do_extend: 1
+     lookback_time_start_point: 200
+     lookahead_time_end_point: 100
+     max_single_segment_time: 60000
+     snr_thres: -100.0
+     noise_frame_num_used_for_snr: 100
+     decibel_thres: -100.0
+     speech_noise_thres: 0.6
+     fe_prior_thres: 0.0001
+     silence_pdf_num: 1
+     sil_pdf_ids: [0]
+     speech_noise_thresh_low: -0.1
+     speech_noise_thresh_high: 0.3
+     output_frame_probs: False
+     frame_in_ms: 10
+     frame_length_ms: 25
+
+ encoder: FSMN
+ encoder_conf:
+     input_dim: 400
+     input_affine_dim: 140
+     fsmn_layers: 4
+     linear_dim: 250
+     proj_dim: 128
+     lorder: 20
+     rorder: 0
+     lstride: 1
+     rstride: 0
+     output_affine_dim: 140
+     output_dim: 248
+
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/configuration.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "framework": "pytorch",
+ "task" : "voice-activity-detection",
+ "pipeline": {"type":"funasr-pipeline"},
+ "model": {"type" : "funasr"},
+ "file_path_metas": {
+     "init_param":"model.pt",
+     "config":"config.yaml",
+     "frontend_conf":{"cmvn_file": "am.mvn"}},
+ "model_name_in_hub": {
+     "ms":"iic/speech_fsmn_vad_zh-cn-16k-common-pytorch",
+     "hf":""}
+ }
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7431f0169ef76ef630c945a1d2c3675d8c8c2df2ae4a6b16f8a88ba1bccfbbb
+ size 2261722
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/fig/struct.png ADDED

Git LFS Details

  • SHA256: c745bcc2fd4952fb1fe5a0f7f9fdabda510c3771e7a47dd22be605e26db4ee2c
  • Pointer size: 130 Bytes
  • Size of remote file: 27.9 kB
moyoyo_asr_models/speech_fsmn_vad_zh-cn-16k-common-pytorch/model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3be75be477f0780277f3bae0fe489f48718f585f3a6e45d7dd1fbb1a4255fc5
+ size 1721366
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/.mdl ADDED
Binary file (99 Bytes). View file
 
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/.msc ADDED
Binary file (838 Bytes). View file
 
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/.mv ADDED
@@ -0,0 +1 @@
+ Revision:master,CreatedAt:1727670560
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/README.md ADDED
@@ -0,0 +1,357 @@
+ ---
+ tasks:
+ - auto-speech-recognition
+ domain:
+ - audio
+ model-type:
+ - Non-autoregressive
+ frameworks:
+ - pytorch
+ backbone:
+ - transformer/conformer
+ metrics:
+ - CER
+ license: Apache License 2.0
+ language:
+ - cn
+ tags:
+ - FunASR
+ - Paraformer
+ - Alibaba
+ - ICASSP2024
+ - Hotword
+ datasets:
+   train:
+   - 50,000 hour industrial Mandarin task
+   test:
+   - AISHELL-1-hotword dev/test
+ indexing:
+   results:
+   - task:
+       name: Automatic Speech Recognition
+     dataset:
+       name: 50,000 hour industrial Mandarin task
+       type: audio # optional
+       args: 16k sampling rate, 8404 characters # optional
+     metrics:
+     - type: CER
+       value: 8.53% # float
+       description: greedy search, without lm, avg.
+       args: default
+     - type: RTF
+       value: 0.0251 # float
+       description: GPU inference on V100
+       args: batch_size=1
+ widgets:
+ - task: auto-speech-recognition
+   inputs:
+   - type: audio
+     name: input
+     title: 音频
+   parameters:
+   - name: hotword
+     title: 热词
+     type: string
+   examples:
+   - name: 1
+     title: 示例1
+     inputs:
+     - name: input
+       data: git://example/asr_example.wav
+     parameters:
+     - name: hotword
+       value: 魔搭
+   model_revision: v2.0.4
+   inferencespec:
+     cpu: 8 # number of CPUs
+     memory: 4096
+ ---
+
+ # Paraformer-large
+
+ ## Highlights
+ The hotword variant of Paraformer-large supports hotword customization: given a user-provided hotword list, it boosts those words during decoding, improving hotword recall and precision.
+
+
+ ## <strong>[About the FunASR open-source project](https://github.com/alibaba-damo-academy/FunASR)</strong>
+ <strong>[FunASR](https://github.com/alibaba-damo-academy/FunASR)</strong> aims to build a bridge between academic research on speech recognition and its industrial application. By releasing the training and fine-tuning of industrial-grade speech recognition models, it lets researchers and developers study and productionize speech recognition models more conveniently and promotes the growth of the speech recognition ecosystem. Make speech recognition fun!
+
+ [**GitHub repository**](https://github.com/alibaba-damo-academy/FunASR)
+ | [**What's new**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
+ | [**Installation**](https://github.com/alibaba-damo-academy/FunASR#installation)
+ | [**Service deployment**](https://www.funasr.com)
+ | [**Model zoo**](https://github.com/alibaba-damo-academy/FunASR/tree/main/model_zoo)
+ | [**Contact us**](https://github.com/alibaba-damo-academy/FunASR#contact)
+
+
+ ## Model description
+
+ SeACoParaformer is the new-generation hotword-customizable, non-autoregressive speech recognition model proposed by Alibaba's speech lab. Compared with the previous CLAS-based hotword customization approach, SeACoParaformer decouples the hotword module from the ASR model and applies hotword boosting by fusing posterior probabilities, which makes the boosting process visible and controllable and significantly improves hotword recall.
+
+ <p align="center">
+ <img src="fig/seaco.png" alt="SeACoParaformer architecture" width="380" />
+
+
+ The SeACoParaformer architecture and training flow are shown above. A bias encoder extracts hotword embeddings and a bias decoder performs attention-based modeling; this lets SeACoParaformer capture how the Predictor and Decoder outputs relate to the hotwords and predict hotword outputs synchronized with the ASR results. Hotword boosting is then applied by fusing posterior probabilities. Compared with ContextualParaformer, SeACoParaformer improves results markedly, as shown below:
+
+ <p align="center">
+ <img src="fig/res.png" alt="SeACoParaformer results" width="700" />
+
+ For more details, see:
+ - Paper: [SeACo-Paraformer: A Non-Autoregressive ASR System with Flexible and Effective Hotword Customization Ability](https://arxiv.org/abs/2308.03266)
+
+ ## Reproducing the paper's results
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
+                   model_revision="v2.0.4",
+                   # vad_model="damo/speech_fsmn_vad_zh-cn-16k-common-pytorch",
+                   # vad_model_revision="v2.0.4",
+                   # punc_model="damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch",
+                   # punc_model_revision="v2.0.4",
+                   # spk_model="damo/speech_campplus_sv_zh-cn_16k-common",
+                   # spk_model_revision="v2.0.2",
+                   device="cuda:0"
+                   )
+
+ res = model.generate(input="YOUR_PATH/aishell1_hotword_dev.scp",
+                      hotword='./data/dev/hotword.txt',
+                      batch_size_s=300,
+                      )
+ with open("dev.output", 'w') as fout1:
+     for resi in res:
+         fout1.write("{}\t{}\n".format(resi['key'], resi['text']))
+
+ res = model.generate(input="YOUR_PATH/aishell1_hotword_test.scp",
+                      hotword='./data/test/hotword.txt',
+                      batch_size_s=300,
+                      )
+ with open("test.output", 'w') as fout2:
+     for resi in res:
+         fout2.write("{}\t{}\n".format(resi['key'], resi['text']))
+ ```
+
+ ## Inference with ModelScope
+
+ - Supported audio input formats:
+   - Path to a wav file, e.g. data/test/audios/asr_example.wav
+   - Path to a pcm file, e.g. data/test/audios/asr_example.pcm
+   - URL of a wav file, e.g. https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav
+   - Raw wav data as bytes, e.g. bytes read from a file or recorded from a microphone.
+   - Already-decoded audio, e.g. audio, rate = soundfile.read("asr_example_zh.wav"), as numpy.ndarray or torch.Tensor.
+   - A wav.scp file in the following format:
+
+ ```sh
+ cat wav.scp
+ asr_example1 data/test/audios/asr_example1.wav
+ asr_example2 data/test/audios/asr_example2.wav
+ ...
+ ```
+
+ - For a wav file URL, the API is called as follows:
+
+ ```python
+ from modelscope.pipelines import pipeline
+ from modelscope.utils.constant import Tasks
+
+ inference_pipeline = pipeline(
+     task=Tasks.auto_speech_recognition,
+     model='iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch', model_revision="v2.0.4")
+
+ rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav', hotword='达摩院 魔搭')
+ print(rec_result)
+ ```
+
+ - For pcm input, pass the audio sampling rate via the fs parameter, e.g.:
+
+ ```python
+ rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.pcm', fs=16000, hotword='达摩院 魔搭')
+ ```
+
+ - For a local wav file, the API is called as follows:
+
+ ```python
+ rec_result = inference_pipeline('asr_example_zh.wav', hotword='达摩院 魔搭')
+ ```
+
+ - For a wav.scp input (note: the file name must end with .scp), add the output_dir parameter to write the results to files:
+
+ ```python
+ inference_pipeline("wav.scp", output_dir='./output_dir', hotword='达摩院 魔搭')
+ ```
+ The output directory is structured as follows:
+
+ ```sh
+ tree output_dir/
+ output_dir/
+ └── 1best_recog
+     ├── score
+     └── text
+
+ 1 directory, 2 files
+ ```
+
+ score: recognition path scores
+
+ text: speech recognition results
+
+
+ - For already-decoded audio input, the API is called as follows:
+
+ ```python
+ import soundfile
+
+ waveform, sample_rate = soundfile.read("asr_example_zh.wav")
+ rec_result = inference_pipeline(waveform, hotword='达摩院 魔搭')
+ ```
+
+ - Free combination of ASR, VAD, and PUNC models
+
+ The VAD and punctuation models can be combined as needed:
+ ```python
+ inference_pipeline = pipeline(
+     task=Tasks.auto_speech_recognition,
+     model='iic/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch', model_revision="v2.0.4",
+     vad_model='iic/speech_fsmn_vad_zh-cn-16k-common-pytorch', vad_model_revision="v2.0.4",
+     punc_model='iic/punc_ct-transformer_zh-cn-common-vocab272727-pytorch', punc_model_revision="v2.0.3",
+     # spk_model="iic/speech_campplus_sv_zh-cn_16k-common",
+     # spk_model_revision="v2.0.2",
+ )
+ ```
+ To run without a punctuation model, set punc_model=None or omit the punc_model parameter. To add an LM, set lm_model='iic/speech_transformer_lm_zh-cn-common-vocab8404-pytorch' and configure the lm_weight and beam_size parameters, as in the sketch below.
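A sketch of that configuration, using the model IDs named in the paragraph above; the lm_weight and beam_size values are placeholder assumptions, not recommended settings:

```python
# Decoding with an external LM; lm_weight/beam_size values are illustrative.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='iic/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch', model_revision="v2.0.4",
    vad_model='iic/speech_fsmn_vad_zh-cn-16k-common-pytorch', vad_model_revision="v2.0.4",
    punc_model=None,  # run without punctuation restoration
    lm_model='iic/speech_transformer_lm_zh-cn-common-vocab8404-pytorch',
    lm_weight=0.15,
    beam_size=10,
)
```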
+
+ ## Inference with FunASR
+
+ A quick-start tutorial follows; test audio: ([Chinese](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_en.wav))
+
+ ### Command-line usage
+ Run in a terminal:
+
+ ```shell
+ funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=vad_example.wav
+ ```
+
+ Note: both single audio files and file lists are supported; a list is a Kaldi-style wav.scp: `wav_id wav_path`
+
+ ### Python examples
+ #### Speech recognition (non-streaming)
+ ```python
+ from funasr import AutoModel
+ # paraformer-zh is a multi-functional asr model
+ # use vad, punc, spk or not as you need
+ model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
+                   vad_model="fsmn-vad", vad_model_revision="v2.0.4",
+                   punc_model="ct-punc-c", punc_model_revision="v2.0.4",
+                   # spk_model="cam++", spk_model_revision="v2.0.2",
+                   )
+ res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
+                      batch_size_s=300,
+                      hotword='魔搭')
+ print(res)
+ ```
+ Note: `model_hub` selects the model repository: `ms` downloads from ModelScope, `hf` from Hugging Face.
+
+ #### Speech recognition (streaming)
+
+ ```python
+ import os
+
+ import soundfile
+ from funasr import AutoModel
+
+ chunk_size = [0, 10, 5]  # [0, 10, 5] = 600ms, [0, 8, 4] = 480ms
+ encoder_chunk_look_back = 4  # number of chunks to look back for encoder self-attention
+ decoder_chunk_look_back = 1  # number of encoder chunks to look back for decoder cross-attention
+
+ model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")
+
+ wav_file = os.path.join(model.model_path, "example/asr_example.wav")
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = chunk_size[1] * 960  # 600ms
+
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)  # number of chunks (ceiling division)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
+     print(res)
+ ```
+
+ Note: `chunk_size` configures the streaming latency. `[0,10,5]` means the display granularity of real-time output is `10*60=600ms` and the lookahead is `5*60=300ms`. Each inference call consumes `600ms` of input (`16000*0.6=9600` samples) and emits the corresponding text; for the last audio segment, set `is_final=True` to force the final words out.
+
+ #### Voice activity detection (non-streaming)
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ res = model.generate(input=wav_file)
+ print(res)
+ ```
+
+ #### Voice activity detection (streaming)
+ ```python
+ import soundfile
+ from funasr import AutoModel
+
+ chunk_size = 200  # ms
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/vad_example.wav"
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = int(chunk_size * sample_rate / 1000)
+
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)  # number of chunks (ceiling division)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
+     if len(res[0]["value"]):
+         print(res)
+ ```
+
+ #### Punctuation restoration
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="ct-punc", model_revision="v2.0.4")
+
+ res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
+ print(res)
+ ```
+
+ #### Timestamp prediction
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="fa-zh", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ text_file = f"{model.model_path}/example/text.txt"
+ res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
+ print(res)
+ ```
+
+ More detailed usage ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
+
+
+ ## Fine-tuning
+
+ Detailed usage ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
+
+
+ ## Related papers and citation
+
+ ```BibTeX
+ @article{shi2023seaco,
+   title={SeACo-Paraformer: A Non-Autoregressive ASR System with Flexible and Effective Hotword Customization Ability},
+   author={Shi, Xian and Yang, Yexin and Li, Zerui and Zhang, Shiliang},
+   journal={arXiv preprint arXiv:2308.03266 (accepted by ICASSP2024)},
+   year={2023}
+ }
+ ```
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/am.mvn ADDED
@@ -0,0 +1,8 @@
+ <Nnet>
+ <Splice> 560 560
+ [ 0 ]
+ <AddShift> 560 560
+ <LearnRateCoef> 0 [ -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 
-13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 ]
+ <Rescale> 560 560
+ <LearnRateCoef> 0 [ 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 
0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 ]
+ </Nnet>
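Note: the statistics above follow the Kaldi-style nnet text format FunASR uses for `am.mvn`: a shift vector of negated per-dimension feature means (repeated across the stacked LFR frames) followed by a `<Rescale>` vector of inverse standard deviations. A minimal sketch of how such statistics are applied, assuming those semantics (`apply_cmvn` is an illustrative helper, not FunASR's actual code):

```python
# A minimal sketch (not FunASR's code) of applying am.mvn statistics,
# assuming the usual Kaldi nnet semantics: the first <LearnRateCoef>
# vector holds negated feature means (an additive shift) and the
# <Rescale> vector holds inverse standard deviations.
import numpy as np

def apply_cmvn(lfr_feats: np.ndarray, shift: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Normalize (T, 560) LFR features element-wise: (x + shift) * scale."""
    assert lfr_feats.shape[1] == shift.shape[0] == scale.shape[0] == 560
    return (lfr_feats + shift) * scale
```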
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/asr_example_hotword.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51792bc95be33075c1a8abb9afb76ad9f72943e84cd723cc8825b2678799b004
+ size 253642
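The three added lines above are the standard Git LFS pointer format (spec v1): a version URL, a sha256 object id, and the byte size of the real file stored in LFS. A hedged sketch of parsing that format (`parse_lfs_pointer` is a hypothetical helper, not part of this repo):

```python
# Hypothetical helper (not part of this repo): parse a Git LFS pointer
# file into its version, sha256 oid, and size fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].split(":", 1)[1],  # strip the "sha256:" prefix
        "size": int(fields["size"]),            # size of the real file in bytes
    }
```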
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/config.yaml ADDED
@@ -0,0 +1,160 @@
+ # This is an example that demonstrates how to configure a model file.
+ # You can modify the configuration according to your own requirements.
+
+ # to print the register_table:
+ # from funasr.utils.register import registry_tables
+ # registry_tables.print()
+
+ # network architecture
+ model: SeacoParaformer
+ model_conf:
+     ctc_weight: 0.0
+     lsm_weight: 0.1
+     length_normalized_loss: true
+     predictor_weight: 1.0
+     predictor_bias: 1
+     sampling_ratio: 0.75
+     inner_dim: 512
+     bias_encoder_type: lstm
+     bias_encoder_bid: false
+     seaco_lsm_weight: 0.1
+     seaco_length_normal: true
+     train_decoder: true
+     NO_BIAS: 8377
+
+ # encoder
+ encoder: SANMEncoder
+ encoder_conf:
+     output_size: 512
+     attention_heads: 4
+     linear_units: 2048
+     num_blocks: 50
+     dropout_rate: 0.1
+     positional_dropout_rate: 0.1
+     attention_dropout_rate: 0.1
+     input_layer: pe
+     pos_enc_class: SinusoidalPositionEncoder
+     normalize_before: true
+     kernel_size: 11
+     sanm_shfit: 0
+     selfattention_layer_type: sanm
+
+ # decoder
+ decoder: ParaformerSANMDecoder
+ decoder_conf:
+     attention_heads: 4
+     linear_units: 2048
+     num_blocks: 16
+     dropout_rate: 0.1
+     positional_dropout_rate: 0.1
+     self_attention_dropout_rate: 0.1
+     src_attention_dropout_rate: 0.1
+     att_layer_num: 16
+     kernel_size: 11
+     sanm_shfit: 0
+
+ # seaco decoder
+ seaco_decoder: ParaformerSANMDecoder
+ seaco_decoder_conf:
+     attention_heads: 4
+     linear_units: 1024
+     num_blocks: 4
+     dropout_rate: 0.1
+     positional_dropout_rate: 0.1
+     self_attention_dropout_rate: 0.1
+     src_attention_dropout_rate: 0.1
+     kernel_size: 21
+     sanm_shfit: 0
+     use_output_layer: false
+     wo_input_layer: true
+
+ predictor: CifPredictorV3
+ predictor_conf:
+     idim: 512
+     threshold: 1.0
+     l_order: 1
+     r_order: 1
+     tail_threshold: 0.45
+     smooth_factor2: 0.25
+     noise_threshold2: 0.01
+     upsample_times: 3
+     use_cif1_cnn: false
+     upsample_type: cnn_blstm
+
+ # frontend related
+ frontend: WavFrontend
+ frontend_conf:
+     fs: 16000
+     window: hamming
+     n_mels: 80
+     frame_length: 25
+     frame_shift: 10
+     lfr_m: 7
+     lfr_n: 6
+     dither: 0.0
+
+ specaug: SpecAugLFR
+ specaug_conf:
+     apply_time_warp: false
+     time_warp_window: 5
+     time_warp_mode: bicubic
+     apply_freq_mask: true
+     freq_mask_width_range:
+     - 0
+     - 30
+     lfr_rate: 6
+     num_freq_mask: 1
+     apply_time_mask: true
+     time_mask_width_range:
+     - 0
+     - 12
+     num_time_mask: 1
+
+ train_conf:
+     accum_grad: 1
+     grad_clip: 5
+     max_epoch: 150
+     val_scheduler_criterion:
+     - valid
+     - acc
+     best_model_criterion:
+     - - valid
+       - acc
+       - max
+     keep_nbest_models: 10
+     log_interval: 50
+     unused_parameters: true
+
+ optim: adam
+ optim_conf:
+     lr: 0.0005
+ scheduler: warmuplr
+ scheduler_conf:
+     warmup_steps: 30000
+
+ dataset: AudioDatasetHotword
+ dataset_conf:
+     seaco_id: 8377
+     index_ds: IndexDSJsonl
+     batch_sampler: DynamicBatchLocalShuffleSampler
+     batch_type: example # example or length
+     batch_size: 1 # if batch_type is example, batch_size is the number of samples; if length, it is source_token_len+target_token_len
+     max_token_length: 2048 # filter out samples whose source_token_len+target_token_len > max_token_length
+     buffer_size: 500
+     shuffle: True
+     num_workers: 0
+
+ tokenizer: CharTokenizer
+ tokenizer_conf:
+     unk_symbol: <unk>
+     split_with_space: true
+
+ ctc_conf:
+     dropout_rate: 0.0
+     ctc_type: builtin
+     reduce: true
+     ignore_nan_grad: true
+
+ normalize: null
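One detail worth connecting across files: with `n_mels: 80` and `lfr_m: 7` in `frontend_conf`, FunASR stacks 7 consecutive 80-dim fbank frames into one low-frame-rate (LFR) feature, which is why the CMVN vectors in `am.mvn` are 560-dimensional. A small sketch of the arithmetic (the helper is illustrative, not FunASR's actual function):

```python
# Illustrative arithmetic only: how the frontend settings above determine
# the 560-dim input that am.mvn normalizes. lfr_output_frames mirrors the
# usual LFR framing (ceil(T / lfr_n)) as an assumption about the frontend.
import math

n_mels, lfr_m, lfr_n = 80, 7, 6
feat_dim = n_mels * lfr_m  # 560, matching "<Rescale> 560 560" in am.mvn

def lfr_output_frames(num_fbank_frames: int) -> int:
    return math.ceil(num_fbank_frames / lfr_n)
```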
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/configuration.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "framework": "pytorch",
+     "task": "auto-speech-recognition",
+     "model": {"type": "funasr"},
+     "pipeline": {"type": "funasr-pipeline"},
+     "model_name_in_hub": {
+         "ms": "iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
+         "hf": ""},
+     "file_path_metas": {
+         "init_param": "model.pt",
+         "config": "config.yaml",
+         "tokenizer_conf": {"token_list": "tokens.json", "seg_dict_file": "seg_dict"},
+         "frontend_conf": {"cmvn_file": "am.mvn"}}
+ }
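`file_path_metas` is what lets `AutoModel` resolve a local model directory: each entry names a file relative to that directory, to be spliced into the loaded config. A hedged sketch of that resolution (`resolve_file_path_metas` is hypothetical; FunASR performs an equivalent join internally):

```python
# Hypothetical sketch of resolving file_path_metas against a local model
# directory; FunASR does an equivalent join when loading configuration.json.
from pathlib import Path

def resolve_file_path_metas(model_dir: str, metas: dict) -> dict:
    resolved = {}
    for key, value in metas.items():
        if isinstance(value, dict):
            resolved[key] = resolve_file_path_metas(model_dir, value)
        else:
            resolved[key] = (Path(model_dir) / value).as_posix()
    return resolved
```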
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ffa478de2cd570dd54e8762008cd6bbde9871fd79757f1cdbbec7d6b7b49274
+ size 144770
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/hotword.txt ADDED
@@ -0,0 +1 @@
+ 魔搭
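`example/hotword.txt` holds a single customization word, 魔搭 (ModelScope's Chinese name). With the SeaCo Paraformer model, hotwords are passed at inference time via the `hotword` argument of `AutoModel.generate`. A short example against the files added in this commit (paths assume you run from the repository root):

```python
from funasr import AutoModel

model = AutoModel(
    model="moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    disable_update=True,
)
res = model.generate(
    input="moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/asr_example_hotword.wav",
    hotword="魔搭",  # bias decoding toward this word
)
print(res)
```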
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/fig/res.png ADDED

Git LFS Details
  • SHA256: 1f59ebc6a86733896b2b84110e2aae5625754382762bf1324e017d89a152c2fb
  • Pointer size: 131 Bytes
  • Size of remote file: 197 kB
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/fig/seaco.png ADDED

Git LFS Details
  • SHA256: 6886864eb6bcc6487a17111b5e3353bd72e7c78fda98c27cf47faa35eafbdcaf
  • Pointer size: 131 Bytes
  • Size of remote file: 171 kB
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d491689244ec5dfbf9170ef3827c358aa10f1f20e42a7c59e15e688647946d1
+ size 989763045
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/seg_dict ADDED
The diff for this file is too large to render. See raw diff
 
moyoyo_asr_models/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/tokens.json ADDED
The diff for this file is too large to render. See raw diff
 
transcribe/helpers/funasr.py CHANGED
@@ -12,8 +12,16 @@ class FunASR:
     def __init__(self, source_lange: str = 'en', warmup=True) -> None:
         self.source_lange = source_lange
 
+        model_dir = config.MODEL_DIR
+        asr_model_path = model_dir / 'speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch'
+        vad_model_path = model_dir / 'speech_fsmn_vad_zh-cn-16k-common-pytorch'
+        punc_model_path = model_dir / 'punc_ct-transformer_cn-en-common-vocab471067-large'
         self.model = AutoModel(
-            model="paraformer-zh", vad_model="fsmn-vad", punc_model="ct-punc", log_level="ERROR",disable_update=True
+            model=asr_model_path.as_posix(),
+            vad_model=vad_model_path.as_posix(),
+            punc_model=punc_model_path.as_posix(),
+            log_level="ERROR",
+            disable_update=True
         )
         if warmup:
             self.warmup()
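The new code relies on a `config.MODEL_DIR` that is not part of this diff. One plausible definition, assuming `config.py` lives at the repository root next to the `moyoyo_asr_models` directory added in this commit:

```python
# Assumption: config.MODEL_DIR points at the moyoyo_asr_models directory
# added in this commit. The location of config.py is not shown in this
# diff, so the path construction below is illustrative only.
from pathlib import Path

MODEL_DIR = Path(__file__).resolve().parent / "moyoyo_asr_models"
```

Handing the paths to `AutoModel` via `.as_posix()` keeps the arguments plain strings, matching how the previous hub identifiers ("paraformer-zh", "fsmn-vad", "ct-punc") were passed, so only the model source changes, not the call shape.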