istupakov committed
Commit bde2566 · verified · 1 parent: a4f981d

Update app.py

Files changed (1):
  1. app.py (+7 -8)
app.py CHANGED
@@ -14,7 +14,7 @@ logger.info("onnx_asr version: %s", version("onnx_asr"))
 
 vad = onnx_asr.load_vad("silero")
 
-whisper = {name: onnx_asr.load_model(name) for name in ["whisper-base"]}
+models_multilang = {name: onnx_asr.load_model(name) for name in ["whisper-base", "nemo-parakeet-tdt-0.6b-v3"]}
 
 models_ru = {
     name: onnx_asr.load_model(name)
@@ -31,12 +31,11 @@ models_ru = {
 models_en = {
     name: onnx_asr.load_model(name)
     for name in [
-        "nemo-parakeet-ctc-0.6b",
         "nemo-parakeet-tdt-0.6b-v2",
     ]
 }
 
-models_vad = whisper | models_ru | models_en
+models_vad = models_multilang | models_ru | models_en
 
 
 def recognize(audio: tuple[int, np.ndarray], models, language):
@@ -69,11 +68,11 @@ def recognize(audio: tuple[int, np.ndarray], models, language):
 
 
 def recognize_ru(audio: tuple[int, np.ndarray]):
-    return recognize(audio, models_ru | whisper, "ru")
+    return recognize(audio, models_ru | models_multilang, "ru")
 
 
 def recognize_en(audio: tuple[int, np.ndarray]):
-    return recognize(audio, models_en | whisper, "en")
+    return recognize(audio, models_en | models_multilang, "en")
 
 
 def recognize_with_vad(audio: tuple[int, np.ndarray], name: str):
@@ -136,8 +135,7 @@ with gr.Blocks(title="onnx-asr demo") as demo:
 # ASR demo using onnx-asr
 **[onnx-asr](https://github.com/istupakov/onnx-asr)** is a Python package for Automatic Speech Recognition using ONNX models.
 The package is written in pure Python with minimal dependencies (no `pytorch` or `transformers`).
-
-Supports Parakeet TDT 0.6B V2 (En) and GigaAM v2 (Ru) models
+Supports Parakeet TDT 0.6B V2 (En), Parakeet TDT 0.6B V3 (Multilingual) and GigaAM v2 (Ru) models
 (and many other modern [models](https://github.com/istupakov/onnx-asr?tab=readme-ov-file#supported-model-names)).
 You can also use it with your own model if it has a supported architecture.
 """)
@@ -155,12 +153,13 @@ with gr.Blocks(title="onnx-asr demo") as demo:
 * `gigaam-v2-rnnt` - Sber GigaAM v2 RNN-T ([origin](https://github.com/salute-developers/GigaAM), [onnx](https://huggingface.co/istupakov/gigaam-v2-onnx))
 * `nemo-fastconformer-ru-ctc` - Nvidia FastConformer-Hybrid Large (ru) with CTC decoder ([origin](https://huggingface.co/nvidia/stt_ru_fastconformer_hybrid_large_pc), [onnx](https://huggingface.co/istupakov/stt_ru_fastconformer_hybrid_large_pc_onnx))
 * `nemo-fastconformer-ru-rnnt` - Nvidia FastConformer-Hybrid Large (ru) with RNN-T decoder ([origin](https://huggingface.co/nvidia/stt_ru_fastconformer_hybrid_large_pc), [onnx](https://huggingface.co/istupakov/stt_ru_fastconformer_hybrid_large_pc_onnx))
+* `nemo-parakeet-tdt-0.6b-v3` - Nvidia Parakeet TDT 0.6B V3 (multilingual) ([origin](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v3), [onnx](https://huggingface.co/istupakov/parakeet-tdt-0.6b-v3-onnx))
 * `whisper-base` - OpenAI Whisper Base exported with onnxruntime ([origin](https://huggingface.co/openai/whisper-base), [onnx](https://huggingface.co/istupakov/whisper-base-onnx))
 * `alphacep/vosk-model-ru` - Alpha Cephei Vosk 0.54-ru ([origin](https://huggingface.co/alphacep/vosk-model-ru))
 * `alphacep/vosk-model-small-ru` - Alpha Cephei Vosk 0.52-small-ru ([origin](https://huggingface.co/alphacep/vosk-model-small-ru))
 ## English ASR models
-* `nemo-parakeet-ctc-0.6b` - Nvidia Parakeet CTC 0.6B (en) ([origin](https://huggingface.co/nvidia/parakeet-ctc-0.6b), [onnx](https://huggingface.co/istupakov/parakeet-ctc-0.6b-onnx))
 * `nemo-parakeet-tdt-0.6b-v2` - Nvidia Parakeet TDT 0.6B V2 (en) ([origin](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2), [onnx](https://huggingface.co/istupakov/parakeet-tdt-0.6b-v2-onnx))
+* `nemo-parakeet-tdt-0.6b-v3` - Nvidia Parakeet TDT 0.6B V3 (multilingual) ([origin](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v3), [onnx](https://huggingface.co/istupakov/parakeet-tdt-0.6b-v3-onnx))
 * `whisper-base` - OpenAI Whisper Base exported with onnxruntime ([origin](https://huggingface.co/openai/whisper-base), [onnx](https://huggingface.co/istupakov/whisper-base-onnx))
 ## VAD models
 * `silero` - Silero VAD ([origin](https://github.com/snakers4/silero-vad), [onnx](https://huggingface.co/onnx-community/silero-vad))
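
For reference, here is a minimal sketch of the model registry this commit ends up with, runnable outside the Gradio app. It reuses only calls that appear in the diff (onnx_asr.load_vad, onnx_asr.load_model, dict merging with `|`); the shortened Russian/English model lists, the recognize("test.wav") call, and the file name are illustrative assumptions, not something this commit adds.

import onnx_asr

# Same VAD model the Space loads.
vad = onnx_asr.load_vad("silero")

# After this commit the former `whisper` dict is renamed to `models_multilang`
# and gains the multilingual Parakeet TDT 0.6B V3 export.
models_multilang = {
    name: onnx_asr.load_model(name)
    for name in ["whisper-base", "nemo-parakeet-tdt-0.6b-v3"]
}

# Trimmed-down stand-ins for the app's Russian/English registries (the real app
# loads several more models from the lists documented above).
models_ru = {name: onnx_asr.load_model(name) for name in ["gigaam-v2-rnnt"]}
models_en = {name: onnx_asr.load_model(name) for name in ["nemo-parakeet-tdt-0.6b-v2"]}

# `dict | dict` merges the registries (the right-hand side wins on duplicate keys),
# which is how the multilingual models are offered on both language tabs and how
# `models_vad` collects everything for the VAD demo.
models_vad = models_multilang | models_ru | models_en

for name, model in models_vad.items():
    # Assumed API: onnx-asr models expose `recognize()`; "test.wav" is a placeholder file.
    print(name, model.recognize("test.wav"))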