Update README.md

tags:
- ASR
---

## Introduction

<p align="center">
<img src="./struct.png" alt="Paraformer structure" width="500" />
</p>
[Paraformer](https://arxiv.org/abs/2206.08317) is a non-autoregressive end-to-end speech recognition model. Compared to the currently mainstream autoregressive models, non-autoregressive models can output the target text for the entire sentence in parallel, making them particularly suitable for parallel inference using GPUs. Paraformer is currently the first known non-autoregressive model that can achieve the same performance as autoregressive end-to-end models on industrial-scale data. When combined with GPU inference, it can improve inference efficiency by 10 times, thereby reducing machine costs for speech recognition cloud services by nearly 10 times.
This repo shows how to use Paraformer with the `funasr_onnx` runtime. The model comes from [FunASR](https://github.com/alibaba-damo-academy/FunASR) and was trained on 60,000 hours of Mandarin data. Paraformer took first place on the [SpeechIO Leaderboard](https://github.com/SpeechColab/Leaderboard).
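A minimal usage sketch with `funasr_onnx` follows. The model directory and wav filename are placeholders: download or export the ONNX model first (see the FunASR repo), and install the runtime with `pip install funasr_onnx`.

```python
# Sketch of transcribing wav files with the funasr_onnx Paraformer runtime.
# "/path/to/paraformer-onnx" and "asr_example.wav" are placeholders.

def transcribe(model_dir, wav_paths, quantize=True):
    """Build a funasr_onnx Paraformer session and transcribe a batch of wavs."""
    # Imported lazily so the helper can be defined without funasr_onnx installed.
    from funasr_onnx import Paraformer
    model = Paraformer(model_dir, batch_size=1, quantize=quantize)
    return model(wav_paths)  # one recognition result per input wav

if __name__ == "__main__":
    results = transcribe("/path/to/paraformer-onnx", ["asr_example.wav"])
    for result in results:
        print(result)
```

The `quantize=True` flag selects the int8-quantized ONNX model, which trades a little accuracy for faster CPU inference.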