rima-shahbazyan
committed
Update README.md
README.md
CHANGED
@@ -67,7 +67,7 @@ img {
 
 
 This model transcribes text in upper and lower case Uzbek alphabet with spaces, commas, question marks, and dashes.
-It is a "large" version of FastConformer Transducer-CTC (around 115M parameters) model. This is a hybrid model trained on two losses:
+It is a "large" version of FastConformer Transducer-CTC (around 115M parameters) model. This is a hybrid model trained on two losses: Transducer (default) and CTC.
 See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) for complete architecture details.
 
 ## NVIDIA NeMo: Training
@@ -121,7 +121,7 @@ This model provides transcribed speech as a string for a given audio sample.
 
 ## Model Architecture
 
-FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with
+FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with a Transducer decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).
 
 ## Training
 