Modalities: Audio, Text · Format: webdataset · Libraries: Datasets, WebDataset
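The webdataset format listed above packs each sample as adjacent tar members sharing a basename (e.g. an audio file plus its transcript). A minimal stdlib sketch of that grouping convention — illustrative only, not this dataset's actual loader, and the `.wav`/`.txt` extensions are assumptions:

```python
import io
import tarfile
from itertools import groupby

def iter_webdataset_samples(tar_bytes):
    """Yield {extension: bytes} dicts from a webdataset-style tar.

    One sample is a run of adjacent members whose names differ only in
    extension, e.g. utt0001.wav + utt0001.txt (hypothetical layout).
    """
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        members = [m for m in tar.getmembers() if m.isfile()]
        # Group adjacent members by everything before the first dot.
        for key, group in groupby(members, key=lambda m: m.name.split(".", 1)[0]):
            sample = {"__key__": key}
            for m in group:
                ext = m.name.split(".", 1)[1]  # assumes every member has an extension
                sample[ext] = tar.extractfile(m).read()
            yield sample
```

Real pipelines would use the WebDataset library's sharded readers and decoders instead; this only shows the on-disk pairing rule.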
rassulya committed (verified) · Commit 9c6da2f · Parent: 5e56379

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
```diff
@@ -2,11 +2,11 @@
 
 **Dataset Name:** Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English
 
-**Repository:** [https://github.com/your-github-username/MultilingualASR](replace with actual github link)
+**Repository:** https://github.com/IS2AI/MultilingualASR
 
 **Summary:** This repository contains the recipe for reproducing the experiments detailed in the paper "A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English" ([https://arxiv.org/abs/2108.01280](https://arxiv.org/abs/2108.01280)). The work focuses on training a single end-to-end (E2E) automatic speech recognition (ASR) model for Kazakh, Russian, and English. The research compares monolingual and multilingual models (with combined and independent output grapheme sets), investigates the impact of language models (LMs) and data augmentation techniques, and achieves comparable performance to monolingual baselines (20.9% and 20.5% average word error rates for the best monolingual and multilingual models respectively on the combined test set). Pre-trained models are provided.
 
-**Table of Pre-trained Models:**
+<!-- **Table of Pre-trained Models:**
 
 | Model | Large Transformer | Large Transformer with Speed Perturbation (SP) | Large Transformer with SP and SpecAugment |
 |--------------------------|-------------------------------------------------|-------------------------------------------------|---------------------------------------------------|
@@ -16,9 +16,9 @@
 | Multilingual (Combined) | [Model Link Removed] | [Model Link Removed] | [Model Link Removed] |
 | Multilingual (Independent) | [Model Link Removed] | [Model Link Removed] | [Model Link Removed] |
 
-
+-->
 **Citation:**
 
-Please cite the original paper: [Add proper citation here based on the arxiv paper]
+Please cite the original paper: https://arxiv.org/pdf/2108.01280
```
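The 20.9% and 20.5% figures quoted in the summary are average word error rates. WER is the word-level Levenshtein distance (substitutions + deletions + insertions) divided by the number of reference words; a minimal sketch of that computation — illustrative only, not the repository's actual scoring script:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with a rolling-row Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(prev[j] + 1,             # deletion
                          curr[j - 1] + 1,         # insertion
                          prev[j - 1] + (r != h))  # substitution or match
        prev = curr
    return prev[len(hyp)] / len(ref)
```

For example, one substituted word in a three-word reference gives a WER of 1/3 (about 33.3%).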