ksingla025 committed
Commit 1563ecd · verified · 1 Parent(s): 2ff38ae

Update README.md

Files changed (1):
  1. README.md +23 -147
README.md CHANGED
@@ -1,3 +1,25 @@
  # Meta Speech Recognition Slavic Languages Dataset (Common Voice)

  This dataset contains metadata for Slavic language speech recognition samples from Common Voice.
@@ -110,150 +132,4 @@ tar -xzf {lang}.tar.gz
  ├── sl/
  ├── sr/
  └── uk/
- ```
-
- ## Training NeMo Conformer ASR for Slavic Languages
-
- ### 1. Pull and Run NeMo Docker
- ```bash
- # Pull the NeMo Docker image
- docker pull nvcr.io/nvidia/nemo:24.05
-
- # Run the container with GPU support
- docker run --gpus all -it --rm \
-   -v /external1:/external1 \
-   -v /external2:/external2 \
-   -v /external3:/external3 \
-   -v /cv:/cv \
-   --shm-size=8g \
-   -p 8888:8888 -p 6006:6006 \
-   --ulimit memlock=-1 \
-   --ulimit stack=67108864 \
-   nvcr.io/nvidia/nemo:24.05
- ```
-
- ### 2. Create Training Script
- Create a script `train_nemo_asr_slavic.py`:
- ```python
- import pytorch_lightning as pl
- from omegaconf import OmegaConf
- from nemo.collections.asr.models import EncDecCTCModel
-
- # Load the dataset metadata from Hugging Face
- from datasets import load_dataset
- dataset = load_dataset("WhissleAI/Meta_STT_SLAVIC_CommonVoice")
-
- # Build the config. Trainer settings sit at the top level, not inside
- # the model config, so they can be passed to pl.Trainer directly.
- config = OmegaConf.create({
-     'model': {
-         'train_ds': {
-             'manifest_filepath': None,  # Will be set dynamically
-             'batch_size': 32,
-             'shuffle': True,
-             'num_workers': 4,
-             'pin_memory': True,
-             'use_start_end_token': False,
-         },
-         'validation_ds': {
-             'manifest_filepath': None,  # Will be set dynamically
-             'batch_size': 32,
-             'shuffle': False,
-             'num_workers': 4,
-             'pin_memory': True,
-             'use_start_end_token': False,
-         },
-         'optim': {
-             'name': 'adamw',
-             'lr': 0.001,
-             'weight_decay': 0.01,
-         },
-     },
-     'trainer': {
-         'devices': 1,
-         'accelerator': 'gpu',
-         'max_epochs': 100,
-         'precision': 16,
-     },
- })
-
- # Create the trainer first, then attach it to the model
- trainer = pl.Trainer(**config.trainer)
- model = EncDecCTCModel(cfg=config.model, trainer=trainer)
-
- # Train
- trainer.fit(model)
- ```
-
- ### 3. Create Config File
- Create a config file `config_slavic.yaml`:
- ```yaml
- model:
-   train_ds:
-     manifest_filepath: "train.json"
-     batch_size: 32
-     shuffle: true
-     num_workers: 4
-     pin_memory: true
-     use_start_end_token: false
-
-   validation_ds:
-     manifest_filepath: "valid.json"
-     batch_size: 32
-     shuffle: false
-     num_workers: 4
-     pin_memory: true
-     use_start_end_token: false
-
-   optim:
-     name: adamw
-     lr: 0.001
-     weight_decay: 0.01
-
- trainer:
-   devices: 1
-   accelerator: "gpu"
-   max_epochs: 100
-   precision: 16
- ```
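For reference, the `train.json` / `valid.json` files named above are NeMo-style manifests: one JSON object per line with `audio_filepath`, `duration`, and `text` fields. The sketch below builds one such line; the path and transcript are made-up placeholders, not values from this dataset.

```python
import json

# One JSON object per line ("JSON lines"); these are the fields NeMo's
# ASR dataloaders read from a manifest. Path and text are placeholders.
example = {
    "audio_filepath": "/cv/cv-corpus-15.0-2023-09-08/ru/clips/sample.mp3",
    "duration": 3.2,
    "text": "пример транскрипции",
}
manifest_line = json.dumps(example, ensure_ascii=False)
```

Appending one such line per utterance produces a manifest the `train_ds`/`validation_ds` configs can point at.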
-
- ### 4. Start Training
- ```bash
- # Inside the NeMo container. The script builds its own config, so no
- # extra flags are needed; raise --nproc_per_node for multi-GPU runs.
- torchrun --nproc_per_node=1 train_nemo_asr_slavic.py
- ```
-
- ## Usage Notes
-
- 1. The dataset includes only metadata. Audio files must be downloaded separately from Common Voice.
- 2. Audio files should be placed in the `/cv/cv-corpus-15.0-2023-09-08/{lang}/` directory structure.
- 3. For optimal performance:
-    - Use a GPU with at least 16GB VRAM
-    - Adjust batch size based on your GPU memory
-    - Consider gradient accumulation for larger effective batch sizes
-    - Monitor training with TensorBoard (accessible via port 6006)
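The gradient-accumulation tip above can be made concrete with a small helper (an illustration only; the function name and the example numbers are ours, not part of NeMo or this README):

```python
def effective_batch_size(per_gpu_batch: int,
                         accumulate_grad_batches: int,
                         num_gpus: int = 1) -> int:
    """Gradient accumulation sums gradients over several micro-batches
    before each optimizer step, multiplying the effective batch size
    without increasing per-step GPU memory."""
    return per_gpu_batch * accumulate_grad_batches * num_gpus

# A per-GPU batch of 8 with 4 accumulation steps behaves like a batch of 32.
```

In PyTorch Lightning this corresponds to the `accumulate_grad_batches` argument of `pl.Trainer`.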
-
- ## Common Issues and Solutions
-
- 1. **Memory Issues**:
-    - Reduce batch size if you encounter OOM errors
-    - Use gradient accumulation for larger effective batch sizes
-    - Enable mixed precision training (fp16)
-
- 2. **Training Speed**:
-    - Increase `num_workers` based on your CPU cores
-    - Use `pin_memory=True` for faster data transfer to GPU
-    - Consider using tarred datasets for faster I/O
-
- 3. **Model Performance**:
-    - Adjust the learning rate based on your batch size
-    - Use learning rate warmup for better convergence
-    - Consider using a pretrained model as initialization
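The warmup advice above can be sketched as a simple schedule (a hypothetical helper for illustration; NeMo ships its own schedulers, which should be preferred in practice):

```python
def lr_with_warmup(step: int,
                   base_lr: float = 1e-3,
                   warmup_steps: int = 1000) -> float:
    """Linearly ramp the learning rate from near zero up to base_lr over
    warmup_steps optimizer steps, then hold it constant."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```

Warmup avoids large, destabilizing updates while the randomly initialized model is still far from any reasonable optimum.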
 
+ ---
+ task_categories:
+ - automatic-speech-recognition
+ - audio-classification
+ language:
+ - ru
+ - be
+ - cs
+ - bg
+ - ka
+ - mk
+ - pl
+ - sr
+ - sl
+ - uk
+ tags:
+ - speech-recognition
+ - entity-tagging
+ - intent-classification
+ - age-prediction
+ - emotion-classification
+ ---
  # Meta Speech Recognition Slavic Languages Dataset (Common Voice)

  This dataset contains metadata for Slavic language speech recognition samples from Common Voice.

  ├── sl/
  ├── sr/
  └── uk/
+ ```