This directory contains the audio files (.wav format) of every example in the dataset:
- **train/**: Contains audio files used for training.
- **test/**: Contains audio files used for testing.

The audio files vary in length and correspond to each entry in the manifest files. They are referenced by file path in the manifest files.

### 2. **manifests/**

This directory contains the manifest files used for training speech recognition (ASR) models. There are two JSON files:
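NeMo-style manifests are JSON-lines files, one entry per line. A minimal reader, assuming the standard NeMo fields (`audio_filepath`, `duration`, `text`) — check the actual files for the exact keys:

```python
import json

def read_manifest(path):
    """Read a JSON-lines manifest into a list of dicts.

    Field names below follow the standard NeMo manifest schema
    (audio_filepath, duration, text); this is an assumption — verify
    against the files in manifests/.
    """
    entries = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
    return entries

# Example entry shape:
# {"audio_filepath": "audio/train/example_0001.wav", "duration": 4.2, "text": "..."}
```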
### 3. **french-manifests/**

This directory contains French equivalent manifest files for the dataset.
### 4. **scripts/**

This directory contains the scripts used to process the data and create the manifest files:

- **create_manifest.py**: Creates the manifest files for training and testing. It re-samples the audio files published in the first version of the Jeli-ASR dataset and generates the corresponding JSON manifest files.
- **clean_tsv.py**: Removes some of the most common issues in the .tsv transcription files created during the last revision of the dataset in January 2023, such as unwanted characters (", <>), consecutive tabs (which make some rows inconsistent), and spacing errors.
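A hedged sketch of the kind of per-line cleanup described for **clean_tsv.py** — the helper name and exact rules are illustrative, and the actual script in `scripts/` may differ:

```python
import re

def clean_tsv_line(line):
    """Illustrative cleanup for one .tsv row (not the actual script).

    Removes the unwanted characters mentioned in this README (", <>),
    collapses consecutive tabs that make rows inconsistent, and fixes
    repeated spaces.
    """
    line = line.replace('"', '').replace('<', '').replace('>', '')
    line = re.sub(r'\t+', '\t', line)    # collapse consecutive tabs
    line = re.sub(r' {2,}', ' ', line)   # collapse repeated spaces
    return line.strip()
```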
## Dataset Overview

The dataset consists of 11,533 audio-transcription pairs:

- **Training set**: 9,803 examples (85%)
- **Test set**: 1,730 examples (15%)

Each audio file is paired with a Bambara transcription in the manifest files, and the corresponding French transcriptions are available in the `french-manifests/` directory.
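The split arithmetic above can be sanity-checked in a couple of lines (counts taken from this README):

```python
# Split sizes as stated in this README.
train_count, test_count = 9_803, 1_730
total = train_count + test_count          # 11,533 examples overall

train_pct = round(100 * train_count / total)  # ~85% train
test_pct = round(100 * test_count / total)    # ~15% test
```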
## Usage

The manifest files are created specifically for training Automatic Speech Recognition (ASR) models with the NVIDIA NeMo framework, but they can be used with any other framework that supports manifest-based input, or reformatted for other uses.

To use the dataset, simply load the manifest files (`train_manifest.json` and `test_manifest.json`) in your training script. The file paths for the audio files and the corresponding transcriptions are already provided in these manifest files.
### Reconstructing the Directory Locally

Downloading the dataset:

```python
from datasets import load_dataset

# Load the JSON manifest with the Hugging Face `datasets` JSON builder.
dataset = load_dataset("json", data_files="jeli-data-manifest/manifests/train_manifest.json")
```

### Example NeMo Usage

Finetuning with NeMo:
```python
from nemo.collections.asr.models import ASRModel

train_manifest = 'jeli-data-manifest/manifests/train_manifest.json'
test_manifest = 'jeli-data-manifest/manifests/test_manifest.json'

asr_model = ASRModel.from_pretrained("QuartzNet15x5Base-En")

# Adapt the model's vocab before training

asr_model.setup_training_data(train_data_config={'manifest_filepath': train_manifest})
asr_model.setup_validation_data(val_data_config={'manifest_filepath': test_manifest})
```
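The vocab-adaptation step mentioned in the comment above needs the character set actually used in the Bambara transcriptions. A hedged sketch of collecting it from the training manifest (the helper is illustrative, and the `text` key assumes the standard NeMo schema):

```python
import json

def build_char_vocab(manifest_path):
    """Collect the sorted set of characters used in a manifest's transcriptions.

    Useful as input when updating a character-based CTC model's vocabulary
    before fine-tuning on a new language.
    """
    chars = set()
    with open(manifest_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                chars.update(json.loads(line)["text"])
    return sorted(chars)
```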
## Issues

This version was created after some shallow cleaning of the transcriptions and resampling work; it retains most of the issues of the original dataset, such as:

- **Misaligned / Invalid segmentation**
- **Language / Incorrect transcriptions**
## Citation

If you use this dataset in your research or project, please give credit to the creators of the original Jeli-ASR dataset.

---