zohirjonsharipov committed: Update README.md

README.md (changed)

# XLS-R-300M Uzbek CV8

This model was fine-tuned for Uzbek from [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UZ dataset, using transfer learning together with an n-gram language model.

The model achieves the following results:
- Loss: 0.3063
- Wer: 0.3852

For more information about the model architecture, see [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).

The model's vocabulary consists of the modern Latin [Uzbek alphabet](https://en.wikipedia.org/wiki/Uzbek_alphabet), with punctuation removed.
Note that the characters <‘> and <’> do not count as punctuation, as <‘> modifies \<o\> and \<g\>, and <’> indicates the glottal stop or a long vowel.
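
For illustration, a minimal normalization sketch consistent with this vocabulary; this is an assumption, not the published preprocessing recipe:

```python
import re

# Keep lowercase Latin letters, digits, whitespace, and the modifier
# characters ‘ (U+2018, modifies o/g) and ’ (U+2019, glottal stop or
# long vowel); everything else is treated as punctuation and stripped.
DROP = re.compile(r"[^a-z0-9\s‘’]")

def normalize(text: str) -> str:
    """Lowercase a transcript and remove punctuation, keeping ‘ and ’."""
    return DROP.sub("", text.lower()).strip()

print(normalize("G‘alaba haqida so‘z, a’lo!"))  # -> g‘alaba haqida so‘z a’lo
```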

The decoder uses a kenlm language model built from the common_voice text.
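
If the repository ships its decoder as a `Wav2Vec2ProcessorWithLM` (an assumption; the repo id and audio file below are placeholders), LM-boosted decoding might look like this minimal sketch:

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

repo_id = "<this-model-repo>"  # placeholder: the actual repo id is not shown here
processor = Wav2Vec2ProcessorWithLM.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

speech, sampling_rate = sf.read("sample.wav")  # hypothetical 16 kHz mono file
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# batch_decode runs the kenlm-boosted beam search
# (requires pyctcdecode and kenlm to be installed).
print(processor.batch_decode(logits.numpy()).text[0])
```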

## Intended uses and limitations

The model is expected to be useful for the following use cases (see the sketch after this list):
- subtitling videos
- indexing recorded broadcasts
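
As an illustration of the indexing use case, a minimal sketch using the `transformers` pipeline API; the repo id and the audio file name are placeholders:

```python
from transformers import pipeline

# "<this-model-repo>" is a placeholder for this repository's id.
asr = pipeline("automatic-speech-recognition", model="<this-model-repo>")

# Long recordings are transcribed in 30-second chunks.
transcript = asr("recorded_broadcast.wav", chunk_length_s=30)  # hypothetical file
print(transcript["text"])
```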

The model is not reliably suitable for captioning live meetings or broadcasts, and it should not be used in ways that endanger the privacy of Common Voice dataset contributors or of anyone else.

## Training and evaluation data

50% of the official Common Voice `train` split was used as training data and 50% of the official `dev` split as validation data; the full `test` set was used for the final evaluation of the model without LM, while the model with LM was evaluated on only 500 examples from the `test` set.
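
A possible loading sketch using the `datasets` split-slicing syntax; which half of each split was used is not stated, so taking the first 50% is an assumption (the official `dev` split is exposed as `validation`):

```python
from datasets import load_dataset

# Common Voice 8 is gated on the Hub, hence use_auth_token=True.
train_data = load_dataset("mozilla-foundation/common_voice_8_0", "uz",
                          split="train[:50%]", use_auth_token=True)
valid_data = load_dataset("mozilla-foundation/common_voice_8_0", "uz",
                          split="validation[:50%]", use_auth_token=True)
test_data = load_dataset("mozilla-foundation/common_voice_8_0", "uz",
                         split="test", use_auth_token=True)
```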

The kenlm language model was compiled from the target sentences of the `train` + `other` dataset splits.
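
The compilation might look like the sketch below; the n-gram order 5 and the file names are assumptions, while `lmplz` is kenlm's standard ARPA-building tool:

```python
import subprocess
from datasets import load_dataset

# Collect the target sentences from the train + other splits into one file.
with open("lm_corpus.txt", "w", encoding="utf-8") as corpus:
    for split in ("train", "other"):
        ds = load_dataset("mozilla-foundation/common_voice_8_0", "uz",
                          split=split, use_auth_token=True)
        for sentence in ds["sentence"]:
            corpus.write(sentence.lower() + "\n")

# Build an ARPA language model with kenlm's lmplz (order 5 is an assumption).
with open("lm_corpus.txt") as stdin, open("uz_5gram.arpa", "w") as stdout:
    subprocess.run(["lmplz", "-o", "5"], stdin=stdin, stdout=stdout, check=True)
```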

### Training hyperparameters

The following hyperparameters were used during training (a Trainer-API sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- …
- num_epochs: 100.0
- mixed_precision_training: Native AMP
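
As a sketch, the listed values map onto the `transformers` Trainer API as follows; hyperparameters not shown above are left at library defaults rather than guessed, and the output directory name is hypothetical:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls-r-300m-uz-cv8",    # hypothetical name
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    num_train_epochs=100.0,
    fp16=True,                         # "Native AMP" mixed-precision training
)
```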

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|