# FineTuned-hindi-to-english-V8

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.4699
- Rouge1: 77.4859
- Rouge2: 54.5463
- Rougel: 70.7586
- Rougelsum: 72.9591
- Gen Len: 80.2678
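The Rouge scores above are n-gram overlap F-measures between generated and reference translations, scaled to 0–100 (Rouge1 counts unigram overlap, Rouge2 bigram overlap). A minimal sketch of how a ROUGE-N F1 score is computed, using hypothetical sentences rather than actual evaluation data (the reported numbers come from the `rouge_score` package, which additionally applies stemming):

```python
from collections import Counter

def rouge_n_f1(candidate: str, reference: str, n: int = 1) -> float:
    """ROUGE-N F1: n-gram overlap between a candidate and a reference string."""
    def ngrams(text: str, n: int) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example pair (not from the evaluation set)
print(rouge_n_f1("the cat sat on the mat", "the cat is on the mat"))  # ≈ 0.8333
print(rouge_n_f1("the cat sat on the mat", "the cat is on the mat", n=2))  # 0.6
```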
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
- mixed_precision_training: Native AMP
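The original training script is not provided; with the `Trainer` API, the hyperparameters above would map onto a `Seq2SeqTrainingArguments` configuration roughly as follows (a sketch: `output_dir` and the per-epoch evaluation strategy are assumptions, the latter inferred from the one-validation-row-per-epoch results table):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="FineTuned-hindi-to-english-V8",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=14,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumption, inferred from the results table
    predict_with_generate=True,   # needed for ROUGE / Gen Len during eval
)
```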
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.2542        | 1.0   | 750   | 2.4128          | 66.5333 | 38.7744 | 58.3682 | 61.5066   | 77.9543 |
| 2.2929        | 2.0   | 1500  | 2.0835          | 70.0324 | 43.5622 | 62.3036 | 65.2299   | 79.0161 |
| 2.0351        | 3.0   | 2250  | 1.8982          | 72.069  | 46.4931 | 64.6929 | 67.3732   | 80.1859 |
| 1.7712        | 4.0   | 3000  | 1.7791          | 73.5869 | 48.6666 | 66.3649 | 68.8242   | 79.9375 |
| 1.6431        | 5.0   | 3750  | 1.6935          | 74.5655 | 50.0947 | 67.4126 | 69.8943   | 80.2959 |
| 1.4994        | 6.0   | 4500  | 1.6316          | 75.3132 | 51.2895 | 68.3122 | 70.7019   | 80.1347 |
| 1.4154        | 7.0   | 5250  | 1.5856          | 75.9486 | 52.1892 | 69.0828 | 71.4087   | 80.0847 |
| 1.333         | 8.0   | 6000  | 1.5516          | 76.2849 | 52.724  | 69.4646 | 71.7547   | 80.0536 |
| 1.2793        | 9.0   | 6750  | 1.5234          | 76.7711 | 53.4504 | 69.9626 | 72.215    | 80.2093 |
| 1.2219        | 10.0  | 7500  | 1.5043          | 76.9627 | 53.7785 | 70.2119 | 72.4254   | 80.1563 |
| 1.196         | 11.0  | 8250  | 1.4870          | 77.2081 | 54.1476 | 70.4764 | 72.681    | 80.4921 |
| 1.1612        | 12.0  | 9000  | 1.4771          | 77.4156 | 54.445  | 70.7305 | 72.9198   | 80.2194 |
| 1.1388        | 13.0  | 9750  | 1.4715          | 77.4179 | 54.4518 | 70.6541 | 72.8583   | 80.2815 |
| 1.1277        | 14.0  | 10500 | 1.4699          | 77.4859 | 54.5463 | 70.7586 | 72.9591   | 80.2678 |
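The step counts above (750 per epoch, 10,500 total) combined with the linear scheduler mean the learning rate decays from 2e-05 toward 0 over training. A small sketch of that schedule, assuming zero warmup steps (the warmup setting is not reported):

```python
def linear_lr(step: int, total_steps: int = 10500, base_lr: float = 2e-5) -> float:
    """Linearly decay base_lr to 0 over total_steps (assumes no warmup)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0))      # 2e-05 at the start of training
print(linear_lr(5250))   # 1e-05 halfway through (end of epoch 7)
print(linear_lr(10500))  # 0.0 at the final step
```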
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2