# madatnlp/gamza-bart-for-kormath128
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unknown dataset. It achieves the following results at the final training epoch (a minimal usage sketch follows the list):
- Train Loss: 0.1429
- Validation Loss: 0.3575
- Epoch: 42
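
The card does not include a usage example. Below is a minimal, hedged loading sketch using the TensorFlow classes from the Transformers version listed under "Framework versions". The input string and the 128-token limit (implied by the model name) are assumptions, not documented behavior, and `from_pt=True` may be needed if only PyTorch weights are hosted.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "madatnlp/gamza-bart-for-kormath128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder text: the training data is not documented, so the expected
# input format (presumably Korean, up to ~128 tokens) is an assumption.
text = "여기에 입력 문장을 넣으세요."
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=128)
output_ids = model.generate(inputs["input_ids"], max_length=128)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```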
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
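
For context, the optimizer dictionary above corresponds to a standard Keras Adam configuration. The snippet below is a reconstruction from the reported hyperparameters, not the original training script:

```python
import tensorflow as tf

# Reconstructed from the reported hyperparameters; the original training
# script is not published, so treat this as an approximation.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-4,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)
# decay=0.0 in the reported config means no legacy learning-rate decay was
# applied, and training_precision=float32 means mixed precision was not used.
```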
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.9513 | 3.2241 | 0 |
| 2.6808 | 1.8567 | 1 |
| 1.6770 | 1.2966 | 2 |
| 1.2253 | 1.0402 | 3 |
| 1.0279 | 0.9159 | 4 |
| 0.9241 | 0.8158 | 5 |
| 0.8570 | 0.8047 | 6 |
| 0.8130 | 0.7684 | 7 |
| 0.7771 | 0.7817 | 8 |
| 0.7522 | 0.7653 | 9 |
| 0.7318 | 0.6813 | 10 |
| 0.7111 | 0.6535 | 11 |
| 0.6916 | 0.6719 | 12 |
| 0.6901 | 0.7191 | 13 |
| 0.6551 | 0.6330 | 14 |
| 0.6495 | 0.6242 | 15 |
| 0.6258 | 0.6048 | 16 |
| 0.6184 | 0.6590 | 17 |
| 0.6055 | 0.6622 | 18 |
| 0.5946 | 0.6377 | 19 |
| 0.5807 | 0.5994 | 20 |
| 0.5781 | 0.5797 | 21 |
| 0.5644 | 0.6154 | 22 |
| 0.5466 | 0.5777 | 23 |
| 0.5417 | 0.6324 | 24 |
| 0.5204 | 0.5763 | 25 |
| 0.5081 | 0.5751 | 26 |
| 0.4923 | 0.5908 | 27 |
| 0.4616 | 0.5433 | 28 |
| 0.4238 | 0.4823 | 29 |
| 0.3765 | 0.4474 | 30 |
| 0.3447 | 0.4306 | 31 |
| 0.3156 | 0.3817 | 32 |
| 0.2832 | 0.3824 | 33 |
| 0.2632 | 0.3204 | 34 |
| 0.2365 | 0.3539 | 35 |
| 0.2179 | 0.3162 | 36 |
| 0.2024 | 0.3385 | 37 |
| 0.1860 | 0.3367 | 38 |
| 0.1801 | 0.3019 | 39 |
| 0.1629 | 0.3045 | 40 |
| 0.1533 | 0.2567 | 41 |
| 0.1429 | 0.3575 | 42 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
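
To reproduce the environment, it can help to verify that the installed packages match the versions above; the check below is a convenience sketch, not part of the original card:

```python
import datasets
import tensorflow
import tokenizers
import transformers

# Versions reported on this card: Transformers 4.18.0, TensorFlow 2.8.0,
# Datasets 2.1.0, Tokenizers 0.12.1.
for name, module in [
    ("transformers", transformers),
    ("tensorflow", tensorflow),
    ("datasets", datasets),
    ("tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```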