---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_keras_callback
model-index:
- name: pijarcandra22/BartIndo2Bali
results: []
---
# pijarcandra22/BartIndo2Bali
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unspecified dataset.
It achieves the following results on the evaluation set at the final training epoch:
- Train Loss: 0.7819
- Validation Loss: 2.1042
- Epoch: 43
## Model description
Based on the repository name, this appears to be a BART-based sequence-to-sequence model fine-tuned for Indonesian-to-Balinese translation. The authors have not yet provided further details.
## Intended uses & limitations
The intended use is not documented. Judging from the training results below, validation loss stops improving after roughly epoch 27 while training loss continues to fall, so the final checkpoint may be somewhat overfit. A minimal usage sketch follows.
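A minimal inference sketch, assuming the model is used for Indonesian-to-Balinese translation as the repository name suggests; the input sentence and generation settings are illustrative assumptions, not documented by the authors:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("pijarcandra22/BartIndo2Bali")
model = TFAutoModelForSeq2SeqLM.from_pretrained("pijarcandra22/BartIndo2Bali")

# Example Indonesian input; the expected preprocessing is an assumption.
inputs = tokenizer("Selamat pagi, apa kabar?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```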
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
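The serialized optimizer config above can be re-created with the `AdamWeightDecay` class shipped with Transformers. This is only a sketch of the recorded settings; details such as weight-decay exclusion lists are not captured in the config:

```python
from transformers import AdamWeightDecay

# Re-create the optimizer from the recorded Keras config (sketch).
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    weight_decay_rate=0.01,
    amsgrad=False,
)
```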
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.3767 | 3.6194 | 0 |
| 3.5364 | 3.1996 | 1 |
| 3.1525 | 2.9458 | 2 |
| 2.8777 | 2.8118 | 3 |
| 2.6993 | 2.6979 | 4 |
| 2.5550 | 2.6071 | 5 |
| 2.4536 | 2.5362 | 6 |
| 2.3338 | 2.4572 | 7 |
| 2.2394 | 2.3878 | 8 |
| 2.1466 | 2.3692 | 9 |
| 2.0795 | 2.3189 | 10 |
| 2.0061 | 2.2674 | 11 |
| 1.9321 | 2.2393 | 12 |
| 1.8837 | 2.2181 | 13 |
| 1.8224 | 2.2002 | 14 |
| 1.7626 | 2.1671 | 15 |
| 1.7251 | 2.1386 | 16 |
| 1.6624 | 2.1245 | 17 |
| 1.6191 | 2.1134 | 18 |
| 1.6177 | 2.1061 | 19 |
| 1.5524 | 2.0845 | 20 |
| 1.4965 | 2.0750 | 21 |
| 1.4618 | 2.0527 | 22 |
| 1.4188 | 2.0584 | 23 |
| 1.3774 | 2.0359 | 24 |
| 1.3469 | 2.0567 | 25 |
| 1.3113 | 2.0295 | 26 |
| 1.2791 | 2.0134 | 27 |
| 1.2436 | 2.0431 | 28 |
| 1.1915 | 2.0201 | 29 |
| 1.1815 | 2.0283 | 30 |
| 1.1314 | 2.0230 | 31 |
| 1.1071 | 2.0424 | 32 |
| 1.0781 | 2.0357 | 33 |
| 1.0429 | 2.0208 | 34 |
| 1.0134 | 2.0458 | 35 |
| 0.9799 | 2.0466 | 36 |
| 0.9567 | 2.0592 | 37 |
| 0.9261 | 2.0278 | 38 |
| 0.8931 | 2.0641 | 39 |
| 0.8742 | 2.0783 | 40 |
| 0.8397 | 2.0781 | 41 |
| 0.8228 | 2.1010 | 42 |
| 0.7819 | 2.1042 | 43 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0