---
language:
- ja
- ko
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: jako_mbartLarge_13p_run1
  results: []
---
# jako_mbartLarge_13p_run1
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0819
- Bleu: 29.1055
- Gen Len: 18.2731
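A minimal usage sketch, assuming the ja→ko direction implied by the model name; the repo id below is a hypothetical placeholder, so substitute the actual Hub path or a local checkpoint directory:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Hypothetical repo id; replace with the real Hub path or local checkpoint.
model_id = "your-username/jako_mbartLarge_13p_run1"

model = MBartForConditionalGeneration.from_pretrained(model_id)
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)

# mBART-50 requires explicit language codes; ja->ko is assumed from "jako".
tokenizer.src_lang = "ja_XX"
inputs = tokenizer("猫はソファーの上で寝ている。", return_tensors="pt")  # "The cat is sleeping on the sofa."

generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```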
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
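A hedged reconstruction of these settings as `Seq2SeqTrainingArguments`; the `output_dir` is hypothetical, and the multi-GPU part (num_devices: 4) comes from the launcher rather than an argument, so per-device batch 4 × 4 GPUs × 2 accumulation steps gives the effective train batch of 32:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: output_dir is a placeholder; launch with e.g.
# `torchrun --nproc_per_node=4 train.py` to reproduce the 4-GPU setup.
training_args = Seq2SeqTrainingArguments(
    output_dir="jako_mbartLarge_13p_run1",
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # x 4 GPUs x 2 accumulation = 32 effective
    per_device_eval_batch_size=4,    # x 4 GPUs = 16 effective
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=300,
    num_train_epochs=10,
    seed=42,
    predict_with_generate=True,      # assumed: needed for Bleu / Gen Len at eval
)
```

The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) are the `Trainer` defaults, so no extra arguments are needed for them.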
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.3826        | 0.39  | 1500  | 1.2989          | 22.0729 | 18.9717 |
| 1.1964        | 0.78  | 3000  | 1.1630          | 25.4863 | 19.1908 |
| 0.9449        | 1.17  | 4500  | 1.1125          | 27.385  | 18.2955 |
| 0.8102        | 1.56  | 6000  | 1.0920          | 28.0041 | 18.6572 |
| 0.7692        | 1.95  | 7500  | 1.0819          | 29.1055 | 18.2731 |
| 0.5741        | 2.34  | 9000  | 1.1369          | 28.1574 | 18.3485 |
| 0.5198        | 2.73  | 10500 | 1.1538          | 28.657  | 18.4527 |
| 0.4532        | 3.12  | 12000 | 1.1582          | 28.6914 | 18.4562 |
| 0.3466        | 3.51  | 13500 | 1.2048          | 28.8955 | 18.427  |
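The Bleu and Gen Len columns are generation-time eval metrics; a minimal `compute_metrics` sketch that would produce them with the `evaluate` library and sacrebleu, assuming the base model's MBart50 tokenizer:

```python
import numpy as np
import evaluate
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50-many-to-many-mmt"
)
bleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Labels pad with -100; restore pad_token_id before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = bleu.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
    # Gen Len: mean token count of generated sequences, excluding padding.
    gen_len = np.mean(
        [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
    )
    return {"bleu": round(result["score"], 4), "gen_len": round(float(gen_len), 4)}
```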
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1