En-Tn_update

This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-tn (English to Setswana) on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1300
  • BLEU: 39.1470

Model description

More information needed

Intended uses & limitations

More information needed
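The base checkpoint is a Marian English-to-Setswana translation model, so the fine-tuned model can be loaded with the standard transformers translation pipeline. A minimal usage sketch, assuming the base repo id (substitute this card's actual model id when published):

```python
# Hedged usage sketch: loading an English->Setswana Marian checkpoint with
# the transformers translation pipeline. The model id below is the base
# checkpoint, used as a placeholder for this card's repo id.
from transformers import pipeline

def translate(text, model_id="Helsinki-NLP/opus-mt-en-tn"):
    """Translate English text to Setswana.

    The pipeline is built inside the function so that importing this
    module does not trigger a model download.
    """
    translator = pipeline("translation", model=model_id)
    return translator(text)[0]["translation_text"]

# Example (downloads the model weights on first call):
# print(translate("Good morning"))
```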

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10

Training results

Epoch  Training Loss  Validation Loss  BLEU
  1       1.929300        1.884056     29.762382
  2       1.637300        1.605588     32.846868
  3       1.500000        1.457442     34.307484
  4       1.402400        1.356578     35.423774
  5       1.324000        1.276492     36.553368
  6       1.251300        1.221768     37.464270
  7       1.224700        1.181320     38.157490
  8       1.193200        1.152997     38.800566
  9       1.166700        1.136147     38.985707
 10       1.142500        1.130020     39.209327

Framework versions

  • Transformers 4.21.0
  • PyTorch 1.12.0+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1