
opus-mt-tc-big-en-ar-finetuned-franco-to-arabic-1

This model is a fine-tuned version of Helsinki-NLP/opus-mt-tc-big-en-ar on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.1589
  • Bleu: 27.737
  • Gen Len: 20.9565
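
Usage details are not documented below, but since the base checkpoint is a Marian translation model, the fine-tuned repository can presumably be loaded with the standard Transformers pipeline API. A minimal sketch follows; the repository id is the one shown on this card, and the Franco-Arabic (Arabizi) input sentence is an invented example, not taken from the training data.

```python
# Minimal inference sketch, assuming the checkpoint is published on the
# Hugging Face Hub under the repository id shown on this card.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="mohamedtolba/opus-mt-tc-big-en-ar-finetuned-franco-to-arabic-1",
)

# Hypothetical Franco-Arabic (Arabizi) input; substitute your own text.
print(translator("ezayak 3amel eh?", max_length=64)[0]["translation_text"])
```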

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
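
A hedged sketch of how these values map onto Seq2SeqTrainingArguments in Transformers 4.31 is shown below. Only the values listed above come from this card; the output directory, evaluation strategy, and predict_with_generate setting are assumptions.

```python
# Hedged reconstruction of the training configuration; only the values listed
# on this card are reproduced, everything else (output_dir, evaluation
# strategy, predict_with_generate) is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-tc-big-en-ar-finetuned-franco-to-arabic-1",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",   # assumed; matches the per-epoch results below
    predict_with_generate=True,    # assumed; needed to report Bleu and Gen Len
)
```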

Training results

Training Loss  Epoch  Step  Validation Loss  Bleu     Gen Len
No log         1.0    13    3.2371           9.9987   21.0
No log         2.0    26    3.1023           10.8123  19.7826
No log         3.0    39    2.9474           9.0815   20.4783
No log         4.0    52    2.8274           11.6434  21.3478
No log         5.0    65    2.6879           11.6494  21.4348
No log         6.0    78    2.6243           16.1661  21.2174
No log         7.0    91    2.5342           16.5707  22.1304
No log         8.0    104   2.5489           17.885   20.2609
No log         9.0    117   2.4488           17.6415  20.4348
No log         10.0   130   2.4177           18.5943  20.5652
No log         11.0   143   2.3239           20.0857  21.3043
No log         12.0   156   2.2723           22.9544  21.4348
No log         13.0   169   2.2456           25.4455  21.5217
No log         14.0   182   2.2628           24.8757  21.2609
No log         15.0   195   2.2229           26.0051  21.3478
No log         16.0   208   2.2090           27.6742  20.8261
No log         17.0   221   2.1762           26.7048  21.3913
No log         18.0   234   2.1786           27.7328  21.1304
No log         19.0   247   2.1631           27.7187  21.2174
No log         20.0   260   2.1589           27.737   20.9565
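
The Bleu and Gen Len columns are the usual Seq2SeqTrainer generation metrics (corpus BLEU and mean generated length in tokens); the "No log" entries presumably mean the run's 260 optimization steps never reached the Trainer's default logging interval. Below is a sketch of a typical compute_metrics function that produces such columns with the evaluate library; it is an illustration, not the exact function used for this run.

```python
# Illustrative compute_metrics for a Seq2SeqTrainer run like the one above;
# this exact function is an assumption, not taken from the card.
import numpy as np
import evaluate
from transformers import AutoTokenizer

# Tokenizer of the base checkpoint named on this card.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-en-ar")
bleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Labels are padded with -100; restore the pad token id before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = bleu.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    gen_len = np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
    return {"bleu": result["score"], "gen_len": float(gen_len)}
```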

Framework versions

  • Transformers 4.31.0
  • PyTorch 2.0.1+cu118
  • Tokenizers 0.13.3