
ind-to-bbc-nmt-v6

This model is a fine-tuned version of facebook/nllb-200-distilled-600M on the nusatranslation_mt dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1587
  • Sacrebleu: 31.0331
  • Gen Len: 45.1815
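
Below is a minimal inference sketch for this checkpoint. It assumes the fine-tuned tokenizer exposes NLLB-style FLORES-200 language codes, with ind_Latn for Indonesian and bbc_Latn for Batak Toba; the target code in particular is an assumption, since bbc is not in the base NLLB-200 language list and may have been added during fine-tuning.

```python
# Minimal inference sketch. ASSUMPTION: the tokenizer uses FLORES-style
# codes "ind_Latn" (source) and "bbc_Latn" (target); verify against the
# checkpoint's special tokens before relying on this.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "kepinsam/ind-to-bbc-nmt-v6"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="ind_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Selamat pagi, apa kabar?"  # Indonesian source sentence
inputs = tokenizer(text, return_tensors="pt")

# NLLB decoding starts from a forced target-language token.
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("bbc_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```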

Model description

Based on the model name and training data, this is a neural machine translation model for Indonesian (ind) to Batak Toba (bbc), obtained by fine-tuning facebook/nllb-200-distilled-600M on the nusatranslation_mt dataset. No further description is provided.

Intended uses & limitations

More information needed

Training and evaluation data

The model was trained and evaluated on the Indonesian–Batak Toba portion of the nusatranslation_mt dataset; details of the splits and preprocessing are not provided.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the Seq2SeqTrainingArguments sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
  • mixed_precision_training: Native AMP
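
These settings map directly onto Hugging Face Seq2SeqTrainingArguments. The sketch below is an assumption about how the run was configured, not the author's actual script; the Adam betas and epsilon listed above are the optimizer defaults, so they need no explicit arguments.

```python
# Sketch of training arguments matching the listed hyperparameters.
# ASSUMPTION: the run used the standard Seq2SeqTrainer workflow; dataset
# loading, preprocessing, and the Trainer itself are elided.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ind-to-bbc-nmt-v6",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="epoch",  # matches the per-epoch results below
    predict_with_generate=True,   # needed for SacreBLEU / Gen Len
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the defaults.
)
```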

Training results

| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 4.3415        | 1.0   | 825  | 1.6110          | 27.0363   | 45.267  |
| 1.415         | 2.0   | 1650 | 1.2550          | 30.5956   | 45.5    |
| 1.1044        | 3.0   | 2475 | 1.1769          | 31.2342   | 45.4315 |
| 0.951         | 4.0   | 3300 | 1.1532          | 31.8633   | 45.149  |
| 0.8409        | 5.0   | 4125 | 1.1340          | 31.5171   | 45.355  |
| 0.7582        | 6.0   | 4950 | 1.1273          | 31.0686   | 45.222  |
| 0.6937        | 7.0   | 5775 | 1.1387          | 31.3129   | 45.1355 |
| 0.6433        | 8.0   | 6600 | 1.1479          | 31.444    | 45.233  |
| 0.6056        | 9.0   | 7425 | 1.1521          | 31.3122   | 45.0945 |
| 0.5819        | 10.0  | 8250 | 1.1587          | 31.0331   | 45.1815 |
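
For reference, the SacreBLEU and Gen Len columns are the kind of metrics typically produced by a compute_metrics callback like the sketch below. This is an assumption about the evaluation code, not the author's actual implementation.

```python
# Sketch of a compute_metrics callback producing SacreBLEU and mean
# generation length. ASSUMPTION: the Trainer runs with
# predict_with_generate=True, so predictions are generated token ids.
import evaluate
import numpy as np


def make_compute_metrics(tokenizer):
    sacrebleu = evaluate.load("sacrebleu")

    def compute_metrics(eval_preds):
        preds, labels = eval_preds
        # Replace the -100 label padding before decoding.
        labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
        decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
        decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
        result = sacrebleu.compute(
            predictions=decoded_preds,
            references=[[ref] for ref in decoded_labels],
        )
        gen_len = np.mean(
            [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
        )
        return {"sacrebleu": result["score"], "gen_len": float(gen_len)}

    return compute_metrics
```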

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.14.6
  • Tokenizers 0.19.1
