nllb-indo-en

This model is a fine-tuned version of facebook/nllb-200-distilled-600M on the FLEURS dataset (with duplicated ids removed). It achieves the following results on the evaluation set:

  • Loss: 0.3048
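
A minimal usage sketch for Indonesian-to-English translation with this checkpoint; the FLORES-200 language codes `ind_Latn` and `eng_Latn` and the example sentence are illustrative assumptions, not taken from this card:

```python
# Minimal inference sketch (assumed language codes: ind_Latn -> eng_Latn).
from transformers import pipeline

translator = pipeline(
    "translation",
    model="cobrayyxx/nllb-indo-en",
    src_lang="ind_Latn",
    tgt_lang="eng_Latn",
)

# Example input; replace with your own Indonesian text.
print(translator("Selamat pagi, apa kabar?", max_length=128)[0]["translation_text"])
```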

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • training_steps: 1000
  • mixed_precision_training: Native AMP
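
A sketch of these settings expressed as transformers Seq2SeqTrainingArguments; the output directory and evaluation cadence are assumptions for illustration, not stated on this card:

```python
# Reproduction sketch only; output_dir and eval cadence are assumed, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-indo-en",           # assumed placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,       # total train batch size: 2 * 8 = 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=1000,
    fp16=True,                           # native AMP mixed precision
    eval_strategy="epoch",               # assumed from the per-epoch results below
)
```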

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|---------------|---------|------|-----------------|
| No log        | 1.0     | 96   | 4.3318          |
| 53.1274       | 2.0     | 192  | 2.0475          |
| 25.7634       | 3.0     | 288  | 0.4936          |
| 8.4388        | 4.0     | 384  | 0.2444          |
| 1.7896        | 5.0     | 480  | 0.2407          |
| 0.8853        | 6.0     | 576  | 0.2626          |
| 0.5583        | 7.0     | 672  | 0.2793          |
| 0.4353        | 8.0     | 768  | 0.2936          |
| 0.3497        | 9.0     | 864  | 0.2992          |
| 0.2969        | 10.0    | 960  | 0.3038          |
| 0.2713        | 10.4199 | 1000 | 0.3048          |

Model Evaluation

The performance of this model was evaluated using the BLEU, chrF, and chrF++ metrics on the validation dataset; a sketch of how these scores can be computed follows the table.

| BLEU  | chrF  | chrF++ |
|-------|-------|--------|
| 81.53 | 89.48 | 89.67  |
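
A sketch of computing these metrics with the sacrebleu library; the `predictions` and `references` lists are placeholders, not the actual validation data:

```python
# Metric computation sketch; predictions/references below are dummy placeholders.
import sacrebleu

predictions = ["Good morning, how are you?"]
references = [["Good morning, how are you?"]]   # one reference stream

bleu = sacrebleu.corpus_bleu(predictions, references)
chrf = sacrebleu.corpus_chrf(predictions, references)                    # chrF
chrf_pp = sacrebleu.corpus_chrf(predictions, references, word_order=2)   # chrF++

print(f"BLEU:   {bleu.score:.2f}")
print(f"chrF:   {chrf.score:.2f}")
print(f"chrF++: {chrf_pp.score:.2f}")
```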

Framework versions

  • Transformers 4.47.1
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0