---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
  - generated_from_trainer
model-index:
  - name: nllb-indo-en-cleaned
    results: []
datasets:
  - cobrayyxx/FLEURS_INDO-ENG_Speech_Translation_No_Duplicate
language:
  - id
  - en
metrics:
  - bleu
  - chrf
pipeline_tag: translation
---

# nllb-indo-en

This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the [FLEURS Indonesian-English dataset with duplicate IDs removed](https://huggingface.co/datasets/cobrayyxx/FLEURS_INDO-ENG_Speech_Translation_No_Duplicate). It achieves the following results on the evaluation set:

- Loss: 0.3048
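
For reference, a minimal inference sketch using the `transformers` translation pipeline is shown below. The model id `cobrayyxx/nllb-indo-en` is assumed from this repository's name, and `ind_Latn`/`eng_Latn` are the standard NLLB language codes for Indonesian and English:

```python
# Minimal inference sketch; the model id below is assumed from the
# repository name and is not stated elsewhere in this card.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="cobrayyxx/nllb-indo-en",
    src_lang="ind_Latn",  # NLLB code for Indonesian
    tgt_lang="eng_Latn",  # NLLB code for English
)

result = translator("Selamat pagi, apa kabar?")
print(result[0]["translation_text"])  # e.g. "Good morning, how are you?"
```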

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
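
As referenced above, the sketch below is a hypothetical reconstruction of how these values map onto `Seq2SeqTrainingArguments`; the actual training script is not part of this card, so `output_dir` and any omitted arguments are assumptions:

```python
# Hypothetical reconstruction of the training configuration from the
# hyperparameter list above; not the original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-indo-en",      # assumed output path
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 2 * 8 = 16
    optim="adamw_torch",            # AdamW, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    max_steps=1000,
    fp16=True,                      # native AMP mixed precision
)
```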

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log        | 1.0     | 96   | 4.3318          |
| 53.1274       | 2.0     | 192  | 2.0475          |
| 25.7634       | 3.0     | 288  | 0.4936          |
| 8.4388        | 4.0     | 384  | 0.2444          |
| 1.7896        | 5.0     | 480  | 0.2407          |
| 0.8853        | 6.0     | 576  | 0.2626          |
| 0.5583        | 7.0     | 672  | 0.2793          |
| 0.4353        | 8.0     | 768  | 0.2936          |
| 0.3497        | 9.0     | 864  | 0.2992          |
| 0.2969        | 10.0    | 960  | 0.3038          |
| 0.2713        | 10.4199 | 1000 | 0.3048          |

## Model evaluation

The model's performance was evaluated with the BLEU and chrF metrics on the validation set.

| BLEU  | chrF  |
|:-----:|:-----:|
| 40.94 | 66.46 |
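
This card does not include the evaluation script; the sketch below shows one common way to compute both metrics with the `evaluate` library, which wraps sacreBLEU. The example sentences are placeholders:

```python
# Illustrative scoring sketch with the `evaluate` library; the predictions
# and references here are placeholders, not the card's validation data.
import evaluate

bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")

predictions = ["Good morning, how are you?"]         # model outputs
references = [["Good morning, how are you doing?"]]  # gold translations

print(bleu.compute(predictions=predictions, references=references)["score"])
print(chrf.compute(predictions=predictions, references=references)["score"])
```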

## Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0