---
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nllb-200-distilled-600M_dyu-fra
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/tanihaha/huggingface/runs/3q6o5id1)

# nllb-200-distilled-600M_dyu-fra

This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) for Dyula-to-French (dyu→fra) translation, trained on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1351
- Bleu: 8.0857
- Gen Len: 12.4534

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch mirroring these values appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log        | 1.0   | 127  | 2.2543          | 6.9492 | 12.5554 |
| No log        | 2.0   | 254  | 2.1730          | 7.8245 | 12.363  |
| No log        | 3.0   | 381  | 2.1442          | 8.0013 | 12.4174 |
| 2.4578        | 4.0   | 508  | 2.1351          | 8.0857 | 12.4534 |

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
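
## How to use

A minimal inference sketch. The repo id below is a placeholder (substitute the actual checkpoint location), and the standard NLLB-200 language codes `dyu_Latn` (Dyula) and `fra_Latn` (French) are assumed for this checkpoint:

```python
# Minimal inference sketch, assuming the standard NLLB-200 language codes
# dyu_Latn (Dyula) and fra_Latn (French). The repo id is a placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "path/to/nllb-200-distilled-600M_dyu-fra"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="dyu_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "I ni sɔgɔma"  # example Dyula input
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    # Force French as the target language for generation.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```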
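
For reference, the hyperparameters listed under Training procedure map onto `Seq2SeqTrainingArguments` roughly as follows. The output directory, evaluation strategy, and `predict_with_generate` flag are assumptions, not recorded in this card:

```python
# Sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# output_dir, eval_strategy, and predict_with_generate are assumptions;
# the Adam betas/epsilon listed in the card match the optimizer defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-200-distilled-600M_dyu-fra",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    fp16=True,                   # "Native AMP" mixed precision
    eval_strategy="epoch",       # assumed: metrics above are reported per epoch
    predict_with_generate=True,  # assumed: required to compute BLEU / Gen Len
)
```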