# SmolLM-135M-ft-ary

## Model Description
This model is a fine-tuned version of HuggingFaceTB/SmolLM-135M on the sawalni-ai/fw-darija dataset.
- Developed by: EL MAJJODI Abdeljalil & Omneity Labs team
- Model type: Text Generation
- Language(s) (NLP): Moroccan Darija (ISO 639-3: `ary`)
- Finetuned from model: HuggingFaceTB/SmolLM-135M
It achieves the following results on the evaluation set:
- Loss: 1.7018
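Since the reported loss is a mean cross-entropy in nats per token, it corresponds to a perplexity of roughly 5.5. A minimal sketch of the conversion (the interpretation of the loss as per-token cross-entropy is an assumption, though it is the `transformers` default for causal LM evaluation):

```python
import math

eval_loss = 1.7018  # validation loss reported on this card
# Perplexity is exp(cross-entropy); assumes the loss is mean nats/token.
perplexity = math.exp(eval_loss)
print(f"perplexity ~ {perplexity:.2f}")  # ~ 5.48
```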
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with:
  - betas=(0.9, 0.999)
  - epsilon=1e-08
  - no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
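The hyperparameters above can be collected into a plain Python dict for reuse. This is a sketch only: the key names mirror `transformers`' `TrainingArguments` conventions, but this is not the authors' actual training configuration.

```python
# Hyperparameters from this model card, gathered as a plain dict.
# Key names follow transformers' TrainingArguments naming (an assumption);
# this is illustrative, not the original training script.
hyperparameters = {
    "learning_rate": 2e-05,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "optim": "adamw_torch",
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 1,
}
print(hyperparameters)
```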
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|---------------|-------|-------|-----------------|
| 1.7026        | 1.0   | 68699 | 1.7018          |
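As a sanity check, the step count implies a training set of roughly 550k examples, assuming a single process and no gradient accumulation (neither is stated on this card):

```python
steps_per_epoch = 68699  # from the training results table
train_batch_size = 8     # from the hyperparameters

# Assuming one process and no gradient accumulation (not stated above),
# examples seen in one epoch = steps * batch size:
approx_examples = steps_per_epoch * train_batch_size
print(approx_examples)  # 549592
```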
### Framework versions
- Transformers 4.47.0
- Pytorch 2.1.1+cu121
- Datasets 3.1.0
- Tokenizers 0.21.0