---
datasets:
- sawalni-ai/fw-darija
language:
- ar
base_model:
- HuggingFaceTB/SmolLM-135M
pipeline_tag: text-generation
---
|
# SmolLM-135M-ft-ary
|
|
|
### Model Description
|
|
|
|
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M](https://huggingface.co/HuggingFaceTB/SmolLM-135M) on the [sawalni-ai/fw-darija](https://huggingface.co/datasets/sawalni-ai/fw-darija) dataset.
|
|
|
- **Developed by:** EL MAJJODI Abdeljalil and the Omneity Labs team

- **Model type:** Text generation

- **Language(s) (NLP):** Moroccan Darija (`ary`)

- **Finetuned from model:** HuggingFaceTB/SmolLM-135M
|
|
|
It achieves the following results on the evaluation set:

- **Loss**: 1.7018
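
### How to use

The snippet below is a minimal usage sketch with the `transformers` library. The repository id is an assumption inferred from this card's title; replace it with the actual Hugging Face repo id if it differs.

```python
# Minimal usage sketch. The repo id below is assumed from the card title;
# replace it with the actual repository id if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sawalni-ai/SmolLM-135M-ft-ary"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation for a Darija prompt.
inputs = tokenizer("سلام، كيداير؟", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```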
|
|
|
## Training procedure
|
|
|
### Training hyperparameters
|
|
|
The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 2e-05

- train_batch_size: 8

- eval_batch_size: 8

- seed: 42

- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments

- lr_scheduler_type: linear

- num_epochs: 1
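
As a rough guide, these settings map onto the `transformers` `TrainingArguments` fields as sketched below. Whether the run used the standard `Trainer` is an assumption, and the output directory is hypothetical.

```python
# Hypothetical reconstruction of the hyperparameters above, assuming the
# standard Hugging Face Trainer API was used; output_dir is made up.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SmolLM-135M-ft-ary",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```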
|
|
|
### Training results
|
|
|
| Training Loss | Epoch | Step | Validation Loss | |
|
|:-------------:|:-----:|:-----:|:---------------:| |
|
| 1.7026 | 1.0 | 68699 | 1.7018 | |
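
If the validation loss is the mean per-token cross-entropy (the usual `Trainer` convention, assumed here), it corresponds to a perplexity of exp(1.7018) ≈ 5.48 on the evaluation set.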
|
|
|
|
|
### Framework versions
|
|
|
- Transformers 4.47.0

- PyTorch 2.1.1+cu121

- Datasets 3.1.0

- Tokenizers 0.21.0