---
license: apache-2.0
language:
- is
base_model:
- HuggingFaceTB/SmolLM2-135M-Instruct
datasets:
- HuggingFaceFW/fineweb-2
library_name: peft
pipeline_tag: text-generation
---

This model is SmolLM2-135M-Instruct fine-tuned on (so far, a part of) the Swedish portion of FineWeb-2. It is intended for my own research and has not been evaluated more broadly yet.

LoRA setup (see the `LoraConfig` sketch below):
- Rank: 256
- Alpha: 512
- Target modules: ["up_proj", "down_proj", "gate_proj", "o_proj"]

Training (see the `TrainingArguments` sketch below):
- Epochs: 1
- Learning rate: 8e-4
- LR scheduler: cosine
- Warmup ratio: 0.05
- Per-device batch size: 1
- GPUs: 4× A100 (40 GB)
- Gradient accumulation steps: 64
- Effective batch size: 256 (1 × 64 × 4)
- Max. context length: 8192 tokens
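
For reference, the LoRA hyperparameters above correspond to a `peft` `LoraConfig` roughly as follows. This is a minimal sketch, not the exact training script; anything not listed above (e.g. dropout or bias handling) is left at the library defaults.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model from this card's metadata.
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

# LoRA hyperparameters as listed above; unlisted options keep the peft defaults.
lora_config = LoraConfig(
    r=256,
    lora_alpha=512,
    target_modules=["up_proj", "down_proj", "gate_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```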
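
The training recipe maps onto `transformers` `TrainingArguments` along these lines. This is a sketch under assumptions: `output_dir` is a placeholder, and `bf16` mixed precision is assumed for the A100s but is not stated above. The effective batch size of 256 comes from 1 (per device) × 64 (accumulation steps) × 4 (GPUs); the 8192-token context length is applied when tokenizing, not here.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smollm2-135m-fineweb2-lora",  # placeholder
    num_train_epochs=1,
    learning_rate=8e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=1,   # 1 per GPU, on 4 GPUs
    gradient_accumulation_steps=64,  # 1 * 64 * 4 = 256 effective
    bf16=True,  # assumption: mixed precision not stated in this card
)
```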
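
To try the adapter, loading it on top of the base model should look roughly like this; `"<this-repo-id>"` is a placeholder for this repository's id.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# "<this-repo-id>" is a placeholder for this adapter repository.
model = PeftModel.from_pretrained(model, "<this-repo-id>")

inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```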