---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: internlm/internlm2_5-20b-chat
model-index:
- name: internlm2_5-20b-chat
  results: []
---

# internlm2_5-20b-chat

This model is a fine-tuned version of [internlm/internlm2_5-20b-chat](https://huggingface.co/internlm/internlm2_5-20b-chat) on the alpaca_mgtv_p2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2401

## Model description

This is a LoRA adapter for [internlm/internlm2_5-20b-chat](https://huggingface.co/internlm/internlm2_5-20b-chat), trained with [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) using PEFT.

## Intended uses & limitations

More information needed

## Training and evaluation data

The adapter was trained and evaluated on the alpaca_mgtv_p2 dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4016        | 0.1990 | 35   | 0.3346          |
| 0.3059        | 0.3980 | 70   | 0.2825          |
| 0.2736        | 0.5970 | 105  | 0.2758          |
| 0.2667        | 0.7960 | 140  | 0.2577          |
| 0.2765        | 0.9950 | 175  | 0.2499          |
| 0.2345        | 1.1940 | 210  | 0.2490          |
| 0.2714        | 1.3930 | 245  | 0.2440          |
| 0.2428        | 1.5920 | 280  | 0.2438          |
| 0.2244        | 1.7910 | 315  | 0.2411          |
| 0.2306        | 1.9900 | 350  | 0.2401          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.43.3
- PyTorch 2.4.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
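
### Hyperparameters as `TrainingArguments` (sketch)

For reference, the hyperparameters listed above map onto `transformers` `TrainingArguments` roughly as follows. This is a sketch of equivalent settings, not the exact LLaMA-Factory launch configuration; `output_dir` is a placeholder, and the Adam betas/epsilon shown match the listed values (which are also the optimizer defaults).

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; output_dir is a placeholder,
# since the actual run was launched through LLaMA-Factory.
args = TrainingArguments(
    output_dir="internlm2_5-20b-chat-lora",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,  # 16 * 8 = 128 total train batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2.0,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```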
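
## How to use

A minimal inference sketch, assuming the adapter weights are available locally or on the Hub; the adapter path below is a placeholder. InternLM2.5 ships custom modeling code, so `trust_remote_code=True` is required with the Transformers version listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "internlm/internlm2_5-20b-chat"
adapter_path = "path/to/this-adapter"  # placeholder: local dir or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, adapter_path)  # attach the LoRA adapter
model.eval()

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For deployment without `peft`, the LoRA weights can be folded into the base model with `model.merge_and_unload()` and saved via `save_pretrained`.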