Fine-tuning parameters
Hi, can you share the fine-tuning hyperparameters?
I have fine-tuned https://huggingface.co/mistralai/Mistral-7B-v0.1 with your dataset, but the ARC and HellaSwag metrics decrease significantly during training.
Here is some information about my hyperparameters (a rough sketch of the equivalent training setup follows the list):
- full-parameter fine-tuning
- learning_rate = 5e-6
- batch_size = 64
- epochs = 3
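For concreteness, here is a minimal sketch of what this configuration could look like with the Hugging Face `Trainer`. The dataset formatting is a naive assumption on my part (SlimOrca stores ShareGPT-style `conversations`, and a real run would need a proper chat template), as is the GPU count used to reach the effective batch size of 64:

```python
# Minimal sketch of the setup above; dataset handling is simplified.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("Open-Orca/SlimOrca", split="train")

def to_features(example):
    # Naive flattening of the ShareGPT-style turns into one training string;
    # a real run should apply the model's chat template instead.
    turns = [f'{t["from"]}: {t["value"]}' for t in example["conversations"]]
    return tokenizer("\n".join(turns), truncation=True, max_length=2048)

tokenized = raw.map(to_features, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="mistral-7b-slimorca",
    learning_rate=5e-6,              # the setting I used
    num_train_epochs=3,
    per_device_train_batch_size=8,   # assumption: 8 GPUs -> effective batch of 64
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```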
The batch_size of 64 is too large; setting it to 4 or 6 would be better.
I would also prefer to set the learning rate to 2e-5.
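In terms of the sketch above, that suggestion would translate to something like the snippet below. Note that I am assuming "batch_size" here means the per-device batch size:

```python
# Suggested settings substituted into the sketch above (my interpretation).
args = TrainingArguments(
    output_dir="mistral-7b-slimorca",
    learning_rate=2e-5,             # suggested learning rate
    num_train_epochs=3,
    per_device_train_batch_size=4,  # suggested batch size (4 or 6)
    bf16=True,
)
```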
Thanks for your response~
I will try it!
@xDAN2099 and others, I'm trying to fine-tune Mistral 7B with SlimOrca, and the MT-Bench score is consistently well below the OpenOrca benchmarked 6.84. Could you please share the exact training script, along with any other details on the hardware used, to get an on-par MT-Bench score? Any help in that regard is much appreciated, thank you!