---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: alpindale/Mistral-7B-v0.2
---
# Mistral-7B-v0.2-OpenHermes
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/AbagOgU056oIB7S31XESC.webp)
SFT Training Params:
+ Learning Rate: 2e-4
+ Batch Size: 8
+ Gradient Accumulation steps: 4
+ Dataset: teknium/OpenHermes-2.5 (a 200k-example split; the sample skews slightly toward roleplay and theory-of-life prompts)
+ LoRA rank (r): 16
+ LoRA alpha: 16
Training Time: 13 hours on a single A100 (see the training sketch below)
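
The parameters above correspond to a standard Unsloth + TRL QLoRA run. Below is a minimal sketch of how they might map onto code; the sequence length, 4-bit loading, LoRA target modules, epoch count, and chat-formatting function are assumptions not documented on this card, and the exact Unsloth/TRL API surface varies by version.

```python
# Minimal sketch of the SFT setup described above (assumptions noted inline).
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="alpindale/Mistral-7B-v0.2",
    max_seq_length=4096,   # assumption: sequence length is not stated on this card
    load_in_4bit=True,     # assumption: typical Unsloth QLoRA configuration
)

# Attach LoRA adapters with the rank and alpha listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption: common defaults
)

# 200k-example split of OpenHermes-2.5, as noted above.
dataset = load_dataset("teknium/OpenHermes-2.5", split="train[:200000]")

def to_text(example):
    # Placeholder chat template (assumption): the exact prompt format
    # used for training is not documented on this card.
    return {"text": "\n".join(f"{turn['from']}: {turn['value']}"
                              for turn in example["conversations"])}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=8,   # Batch Size: 8
        gradient_accumulation_steps=4,   # Gradient Accumulation steps: 4
        learning_rate=2e-4,              # Learning Rate: 2e-4
        num_train_epochs=1,              # assumption: epoch count is not stated
        output_dir="outputs",
    ),
)
trainer.train()
```
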
- **Developed by:** macadeliccc
- **License:** apache-2.0
- **Finetuned from model:** alpindale/Mistral-7B-v0.2
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
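
To try the finished model, a standard transformers loading snippet works; the repo id below is assumed from this card's title and author, and the prompt should ideally follow whatever chat template was used during fine-tuning.

```python
# Minimal inference sketch with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the model is published under this Hub repo id.
repo_id = "macadeliccc/Mistral-7B-v0.2-OpenHermes"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits a ~7B model on a single A100
    device_map="auto",
)

inputs = tokenizer("Explain LoRA fine-tuning in one paragraph.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
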