---
license: mit
---

# Yoko-7B-Japanese-v0
This model was trained on the [guanaco](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) dataset, which improves performance in Chinese and Japanese.
It was fine-tuned from the vanilla [LLaMA2-7B](https://huggingface.co/NousResearch/Llama-2-7b-hf) using QLoRA.
### Recommended generation parameters
* temperature: 0.5~0.7
* top p: 0.65~1.0
* top k: 30~50
* repeat penalty: 1.03~1.17
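
The parameters above can be passed to Hugging Face `transformers` roughly as follows. This is a minimal sketch, not part of this repo: the repo id is inferred from this page, mid-range values are picked from the recommended ranges, and `max_new_tokens` is an arbitrary choice.

```python
# Recommended sampling parameters from this card, using mid-range values.
gen_kwargs = dict(
    do_sample=True,
    temperature=0.6,         # recommended 0.5~0.7
    top_p=0.9,               # recommended 0.65~1.0
    top_k=40,                # recommended 30~50
    repetition_penalty=1.1,  # recommended 1.03~1.17 ("repeat penalty")
    max_new_tokens=256,      # arbitrary; not specified by this card
)

def generate(prompt: str) -> str:
    """Load the model and generate a completion (downloads model weights)."""
    # Imported lazily so the parameter dict above can be used standalone.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ganchengguang/Yoko-7B-Japanese-v0"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, **gen_kwargs)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Note that `transformers` calls the repeat penalty `repetition_penalty`; llama.cpp-style runners expose it as `repeat-penalty`.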
Contributed by the Mori Lab at Yokohama National University.