---
license: mit
language:
- ja
- en
- zh
tags:
- LLaMA2
- Japanese
- LLM
---

This model was trained on the [llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset) dataset, using a subset of roughly 50,000 chat samples and 280,000 non-chat samples. It improves performance in Chinese and Japanese.

The vanilla [Llama-2-13b-chat-hf](https://huggingface.co/NousResearch/Llama-2-13b-chat-hf) was fine-tuned with QLoRA. You can use test.py to try the model.

### Recommended generation parameters

* temperature: 0.5~0.7
* top p: 0.65~1.0
* top k: 30~50
* repeat penalty: 1.03~1.17

Contributed by Yokohama National University, Mori Lab.
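As a usage sketch, the recommended generation parameters above could be passed to `model.generate` like this. The model id is a placeholder (this card does not state the repository name), and the midpoint of each recommended range is used; adjust both as needed.

```python
# Minimal usage sketch, not an official script. MODEL_ID is a placeholder
# (assumption) -- replace it with this model's actual Hugging Face Hub id.
MODEL_ID = "your-org/your-model-id"

# Midpoints of the recommended ranges from this card.
GEN_KWARGS = dict(
    do_sample=True,
    temperature=0.6,         # recommended 0.5~0.7
    top_p=0.8,               # recommended 0.65~1.0
    top_k=40,                # recommended 30~50
    repetition_penalty=1.1,  # recommended 1.03~1.17 ("repeat penalty")
    max_new_tokens=256,
)

def generate(prompt: str, model_id: str = MODEL_ID) -> str:
    """Load the model and generate a reply with the recommended settings."""
    # Imported lazily so GEN_KWARGS can be inspected without pulling in
    # transformers/torch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, **GEN_KWARGS)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example call (requires transformers + torch and the model weights):
# print(generate("こんにちは、自己紹介してください。"))
```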