---
license: apache-2.0
language:
- zh
- en
---
Llama-3-Chinese-8B-LoRA
This repository contains Llama-3-Chinese-8B-LoRA, which is further pre-trained from Meta-Llama-3-8B on 120 GB of Chinese text corpora.
Note: You must merge this LoRA with the original Meta-Llama-3-8B to obtain the full model weights.
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
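The merge mentioned above amounts to folding the trained low-rank update into the frozen base weights. A minimal numpy sketch of that arithmetic, assuming standard LoRA with scaling alpha/r (the shapes and values below are illustrative toys, not the real Llama-3-8B layer sizes; in practice a tool such as peft performs this per-layer merge for you):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16  # toy dimensions and scaling

W = rng.standard_normal((d_out, d_in))       # frozen base weight
A = rng.standard_normal((r, d_in))           # LoRA down-projection
B = rng.standard_normal((d_out, r)) * 0.01   # LoRA up-projection (as if trained)

# Merging folds the low-rank update into the base weight:
# W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

# The merged weight is equivalent to applying base and adapter separately.
x = rng.standard_normal(d_in)
y_separate = W @ x + (alpha / r) * (B @ (A @ x))
assert np.allclose(W_merged @ x, y_separate)
```

After merging, the adapter matrices can be discarded and the model runs at the same cost as the base model, which is why the full-weight and GGUF repositories below exist.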
Others
For the full (merged) model, please see: https://huggingface.co/hfl/llama-3-chinese-8b
For the GGUF model (llama.cpp compatible), please see: https://huggingface.co/hfl/llama-3-chinese-8b-gguf
If you have questions or issues regarding this model, please submit an issue at https://github.com/ymcui/Chinese-LLaMA-Alpaca-3