---
license: openrail
datasets:
- JosephusCheung/GuanacoDataset
language:
- en
- zh
- ja
---

# Guanaco: A Multilingual Instruction-Following Language Model Based on LLaMA 7B

This model is trained with a modified [alpaca-lora](https://github.com/tloen/alpaca-lora), with the LoRA adapters, `embed_tokens`, and `lm_head` all trained. The training data comes from alpaca-lora (the cleaned version of the Alpaca dataset) and [guanaco](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset).

With the trained embeddings and head, the model performs better at Chinese and Japanese than the original LLaMA, and it follows instruction-based prompts, so it is easier to use. Since it is trained on the Guanaco dataset, you can also use it as a chatbot. Just use this format:

```
### Instruction:
User:
Assistant:

### Input:
System:
User:

### Response:
```

**Tip: the first line of the original Alpaca prompt has been removed to reduce token consumption; please consider removing it as well when you use this model.**
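The chat format above can be assembled programmatically. Below is a minimal sketch; `build_prompt` is a hypothetical helper written for illustration, not a function shipped with this model or with alpaca-lora.

```python
def build_prompt(instruction: str, system: str = "", user_input: str = "") -> str:
    """Assemble a Guanaco-style chat prompt.

    Follows the section layout documented above (Instruction / Input /
    Response). The first line of the original Alpaca prompt is
    intentionally omitted, as recommended, to save tokens.
    """
    parts = [
        "### Instruction:",
        f"User: {instruction}",
        "Assistant:",
        "",
        "### Input:",
        f"System: {system}",
        f"User: {user_input}",
        "",
        "### Response:",
    ]
    return "\n".join(parts)
```

The model's generated reply is then expected after the final `### Response:` marker.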