This is a LLaMAfied version of the Qwen-14B model by Alibaba Cloud.

The model was converted with https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py, and the tokenizer is borrowed from https://huggingface.co/CausalLM/72B-preview-llamafied-qwen-llamafy.

You may use this model for fine-tuning on downstream tasks; we recommend our efficient fine-tuning toolkit: https://github.com/hiyouga/LLaMA-Factory
- Developed by: Alibaba Cloud.
- Language(s) (NLP): Chinese/English
- License: Tongyi Qianwen License
Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("imdatta0/qwen_14b_llamafied")
model = AutoModelForCausalLM.from_pretrained("imdatta0/qwen_14b_llamafied", torch_dtype="auto", device_map="auto")
```
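A minimal generation sketch using the imported `TextStreamer`; the prompt and generation parameters below are illustrative, not part of the original card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("imdatta0/qwen_14b_llamafied")
model = AutoModelForCausalLM.from_pretrained("imdatta0/qwen_14b_llamafied", torch_dtype="auto", device_map="auto")

# Stream decoded tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Example prompt (illustrative); max_new_tokens caps the response length
inputs = tokenizer("Tell me about large language models.", return_tensors="pt").to(model.device)
model.generate(**inputs, streamer=streamer, max_new_tokens=128)
```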
Thanks to: hiyouga/Qwen-14B-Chat-LLaMAfied