---
license: apache-2.0
inference: false
datasets:
- PengQu/langchain-MRKL-finetune
- fnlp/moss-003-sft-data
- anon8231489123/ShareGPT_Vicuna_unfiltered
---

**NOTE: This "delta model" cannot be used directly.** Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL for instructions.
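As a rough guide, the sketch below shows one way to apply the delta, modeled on FastChat's `apply_delta` script. The local paths and the Hub repo id are placeholders, not part of this release; follow the linked repo for the authoritative steps.

```python
# Sketch of applying the delta weights on top of base LLaMA
# (modeled on FastChat's apply_delta). Paths and repo id are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-13b", torch_dtype=torch.float16, low_cpu_mem_usage=True)
delta = AutoModelForCausalLM.from_pretrained(
    "PengQu/vicuna-13b-finetuned-langchain-MRKL",
    torch_dtype=torch.float16, low_cpu_mem_usage=True)

# The delta stores (finetuned - base) weights, so adding it to the base
# parameters reconstructs the finetuned model in place.
delta_state = delta.state_dict()
for name, param in base.state_dict().items():
    param.data += delta_state[name]

base.save_pretrained("path/to/vicuna-13b-finetuned-langchain-MRKL")
tokenizer = AutoTokenizer.from_pretrained("PengQu/vicuna-13b-finetuned-langchain-MRKL")
tokenizer.save_pretrained("path/to/vicuna-13b-finetuned-langchain-MRKL")
```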

# vicuna-13b-finetuned-langchain-MRKL

## Model details

**Model type:**
vicuna-13b-finetuned-langchain-MRKL is an open-source chatbot trained by fine-tuning vicuna-13b on 15 examples in the langchain-MRKL format.

**Where to send questions or comments about the model:**
https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL/issues

## Intended use

**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## Training dataset

Trained for one epoch on a mixed dataset (ShareGPT + 32 copies of my.json + moss-003-sft-data).

## Evaluation

Demo code: https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL/blob/main/demo.ipynb

There is no evaluation set, because the goal is not to improve the model's general ability but to make it follow the langchain-MRKL format strictly. We just want to show vicuna-13b's powerful ability to think and act. This is the first step; if we collect more samples covering more tools, we hope to support more complicated plugins as well.

## Major Improvement

- supports langchain-MRKL (agent="zero-shot-react-description"); see the usage sketch below
- very fast because of the strict format (it doesn't generate redundant tokens)
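For illustration, here is a minimal sketch of plugging the model into a LangChain zero-shot ReAct agent. The local model path and the `llm-math` tool choice are assumptions for the example, not part of this release; the demo notebook linked above is the authoritative reference.

```python
# Minimal sketch: using the finetuned model as the LLM behind a LangChain
# zero-shot ReAct (MRKL) agent. Model path and tool choice are
# illustrative assumptions.
from langchain.agents import initialize_agent, load_tools
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "path/to/vicuna-13b-finetuned-langchain-MRKL"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,
                max_new_tokens=256, do_sample=False)
llm = HuggingFacePipeline(pipeline=pipe)

# "zero-shot-react-description" is the agent type this model was finetuned to follow.
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is 2 to the power of 10?")
```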