---
license: apache-2.0
inference: false
---
NOTE: This GGML conversion is primarily for use with llama.cpp.
- 13B parameters
- 4-bit quantized
- Based on version 1.1
The q4_0 and q4_1 files were generated with PR "More accurate Q4_0 and Q4_1 quantizations #896", so they should be closer in quality to the unquantized model. For q4_2, PR "Q4_2 ARM #1046" was used. These files will be updated regularly if new changes are made.
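For intuition about the quality gap discussed below: q4_0-style quantization stores a single scale per 32-weight block, while q4_1-style stores a scale plus a minimum, which lets it track the original weights more closely. Here is a toy NumPy sketch of the two schemes; it is only an illustration, not ggml's actual kernels (which pack two 4-bit values per byte and differ in detail):

```python
import numpy as np

QK = 32  # ggml quantizes weights in blocks of 32

def q4_0_like(block):
    """Symmetric: one scale per block, 4-bit values in [-8, 7]."""
    d = np.abs(block).max() / 7.0
    if d == 0:
        return np.zeros_like(block)
    q = np.clip(np.round(block / d), -8, 7)
    return q * d  # dequantize

def q4_1_like(block):
    """Asymmetric: scale plus minimum per block, 4-bit values in [0, 15]."""
    lo, hi = block.min(), block.max()
    d = (hi - lo) / 15.0
    if d == 0:
        return np.full_like(block, lo)
    q = np.round((block - lo) / d)
    return q * d + lo  # dequantize

rng = np.random.default_rng(0)
w = rng.normal(size=QK).astype(np.float32)
for name, fn in (("q4_0-like", q4_0_like), ("q4_1-like", q4_1_like)):
    rms = np.sqrt(np.mean((w - fn(w)) ** 2))
    print(f"{name}: RMS reconstruction error = {rms:.4f}")
```

The asymmetric variant typically shows a lower reconstruction error, which matches the q4_1 quality advantage described next.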
Choosing between q4_0, q4_1, and q4_2:
- q4_0 is the fastest but has the poorest quality.
- q4_1 is a lot slower, with noticeably better quality.
- q4_2 is almost as fast as q4_0 and about as good as q4_1 on Apple Silicon. On Intel/AMD it is hardly better or faster than q4_1.
A 7B version of this model can be found here: https://huggingface.co/eachadea/ggml-vicuna-7b-1.1
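To sanity-check a downloaded file, here is a minimal sketch using the llama-cpp-python bindings (an assumption; any llama.cpp-compatible frontend works, and the file name below is a placeholder for whichever quantization you picked):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point it at the q4_0 / q4_1 / q4_2 file you downloaded.
llm = Llama(model_path="./ggml-vicuna-13b-1.1-q4_2.bin")

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```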
# Vicuna Model Card
## Model details
**Model type:** Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. It is an auto-regressive language model based on the transformer architecture.
**Model date:** Vicuna was trained between March 2023 and April 2023.
**Organizations developing the model:** The Vicuna team, with members from UC Berkeley, CMU, Stanford, and UC San Diego.
**Paper or resources for more information:** https://vicuna.lmsys.org/
**License:** Apache License 2.0
**Where to send questions or comments about the model:** https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:** The primary use of Vicuna is research on large language models and chatbots.
**Primary intended users:** The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Evaluation dataset
A preliminary evaluation of model quality was conducted by creating a set of 80 diverse questions and using GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
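For illustration, a minimal sketch of this GPT-4-as-judge pattern, using the 0.x-era openai Python package; the rubric wording below is a hypothetical stand-in, not the Vicuna team's actual evaluation prompt:

```python
# pip install openai  (0.x-era API shown; requires OPENAI_API_KEY in the environment)
import openai

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask GPT-4 to compare two assistant answers to the same question."""
    prompt = (
        f"Question: {question}\n\n"
        f"Answer A: {answer_a}\n\n"
        f"Answer B: {answer_b}\n\n"
        "Which answer is more helpful, relevant, and accurate? "
        "Reply with 'A', 'B', or 'tie', then a one-sentence justification."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic judging
    )
    return resp["choices"][0]["message"]["content"]
```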
## Major updates of weights v1.1
- Refactor the tokenization and separator. In Vicuna v1.1, the separator has been changed from `"###"` to the EOS token `"</s>"`. This change makes it easier to determine the generation stop criteria and enables better compatibility with other libraries.
- Fix the supervised fine-tuning loss computation for better model quality.
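In practice, generation should therefore stop on `</s>` rather than on `"###"`. A minimal sketch with the llama-cpp-python bindings, assuming a v1.1-style USER/ASSISTANT conversation template (the exact system-prompt wording and the file path are assumptions):

```python
from llama_cpp import Llama

llm = Llama(model_path="./ggml-vicuna-13b-1.1-q4_2.bin")  # placeholder path

# v1.1-style template: a plain system line, then USER/ASSISTANT turns.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
prompt = f"{system} USER: Explain what a quantized model is. ASSISTANT:"

out = llm(prompt, max_tokens=128, stop=["</s>"])  # stop on the v1.1 EOS separator
print(out["choices"][0]["text"].strip())
```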