license: unknown
## llama.cpp Quantizations of THUDM/glm-4-9b-chat
Quantized using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> at commit 7d0e23d72ef4540d0d4409cb63ae682c17d53926. Notably, this includes b3333, the first official llama.cpp release that supports GLM-3 and GLM-4.
Original model: https://huggingface.co/THUDM/glm-4-9b-chat
As of writing, this is probably the only GLM-4 llama.cpp quantization created with llama.cpp b3333 or later.
I have tested the GGUF files with some simple prompts and they seem to work fine.
## Prompt format
```
[gMASK]<sop><|user|>
{prompt}
<|assistant|>
```
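For a quick smoke test, here is a minimal sketch using the llama-cpp-python bindings (my assumption; any llama.cpp build from b3333 onward should work just as well through its own CLI). The file name `glm-4-9b-chat-Q4_K_M.gguf` is a placeholder; substitute whichever quant you actually downloaded.

```python
from llama_cpp import Llama

# Hypothetical quant filename -- replace with the GGUF file you downloaded.
llm = Llama(model_path="glm-4-9b-chat-Q4_K_M.gguf", n_ctx=4096)

# Prompt laid out exactly as shown in the format above.
prompt = "[gMASK]<sop><|user|>\nWhat is the capital of France?\n<|assistant|>\n"

output = llm(prompt, max_tokens=128, stop=["<|user|>"])
print(output["choices"][0]["text"])
```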
The model apparently also supports function calling if you supply a more elaborate system prompt. The original chat template is provided in https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/tokenizer_config.json, but it is more complicated than you need if you don't want that functionality. (Parts of it are in Chinese; if you don't read Chinese, translate it to a language you understand and review it before adopting it for your purposes.)
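If you do want the full template rendered correctly (rather than hand-writing it), one option is to let Hugging Face transformers do it. A minimal sketch, assuming you have transformers installed and use the original THUDM/glm-4-9b-chat tokenizer (which requires `trust_remote_code=True`):

```python
from transformers import AutoTokenizer

# Load the original tokenizer, which carries the full chat template.
tok = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat", trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Beijing?"},
]

# Produces the [gMASK]<sop>... string the model expects; pass it to llama.cpp as-is.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```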
## Quantizations
Due to resource limitations, we only provide a select handful of quantizations. Hopefully they are useful for your purposes.
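For reference, here is a minimal sketch of fetching a single GGUF file with the huggingface_hub library. The repo id and filename below are hypothetical placeholders, not the actual names in this repository; substitute the real ones.

```python
from huggingface_hub import hf_hub_download

# Both repo_id and filename are placeholders -- adjust to this repo and the quant you want.
path = hf_hub_download(
    repo_id="your-username/glm-4-9b-chat-GGUF",
    filename="glm-4-9b-chat-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```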
## Legal / License
*"Built with glm-4"*
I copied the LICENSE file from https://huggingface.co/THUDM/glm-4-9b-chat, as required for redistribution.