GGUF weights for llama.cpp?
by segmond
Will you publish GGUF weights?
I failed to convert the safetensors to GGUF with llama.cpp:
~/llama.cpp/convert_hf_to_gguf.py --outfile ./minmax-m1-80k.gguf --outtype q8_0 ./MiniMax-M1-80k/
INFO:hf-to-gguf:Loading model: MiniMax-M1-80k
WARNING:hf-to-gguf:Failed to load model config from MiniMax-M1-80k: The repository `MiniMax-M1-80k` contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/MiniMax-M1-80k.
Please pass the argument `trust_remote_code=True` to allow custom code to be run.
WARNING:hf-to-gguf:Trying to load config.json instead
INFO:hf-to-gguf:Model architecture: MiniMaxM1ForCausalLM
ERROR:hf-to-gguf:Model MiniMaxM1ForCausalLM is not supported
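For context, convert_hf_to_gguf.py reads the `architectures` field from the model's config.json (which is why it falls back to config.json when transformers refuses to run the repo's custom code without `trust_remote_code=True`) and looks the name up in its registry of supported model classes; `MiniMaxM1ForCausalLM` has no entry, hence the error. A minimal sketch of that lookup — the registry contents below are illustrative, not the script's actual table:

```python
import json

# Illustrative subset only; the real registry is built inside
# convert_hf_to_gguf.py, one entry per supported architecture.
SUPPORTED_ARCHS = {"LlamaForCausalLM", "MixtralForCausalLM", "Qwen2ForCausalLM"}

# Same config.json the converter falls back to when it can't load
# the model config through transformers.
with open("MiniMax-M1-80k/config.json") as f:
    arch = json.load(f)["architectures"][0]

if arch not in SUPPORTED_ARCHS:
    raise SystemExit(f"Model {arch} is not supported")  # matches the error above
print(f"{arch} is supported")
```

So before the command above can work, support for this architecture would need to be added to llama.cpp itself — both the converter script and the inference code.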