Conversion to GGUF?
#4 opened by davidpfarrell
Greetings! I thought it might be a good exercise to try converting this model to GGUF so I could load it into LM Studio and work on a chat template, but my llm-fu is pretty weak ...
Trying:

```
python convert_hf_to_gguf.py ../chess-llama/ --outfile ../chess-llama/chess-llama.gguf --outtype f32
```
And getting the BPE pre-tokenizer error:

```
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:** WARNING: The BPE pre-tokenizer was not recognized!
WARNING:hf-to-gguf:**          There are 2 possible reasons for this:
WARNING:hf-to-gguf:**          - the model has not been added to convert_hf_to_gguf_update.py yet
WARNING:hf-to-gguf:**          - the pre-tokenization config has changed upstream
WARNING:hf-to-gguf:**          Check your model files and convert_hf_to_gguf_update.py and update them accordingly.
WARNING:hf-to-gguf:** ref:     https://github.com/ggml-org/llama.cpp/pull/6920
WARNING:hf-to-gguf:**
WARNING:hf-to-gguf:** chkhsh:  1d31edaec3792849684a17d700a4036411e90feab86f1ad1525b2a414ce9fd19
WARNING:hf-to-gguf:**************************************************************************************
```
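If I'm reading the referenced PR right, the intended fix is to register the model in the `models` list of convert_hf_to_gguf_update.py and re-run that script so it regenerates the chkhsh mapping inside convert_hf_to_gguf.py. Something like the sketch below, where the entry format is copied from the existing list and the repo URL is a placeholder for this repo:

```python
# In convert_hf_to_gguf_update.py: register the model so the script can
# download its tokenizer and regenerate get_vocab_base_pre() with the
# matching chkhsh. The repo URL below is a placeholder, not a real path.
models = [
    # ... existing entries ...
    {"name": "chess-llama", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/<this-repo>", },
]
```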
As a shortcut, I instead tried hard-coding the pre-tokenizer, first as llama-bpe and then as llama4 (sketch below).
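Roughly, the hack was forcing the result in get_vocab_base_pre() in convert_hf_to_gguf.py, alongside the existing chkhsh checks (my best guess at the right spot; the hash is the one from my warning output above):

```python
# In convert_hf_to_gguf.py, inside get_vocab_base_pre(): map this model's
# chkhsh to a known pre-tokenizer so the NotImplementedError is skipped.
if chkhsh == "1d31edaec3792849684a17d700a4036411e90feab86f1ad1525b2a414ce9fd19":
    res = "llama-bpe"  # second attempt: res = "llama4"
```

With either value the conversion completes, but it logs: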
```
WARNING:gguf.vocab:Adding merges requested but no merges found, output may be non-functional.
```
That warning is borne out in LM Studio, which fails to load the model with:
```
lmstudio-llama-cpp: failed to load model. Error: error loading model: error loading model vocabulary: cannot find tokenizer merges in model file
```
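In case it helps narrow things down, a quick way to check whether the tokenizer actually ships BPE merges (assuming a standard tokenizers-style tokenizer.json; the path is from my local clone):

```python
import json

# Inspect the fast-tokenizer file: if "merges" is missing or empty,
# the gguf.vocab warning above makes sense, since there is nothing
# for the converter to export.
with open("../chess-llama/tokenizer.json") as f:
    tok = json.load(f)

model = tok.get("model", {})
print("tokenizer model type:", model.get("type"))         # "BPE" for merges-based tokenizers
print("number of merges:", len(model.get("merges", [])))  # 0 would explain the warning
```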
Wondering if you have any guidance or direction to offer?