Why do you have split GGUFs? 'invalid magic characters'

#1
by LaferriereJC - opened

(textgen) [root@pve-m7330 qwen]# /home/user/text-generation-webui/llama.cpp/llama-gguf-split --merge qwen2.5-7b-instruct-q6_k-00001-of-00002.gguf qwen2.5-7b-instruct-q6_k-00002-of-00002.gguf
gguf_merge: qwen2.5-7b-instruct-q6_k-00001-of-00002.gguf -> qwen2.5-7b-instruct-q6_k-00002-of-00002.gguf
gguf_merge: reading metadata qwen2.5-7b-instruct-q6_k-00001-of-00002.gguf done
gguf_merge: reading metadata qwen2.5-7b-instruct-q6_k-00002-of-00002.gguf ...gguf_init_from_file: invalid magic characters ''

gguf_merge: failed to load input GGUF from qwen2.5-7b-instruct-q6_k-00001-of-00002.gguf

Qwen org

Hi, please refer to the model card.

The command should be llama-gguf-split --merge [path-to-first-shard] [path-to-outfile], that is, in your case:

llama-gguf-split --merge qwen2.5-7b-instruct-q6_k-00001-of-00002.gguf qwen2.5-7b-instruct-q6_k.gguf

It appears that your command overwrote the second shard, so it is no longer a valid split.

Merging is also optional. Running with the first shard should be fine:

llama-cli --model qwen2.5-7b-instruct-q6_k-00001-of-00002.gguf [[other parameters]]

You should upload unsplit weights.