Cannot process gemma-3-27b-it-abliterated

#159
by NovNovikov - opened

Error converting to fp16:
INFO:hf-to-gguf:Loading model: gemma-3-27b-it-abliterated
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Has vision encoder, but it will be ignored
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00012.safetensors'
Traceback (most recent call last):
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 5197, in <module>
    main()
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 5191, in main
    model_instance.write()
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 3345, in write
    super().write()
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 439, in write
    self.prepare_tensors()
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 298, in prepare_tensors
    for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 3393, in modify_tensors
    vocab = self._create_vocab_sentencepiece()
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 812, in _create_vocab_sentencepiece
    raise FileNotFoundError(f"File not found: {tokenizer_path}")
FileNotFoundError: File not found: downloads/tmpqi09h_xv/gemma-3-27b-it-abliterated/tokenizer.model
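
The traceback shows that the converter builds the Gemma 3 vocabulary with SentencePiece, so `_create_vocab_sentencepiece()` expects a `tokenizer.model` file next to the safetensors shards; the abliterated repo apparently ships only the Hugging Face tokenizer files. A possible workaround sketch, not from this thread: copy `tokenizer.model` from the base repo into the local model directory before re-running `convert_hf_to_gguf.py`. This assumes the gated `google/gemma-3-27b-it` repo is accessible with an accepted license and a valid HF token, and that it ships `tokenizer.model`; the local path below is a placeholder.

```python
# Workaround sketch (assumptions noted above), not the space's actual fix:
# fetch tokenizer.model from the base Gemma 3 repo and place it next to the
# abliterated model's shards so _create_vocab_sentencepiece() can find it.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

# Placeholder: wherever the abliterated model was downloaded locally.
model_dir = Path("gemma-3-27b-it-abliterated")

# Download tokenizer.model from the base repo (gated; requires an accepted
# license and an HF token in the environment or local login).
src = hf_hub_download(repo_id="google/gemma-3-27b-it", filename="tokenizer.model")

# Copy it into the model directory, then re-run convert_hf_to_gguf.py.
shutil.copy(src, model_dir / "tokenizer.model")
```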

NovNovikov changed discussion status to closed