Where is tokenizer.model?

#1
by NightFox - opened

Without it, it's impossible to make a GGUF, and the tokenizer.model from the base model doesn't work either, or some other files are broken.

ValueError: Can not map tensor 'lm_head.weight'

Uploading it in a moment.

Check again, I have uploaded it.

Also check the adapter repo in case anything else is needed: https://huggingface.co/xsanskarx/thinkgemma-4b

Unfortunately that didn't help, still the same error. The base model converts to GGUF without problems.
I also tried converting the LoRA adapter together with the base model, but the script is broken for Gemma 3.
