Error when converting gemma3 LoRA

#2
by jgayed - opened

Hi ggml-org, I think the Transformers version has to be updated. Can you point me in the right direction on how to do that? Or is it possible to update the version used by this Space?
Error converting to GGUF F32:

```
INFO:lora-to-gguf:Loading base model from Hugging Face: google/gemma-3-27b-it
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1038, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 740, in __getitem__
    raise KeyError(key)
KeyError: 'gemma3'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_lora_to_gguf.py", line 332, in <module>
    hparams = load_hparams_from_hf(model_id)
  File "/home/user/app/llama.cpp/convert_lora_to_gguf.py", line 280, in load_hparams_from_hf
    config = AutoConfig.from_pretrained(hf_model_id)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1040, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type gemma3 but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
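Since the Space runs llama.cpp's `convert_lora_to_gguf.py` against whatever Transformers release is installed, one way the Space owner could bump it is by pinning a newer release in the Space's `requirements.txt`. A minimal sketch, assuming gemma3 support landed in Transformers 4.50.0 (the version floor here is an assumption, not confirmed in this thread):

```
transformers>=4.50.0
```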

There is a similar issue on the llama.cpp GitHub repo: https://github.com/ggml-org/llama.cpp/issues/12551. The following PR should fix it once merged: https://github.com/ggml-org/llama.cpp/pull/12571
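In the meantime, a quick way to tell whether a given environment will hit this `KeyError: 'gemma3'` is to compare the installed Transformers version against the first release believed to ship gemma3 support. A minimal stdlib-only sketch, assuming that release is 4.50.0 (an assumption, not confirmed in this thread):

```python
def supports_gemma3(transformers_version: str, minimum=(4, 50, 0)) -> bool:
    """Return True if `transformers_version` is at or above the assumed
    first Transformers release with gemma3 support (4.50.0 is an assumption)."""
    # Compare only the numeric major.minor.patch components of the version string.
    parts = tuple(int(p) for p in transformers_version.split(".")[:3])
    return parts >= minimum

# An environment on 4.49.x would raise KeyError: 'gemma3' in AutoConfig;
# one on the assumed minimum or newer should load the gemma3 config.
print(supports_gemma3("4.49.0"))  # False
print(supports_gemma3("4.50.0"))  # True
```

You can get the installed version to pass in via `import transformers; transformers.__version__`.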
