Failed to convert my repo to GGUF
I tried to use gguf_my_repo and it failed:
Error converting to fp16:

```text
INFO:hf-to-gguf:Loading model: llama-2-7b-chat_fine_tuned-F16-GGUF
Traceback (most recent call last):
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 5511, in <module>
    main()
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 5479, in main
    hparams = Model.load_hparams(dir_model)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 469, in load_hparams
    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'downloads/tmpy6_bq99w/llama-2-7b-chat_fine_tuned-F16-GGUF/config.json'
```
Where can I get this file, and how do I fix this?
Copy config.json from the base model's repo and upload it to your own repo. The converter (convert_hf_to_gguf.py) reads that file in Model.load_hparams to get the model's hyperparameters, so the conversion fails if it is missing.
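
If you'd rather script it than use the web UI, here is a minimal sketch using the huggingface_hub library. Both repo ids are placeholders: the base-model id is an assumption about what you fine-tuned from, and the target id should be your own repo.

```python
from huggingface_hub import hf_hub_download, upload_file

# Download config.json from the base model's repo.
# "meta-llama/Llama-2-7b-chat-hf" is an assumption -- use whichever
# base model your fine-tune actually started from.
config_path = hf_hub_download(
    repo_id="meta-llama/Llama-2-7b-chat-hf",
    filename="config.json",
)

# Upload it to your own fine-tuned repo (placeholder id -- replace with
# yours). Requires being logged in (huggingface-cli login) with write access.
upload_file(
    path_or_fileobj=config_path,
    path_in_repo="config.json",
    repo_id="your-username/llama-2-7b-chat_fine_tuned",
)
```

Once config.json is in your repo, re-run gguf_my_repo and the Model.load_hparams step should find the file.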