Problems downloading models after Feb 19 update
Hm, did something happen to this repo in the update ~2 days ago? I am no longer able to pull from it with ollama. For example:
ollama pull hf.co/bartowski/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-GGUF:Q6_K_L
pulling manifest
Error: pull model manifest: 400: Repository is not GGUF or is not compatible with llama.cpp
I have pulled these models before, but after the update it no longer works. HF is also no longer generating the "Use this model" links you get when you click a particular quant.
Ah yeah the name changed :) gotta make it DeepSeek now
ollama pull hf.co/bartowski/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-GGUF:Q6_K_L
Ah, sorry, I didn't notice the typo there. When I made the example I copied it from my old models list. However, I also tried copying just the repo name and adding the hf.co prefix and the quant suffix, so pretty much exactly what you gave me there. Sadly that also doesn't work:
ollama pull hf.co/bartowski/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-GGUF:Q6_K_L
pulling manifest
Error: pull model manifest: 400: Repository is not GGUF or is not compatible with llama.cpp
oh hmm.. maybe ollama doesn't like the rename.. i'll escalate and get back to you
Thanks for your help (and for everything else you do for the community). Not sure it's ollama-related, btw; HF also fails to parse the GGUFs. The usual Train/Deploy/Use this model buttons are missing from the top right of the model card, and HF isn't parsing out the model parameters and architecture it usually displays on the GGUF line above the quant list.
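In case it helps anyone else hitting this in the meantime, a possible stopgap (untested sketch; the exact GGUF filename below is a guess, check the repo's file list) is to fetch the quant directly with `huggingface-cli download bartowski/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-GGUF --include "*Q6_K_L*" --local-dir ./fuseo1` and then import it locally with a Modelfile, bypassing the hf.co manifest lookup entirely:

```
# Modelfile — import the locally downloaded GGUF into Ollama.
# The filename below is an assumption; use whatever huggingface-cli actually fetched.
FROM ./fuseo1/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-Q6_K_L.gguf
```

Then `ollama create fuseo1-q6 -f Modelfile` followed by `ollama run fuseo1-q6` should work as usual, since it never touches the repo manifest.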