Help on running this model locally on mobile
Hi, thanks a lot for providing the .gguf files for Janus Pro 7B. I actually want to run it locally on my mobile using Termux and an APK. Can you please suggest a repo or document to set it up? I generally use llama.cpp to run .gguf files, but I saw that it currently doesn't support the Janus Pro model. So can anyone please help me with this? And also, please guide me on how I can convert Janus-Pro-1B to .gguf format as well.
Hmm, if it's here it should be supported by llama.cpp, let me see... It seems to work fine with llama.cpp (if you use --jinja for the chat template), so you should be able to use it on your phone as well.
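For reference, here's a rough sketch of what that looks like in Termux. The exact package list may vary, and the GGUF filename below is just a placeholder for whichever quant you actually download from this repo:

```sh
# Inside Termux: install build tools (one-time setup)
pkg install git cmake clang make

# Build llama.cpp from source
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j

# Run the model; --jinja applies the chat template embedded in the GGUF.
# Replace the filename with the quant you downloaded.
./build/bin/llama-cli -m Janus-Pro-7B-LM.Q4_K_M.gguf --jinja -cnv
```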
> convert janus-pro-1B also to .gguf format.
It would likely convert the same way as any other HF model, using convert_hf_to_gguf.py, optionally followed by llama-quantize, if it were supported. For vision and other multimodal models, some other tools would be required, but support for that in llama.cpp tends to be spotty.
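For a supported architecture, the conversion would look roughly like this; paths and the quant type are placeholders:

```sh
# Convert the HF checkpoint to a 16-bit GGUF (this only works if the
# architecture is recognized by convert_hf_to_gguf.py)
python convert_hf_to_gguf.py path/to/Janus-Pro-1B \
    --outfile janus-pro-1b-f16.gguf --outtype f16

# Optionally quantize to a smaller format for mobile use
./build/bin/llama-quantize janus-pro-1b-f16.gguf janus-pro-1b-Q4_K_M.gguf Q4_K_M
```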
But you could try Janus-Pro-1B-LM, maybe, if you only need the LLM part? See https://hf.tst.eu/model#Janus-Pro-1B-LM-i1-GGUF
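If you go that route, something like this should work; the repo and file names below are a guess based on the link above, so check the model page for the exact names:

```sh
# Download one of the 1B quants (needs: pip install huggingface_hub)
huggingface-cli download mradermacher/Janus-Pro-1B-LM-i1-GGUF \
    Janus-Pro-1B-LM.i1-Q4_K_M.gguf --local-dir .

# Run it the same way as the 7B
./build/bin/llama-cli -m Janus-Pro-1B-LM.i1-Q4_K_M.gguf --jinja -cnv
```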
Thinking about it, this model is not Janus-Pro-7B, but Janus-Pro-7B-LM, which maybe explains some of the confusion.